### Streaming Support
**Streaming is no longer supported in Chart Studio Cloud.<br>Streaming is still available as part of [Chart Studio Enterprise](https://plot.ly/products/on-premise/). Additionally, [Dash](https://plot.ly/products/dash/) supports streaming, as demonstrated by the [Dash Wind Streaming example](https://github.com/plotly/dash-wind-streaming).**
### Getting Started with Streaming
```
import numpy as np
import plotly.plotly as py
import plotly.tools as tls
import plotly.graph_objs as go
```
Before you start streaming, you're going to need some [stream tokens](https://plot.ly/settings/api). You will need **one unique stream token for every trace object** you wish to stream to. Thus if you have two traces that you want to plot and stream, you're going to require two unique stream tokens. Notice that more tokens can be added via the settings section of your Plotly profile: https://plot.ly/settings/api
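To make the one-token-per-trace rule concrete, here is a small sketch (the token strings below are invented placeholders, not real stream tokens):

```python
# Hypothetical placeholder tokens -- real ones come from https://plot.ly/settings/api
stream_tokens = ["token_for_trace_1", "token_for_trace_2"]

# One trace per token: each streamed trace must carry its own unique stream id.
traces = [
    dict(x=[], y=[], mode='lines', stream=dict(token=tok, maxpoints=80))
    for tok in stream_tokens
]

print(len({t['stream']['token'] for t in traces}))  # 2
```

Each trace embeds its own `stream` dict; reusing one token across two traces would mix their data together.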

Now in the same way that you set your credentials, as shown in [Getting Started](https://plot.ly/python/getting-started/), you can add stream tokens to your credentials file.
```
stream_ids = tls.get_credentials_file()['stream_ids']
print(stream_ids)
```
You'll see that `stream_ids` will contain a list of the stream tokens we added to the credentials file.
#### An Example to Get You Started
Now that you have some stream tokens to play with, let's go over how to put them into action.
There are two main objects that will be created and used for streaming:
- Stream Id Object
- Stream Link Object
We're going to look at these objects sequentially as we work through our first streaming example. For our first example, we're going to be streaming random data to a single scatter trace, and get something that behaves like the following:

##### Stream Id Object
The `Stream Id Object` comes bundled in the `graph_objs` package. We can then call help to see the description of this object:
```
help(go.Stream)
```
As we can see, the `Stream Id Object` is a dictionary-like object that takes two parameters, and has all the methods that are associated with dictionaries.
We will need one of these objects for each trace that we wish to stream data to.
We'll now create a single stream token for our streaming example, which will include one scatter trace.
```
# Get stream id from stream id list
stream_id = stream_ids[0]

# Make instance of stream id object
stream_1 = go.Stream(
    token=stream_id,  # link stream id to 'token' key
    maxpoints=80      # keep a max of 80 pts on screen
)
```
The `'maxpoints'` key sets the maximum number of points to keep on the plotting surface at any given time.
Moreover, if you want to avoid the use of these `Stream Id Objects`, you can just create a dictionary with at least the token parameter defined, for example:
```
stream_1 = dict(token=stream_id, maxpoints=60)
```
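Conceptually, `maxpoints` acts like a fixed-length sliding window: once the limit is reached, each new point pushes out the oldest one. A standard-library sketch of that behavior (an analogy, not Plotly's actual implementation):

```python
from collections import deque

window = deque(maxlen=80)  # analogous to maxpoints=80
for i in range(200):       # stream 200 points through the window
    window.append(i)

print(len(window))  # 80 -- only the newest 80 points are kept
print(window[0])    # 120 -- the oldest surviving point
```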
Now that we have our `Stream Id Object` ready to go, we can set up our plot. We do this in the same way as any other plot; the only difference is that we now have to set the `stream` parameter in our trace object.
```
# Initialize trace of streaming plot by embedding the unique stream_id
trace1 = go.Scatter(
    x=[],
    y=[],
    mode='lines+markers',
    stream=stream_1  # (!) embed stream id, 1 per trace
)
data = go.Data([trace1])
# Add title to layout object
layout = go.Layout(title='Time Series')
# Make a figure object
fig = go.Figure(data=data, layout=layout)
```
#### Stream Link Object
The Stream Link Object is what will be used to communicate with the Plotly server in order to update the data contained in your trace objects. This object lives in the `plotly.plotly` module and can be referenced with `py.Stream`.
```
help(py.Stream) # run help() of the Stream link object
```
You're going to need to set up one of these stream link objects for each trace you wish to stream data to.
<br>Below we'll set one up for the scatter trace we have in our plot.
```
# We will provide the stream link object the same token that's associated with the trace we wish to stream to
s = py.Stream(stream_id)
# We then open a connection
s.open()
```
We can now use the Stream Link object `s` in order to `stream` data to our plot.
<br>As an example, we will send a time stream and some random numbers:
```
# (*) Import modules to keep track of and format the current time
import datetime
import time

i = 0  # a counter
k = 5  # some shape parameter

# Delay start of stream by 5 sec (time to switch tabs)
time.sleep(5)

while True:
    # Current time on x-axis, random numbers on y-axis
    x = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')
    y = (np.cos(k*i/50.)*np.cos(i/50.)+np.random.randn(1))[0]

    # Send data to your plot:
    # write numbers to the stream to append to the current data on the plot,
    # write lists to overwrite the existing data on the plot
    s.write(dict(x=x, y=y))

    i += 1         # advance the counter that drives the cosine terms
    time.sleep(1)  # plot a point every second

# Close the stream when done plotting
s.close()
```
# Running Neurokernel on NVIDIA Jetson Embedded Platform
In this notebook, we show step by step how to run Neurokernel on [Jetson TK1 Embedded Development Kit](http://www.nvidia.com/object/jetson-tk1-embedded-dev-kit.html). It can be applied to the latest [Jetson TX1 platform](http://www.nvidia.com/object/jetson-tx1-dev-kit.html) with minor modification. Before we start, you must have a Jetson TK1 Embedded Development Kit and have enrolled in [NVIDIA Embedded Developer Program](https://developer.nvidia.com/embedded-developer-program).
To install Jetson Development Pack (JetPack), you also need access to a Ubuntu OS. Follow [these steps](http://docs.nvidia.com/jetpack-l4t/2_0/index.html#developertools/mobile/jetpack/jetpack_l4t/2.0/jetpack_l4t_install.htm) to install JetPack. This notebook assumes that JetPack is already installed on your TK1 Kit.
<img src='files/files/tk1.png' />
## Installing Neurokernel on Jetson TK1
Please execute the following commands in a terminal on the TK1. Note that executing one command at a time is preferred to ensure complete installation.
We first maximize CPU performance to make the installation faster. To force the 4 main CPU cores to run at max performance until reboot:
```
sudo su
echo 0 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/enable
echo 1 > /sys/devices/system/cpu/cpu0/online
echo 1 > /sys/devices/system/cpu/cpu1/online
echo 1 > /sys/devices/system/cpu/cpu2/online
echo 1 > /sys/devices/system/cpu/cpu3/online
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
exit
```
Update system and install necessary libraries:
```
sudo apt-mark hold xserver-xorg-core
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install bash-completion command-not-found
sudo apt-get install emacs24 libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev
sudo apt-get install libgdbm-dev libc6-dev libbz2-dev liblapack-dev libhdf5-dev libxml2-dev libxslt1-dev
sudo apt-get install python-dev tmux gfortran git
```
Install the latest Python 2.7 from source:
```
mkdir packages
cd packages
wget https://www.python.org/ftp/python/2.7.11/Python-2.7.11.tgz
tar -xvf Python-2.7.11.tgz
cd Python-2.7.11
./configure --prefix=/home/ubuntu/opt
make
make install
```
Add path:
```
cd
echo -e "export PATH=/home/ubuntu/opt/bin:\$PATH\n"\
"export LD_LIBRARY_PATH=/home/ubuntu/opt/lib:\$LD_LIBRARY_PATH\n"\
"export PYTHONPATH=/home/ubuntu/opt/lib/python2.7/site-packages:\$PYTHONPATH" | tee -a ~/.bashrc
source .bashrc
```
Install Open-mpi:
```
cd packages
wget http://www.open-mpi.org/software/ompi/v1.10/downloads/openmpi-1.10.1.tar.gz
tar -xvf openmpi-1.10.1.tar.gz
cd openmpi-1.10.1
./configure --with-cuda=/usr/local/cuda --with-threads=posix \
    --disable-mca-dso --prefix=/home/ubuntu/opt
make -j 4
make install
```
Install virtualenv:
```
cd ~/packages
wget https://pypi.python.org/packages/source/v/virtualenv/virtualenv-13.1.2.tar.gz
tar -xvf virtualenv-13.1.2.tar.gz
cd virtualenv-13.1.2
python setup.py install
cd
virtualenv NK
source NK/bin/activate
```
Install required Python packages:
```
pip install numpy scipy ipython cython numexpr pycuda
pip install bidict pandas networkx mpi4py h5py dill lxml markupsafe shutilwhich ply psutil futures twiggy matplotlib
pip install sphinx sphinx_rtd_theme
```
This step will take a long time. If an installation fails with the error message `No matching distribution found`, you can install the package manually by replacing `link_to_package` with the package's URL in the following command:
```
pip install -Iv link_to_package
```
Install Neurokernel:
```
cd
git clone https://github.com/neurokernel/neurokernel.git
cd neurokernel
git pull
python setup.py develop
```
Test Neurokernel by running one of the examples:
```
cd ~/neurokernel/examples/intro/data
python gen_generic_lpu.py -s 0 -l lpu_0 generic_lpu_0.gexf.gz generic_lpu_0_input.h5
python gen_generic_lpu.py -s 1 -l lpu_1 generic_lpu_1.gexf.gz generic_lpu_1_input.h5
cd ../
python intro_demo.py --gpu_dev 0 0 --log file
```
You can ignore warning messages like these:
```
---------------------------------------------------------
The call to cuMemHostRegister failed.
Host: tegra-ubuntu
cuMemHostRegister return value: 801
Memory Pool: smcuda
---------------------------------------------------------
---------------------------------------------------------
Sorry! You were supposed to get help about:
cuMemHostRegister during init failed
from the file:
help-mpi-common-cuda.txt
But I couldn't find that topic in the file. Sorry!
---------------------------------------------------------
```
or
```
983 more processes have sent help message help-mpi-common-cuda.txt / cuIpcGetMemHandle failed
```
These warnings are caused by the fact that `cudaHostRegister` is not supported on ARMv7 devices.
Inspect 'neurokernel.log' after the example finishes execution. If there is no error in the log file, the installation is complete.
```
import torch
import numpy as np
import torch.nn.functional as F
import torch.nn
from torch.autograd import Variable
import torch.backends.cudnn as cudnn

use_cuda = torch.cuda.is_available()


class E2EBlock(torch.nn.Module):
    '''E2EBlock: edge-to-edge convolutional block.'''

    def __init__(self, in_planes, planes, example, bias=False):
        super(E2EBlock, self).__init__()
        self.d = example.size(3)
        self.cnn1 = torch.nn.Conv2d(in_planes, planes, (1, self.d), bias=bias)
        self.cnn2 = torch.nn.Conv2d(in_planes, planes, (self.d, 1), bias=bias)

    def forward(self, x):
        a = self.cnn1(x)
        b = self.cnn2(x)
        return torch.cat([a] * self.d, 3) + torch.cat([b] * self.d, 2)
```
BrainNetCNN Network for fitting Gold-MSI on LSD dataset
```
class BrainNetCNN(torch.nn.Module):
    def __init__(self, example, num_classes=10):
        super(BrainNetCNN, self).__init__()
        self.in_planes = example.size(1)
        self.d = example.size(3)

        self.e2econv1 = E2EBlock(1, 32, example, bias=True)
        self.e2econv2 = E2EBlock(32, 64, example, bias=True)
        self.E2N = torch.nn.Conv2d(64, 1, (1, self.d))
        self.N2G = torch.nn.Conv2d(1, 256, (self.d, 1))
        self.dense1 = torch.nn.Linear(256, 128)
        self.dense2 = torch.nn.Linear(128, 30)
        self.dense3 = torch.nn.Linear(30, 2)

    def forward(self, x):
        out = F.leaky_relu(self.e2econv1(x), negative_slope=0.33)
        out = F.leaky_relu(self.e2econv2(out), negative_slope=0.33)
        out = F.leaky_relu(self.E2N(out), negative_slope=0.33)
        out = F.dropout(F.leaky_relu(self.N2G(out), negative_slope=0.33), p=0.5)
        out = out.view(out.size(0), -1)
        out = F.dropout(F.leaky_relu(self.dense1(out), negative_slope=0.33), p=0.5)
        out = F.dropout(F.leaky_relu(self.dense2(out), negative_slope=0.33), p=0.5)
        out = F.leaky_relu(self.dense3(out), negative_slope=0.33)
        return out
```
Loader for GoldMSI-LSD77 dataset
```
# behavdir = "/Users/nicolasfarrugia/Documents/recherche/git/Gold-MSI-LSD77/behav"
behavdir = "/home/nfarrugi/campus/data_lsd/behav"

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize
import os
import torch.utils.data.dataset


class GoldMSI_LSD_Dataset(torch.utils.data.Dataset):

    def __init__(self, directory=behavdir, mode="train", transform=False, class_balancing=False):
        """
        Args:
            directory (string): Path to the dataset.
            mode (str): train = 90% Train, validation = 10% Train,
                train+validation = 100% train, else test.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.directory = directory
        self.mode = mode
        self.transform = transform

        x = np.load(os.path.join(directory, "X_y_lsd77_static_tangent.npz"))['X']
        y_all = np.load(os.path.join(directory, "X_y_lsd77_static_tangent.npz"))['y']
        y_2 = y_all[:, [3, 4]]
        y = normalize(y_2, axis=0)

        X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=42)

        if self.mode == "train":
            x = X_train
            y = y_train
        elif self.mode == "validation":
            x = X_test
            y = y_test
        # "train+validation" and any other mode keep the full dataset

        self.X = torch.FloatTensor(np.expand_dims(x, 1).astype(np.float32))
        # self.X = torch.FloatTensor(x.astype(np.float32))
        self.Y = torch.FloatTensor(y.astype(np.float32))
        print(self.mode, self.X.shape, self.Y.shape)

    def __len__(self):
        return self.X.shape[0]

    def __getitem__(self, idx):
        sample = [self.X[idx], self.Y[idx]]
        if self.transform:
            sample[0] = self.transform(sample[0])
        return sample


trainset = GoldMSI_LSD_Dataset(mode="train")
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True, num_workers=1)

testset = GoldMSI_LSD_Dataset(mode="validation")
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False, num_workers=1)
```
Training
```
net = BrainNetCNN(trainset.X)

if use_cuda:
    net = net.cuda()
    # net = torch.nn.DataParallel(net, device_ids=[0])
    cudnn.benchmark = True


### Weights initialization for the dense layers using He Uniform initialization
### He et al., http://arxiv.org/abs/1502.01852
def init_weights_he(m):
    # https://keras.io/initializers/#he_uniform
    print(m)
    if type(m) == torch.nn.Linear:
        fan_in = m.in_features         # fan-in of the layer being initialized
        he_lim = np.sqrt(6. / fan_in)  # limit = sqrt(6 / fan_in)
        m.weight.data.uniform_(-he_lim, he_lim)
        # print(m.weight)


net.apply(init_weights_he)

momentum = 0.9
lr = 0.00001
wd = 0.0005  ## Decay for L2 regularization
# wd = 0

criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=momentum,
                            nesterov=True, weight_decay=wd)


def train(epoch):
    net.train()
    running_loss = 0.0
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        if use_cuda:
            inputs, targets = inputs.cuda(), targets.cuda()
        optimizer.zero_grad()
        inputs, targets = Variable(inputs), Variable(targets)
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        # accumulate statistics
        running_loss += loss.data[0]
        # if batch_idx % 10 == 9:  # print every 10 mini-batches
        #     print('Training loss: %.6f' % (running_loss / 10))
        #     running_loss = 0.0
    return running_loss / batch_idx


def test():
    net.eval()
    running_loss = 0.0
    preds = []
    ytrue = []
    for batch_idx, (inputs, targets) in enumerate(testloader):
        if use_cuda:
            inputs, targets = inputs.cuda(), targets.cuda()
        # with torch.no_grad():
        inputs, targets = Variable(inputs), Variable(targets)
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        preds.append(outputs.data.cpu().numpy())
        ytrue.append(targets.data.cpu().numpy())
        # accumulate statistics
        running_loss += loss.data[0]
    return np.vstack(preds), np.vstack(ytrue), running_loss / batch_idx


# Evaluate the freshly initialized network on the test set
from sklearn.metrics import mean_absolute_error as mae
from scipy.stats import pearsonr

preds, y_true, loss_test = test()

print("Init Network")
mae_1 = mae(preds[:, 0], y_true[:, 0])
pears_1 = pearsonr(preds[:, 0], y_true[:, 0])
print("Test Set : MAE for Engagement : %0.2f %%" % (100 * mae_1))
print("Test Set : pearson R for Engagement : %0.2f, p = %0.4f" % (pears_1[0], pears_1[1]))

mae_2 = mae(preds[:, 1], y_true[:, 1])
pears_2 = pearsonr(preds[:, 1], y_true[:, 1])
print("Test Set : MAE for Training : %0.2f %%" % (100 * mae_2))
print("Test Set : pearson R for Training : %0.2f, p = %0.4f" % (pears_2[0], pears_2[1]))
```
Run Epochs of training and testing
```
from sklearn.metrics import mean_absolute_error as mae
from scipy.stats import pearsonr

nbepochs = 100

allloss_train = []
allloss_test = []
allmae_test1 = []
allpears_test1 = []
allmae_test2 = []
allpears_test2 = []

for epoch in range(nbepochs):
    loss_train = train(epoch)
    allloss_train.append(loss_train)

    preds, y_true, loss_test = test()
    allloss_test.append(loss_test)

    print("Epoch %d" % epoch)

    mae_1 = mae(preds[:, 0], y_true[:, 0])
    pears_1 = pearsonr(preds[:, 0], y_true[:, 0])
    allmae_test1.append(mae_1)
    allpears_test1.append(pears_1[0])
    print("Test Set : MAE for Engagement : %0.2f %%" % (100 * mae_1))
    print("Test Set : pearson R for Engagement : %0.2f, p = %0.4f" % (pears_1[0], pears_1[1]))

    mae_2 = mae(preds[:, 1], y_true[:, 1])
    pears_2 = pearsonr(preds[:, 1], y_true[:, 1])
    allmae_test2.append(mae_2)
    allpears_test2.append(pears_2[0])
    print("Test Set : MAE for Training : %0.2f %%" % (100 * mae_2))
    print("Test Set : pearson R for Training : %0.2f, p = %0.4f" % (pears_2[0], pears_2[1]))

from matplotlib import pyplot as plt
%matplotlib inline

plt.plot(allloss_train)
plt.plot(allloss_test)
plt.plot(allmae_test1)
plt.plot(allmae_test2)
plt.plot(allpears_test1)
plt.plot(allpears_test2)
```
Run this to save the model
```
import datetime

mystring = datetime.datetime.now().strftime("%m-%d-%H-%M")
filename_pt = mystring + "_model.pt"
filename_stats = mystring + "_stats.npz"

torch.save(net, filename_pt)
np.savez_compressed(filename_stats,
                    test_losses=allloss_test, train_losses=allloss_train,
                    mae_training=allmae_test2, mae_eng=allmae_test1,
                    pears_eng=allpears_test1, pears_train=allpears_test2)
```
```
import pandas as pd
import numpy as np
import re
import json
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
import pickle
%matplotlib inline
documents = []

with open('data/resume/dataset_Skills.json', 'r') as skills:
    data = json.load(skills)
    for i in range(0, 1450):
        documents.append((data['0'][str(i)], 'Skills'))

with open('data/resume/dataset_Education.json', 'r') as educ:
    data = json.load(educ)
    for i in range(0, 1697):
        documents.append((data['0'][str(i)], 'Education'))

with open('data/resume/dataset_Experience.json', 'r') as exp:
    data = json.load(exp)
    for i in range(0, 1749):
        documents.append((data['0'][str(i)], 'Experience'))

df = pd.DataFrame(documents, columns=['text', 'class'])
df.groupby('class').text.count().plot.bar(ylim=0)
plt.show()
words = stopwords.words("english")
df['cleaned'] = df['text'].apply(lambda x: " ".join([i for i in x.split() if i not in words]))
df.head()
vectorizer = TfidfVectorizer(min_df= 3, stop_words="english", sublinear_tf=True, norm='l2', ngram_range=(1, 3))
final_features = vectorizer.fit_transform(df['cleaned']).toarray()
final_features.shape
df = df.sample(frac=1).reset_index(drop=True)
# Split the dataset into training and testing sets
X = df['cleaned']
Y = df['class']
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.25)
# Instead of doing these steps one at a time, we can use a pipeline to complete them all at once
pipeline = Pipeline([('vect', vectorizer),
                     ('chi', SelectKBest(chi2, k=1200)),
                     ('clf', RandomForestClassifier())])

# Fit the model and save it in a pickle for later use
model = pipeline.fit(X_train, y_train)
with open('RandomForest.pickle', 'wb') as f:
    pickle.dump(model, f)

ytest = np.array(y_test)

# Confusion matrix and classification report (precision, recall, F1-score)
print(classification_report(ytest, model.predict(X_test)))
print(confusion_matrix(ytest, model.predict(X_test)))

sample_text = """Work Experience"""  # avoid shadowing the builtin str
print(model.predict([sample_text]))
```
```
# import libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
plt.style.use('ggplot')
%matplotlib inline
survey_2018 = pd.read_csv('./resources/04_Kaggle_Survey_2018.csv')
survey_2018 = survey_2018.drop([0],axis=0)
survey_2018.head(2)
total_2018 = survey_2018['Time from Start to Finish (seconds)'].count()
total_2018
databases = survey_2018[['Q33_Part_1',
'Q33_Part_2',
'Q33_Part_3',
'Q33_Part_4',
'Q33_Part_5',
'Q33_Part_6',
'Q33_Part_7',
'Q33_Part_8',
'Q33_Part_9',
'Q33_Part_10',
'Q33_Part_11']]
databases
gov_website = databases.count()["Q33_Part_1"]
uni_research = databases.count()["Q33_Part_2"]
non_profit = databases.count()["Q33_Part_3"]
aggregator = databases.count()["Q33_Part_4"]
my_own = databases.count()["Q33_Part_5"]
publicly_released = databases.count()["Q33_Part_6"]
google_search = databases.count()["Q33_Part_7"]
google_dataset = databases.count()["Q33_Part_8"]
github = databases.count()["Q33_Part_9"]
none = databases.count()["Q33_Part_10"]
other = databases.count()["Q33_Part_11"]
gov_website = (gov_website / total_2018) *100
uni_research = (uni_research / total_2018) *100
non_profit = (non_profit / total_2018) *100
aggregator = (aggregator / total_2018) *100
my_own = (my_own / total_2018) *100
publicly_released = (publicly_released / total_2018) *100
google_search = (google_search / total_2018) *100
google_dataset = (google_dataset / total_2018) *100
github = (github / total_2018) *100
none = (none / total_2018) *100
other = (other / total_2018) *100
databases = pd.DataFrame({'Databases':['Government websites',
'University research group websites',
'Non-profit research group websites',
'Dataset aggregator/platform (Socrata, Kaggle,etc)',
'I collect my own data (web-scraping, etc.)',
'Publicly released data from private companies',
'Google Search',
'Google Dataset Search',
'GitHub','None, I do not work w/public data',
'Other'],
'Responses':[gov_website, uni_research, non_profit,aggregator, my_own,publicly_released,
google_search,google_dataset,github,none,other]})
databases = databases.sort_values('Responses')
databases
ax = databases[['Responses']].plot(kind='barh',
figsize=(10,7), color=['dodgerblue', 'slategray'], fontsize=13);
ax.set_alpha(.7)
ax.set_title("2018 Preferred Databases",fontsize=18)
ax.set_xlabel("Total Responses", fontsize=18)
ax.set_ylabel("Databases", fontsize=18)
ax.set_xticks([0, 10, 20, 30, 40, 50])
ax.set_yticklabels(databases['Databases'])
# Set individual bar labels using the list above
for i in ax.patches:
    # get_width pulls left or right; get_y pushes up or down
    ax.text(i.get_width() + 1, i.get_y() + .20,
            str(round(i.get_width(), 2)) + '%', fontsize=11, color='dimgrey')
fig = ax.get_figure()
# invert for largest on top
#ax.invert_yaxis()
ax.legend(loc='lower right')
plt.show()
fig.tight_layout()
fig.savefig("preferred_databases.png")
```
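The eleven near-identical count-and-convert steps in the block above all follow one pattern; a minimal stand-alone sketch of that pattern, using invented counts rather than the actual survey numbers:

```python
# Stand-in counts, not the real 2018 survey responses.
responses = {
    "Government websites": 874,
    "GitHub": 652,
    "Other": 120,
}
total = 2000  # stand-in for total_2018

# One dict comprehension replaces the eleven separate count/percentage steps.
pct = {source: count / total * 100 for source, count in responses.items()}
print(pct["Government websites"])  # 43.7
```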
```
import sys
import wandb
import pandas as pd
import numpy as np
from pprint import pprint
def mean_and_std(df):
    agg = np.stack(df.to_numpy(), axis=0)
    return np.mean(agg, axis=0), np.std(agg, axis=0)
download_root = "."
def get_sweep_regression_df_all(sweep_id, allow_crash=False):
    api = wandb.Api()
    sweep = api.sweep("ngruver/physics-uncertainty-exps/{}".format(sweep_id))

    results = []
    for run in sweep.runs:
        config = pd.Series(run.config)
        if not allow_crash and "finished" not in str(run):
            continue
        if "finished" in str(run):
            summary = pd.Series(run.summary)
        else:
            history = run.history()
            summary = pd.Series({k: history[k].to_numpy()[-1] for k, v in history.items()})
        results.append(pd.concat([config, summary]))
    return pd.concat(results, axis=1).T
df = get_sweep_regression_df_all("v96kirjy",allow_crash=True)
df2 = get_sweep_regression_df_all("pexiwka8",allow_crash=True)
df = pd.concat((df,df2))
df["model_type"].unique()
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
reader_friendly_dict = {
"NN": "NODE",
"MechanicsNN": "NODE + SO",
"HNN": "HNN"
}
sns.set_style('whitegrid')
colors = ["#00abdf", "#00058A", "#6A0078", (96/255,74/255,123/255), "#8E6100"]
sns.set_palette(sns.color_palette(colors))
filtered = df[df['model_type'].isin(['HNN','NN','MechanicsNN'])].copy()
filtered["model_type"] = filtered["model_type"].apply(lambda s: reader_friendly_dict[s])
filtered["dataset"]=filtered["system_type"]+filtered["num_bodies"].astype(str)
filtered["Rollout Error"] = filtered["test_gerr"].astype(float)
filtered["Energy Violation"] = filtered["test_Herr"].astype(float)
filled_markers = ('o', 'v', '^', '<', '>', '8', 's', 'p', '*', 'h', 'H', 'D', 'd', 'P', 'X')[:len(filtered["dataset"].unique())]
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
matplotlib.rcParams.update({'font.size': 14})
fig, ax = plt.subplots(1, 1, figsize=(4.5,3.5))
sns.scatterplot(data=filtered,x='Rollout Error',y='Energy Violation',hue='model_type', ax=ax)#,style="dataset",markers=filled_markers)
ax.get_legend().remove()
plt.yscale('log')
plt.xscale('log')
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='lower center', ncol=3)
fig.subplots_adjust(bottom=0.3)
plt.savefig('energy_conservation_loglog.pdf', bbox_inches='tight')
plt.show()
from sklearn import datasets, linear_model, metrics
regr = linear_model.LinearRegression()
regr.fit(np.log(filtered["Rollout Error"][:,None]), np.log(filtered["Energy Violation"]))
y_pred = regr.predict(np.log(filtered["Rollout Error"][:,None]))
y_true = np.log(filtered["Energy Violation"])
residuals = y_true-y_pred
filtered["residuals"] = residuals/np.log(filtered["Energy Violation"]).std()
metrics.r2_score(y_true, y_pred)
plt.figure(figsize=(4, 3))
order = sorted(filtered["dataset"].unique())
order = order[5:]+order[:5]
plot =sns.barplot(y="residuals",hue="model_type",x="dataset",data=filtered,order=order)
plt.setp(plot.get_xticklabels(), rotation=30)
plt.xlabel('')
plt.savefig('energy_conservation_residuals.pdf', bbox_inches='tight')
df = get_sweep_regression_df_all("kj4ke9i2",allow_crash=True)
filtered = df  # df[df['model_type'].str.fullmatch('|'.join(['HNN','NN','MechanicsNN','SecondOrderNN']))]
filtered["dataset"]=filtered["system_type"].apply(lambda s: s.replace("Pendulum", " ")) +filtered["num_bodies"].astype(str)
filtered["SymReg strength"] = 1/filtered["alpha"].astype(float)
order = sorted(filtered["dataset"].unique())
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
matplotlib.rcParams.update({'font.size': 18})
fig, ax = plt.subplots(1, 1, figsize=(6.75,5.25))
plot = sns.barplot(data=filtered, x="dataset", y='test_gerr', hue="SymReg strength", order=order, palette="rocket",ax=ax)
ax.get_legend().remove()
ax.grid(False)
plt.yscale('log')
plt.xlabel('')
plt.ylabel("Rollout Error")
handles, labels = ax.get_legend_handles_labels()
leg = fig.legend(handles, labels, loc='lower center', ncol=6, prop={'size': 12}, title="$\\alpha=$")#, fontsize=45)
fig.subplots_adjust(bottom=0.2, left=-.15)
plt.savefig('state_err_reg.pdf', bbox_inches='tight')
plt.show()
plt.close()
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import FuncFormatter
filtered = df
filtered["dataset"]=filtered["system_type"].apply(lambda s: s.replace("Pendulum", " ")) +filtered["num_bodies"].astype(str)
filtered["Symplectic Error"] = np.log10(filtered["Train_symreg"].astype(float))
filtered["Rollout Error"] = np.log10(filtered["test_gerr"].astype(float))
filtered["SymReg strength"] = 1/filtered["alpha"].astype(float)
order = sorted(filtered["dataset"].unique())
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
matplotlib.rcParams.update({'font.size': 14})
palette = sns.color_palette("Paired", desat=0.8)[4:]
g = sns.lmplot(data=filtered,x="Symplectic Error",y='Rollout Error',hue="dataset",hue_order=order, legend_out=True, height=4, aspect=1.3, palette=palette)
ax = g.axes[0,0]
ax.set_xticks(np.arange(-10,4,2))
ax.set_yticks(np.arange(-4,1))
formatter = lambda x, pos: f'{10. ** x:g}'
ax.get_xaxis().set_major_formatter(FuncFormatter(formatter))
ax.get_yaxis().set_major_formatter(FuncFormatter(formatter))
ax.grid(False)
ax.tick_params(axis='both', which='major', labelsize=14)
ax.set_xlabel("Symplectic Error", labelpad=10)
legend = g.legend
legend.set_title("")
plt.savefig('state_err_reg_value.pdf', bbox_inches='tight')
plt.show()
import sys
import wandb
import pandas as pd
import numpy as np
from pprint import pprint
def mean_and_std(df):
    agg = np.stack(df.to_numpy(), axis=0)
    return np.mean(agg, axis=0), np.std(agg, axis=0)
download_root = "."
import json
def get_sweep_tabular(sweep_id, allow_crash=True):
    api = wandb.Api()
    sweep = api.sweep("ngruver/physics-uncertainty-exps/{}".format(sweep_id))

    results = []
    for run in sweep.runs:
        # print(run)
        config = pd.Series(run.config)
        if not allow_crash and "finished" not in str(run):
            continue
        if "finished" in str(run):
            # print(run.summary)
            summary = pd.Series(run.summary)
        else:
            history = run.history()
            summary = pd.Series({k: history[k].to_numpy()[-1] for k, v in history.items()})
        for f in run.files():
            if not f.name.endswith(summary['H_err_vec']['path'].split('/')[-1]):
                continue
            f.download(root=".", replace=True)
            with open(f.name) as fd:
                data = np.array(json.load(fd)['data'])
            print(f.name)

            logherrs = data
            ic = np.arange(logherrs.shape[0])[:, None]
            ic = ic + np.zeros_like(logherrs)
            T = np.linspace(0, 1, ic.shape[-1])[None, :] + np.zeros_like(logherrs)
            df = pd.DataFrame({'logherr': logherrs.reshape(-1), 'ics': ic.reshape(-1), 'T': T.reshape(-1)})

            c = config.to_frame()
            for att in c.T.columns:
                df[att] = config[att]
            results.append(df)
    return pd.concat(results)
df = df_all = get_sweep_tabular("j3sjkwvo",False)
df["dataset"]=df["system_type"]+df["num_bodies"].astype(str)
mean = df.groupby(['model_type','dataset','T']).mean()['logherr'].reset_index()
std = df.groupby(['model_type','dataset','T']).std()['logherr'].reset_index()
mean['std'] = std['logherr']
mean['std'] = np.exp(mean['std'])
mean['logherr']=np.exp(mean['logherr'])
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
colors = ["#00abdf", "#00058A", "#6A0078", (96/255,74/255,123/255), "#8E6100"]
sns.set_palette(sns.color_palette(colors))
matplotlib.rcParams.update({'font.size': 18})
fig1, f1_axes = plt.subplots(ncols=3, nrows=2, constrained_layout=True,figsize=(8,6),sharex=True,sharey=True)
datasets = [f'ChainPendulum{i}' for i in (2,3,4)]+[f'SpringPendulum{i}' for i in (2,3,4)]
for i, ds in enumerate(datasets):
    dfs = mean[mean['dataset'] == ds]
    # print(dfs[dfs['T'] == 1])
    dfhnn = dfs[dfs['model_type'] == 'HNN']
    dfnn = dfs[dfs['model_type'] == 'NN']

    ax = f1_axes[i // 3, i % 3]
    ax.plot(dfhnn['T'], dfhnn['logherr'], label="HNN")
    ax.fill_between(dfhnn['T'], dfhnn['logherr'] / dfhnn['std'], dfhnn['logherr'] * dfhnn['std'], alpha=.2)
    ax.plot(dfnn['T'], dfnn['logherr'], label="NODE")
    ax.fill_between(dfnn['T'], dfnn['logherr'] / dfnn['std'], dfnn['logherr'] * dfnn['std'], alpha=.2)
    ax.set_xscale('log')
    ax.set_yscale('log')
    ax.set_ylim(bottom=5e-6, top=1e1)
    ax.grid(True)
    ax.tick_params(axis='both', which='both', length=0)
    if i // 3 == 0:
        ax.set_title(f"{i + 2} link")
        # ax.title(ds.split('P')[0])
fig1.text(1.01, 0.72, 'Chain', ha='center', va='center', rotation='vertical')
fig1.text(1.01, 0.3, 'Spring', ha='center', va='center', rotation='vertical')
fig1.text(-0.005, 0.5, 'Energy Error', ha='center', va='center', rotation='vertical')
fig1.text(0.54, 0, 'Rollout Time T', ha='center', va='center')
plt.legend()
plt.tight_layout()
plt.show()
fig1.savefig('energy_growth.pdf', bbox_inches='tight')
```
# Face Recognition for the Happy House
Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf).
Face recognition problems commonly fall into two categories:
- **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
- **Face Recognition** - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
**In this assignment, you will:**
- Implement the triplet loss function
- Use a pretrained model to map face images into 128-dimensional encodings
- Use these encodings to perform face verification and face recognition
In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
Let's load the required packages.
```
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
```
## 0 - Naive Face Verification
In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width:380px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u></center></caption>
Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.
You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding gives more accurate judgements as to whether two pictures are of the same person.
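The shape of that comparison can be sketched in a few lines of NumPy. The encoder below is only a stand-in (a fixed random projection, not a trained network), used to make the verification rule concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned encoder f(img): flatten, project to 128 dims,
# then L2-normalize. A real f would be a trained ConvNet.
proj = rng.standard_normal((3 * 96 * 96, 128)) / np.sqrt(3 * 96 * 96)

def f(img):
    v = img.reshape(-1) @ proj
    return v / np.linalg.norm(v)

img_a = rng.random((3, 96, 96))
img_b = img_a + 0.01 * rng.standard_normal((3, 96, 96))  # same face, slight change

# Verification rule: same person iff the encoding distance is under a threshold
dist = np.linalg.norm(f(img_a) - f(img_b))
same_person = dist < 0.7
print(dist, same_person)
```

Because the encodings are normalized, small pixel-level perturbations barely move them, which is exactly the robustness a learned $f$ aims for.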
## 1 - Encoding face images into a 128-dimensional vector
### 1.1 - Using a ConvNet to compute encodings
The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook).
The key things you need to know are:
- This network uses 96x96 dimensional RGB images as its input. Specifically, it inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
- It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
Run the cell below to create the model for face images.
```
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
```
** Expected Output **
<table>
<center>
Total Params: 3743280
</center>
</table>
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
<img src="images/distance_kiank.png" style="width:680px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 2**: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other
- The encodings of two images of different persons are very different
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
<img src="images/triplet_comparison.png" style="width:280px;height:150px;">
<br>
<caption><center> <u> <font color='purple'> **Figure 3**: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>
### 1.2 - The Triplet Loss
For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;">
<!--
We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).
!-->
Training will use triplets of images $(A, P, N)$:
- A is an "Anchor" image--a picture of a person.
- P is a "Positive" image--a picture of the same person as the Anchor image.
- N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:
$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$
Here, we are using the notation "$[z]_+$" to denote $\max(z,0)$.
Notes:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large, so it makes sense to have a minus sign preceding it.
- $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.
Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here.
**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$
Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.
For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples.
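Before writing the TensorFlow version, it can help to sanity-check the formula on tiny NumPy arrays (a sketch only, not the graded solution):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 128))            # anchor encodings, one row per triplet
P = A + 0.1 * rng.standard_normal((3, 128))  # positives: near their anchors
N = rng.standard_normal((3, 128))            # negatives: unrelated

alpha = 0.2
pos_dist = np.sum((A - P) ** 2, axis=-1)  # term (1): sum over the 128 entries
neg_dist = np.sum((A - N) ** 2, axis=-1)  # term (2): same, for the negatives
loss = np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))  # [z]_+, then sum over i
print(loss)
```

With these well-separated toy encodings the hinge is inactive and the loss is zero, which matches the intuition that good encodings already satisfy the margin.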
```
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis = -1)
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis = -1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = pos_dist - neg_dist + alpha
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**loss**
</td>
<td>
528.143
</td>
</tr>
</table>
## 2 - Loading the trained model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
```
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
```
Here are some examples of distances between the encodings of three individuals:
<img src="images/distance_matrix.png" style="width:380px;height:200px;">
<br>
<caption><center> <u> <font color='purple'> **Figure 4**:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
Let's now use this model to perform face verification and face recognition!
## 3 - Applying the model
Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment.
However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food.
So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a **Face verification** system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be.
### 3.1 - Face Verification
Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use `img_to_encoding(image_path, model)` which basically runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
```
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
```
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from image_path
2. Compute the distance between this encoding and the encoding of the identity image stored in the database
3. Open the door if the distance is less than 0.7, else do not open.
As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
```
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding - database[identity])
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
```
Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
```
verify("images/camera_0.jpg", "younes", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**It's younes, welcome home!**
</td>
<td>
(0.65939283, True)
</td>
</tr>
</table>
Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
```
verify("images/camera_2.jpg", "kian", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**It's not kian, please go away**
</td>
<td>
(0.86224014, False)
</td>
</tr>
</table>
### 3.2 - Face Recognition
Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!
To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.
**Exercise**: Implement `who_is_it()`. You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has the smallest distance to the target encoding.
- Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`.
- Compute L2 distance between the target "encoding" and the current "encoding" from the database.
- If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
```
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
```
Younes is at the front door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
```
who_is_it("images/camera_0.jpg", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**it's younes, the distance is 0.659393**
</td>
<td>
(0.65939283, 'younes')
</td>
</tr>
</table>
You can change "`camera_0.jpg`" (picture of younes) to "`camera_1.jpg`" (picture of bertrand) and see the result.
Your Happy House is running well. It only lets in authorized persons, and people don't need to carry an ID card around anymore!
You've now seen how a state-of-the-art face recognition system works.
Although we won't implement it here, here're some ways to further improve the algorithm:
- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.
- Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
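The first suggestion can be sketched with plain Python: keep several encodings per person and compare against the closest one. The encodings below are random placeholders standing in for `img_to_encoding` outputs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical database: a list of 128-d encodings per person (several photos each)
database = {
    "younes": [rng.standard_normal(128) for _ in range(3)],
    "kian": [rng.standard_normal(128) for _ in range(3)],
}

def min_dist_to_person(encoding, name, database):
    """Distance from `encoding` to the closest stored photo of `name`."""
    return min(np.linalg.norm(encoding - e) for e in database[name])

# A probe close to one of younes' stored photos matches younes, not kian
probe = database["younes"][0] + 0.05 * rng.standard_normal(128)
print(min_dist_to_person(probe, "younes", database),
      min_dist_to_person(probe, "kian", database))
```

Taking the minimum over several photos makes the decision less sensitive to the particular lighting or pose of any single database image.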
<font color='blue'>
**What you should remember**:
- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
- The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.
Congrats on finishing this assignment!
### References:
- Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)
- Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). [DeepFace: Closing the gap to human-level performance in face verification](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf)
- The pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.
- Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import os
import torch
from scipy.io import loadmat
from tqdm import tqdm_notebook as tqdm
%matplotlib inline
use_cuda = torch.cuda.is_available()
device = torch.device('cuda:0' if use_cuda else 'cpu')
# Add new methods here.
# methods = ['hesaff', 'hesaffnet', 'delf', 'delf-new', 'superpoint', 'd2-net', 'd2-net-trained']
# names = ['Hes. Aff. + Root-SIFT', 'HAN + HN++', 'DELF', 'DELF New', 'SuperPoint', 'D2-Net', 'D2-Net Trained']
# colors = ['black', 'orange', 'red', 'red', 'blue', 'purple', 'purple']
# linestyles = ['-', '-', '-', '--', '-', '-', '--']
methods = ['hesaff', 'hesaffnet', 'delf', 'delf-new', 'superpoint', 'lf-net', 'd2-net', 'd2-net-ms', 'd2-net-trained', 'd2-net-trained-ms']
names = ['Hes. Aff. + Root-SIFT', 'HAN + HN++', 'DELF', 'DELF New', 'SuperPoint', 'LF-Net', 'D2-Net', 'D2-Net MS', 'D2-Net Trained', 'D2-Net Trained MS']
colors = ['black', 'orange', 'red', 'red', 'blue', 'brown', 'purple', 'green', 'purple', 'green']
linestyles = ['-', '-', '-', '--', '-', '-', '-', '-', '--', '--']
# Change here if you want to use top K or all features.
# top_k = 2000
top_k = None
n_i = 52
n_v = 56
dataset_path = 'hpatches-sequences-release'
lim = [1, 15]
rng = np.arange(lim[0], lim[1] + 1)
def mnn_matcher(descriptors_a, descriptors_b):
device = descriptors_a.device
sim = descriptors_a @ descriptors_b.t()
nn12 = torch.max(sim, dim=1)[1]
nn21 = torch.max(sim, dim=0)[1]
ids1 = torch.arange(0, sim.shape[0], device=device)
mask = (ids1 == nn21[nn12])
matches = torch.stack([ids1[mask], nn12[mask]])
return matches.t().data.cpu().numpy()
def benchmark_features(read_feats):
seq_names = sorted(os.listdir(dataset_path))
n_feats = []
n_matches = []
seq_type = []
i_err = {thr: 0 for thr in rng}
v_err = {thr: 0 for thr in rng}
for seq_idx, seq_name in tqdm(enumerate(seq_names), total=len(seq_names)):
keypoints_a, descriptors_a = read_feats(seq_name, 1)
n_feats.append(keypoints_a.shape[0])
for im_idx in range(2, 7):
keypoints_b, descriptors_b = read_feats(seq_name, im_idx)
n_feats.append(keypoints_b.shape[0])
matches = mnn_matcher(
torch.from_numpy(descriptors_a).to(device=device),
torch.from_numpy(descriptors_b).to(device=device)
)
homography = np.loadtxt(os.path.join(dataset_path, seq_name, "H_1_" + str(im_idx)))
pos_a = keypoints_a[matches[:, 0], : 2]
pos_a_h = np.concatenate([pos_a, np.ones([matches.shape[0], 1])], axis=1)
pos_b_proj_h = np.transpose(np.dot(homography, np.transpose(pos_a_h)))
pos_b_proj = pos_b_proj_h[:, : 2] / pos_b_proj_h[:, 2 :]
pos_b = keypoints_b[matches[:, 1], : 2]
dist = np.sqrt(np.sum((pos_b - pos_b_proj) ** 2, axis=1))
n_matches.append(matches.shape[0])
seq_type.append(seq_name[0])
if dist.shape[0] == 0:
dist = np.array([float("inf")])
for thr in rng:
if seq_name[0] == 'i':
i_err[thr] += np.mean(dist <= thr)
else:
v_err[thr] += np.mean(dist <= thr)
seq_type = np.array(seq_type)
n_feats = np.array(n_feats)
n_matches = np.array(n_matches)
return i_err, v_err, [seq_type, n_feats, n_matches]
def summary(stats):
seq_type, n_feats, n_matches = stats
print('# Features: {:f} - [{:d}, {:d}]'.format(np.mean(n_feats), np.min(n_feats), np.max(n_feats)))
print('# Matches: Overall {:f}, Illumination {:f}, Viewpoint {:f}'.format(
np.sum(n_matches) / ((n_i + n_v) * 5),
np.sum(n_matches[seq_type == 'i']) / (n_i * 5),
np.sum(n_matches[seq_type == 'v']) / (n_v * 5))
)
def generate_read_function(method, extension='ppm'):
def read_function(seq_name, im_idx):
aux = np.load(os.path.join(dataset_path, seq_name, '%d.%s.%s' % (im_idx, extension, method)))
if top_k is None:
return aux['keypoints'], aux['descriptors']
else:
assert('scores' in aux)
ids = np.argsort(aux['scores'])[-top_k :]
return aux['keypoints'][ids, :], aux['descriptors'][ids, :]
return read_function
def sift_to_rootsift(descriptors):
return np.sqrt(descriptors / np.expand_dims(np.sum(np.abs(descriptors), axis=1), axis=1) + 1e-16)
def parse_mat(mat):
keypoints = mat['keypoints'][:, : 2]
raw_descriptors = mat['descriptors']
l2_norm_descriptors = raw_descriptors / np.expand_dims(np.sum(raw_descriptors ** 2, axis=1), axis=1)
descriptors = sift_to_rootsift(l2_norm_descriptors)
if top_k is None:
return keypoints, descriptors
else:
assert('scores' in mat)
ids = np.argsort(mat['scores'][0])[-top_k :]
return keypoints[ids, :], descriptors[ids, :]
if top_k is None:
cache_dir = 'cache'
else:
cache_dir = 'cache-top'
if not os.path.isdir(cache_dir):
os.mkdir(cache_dir)
errors = {}
for method in methods:
output_file = os.path.join(cache_dir, method + '.npy')
print(method)
if method == 'hesaff':
read_function = lambda seq_name, im_idx: parse_mat(loadmat(os.path.join(dataset_path, seq_name, '%d.ppm.hesaff' % im_idx), appendmat=False))
else:
if method == 'delf' or method == 'delf-new':
read_function = generate_read_function(method, extension='png')
else:
read_function = generate_read_function(method)
if os.path.exists(output_file):
print('Loading precomputed errors...')
errors[method] = np.load(output_file, allow_pickle=True)
else:
errors[method] = benchmark_features(read_function)
np.save(output_file, errors[method])
summary(errors[method][-1])
```
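To see what `mnn_matcher` above is doing, here is the same mutual-nearest-neighbour logic re-implemented in NumPy on toy descriptors (torch-free, for illustration only):

```python
import numpy as np

def mnn_matcher_np(desc_a, desc_b):
    """Keep pairs (i, j) where j is i's nearest neighbour AND i is j's."""
    sim = desc_a @ desc_b.T
    nn12 = sim.argmax(axis=1)   # best match in b for each descriptor in a
    nn21 = sim.argmax(axis=0)   # best match in a for each descriptor in b
    ids1 = np.arange(sim.shape[0])
    mask = ids1 == nn21[nn12]   # keep only mutual agreements
    return np.stack([ids1[mask], nn12[mask]], axis=1)

# Toy check: b is a permuted copy of a (unit-norm rows), so every pair is mutual
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)
perm = np.array([2, 0, 4, 1, 3])
matches = mnn_matcher_np(a, a[perm])
print(matches)
```

The mutual check is what filters out one-sided matches: a pair survives only if each descriptor is the other's nearest neighbour under the similarity matrix.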
# Plotting
```
plt_lim = [1, 10]
plt_rng = np.arange(plt_lim[0], plt_lim[1] + 1)
plt.rc('axes', titlesize=25)
plt.rc('axes', labelsize=25)
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
for method, name, color, ls in zip(methods, names, colors, linestyles):
i_err, v_err, _ = errors[method]
plt.plot(plt_rng, [(i_err[thr] + v_err[thr]) / ((n_i + n_v) * 5) for thr in plt_rng], color=color, ls=ls, linewidth=3, label=name)
plt.title('Overall')
plt.xlim(plt_lim)
plt.xticks(plt_rng)
plt.ylabel('MMA')
plt.ylim([0, 1])
plt.grid()
plt.tick_params(axis='both', which='major', labelsize=20)
plt.legend()
plt.subplot(1, 3, 2)
for method, name, color, ls in zip(methods, names, colors, linestyles):
i_err, v_err, _ = errors[method]
plt.plot(plt_rng, [i_err[thr] / (n_i * 5) for thr in plt_rng], color=color, ls=ls, linewidth=3, label=name)
plt.title('Illumination')
plt.xlabel('threshold [px]')
plt.xlim(plt_lim)
plt.xticks(plt_rng)
plt.ylim([0, 1])
plt.gca().axes.set_yticklabels([])
plt.grid()
plt.tick_params(axis='both', which='major', labelsize=20)
plt.subplot(1, 3, 3)
for method, name, color, ls in zip(methods, names, colors, linestyles):
i_err, v_err, _ = errors[method]
plt.plot(plt_rng, [v_err[thr] / (n_v * 5) for thr in plt_rng], color=color, ls=ls, linewidth=3, label=name)
plt.title('Viewpoint')
plt.xlim(plt_lim)
plt.xticks(plt_rng)
plt.ylim([0, 1])
plt.gca().axes.set_yticklabels([])
plt.grid()
plt.tick_params(axis='both', which='major', labelsize=20)
if top_k is None:
plt.savefig('hseq.pdf', bbox_inches='tight', dpi=300)
else:
plt.savefig('hseq-top.pdf', bbox_inches='tight', dpi=300)
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "-1"
import numpy as np
import torch
import pandas as pd
from tqdm.auto import tqdm
from matplotlib import pyplot as plt
import seaborn as sns
```
# Problem setup
We solve the simpler problem where we search for a sparse set of dictionary items $d_i$ that sum up to a given signal $s$ as
$$
s=\sum\limits_i\alpha_id_i,\,\|\alpha\|_0\to\min
$$
We use model data with a random signal from sine waves, and we want to decompose it into a Fourier basis. We try two methods:
1. $l_1$ regularization. Here we relax the problem to $\|\alpha\|_1\to\min$ and solve the regularized problem $\|s-\sum\limits_i\alpha_id_i\|_2^2+q\|\alpha\|_1\to\min$. This is lasso linear regression.
2. Matching pursuit. Here we use the algorithm that greedily selects the best-matching dictionary element using the scalar product, and then adds it to the decomposition.
We compare the resulting $l_0$-sparsity of the two solutions and their efficiency.
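The greedy update in method 2 can be illustrated on a tiny orthonormal dictionary (a toy sketch, separate from the Fourier setup below):

```python
import numpy as np

# Toy dictionary: the rows of the 4x4 identity (orthonormal atoms)
D = np.eye(4)
true_coeff = np.array([0.0, 2.0, 0.0, -1.0])  # 2-sparse ground truth
signal = true_coeff @ D

residual = signal.copy()
recovered = np.zeros(4)
for _ in range(2):                    # two iterations recover the two active atoms
    scores = D @ residual             # correlation of each atom with the residual
    k = np.argmax(np.abs(scores))     # greedily pick the best-matching atom
    recovered[k] += scores[k]         # its coefficient (atoms have unit norm)
    residual -= scores[k] * D[k]      # subtract its contribution
print(recovered, np.linalg.norm(residual))  # → [ 0.  2.  0. -1.] 0.0
```

Because the atoms here are orthonormal, each greedy step recovers one true coefficient exactly; with a correlated dictionary (like the Fourier one below) the residual shrinks more gradually.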
```
x = np.linspace(0, 1, 100)
frequencies = np.arange(0, 30)
args = np.repeat(frequencies[:, np.newaxis], len(x), axis=1)
args = np.multiply(x, args)
# the dictionary
D = np.vstack((np.cos(args), np.sin(args)))
print("Dictionary shape", D.shape)
# the signal
n_sig = 3
sig_coeff = np.random.randn(n_sig) + 1
sig_item = np.random.choice(len(D), n_sig, replace=False)
signal = np.dot(sig_coeff, D[sig_item, :])
plt.figure()
plt.plot(x, signal, label="Signal")
for _ in range(2):
idx = np.random.choice(len(D))
plt.plot(x, D[idx, :], label=f"Dictionary item {idx}")
plt.legend()
plt.show()
# converting to pytorch
Dt = torch.tensor(D, dtype=torch.float32)
st = torch.tensor(signal, dtype=torch.float32)
```
# $l_1$-regularization
```
# the coefficient
alpha = torch.nn.Linear(in_features=len(D), out_features=1, bias=False)
aw = list(alpha.parameters())[0]
opt = torch.optim.Adam(alpha.parameters())
q = 1e-3
for _ in tqdm(range(50000)):
opt.zero_grad()
loss = torch.nn.MSELoss()(alpha(Dt.T).flatten(), st) + q * torch.norm(aw.flatten(), p=1)
loss.backward()
opt.step()
loss
awnp = aw.detach().numpy()
sns.heatmap(awnp)
plt.hist(awnp.flatten())
print("l1 gives indices", np.where(np.abs(awnp.flatten()) > 0.1)[0])
print("ground truth indices", sig_item)
```
# Matching pursuit
```
def scalar_product(D, idx, signal):
    """Scalar product between a dictionary item and the signal."""
    return np.dot(D[idx, :], signal)

def cos_angle(D, idx, signal):
    """Cosine of the angle between a dictionary item and the signal."""
    return scalar_product(D, idx, signal) / (1e-10 + np.linalg.norm(signal) * np.linalg.norm(D[idx, :]))

def max_scalar_product(D, signal):
    """Index of the dictionary item whose |cosine| with the signal is largest."""
    products = [cos_angle(D, idx, signal) for idx in range(len(D))]
    return np.argmax(np.abs(products))

# current residual signal
signal_c = np.array(signal)
for _ in range(10):
    idx_max = max_scalar_product(D, signal_c)
    prod_max = scalar_product(D, idx_max, signal_c)
    d_max = D[idx_max, :]
    signal_c -= d_max * prod_max / np.linalg.norm(d_max) ** 2
    print(np.linalg.norm(signal_c), prod_max, idx_max)
```
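To make the sparsity comparison announced at the start concrete, a small helper can count nonzero coefficients up to a threshold. This is a minimal sketch: the name `l0_sparsity` and the default `tol` mirror the notebook's `> 0.1` check but are our own choices.

```python
import numpy as np

def l0_sparsity(coeffs, tol=0.1):
    """Approximate l0 'norm': number of coefficients with |c| > tol."""
    return int(np.sum(np.abs(np.asarray(coeffs)) > tol))

# e.g. a dense-ish lasso solution vs. a strictly sparse matching-pursuit one
print(l0_sparsity([0.9, 0.002, -1.3, 0.01, 0.0]))  # 2
print(l0_sparsity([1.1, 0.0, -1.2, 0.0, 0.0]))     # 2
```

Applied to the lasso weights `aw` and to the matching-pursuit selections, this gives the $l_0$ comparison directly.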
```
""" 1. Export your Postman Collection, then make an instance of the Postman runner. """
from pyclinic.postman import Postman
from rich import print
collection_path = "./tests/examples/deckofcards.postman_collection.json"
runner = Postman(collection_path)
""" 2. Did you see the warnings that were printed above?
That's because there are variables, like {{deck_id}}, that are not defined in the collection!
So calling the functions as-is will fail... You should consider when, where and how you use variables in Postman to solve this.
But, let's see which variables we _do_ have defined.
"""
runner.show_variables()
""" 3. An empty dictionary means that there are zero variables defined...
Let's see what folders and functions we have.
"""
runner.show_folders()
""" 4. We have a lot and they're ready to be used!
A few things to note:
- The Root folder has all of the functions available since it's the top-level
* However, notice how create_shuffled_deck is only defined at the Root, just like in the Postman Collection
- In Postman, Folder11 was actually called Folder 1.1 and was a subfolder of Folder 1
* To make things easier to work with, we flatten all the folders and normalize the names
- Each other folder shows which functions belong to it
* This makes it easy to execute doing something like runner.Folder2.list_cards_in_piles()
* Just like it shows in the Example Usage
- You'll get errors if you try to use a function that doesn't exist in a folder
* We're about to see an error because we're missing the {{deck_id}} variable...
"""
# runner.Folder11.list_cards_in_piles() # This will fail because list_cards_in_piles is not defined in Folder11
response = runner.Folder11.draw_cards()
print("STATUS CODE:", response.status_code)
print(response.text)
""" 5. Notice how we're working with a response. This comes from the requests library.
Using type hinting, we will get more intellisense. Now we can work with the response a lot more.
Let's use an auto-generated function that we know will work without needing variables.
"""
from requests import Response
response: Response = runner.Root.create_shuffled_deck()
print(response.json())
""" 6. AMAZING! Let's try drawing cards again, but this time we'll pass in a {{deck_id}} so it works. """
create_response = runner.Root.create_shuffled_deck()
deck_id = create_response.json().get("deck_id")
response = runner.Folder11.draw_cards({"deck_id": deck_id})
print("STATUS CODE:", response.status_code)
print(response.json())
""" 7. SUCCESS! If you wanted this to be an Automated Test with pytest, it would look something like this: """
def test_draw_cards():
    runner = Postman(collection_path)
    create_response = runner.Root.create_shuffled_deck()
    deck_id = create_response.json().get("deck_id")
    # Pass in User Variables with a flat dictionary
    response = runner.Folder11.draw_cards({"deck_id": deck_id})
    body = response.json()
    assert response.ok
    assert body["success"] is True
    assert len(body["cards"]) == 2, "By default, two cards should be drawn"
""" 8. Using your Environment and Global Variables.
If your Variables are at the Environment and/or Global scope in Postman,
then export them as files and pass in the file paths when instantiating the Postman runner.
* In this cell's output, observe that there are 4 functions that use the {{USER_ID}} variable,
yet the value is None or empty.
* We then see the Variables Dictionary and, sure enough, there are a lot of empty value fields.
"""
from tests import utils
GLOBAL_PATH = utils.WORKSPACE_GLOBAL_VARIABLES_PATH
ENV_PATH = utils.BOOKSTORE_ENV_VARIABLES_PATH
COLLECTION_PATH = utils.build_example_path(utils.BOOKSTORE_PATH)
user_variables = {"USERNAME": "Carlos Kidman"}
runner = Postman(COLLECTION_PATH, ENV_PATH, GLOBAL_PATH, user_variables)
runner.show_variables()
```
# Postman Variables
It's important to understand when, where and how to use Postman's Variables. It'll make your automation much easier to work with.
## Order of Operations
When working with PyClinic, there are 4 levels of scopes that will override each other if there are matching Variables:
1. Global
2. Environment
3. Collection
4. User
In the above order, each scope's Variables are loaded in turn, overriding any matching keys loaded earlier.
For example, if Global and Environment both define a USERNAME Variable, the Environment scope's value wins.
That means any User Variables ultimately win and have their values set last.
* NOTE: Only matching keys are overridden. Otherwise, the Variables dictionary is simply updated with the new key-value pairs.
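This override order can be modeled as successive dictionary updates. A minimal sketch follows; the function and variable names here are illustrative, not PyClinic's actual internals.

```python
def resolve_variables(global_vars, env_vars, collection_vars, user_vars):
    """Merge scopes in order; later scopes override matching keys."""
    resolved = {}
    for scope in (global_vars, env_vars, collection_vars, user_vars):
        resolved.update(scope)  # update() overwrites matching keys, keeps the rest
    return resolved

merged = resolve_variables(
    {"USERNAME": "global-user", "HOST": "example.com"},  # Global
    {"USERNAME": "env-user"},                            # Environment
    {},                                                  # Collection
    {"USERNAME": "Carlos Kidman"},                       # User
)
print(merged)  # {'USERNAME': 'Carlos Kidman', 'HOST': 'example.com'}
```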
```
""" 9. Create your own service functions.
Although PyClinic aims to bootstrap your Project for automation and test automation,
you will most likely need to create your own service functions for more control and power.
The requests library that PyClinic uses is wonderful for this!
You should already have this info in Postman, and help from API docs and your team,
but you can also get help from PyClinic.
"""
# What if we wanted to change the number of decks that are created (maybe to 3 or n)?
# We can use the .help() method on the function to get more info.
runner = Postman(collection_path)
runner.Root.create_shuffled_deck.help()
""" 10. Write a function using the info we have.
Looking at the info above, there are really only two pieces we need:
- The URL
- The HTTP Method
"""
import requests

def create_decks(count=1):
    method = "GET"
    url = f"http://deckofcardsapi.com/api/deck/new/shuffle/?deck_count={count}"
    response = requests.request(method, url)
    return response.json()
# Instead of 52 cards for a single deck, we should now have 3 decks with 156 cards total
print(create_decks(3))
```
```
import numpy as np
import pandas as pd
from Show import *
```
# Importing information
```
#Import pics information
data_info = pd.read_csv('dataset_images_minitest.csv',sep='\t')
# Information about data_info
print("Data size is:", len(data_info))
print("Columns:", *data_info.columns)
print("Categories:", *data_info.category.unique())
display(data_info.sample(5))
```
# Preparing the images
```
from keras import backend as K # Keras backend information file.
from Image import * #Image generator extension
img_dest_size = (150,150)
# Standard image configuration
if K.image_data_format() == 'channels_first':
    input_shape = (3, *img_dest_size)
else:
    input_shape = (*img_dest_size, 3)
#Creating the generators
datagen = ImageDataGeneratorPlus(rescale=1. / 255,horizontal_flip=True)
#adding the images from pandas
generator=datagen.flow_from_pandas(data_info,'pics/',target_size=img_dest_size)
```
## Generating from batches
```
import time
n_classes = generator.num_classes
x = []
y = []
start = time.time()
generator.reset()
for xb, yb in generator:
    x += list(xb)
    y += list(yb)
    dt = time.time() - start
    if (generator.batch_index == 0) or dt > 5 * 60:
        print(dt)
        break
x = np.asarray(x)
y = np.asarray(y)
#saving the data
import pickle
with open("x.pickle", "wb") as f:
    pickle.dump(x, f)
with open("y.pickle", "wb") as f:
    pickle.dump(y, f)
generator.class_indices
#loading the data
import pickle
with open("X_train.pickle", "rb") as f:
    x_train = pickle.load(f)
with open("y_train.pickle", "rb") as f:
    y_train = pickle.load(f)
class_indices = {'graduation': 0, 'meeting': 1, 'picnic': 2}
c_i = {int(value):key for key,value in class_indices.items()}
x.shape
```
# Visualizing
```
import matplotlib.pyplot as plt
%matplotlib inline
n = np.random.randint(0,len(x_train))
print(c_i[np.dot(y_train[n],[0,1,2]).astype('int')])
print(y_train[n])
plt.imshow(x_train[n])
plt.show()
from collections import Counter
train_classes_division = Counter(np.dot(y_train,[0,1,2]).astype('int'))
test_classes_division = Counter(np.dot(y_test,[0,1,2]).astype('int'))
print('The data are split as follows:')
total = sum(train_classes_division.values())
for n, qty in train_classes_division.items():
    print('\t{:10}:{:>3}%'.format(c_i[n], (qty * 100) // total))
```
# Building the network
```
img_dest_size = x.shape[1:3]
epochs = 10
batch_size = 64
# Standard image configuration
if K.image_data_format() == 'channels_first':
    input_shape = (3, *img_dest_size)
else:
    input_shape = (*img_dest_size, 3)
from keras.models import Sequential # for a sequential model
from keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
# Model definition
def build_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=input_shape, activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    #model.add(Dropout(0.9))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    return model

model = build_model()
history = model.fit(x, y, batch_size=batch_size, epochs=epochs, validation_split=0.3, verbose=0)
#https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_vec = range(1,len(acc)+1)
plt.figure(figsize=(20,10))
plt.subplot(121)
# "bo" is for "blue dot"
plt.plot(epochs_vec, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs_vec, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(122)
plt.plot(epochs_vec, acc, 'bo', label='Training acc')
plt.plot(epochs_vec, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
There is clearly overfitting after the first few epochs.
# ROC Curve
```
#Calculating the ROC curve
#create y_test
y_testc = y_test
print('Prediction')
#Create the prediction
y_score = model.predict(x_test)
print('finish')
#Based on https://www.dlology.com/blog/simple-guide-on-how-to-generate-roc-plot-for-keras-classifier/
from scipy import interp
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc
# Plot linewidth.
lw = 2
n_classes=3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_testc[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_testc.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])  # np.interp replaces the deprecated scipy.interp
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(1)
plt.plot(fpr["micro"], tpr["micro"],
         label='micro-average ROC curve (area = {0:0.2f})'.format(roc_auc["micro"]),
         color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
         label='macro-average ROC curve (area = {0:0.2f})'.format(roc_auc["macro"]),
         color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
    plt.plot(fpr[i], tpr[i], color=color, lw=lw,
             label='ROC curve of class {0} (area = {1:0.2f})'.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower left", bbox_to_anchor=(1, 0))
plt.show()
```
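The macro-averaging step in the cell above boils down to interpolating every per-class ROC curve onto a common FPR grid and averaging the TPRs. A self-contained sketch with toy curves (our own made-up data, not the notebook's):

```python
import numpy as np

# toy per-class ROC curves as (fpr, tpr) pairs, each monotonically increasing
curves = [
    (np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.8, 1.0])),
    (np.array([0.0, 0.2, 1.0]), np.array([0.0, 0.6, 1.0])),
]

# common grid: the union of all FPR breakpoints
all_fpr = np.unique(np.concatenate([fpr for fpr, _ in curves]))
# interpolate each TPR curve onto the grid, then average pointwise
mean_tpr = np.mean([np.interp(all_fpr, fpr, tpr) for fpr, tpr in curves], axis=0)
# trapezoidal area under the macro-averaged curve
macro_auc = float(np.sum(np.diff(all_fpr) * (mean_tpr[1:] + mean_tpr[:-1]) / 2))
print(all_fpr, mean_tpr, round(macro_auc, 3))
```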
# Compute Word Vectors using TruncatedSVD in Amazon Food Reviews.
Data Source: https://www.kaggle.com/snap/amazon-fine-food-reviews
The Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon.
- Number of reviews: 568,454
- Number of users: 256,059
- Number of products: 74,258
- Timespan: Oct 1999 - Oct 2012
- Number of Attributes/Columns in data: 10
Attribute Information:
1. index
2. Id
3. ProductId - unique identifier for the product
4. UserId - unique identifier for the user
5. ProfileName
6. HelpfulnessNumerator - number of users who found the review helpful
7. HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not
8. Score - rating between 1 and 5
9. Time - timestamp for the review
10. Summary - brief summary of the review
11. Text - text of the review
12. ProcessedText - Cleaned & Preprocessed Text of the review
**Objective: Perform following tasks on Amazon Food reviews:**<br>
**Task 1. Sample 25000 reviews then find top 10000 words corresponding to top 10000 Inverse Document Frequency(IDF) values.**<br>
**Task 2. Compute co-occurrence matrix on those 10000 words.** <br>
**Task 3. Find optimal value of number of components(reduced dimensions) using maximum variance.**<br>
**Task 4. Apply TruncatedSVD using optimal value of number of components.**<br>
**Task 5. Cluster words using K-Means.**<br>
[Q] How to determine if a review is positive or negative?
[Ans] We could use the Score/Rating. A rating of 4 or 5 could be considered a positive review, a rating of 1 or 2 negative, and a rating of 3 neutral and ignored. This is an approximate, proxy way of determining the polarity (positivity/negativity) of a review.
Loading the data
SQLite Database
In order to load the data, we have used the SQLite dataset, as it is easier to query and visualise the data efficiently. Since we only want the global sentiment of the recommendations (positive or negative), we will purposefully ignore all Scores equal to 3. If the score is above 3, the recommendation will be set to "positive"; otherwise, it will be set to "negative".
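The score-to-polarity rule described above is straightforward to express as a small helper (the function name here is our own, not from the notebook):

```python
def score_to_polarity(score):
    """Map a 1-5 star rating to 'positive'/'negative'; neutral 3s are ignored (None)."""
    if score == 3:
        return None
    return "positive" if score > 3 else "negative"

print([score_to_polarity(s) for s in [1, 2, 3, 4, 5]])
# ['negative', 'negative', None, 'positive', 'positive']
```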
```
import sqlite3
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.utils.extmath import randomized_svd
connection = sqlite3.connect('FinalAmazonFoodReviewsDataset.sqlite')
data = pd.read_sql_query("SELECT * FROM Reviews", connection)
data.head()
print(data.shape)
print(data["Score"].value_counts())
stop = set(stopwords.words("english")) #set of stopwords
sno = nltk.stem.SnowballStemmer("english")
print(stop)
def cleanhtml(sentence):  # function to clean html tags
    cleanr = re.compile("<.*?>")
    cleantext = re.sub(cleanr, " ", sentence)
    return cleantext

def cleanpunc(sentence):  # function to clean the word of any punctuation or special characters
    cleaned = re.sub(r'[?|!|\'|"|#]', r'', sentence)
    cleaned = re.sub(r'[.|,|)|(|\|/]', r' ', cleaned)
    return cleaned
#Code for removing stop-words from the 'Text' column
final_string = []
for sentence in data["Text"].values:
    filteredSentence = []
    sentenceHTMLCleaned = cleanhtml(sentence)
    for eachWord in sentenceHTMLCleaned.split():
        for sentencePunctCleaned in cleanpunc(eachWord).split():
            if sentencePunctCleaned.isalpha() and len(sentencePunctCleaned) > 2:
                if sentencePunctCleaned.lower() not in stop:
                    sentenceLower = sentencePunctCleaned.lower()
                    s = sno.stem(sentenceLower)
                    filteredSentence.append(s)
    EachReviewText = ' '.join(filteredSentence)
    final_string.append(EachReviewText)
data["CleanedText"] = final_string
data.head()
```
## Task 1. Sample 25000 reviews then find top 10000 words corresponding to top 10000 Inverse Document Frequency(IDF) values.
```
#taking 25000 random samples
Data = data.sample(n = 25000)
Data.head()
print(Data.shape)
print(Data["Score"].value_counts())
TFIDF_Vec= TfidfVectorizer(ngram_range=(1,1), stop_words = "english")
TFIDF_Count = TFIDF_Vec.fit_transform(Data["CleanedText"].values)
TFIDF_Count.shape
features = TFIDF_Vec.get_feature_names()
idfValues = TFIDF_Vec.idf_
d = dict(zip(features, 11 - idfValues))
sortedDict = sorted(d.items(), key = lambda d: d[1], reverse = True)
#Sort the (word, value) pairs by value, descending. The 'key' argument of
#sorted() selects the second tuple element (the transformed IDF value) as the
#sort key.
sortedDict = sortedDict[0:10000]
#taking top 10000 words corresponding to top 10000 Inverse Document Frequency(IDF) values.
len(sortedDict)
for i in range(10):
    print(sortedDict[i])
for i in range(10):
    print(sortedDict[i][0])
wordList_idf = []
for i in range(len(sortedDict)):
    wordList_idf.append(sortedDict[i][0])
len(wordList_idf)
```
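The `11 - idfValues` transform in the cell above works because 11 is a safe upper bound on the IDF values here. Assuming sklearn's default `smooth_idf=True` formula for `TfidfVectorizer`, the largest possible IDF over 25000 documents is about 10.43, attained by a word appearing in a single document:

```python
import math

def smoothed_idf(n_docs, doc_freq):
    """sklearn's TfidfVectorizer default: idf = ln((1 + n) / (1 + df)) + 1."""
    return math.log((1 + n_docs) / (1 + doc_freq)) + 1

print(round(smoothed_idf(25000, 1), 2))      # rarest word: largest possible IDF
print(round(smoothed_idf(25000, 25000), 2))  # word in every document: IDF = 1.0
```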
## Task 2. Compute co-occurrence matrix on those 10000 words.
```
Data["CleanedText"].head()
sent = Data["CleanedText"].iloc[3]
sent
len(sent.split())
#checking for any empty text
cnt = 0
for i in Data["CleanedText"]:
    cnt += 1
    if len(i.split()) == 0:
        print(cnt)
def co_occurrence(sentence_array, window_size, word_list):
    word_index = {w: j for j, w in enumerate(word_list)}  # word -> row/column, O(1) lookups
    co_occ = np.zeros((len(word_list), len(word_list)), dtype=int)
    for sentence in sentence_array:
        tokens = sentence.split()
        for pos, word in enumerate(tokens):
            row = word_index.get(word)
            if row is None:
                continue
            window_left = max(pos - window_size, 0)
            window_right = pos + window_size + 1  # +1 because the slice end is exclusive
            for context_word in tokens[window_left:window_right]:
                column = word_index.get(context_word)
                if column is not None:
                    co_occ[row][column] += 1
    return co_occ
#co_occurrence builds a co-occurrence matrix for the words in "word_list".
#Arguments:
#  sentence_array (ndarray): array of all the reviews/sentences.
#  window_size (int): context size (on each side) within which words count as co-occurring.
#  word_list (list): the words to include.
#Returns a square co-occurrence matrix; each row and each column corresponds to
#the word at the same position in "word_list".
sent_series = Data["CleanedText"].values
print(type(sent_series))
print(sent_series.shape)
print(len(wordList_idf))
co_occur_matrix = co_occurrence(sent_series, 5, wordList_idf)
print(co_occur_matrix)
print(co_occur_matrix.shape)
```
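As a sanity check on the windowing logic, it helps to count co-occurrences by hand on a toy sentence. This standalone snippet (our own toy data, independent of the function above) counts, for each listed word, the listed words within a symmetric two-word window:

```python
import numpy as np

word_list = ["tasty", "tea", "good"]
word_index = {w: j for j, w in enumerate(word_list)}
tokens = "very tasty tea very good tea".split()

co_occ = np.zeros((3, 3), dtype=int)
for pos, word in enumerate(tokens):
    if word not in word_index:
        continue
    left, right = max(pos - 2, 0), pos + 2 + 1  # slice end is exclusive
    for context in tokens[left:right]:          # note: the window includes the word itself
        if context in word_index:
            co_occ[word_index[word], word_index[context]] += 1

print(co_occ)
```

Checking a few entries by hand ("tea" sits within two words of "good" twice) confirms the expected counts.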
## Task 3. Find optimal value of number of components(reduced dimensions) using maximum variance.
```
k = [i for i in range(20,241,20)]
components = []
total_var = []
for j in k:
    svd = TruncatedSVD(n_components=j, n_iter=10)
    svd.fit(co_occur_matrix)
    var_perc = sum(svd.explained_variance_ratio_)
    components.append(j)
    total_var.append(var_perc)
xy = list(zip(components, total_var))
xy
plt.figure(figsize = (14, 12))
plt.plot(components, total_var)
plt.title("Number of Components VS Total Explained Variance", fontsize=25)
plt.xlabel("Number of Components", fontsize=25)
plt.ylabel("Total Explained Variance", fontsize=25)
plt.grid(linestyle='-', linewidth=0.5)
```
**We can see from the graph that we get approximately 91% of the variance at 140 components. This means we preserve 91% of the information even after reducing the dimensionality from 10000 to 140. Therefore, we take the number of components to be 140.**
## Task 4. Apply TruncatedSVD using optimal value of number of components.
```
svd = TruncatedSVD(n_components = 140, n_iter = 10)
svd.fit(co_occur_matrix)
var_perc = sum(svd.explained_variance_ratio_)
print("Percentage of variance explained = "+str(var_perc * 100)+"%")
U, Sigma, VT = randomized_svd(co_occur_matrix, n_components = 140, n_iter = 10)
U.shape
```
## Task 5. Cluster words using K-Means.
```
Data_Std = StandardScaler(with_mean = False).fit_transform(U)
print(Data_Std.shape)
print(type(Data_Std))
#taking number of cluster = 1000
KMeans_Apply = KMeans(n_clusters=1000, init = "k-means++", max_iter = 100, n_jobs = -1).fit(Data_Std)
Cluster_indices = {i: np.where(KMeans_Apply.labels_ == i) for i in range(KMeans_Apply.n_clusters)}
```
### Checking for similarity of words in clusters manually
```
#checking cluster 981
for i in Cluster_indices[981][0]:
    print(wordList_idf[i])
```
**In cluster number 981, the words above are related: understand, sorri, hesit, disagre, error, edit, written, convinc, etc.**
```
#checking cluster 954
for i in Cluster_indices[954][0]:
    print(wordList_idf[i])
```
**In cluster number 954, the words above are related: parent, childhood, cute, sister, memori, cigarett, nausea, butterscotch, peppermint, etc.**
```
#checking cluster 925
for i in Cluster_indices[925][0]:
    print(wordList_idf[i])
```
**In cluster number 925, the words above are related: upset, relax, stress, diseas, symptom, antibiot, heal, thyroid, immun, etc.**
```
#checking cluster 904
for i in Cluster_indices[904][0]:
    print(wordList_idf[i])
```
**In cluster number 904, the words above are related: disgust, gross, spoil, yuck, crap, pungent, harsh, stink, fragranc, lavend, etc.**
Results:
- we subsetted Ag1000g P2 (1142 samples) zarr to the positions of the amplicon inserts
- a total of 1417 biallelic SNPs were observed in all samples, only one amplicon (29) did not have variation
- we performed PCA directly on those SNPs without LD pruning
- PCA readily splits Angolan samples `AOcol` and general gambiae vs coluzzii
- populations `GW`, `GAgam`, `FRgam` and some outliers of `CMgam` can be separated
- overall, resolution of clusters is less complete than in whole-genome dataset: https://github.com/malariagen/ag1000g-phase2-data-paper/blob/master/notebooks/figure_PCA.ipynb
- impacts of individual amplicons are different. Highest impact amplicons are 28, 60, 7.
```
%run common.ipynb
# long population names
samples['population'] = samples.population.replace(pop_labels)
# concatenate biallelic site nalts into single array
ampl_flt = dict()
for ampl in callset:
    flt = callset[ampl]['biallelic'][:]
    nalt = callset[ampl]['NALT'][:]
    ampl_flt[ampl] = nalt[flt]
ampl_flt_nalts = np.concatenate([ampl_flt[ampl] for ampl in callset])
ampl_flt_nalts.shape
```
## Alternative PCA for Kenya
```
# country_filter = samples.population.isin(['KE','GW'])
# country_filter.value_counts()
# samples = samples[country_filter]
# read all SNPs
# ampl_snps = dict()
# for ampl in callset:
# ampl_snps[ampl] = callset[ampl]['genotype']
# cat_snps = allel.GenotypeChunkedArray(np.concatenate(list(ampl_snps.values())))
# subset to countries
# cat_snps = cat_snps[:, country_filter]
# recalculate biallelic nalts
# ac = cat_snps.count_alleles()
# flt = (ac.max_allele() == 1) & (ac[:, :2].min(axis=1) > 1)
# ampl_flt_nalts = cat_snps[flt].to_n_alt()
```
## PCA
```
# skip ld_prune, straight to PCA
coords, model = allel.pca(ampl_flt_nalts, n_components=20, scaler='patterson')
fig, ax = plt.subplots()
ax.plot(model.explained_variance_ratio_, 'go')
ax.set_xlabel("principal component")
ax.set_ylabel("variance explained")
plt.xticks(np.arange(0,20, 1));
# add first 12 pc values to samples table
for component in range(12):
    samples['pc{}'.format(component + 1)] = coords[:, component]
samples.head()
fig, axs = plt.subplots(2, 2, figsize=(12, 12))
for i, ax in enumerate(axs.flatten()):
    comp1 = i * 2 + 1
    comp2 = i * 2 + 2
    # explained_variance_ratio_ is 0-indexed: PC1's share is at index 0
    pc_var1 = model.explained_variance_ratio_[comp1 - 1] * 100
    pc_var2 = model.explained_variance_ratio_[comp2 - 1] * 100
    legend = ('full' if i == 3 else False)
    g = sns.scatterplot(data=samples,
                        x='pc{}'.format(comp1),
                        y='pc{}'.format(comp2),
                        hue='population',
                        style='m_s',
                        palette=pop_colors,
                        legend=legend,
                        ax=ax)
    # attempt to place legend outside
    # g.legend(loc='center left', bbox_to_anchor=(1.25, 0.5), ncol=1)
    ax.set_xlabel('PC{} ({:.2f}%)'.format(comp1, pc_var1))
    ax.set_ylabel('PC{} ({:.2f}%)'.format(comp2, pc_var2))
plt.tight_layout()
```
## Impact of individual amplicons on PCA
```
# extract PCA component coefficients
components = pd.DataFrame(model.components_.T,
columns=range(1,21)) #.abs()
components.head()
# match variants to amplicons
var_ampl = list()
for ampl in callset:
    nvar = ampl_flt[ampl].shape[0]
    var_ampl.extend([ampl] * nvar)
len(var_ampl)
components['ampl'] = var_ampl
fig, axs = plt.subplots(1, 2, figsize=(10, 12))
sns.heatmap(components.groupby('ampl').mean().iloc[:, :12],
center=0, cmap='coolwarm', cbar=False, ax=axs[0])
sns.heatmap(components.groupby('ampl').std().iloc[:, :12],
center=0, cmap='coolwarm', cbar=False, ax=axs[1])
for i in range(2):
    axs[i].set_xlabel('PC')
axs[0].set_title('Component mean')
axs[1].set_title('Component std');
```
Problems: 8, 12, 18
```
%matplotlib inline
import numpy as np
import scipy.stats as st
import pandas as pd
import statsmodels.api as sm
import statsmodels.stats.api as sms
import statsmodels.formula.api as smf
import statsmodels.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
from statsmodels import stats
```
## 10.8
### a.
```
Image(filename='/Users/kevin/Dropbox/School/STA-580/ch10hw/Scatterplot of RESI-Y-X234 vs RESI-X1-234.png')
Image(filename='/Users/kevin/Dropbox/School/STA-580/ch10hw/Scatterplot of RESI-Y-X134 vs RESI-X2-134.png')
Image(filename='/Users/kevin/Dropbox/School/STA-580/ch10hw/Scatterplot of RESI-Y-X124 vs RESI-X3-124.png')
Image(filename='/Users/kevin/Dropbox/School/STA-580/ch10hw/Scatterplot of RESI-Y-X123 vs RESI-X4-123.png')
```
### b.
These plots show that $X_1$, $X_2$ and $X_4$ are appropriate in the model and that $X_3$ is not: the plots for those three predictors show a linear relationship with nonzero slope, while the plot of $e(Y \vert X_1,X_2,X_4)$ against $e(X_3 \vert X_1,X_2,X_4)$ shows no relationship.
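An added-variable plot is built from two residual regressions: regress $Y$ on the other predictors, regress the candidate predictor on those same predictors, and plot one set of residuals against the other. A minimal NumPy sketch with made-up data (all names and values here are illustrative, not the textbook's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# intercept plus two "other" predictors, and the candidate predictor x1
X_others = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
x1 = rng.normal(size=n)
y = 2.0 * x1 + X_others @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.1, size=n)

def residuals(target, predictors):
    """Residuals from an OLS fit of target on predictors."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return target - predictors @ beta

e_y = residuals(y, X_others)    # e(Y | others)
e_x1 = residuals(x1, X_others)  # e(X1 | others)

# By the Frisch-Waugh-Lovell theorem, the slope of e_y on e_x1 equals
# x1's coefficient in the full model (2.0 here, up to noise).
slope = float(np.dot(e_x1, e_y) / np.dot(e_x1, e_x1))
print(slope)
```

A nonzero slope in this plot is exactly the "linear relationship" criterion used in part (b).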
## 10.12
### a. & b.
SAS influence output:
| Obs | Residual | RStudent | Hat Diag | Cov Ratio | DFFITS | DFBETAS - Intercept | DFBETAS-X1 | DFBETAS-X2 | DFBETAS-X3 | DFBETAS-X4 |
|-----|----------|----------|----------|-----------|---------|---------------------|------------|------------|------------|------------|
| 1 | -1.0357 | -0.9399 | 0.0621 | 1.0744 | -0.2419 | -0.2142 | 0.0565 | 0.1839 | 0.0567 | -0.0757 |
| 2 | -1.5138 | -1.3926 | 0.0745 | 1.0161 | -0.395 | 0.018 | -0.2556 | 0.0008 | -0.2731 | 0.1606 |
| 3 | -0.5911 | -0.577 | 0.1953 | 1.2987 | -0.2843 | -0.2318 | -0.1553 | 0.2364 | 0.1008 | -0.0115 |
| 4 | -0.1336 | -0.1191 | 0.0391 | 1.1109 | -0.024 | 0.004 | 0.0079 | -0.0141 | -0.0026 | 0.0156 |
| 5 | 0.3133 | 0.2782 | 0.0311 | 1.097 | 0.0498 | 0.011 | 0.0249 | -0.0001 | 0.0064 | -0.0317 |
| 6 | -3.1872 | -3.0721 | 0.0748 | 0.6385 | -0.8735 | 0.1951 | -0.5649 | -0.1767 | -0.6172 | 0.4482 |
| 7 | -0.5384 | -0.4817 | 0.0433 | 1.0997 | -0.1025 | -0.0128 | 0.0213 | -0.0211 | -0.0477 | 0.0629 |
| 8 | 0.2363 | 0.2312 | 0.2022 | 1.3345 | 0.1164 | -0.0142 | -0.0072 | 0.003 | 0.0955 | 0.0126 |
| 9 | 1.9892 | 1.8756 | 0.1009 | 0.945 | 0.6284 | 0.5021 | -0.2294 | -0.4548 | -0.3981 | 0.4023 |
| 10 | 0.1058 | 0.0938 | 0.0273 | 1.0978 | 0.0157 | -0.003 | -0.0048 | 0.0043 | -0.0034 | 0.0069 |
| 11 | 0.0231 | 0.0209 | 0.0659 | 1.1438 | 0.0056 | -0.0041 | -0.0002 | 0.004 | 0.0016 | 0.0004 |
| 12 | -0.3371 | -0.3011 | 0.0419 | 1.1085 | -0.063 | 0.0085 | 0.0462 | -0.0245 | 0.0135 | -0.0143 |
| 13 | 0.7179 | 0.6398 | 0.0337 | 1.076 | 0.1194 | 0.0335 | -0.0683 | 0.0198 | -0.0438 | -0.0317 |
| 14 | -0.3924 | -0.3536 | 0.0582 | 1.1251 | -0.0879 | 0.0267 | -0.0026 | -0.0138 | 0.009 | -0.056 |
| 15 | -0.201 | -0.1809 | 0.0565 | 1.13 | -0.0443 | 0.0067 | 0.0314 | -0.0136 | 0.0104 | -0.0197 |
| 16 | -0.8149 | -0.7446 | 0.0788 | 1.1179 | -0.2177 | 0.0098 | 0.1552 | -0.036 | 0.0784 | -0.1235 |
| 17 | 0.1017 | 0.0904 | 0.0338 | 1.1053 | 0.0169 | 0.0083 | -0.0098 | -0.0056 | -0.0015 | 0.0077 |
| 18 | -1.7591 | -1.6155 | 0.0632 | 0.9612 | -0.4196 | -0.0347 | -0.0498 | 0.1303 | 0.0248 | -0.3461 |
| 19 | -1.2101 | -1.1075 | 0.0736 | 1.0635 | -0.3122 | -0.0655 | -0.0975 | 0.1297 | 0.1178 | -0.2324 |
| 20 | -0.6343 | -0.5628 | 0.0261 | 1.0742 | -0.0922 | -0.0261 | 0.0456 | -0.0152 | 0.0232 | 0.0301 |
| 21 | -0.366 | -0.3273 | 0.0441 | 1.1098 | -0.0703 | 0.0424 | 0.0258 | -0.05 | -0.0265 | -0.0029 |
| 22 | 0.2886 | 0.2583 | 0.046 | 1.115 | 0.0567 | -0.0154 | -0.0329 | 0.036 | -0.007 | -0.0147 |
| 23 | -0.0932 | -0.0833 | 0.0449 | 1.1182 | -0.0181 | 0.0024 | 0.0067 | -0.0101 | -0.0018 | 0.012 |
| 24 | 0.2339 | 0.2083 | 0.0371 | 1.1064 | 0.0409 | 0.0103 | -0.0159 | 0.0086 | -0.0109 | -0.021 |
| 25 | -0.8533 | -0.7723 | 0.0604 | 1.093 | -0.1958 | 0.05 | -0.0848 | -0.0922 | -0.0255 | 0.1476 |
| 26 | -2.1239 | -1.9783 | 0.074 | 0.8948 | -0.5593 | -0.2615 | 0.0525 | 0.2343 | -0.2548 | -0.0338 |
| 27 | 0.466 | 0.4166 | 0.0424 | 1.103 | 0.0877 | 0.0026 | 0.0616 | 0.0068 | 0.0126 | -0.05 |
| 28 | -0.574 | -0.5118 | 0.0362 | 1.0894 | -0.0992 | 0.0225 | -0.0702 | -0.0234 | -0.0221 | 0.0348 |
| 29 | -1.0688 | -0.956 | 0.0341 | 1.0411 | -0.1795 | -0.0284 | 0.0515 | -0.0155 | -0.0961 | 0.054 |
| 30 | -0.1977 | -0.1764 | 0.0399 | 1.1106 | -0.036 | 0.0015 | -0.0209 | 0.0059 | -0.0025 | -0.0149 |
| 31 | -1.1217 | -1.0054 | 0.0367 | 1.0373 | -0.1962 | -0.1397 | 0.0902 | 0.069 | 0.1115 | -0.0036 |
| 32 | -0.1739 | -0.1547 | 0.0352 | 1.1056 | -0.0295 | -0.016 | 0.0163 | 0.0093 | 0.0183 | -0.0135 |
| 33 | -1.0301 | -0.9242 | 0.0407 | 1.0525 | -0.1903 | 0.0551 | 0.1243 | -0.1096 | 0.0189 | -0.0025 |
| 34 | -0.091 | -0.0816 | 0.0503 | 1.1246 | -0.0188 | -0.0072 | -0.0146 | 0.0068 | 0.0013 | 0.0053 |
| 35 | 0.2151 | 0.1921 | 0.0425 | 1.1131 | 0.0405 | -0.0037 | 0.0291 | 0.0066 | 0.0058 | -0.0207 |
| 36 | 0.7848 | 0.6976 | 0.0275 | 1.0637 | 0.1173 | -0.0261 | 0.0677 | 0.0353 | 0.0457 | -0.054 |
| 37 | 1.0839 | 0.9841 | 0.0618 | 1.0681 | 0.2527 | 0.1791 | -0.066 | -0.1694 | -0.0029 | 0.1157 |
| 38 | -2.1325 | -1.9932 | 0.0798 | 0.897 | -0.587 | -0.2919 | -0.1748 | 0.4009 | 0.2402 | -0.4389 |
| 39 | -0.1855 | -0.1648 | 0.032 | 1.1018 | -0.03 | -0.0085 | 0.0107 | -0.0053 | 0.004 | 0.0164 |
| 40 | -1.1204 | -1.0182 | 0.0627 | 1.0643 | -0.2633 | -0.107 | -0.1756 | 0.0729 | 0.0388 | 0.1282 |
| 41 | -0.0128 | -0.0115 | 0.0609 | 1.1377 | -0.0029 | -0.0026 | 0.0009 | 0.0018 | 0.0017 | -0.0002 |
| 42 | 2.5009 | 2.323 | 0.0514 | 0.7958 | 0.5407 | 0.2537 | 0.3713 | -0.2068 | -0.122 | -0.1716 |
| 43 | -1.5828 | -1.4861 | 0.1084 | 1.0365 | -0.5182 | -0.5074 | 0.0843 | 0.4107 | 0.2943 | -0.0911 |
| 44 | 0.9296 | 0.8378 | 0.0513 | 1.075 | 0.1948 | 0.0111 | 0.1213 | 0.0214 | -0.0012 | -0.1244 |
| 45 | 0.3942 | 0.3541 | 0.0521 | 1.1178 | 0.083 | 0.064 | 0.0004 | -0.0458 | 0.003 | -0.019 |
| 46 | 0.1172 | 0.1044 | 0.0377 | 1.1095 | 0.0207 | -0.0028 | -0.0136 | 0.0095 | -0.0055 | 0.0003 |
| 47 | 0.8153 | 0.7425 | 0.0725 | 1.1106 | 0.2076 | 0.201 | -0.0127 | -0.1623 | -0.1195 | 0.0298 |
| 48 | 1.6059 | 1.4737 | 0.0671 | 0.993 | 0.3953 | 0.3515 | -0.1035 | -0.2304 | -0.2104 | -0.0217 |
| 49 | 0.5579 | 0.5004 | 0.0477 | 1.1035 | 0.112 | -0.0475 | -0.0538 | 0.0849 | 0.0073 | -0.0402 |
| 50 | 0.4947 | 0.4448 | 0.053 | 1.1134 | 0.1052 | -0.0421 | -0.0517 | 0.0777 | -0.0036 | -0.0349 |
| 51 | 0.2076 | 0.1874 | 0.0628 | 1.1374 | 0.0485 | -0.0176 | -0.0292 | 0.0343 | -0.0028 | -0.0133 |
| 52 | -0.032 | -0.0286 | 0.0443 | 1.1179 | -0.0062 | 0.0001 | 0.0045 | -0.0017 | 0.0023 | -0.0018 |
| 53 | 1.1558 | 1.1241 | 0.1792 | 1.1974 | 0.5252 | -0.0196 | -0.024 | -0.0243 | 0.418 | 0.049 |
| 54 | 0.2343 | 0.2172 | 0.1115 | 1.1988 | 0.077 | -0.0399 | -0.0171 | 0.0566 | 0.0552 | -0.0466 |
| 55 | -1.0735 | -0.9584 | 0.0303 | 1.0368 | -0.1695 | -0.0201 | 0.0983 | -0.0514 | 0.0562 | 0.036 |
| 56 | 1.0596 | 0.9508 | 0.0402 | 1.0484 | 0.1945 | -0.0316 | 0.1265 | 0.0523 | 0.0407 | -0.112 |
| 57 | -0.2617 | -0.2345 | 0.0486 | 1.1189 | -0.053 | -0.0025 | -0.0266 | 0.0115 | 0.0146 | -0.026 |
| 58 | 1.0317 | 0.9211 | 0.0313 | 1.0427 | 0.1657 | 0.1081 | -0.072 | -0.0619 | -0.0043 | 0.0044 |
| 59 | -0.346 | -0.3076 | 0.0331 | 1.0981 | -0.0569 | 0.0172 | -0.0177 | -0.0108 | -0.0001 | -0.0213 |
| 60 | 0.2034 | 0.1819 | 0.045 | 1.1164 | 0.0395 | -0.0019 | -0.0164 | 0.0193 | 0.0035 | -0.0255 |
| 61 | 0.918 | 0.9672 | 0.3037 | 1.4422 | 0.6387 | -0.0554 | 0.0242 | -0.0076 | 0.5457 | 0.0038 |
| 62 | 2.9441 | 2.7841 | 0.0579 | 0.6936 | 0.6903 | 0.2758 | -0.3335 | -0.2595 | 0.0627 | 0.4051 |
| 63 | 2.4597 | 2.2791 | 0.0491 | 0.8039 | 0.5178 | -0.0749 | 0.285 | -0.0056 | -0.084 | 0.1759 |
| 64 | 1.8591 | 1.7407 | 0.0939 | 0.9673 | 0.5603 | -0.0622 | 0.0805 | -0.0554 | -0.1463 | 0.4265 |
| 65 | 1.4518 | 1.3764 | 0.1291 | 1.083 | 0.5299 | -0.0423 | -0.0271 | -0.0695 | -0.1181 | 0.4612 |
| 66 | -0.4839 | -0.4355 | 0.055 | 1.1165 | -0.1051 | 0.0083 | -0.0789 | 0.006 | 0.0103 | -0.0108 |
| 67 | -0.7563 | -0.6773 | 0.0422 | 1.082 | -0.1421 | 0.0229 | -0.1 | -0.0056 | -0.0004 | -0.012 |
| 68 | 2.0114 | 1.8418 | 0.0482 | 0.8998 | 0.4144 | -0.2059 | 0.1598 | 0.1433 | 0.0899 | 0.1034 |
| 69 | 0.0786 | 0.0718 | 0.0855 | 1.168 | 0.022 | 0.0208 | -0.0044 | -0.0154 | -0.0121 | 0.0011 |
| 70 | 0.009893 | 0.008825 | 0.0405 | 1.1136 | 0.0018 | 0.0002 | -0.0008 | 0.0007 | -0.0002 | -0.001 |
| 71 | 1.7669 | 1.6028 | 0.0404 | 0.9409 | 0.3289 | 0.0065 | 0.24 | 0.0174 | 0.0061 | -0.1452 |
| 72 | -0.4639 | -0.4158 | 0.0473 | 1.1087 | -0.0927 | -0.0214 | 0.0442 | 0.0186 | -0.0233 | -0.0467 |
| 73 | -0.5104 | -0.4563 | 0.0421 | 1.1001 | -0.0957 | 0.0557 | -0.0328 | -0.0522 | -0.0199 | 0.002 |
| 74 | -0.1064 | -0.0941 | 0.0242 | 1.0943 | -0.0148 | 0.0006 | 0.0077 | -0.0065 | 0.0016 | 0.0041 |
| 75 | 1.2094 | 1.1012 | 0.0641 | 1.0537 | 0.2882 | -0.0302 | 0.2329 | 0.0288 | 0.0117 | -0.1094 |
| 76 | -0.2611 | -0.2324 | 0.0356 | 1.1039 | -0.0446 | -0.0032 | 0.0217 | -0.0169 | 0.0104 | 0.0195 |
| 77 | -0.6275 | -0.5626 | 0.0461 | 1.0967 | -0.1237 | 0.0057 | 0.0512 | -0.0609 | 0.0144 | 0.0723 |
| 78 | 0.9101 | 0.8302 | 0.074 | 1.1023 | 0.2347 | 0.0413 | 0.1945 | -0.0372 | 0.0406 | -0.1133 |
| 79 | -0.5508 | -0.4924 | 0.0413 | 1.0966 | -0.1022 | 0.0498 | -0.0487 | -0.0358 | -0.0479 | -0.0068 |
| 80 | -2.0302 | -1.9232 | 0.1072 | 0.9408 | -0.6664 | -0.0734 | 0.0616 | 0.1945 | 0.2231 | -0.6077 |
| 81 | -0.9068 | -0.8095 | 0.0336 | 1.0586 | -0.151 | 0.0757 | -0.0525 | -0.0825 | -0.0196 | 0.0257 |
Find the Bonferroni critical value, $t(1-\alpha/2n;\ n-p-1)$:
```
alpha = 0.1
# Bonferroni critical value: use the inverse CDF (ppf), not the CDF
bonf = st.t.ppf(1-alpha/(2*81), 81-5-1)
bonf
```
If the absolute value of a studentized deleted residual is greater than this critical value (approximately 3.36), we conclude the case is an outlier. Even the most extreme case, observation 6 with a studentized deleted residual of -3.07, falls short of this threshold, so the Bonferroni procedure flags no outliers.
The diagonal elements of the hat matrix are listed in the fourth column above. We compare them to the mean leverage $\bar h = \frac{p}{n}$ (Kutner et al.'s rule of thumb flags cases with $h_{ii} > 2\bar h$ as outlying with respect to X):
```
p = 5
n = 81
h_bar = p/n
h_bar
```
There are many observations with $h_{ii} > \bar h$. Since the textbook's guidelines seem to identify far too many outliers, I'm going to see what Minitab considers to be unusual observations:
Fits and Diagnostics for Unusual Observations

| Obs | Y-rate | Fit | SE Fit | 95% CI | Resid | Std Resid | Del Resid | HI |
|---|---|---|---|---|---|---|---|---|
| 3 | 10.500 | 11.091 | 0.502 | (10.090, 12.092) | -0.591 | -0.58 | -0.58 | 0.195321 |
| 6 | 10.500 | 13.687 | 0.311 | (13.068, 14.306) | -3.187 | -2.91 | -3.07 | 0.074806 |
| 8 | 16.500 | 16.264 | 0.511 | (15.246, 17.282) | 0.236 | 0.23 | 0.23 | 0.202186 |
| 42 | 15.500 | 12.999 | 0.258 | (12.486, 13.512) | 2.501 | 2.26 | 2.32 | 0.051383 |
| 61 | 16.500 | 15.582 | 0.626 | (14.334, 16.830) | 0.918 | 0.97 | 0.97 | 0.303671 |
| 62 | 19.250 | 16.306 | 0.274 | (15.761, 16.851) | 2.944 | 2.67 | 2.78 | 0.057922 |
| 63 | 17.750 | 15.290 | 0.252 | (14.789, 15.792) | 2.460 | 2.22 | 2.28 | 0.049087 |

| Obs | Cook's D | DFITS | Flag |
|---|---|---|---|
| 3 | 0.02 | -0.284280 | X |
| 6 | 0.14 | -0.873549 | R |
| 8 | 0.00 | 0.116414 | X |
| 42 | 0.06 | 0.540651 | R |
| 61 | 0.08 | 0.638721 | X |
| 62 | 0.09 | 0.690332 | R |
| 63 | 0.05 | 0.517814 | R |

R = large residual, X = unusual X
Looking through this list of unusual observations, I note that observations 3, 6, 8, and 61 have a leverage value greater than $\bar h$. Minitab flags all of these except observation 6 as having an unusual X value. I also note that observations 6, 62, 42 and 63 have the largest studentized deleted residuals in absolute value (from part (a)), with case 6 the most extreme.
I'm now going to attempt to use Python to identify outliers.
```
# Load in the data
df_cp = pd.read_table('/Users/kevin/Dropbox/School/STA-580/ch6hw/CH06PR18.txt',
                       sep=r'\s+', index_col=False, engine='python',
names=['Y-rate', 'X1-age', 'X2-expenses', 'X3-vacancy', 'X4-footage'])
df_cp.index += 1 # I want pandas to use a 1-based index so that index = observation number
df_cp.head()
Y = df_cp['Y-rate']
X = df_cp[['X1-age', 'X2-expenses', 'X3-vacancy', 'X4-footage']]
X = sm.add_constant(X)
model = sm.OLS(Y, X)
res = model.fit()
influence = res.get_influence()
df_influence = influence.summary_frame()
df_influence.head()
# Let's find all the observations for which the studentized deleted residual is greater than the Bonferroni critical value
outliers_bonf = []
for row in df_influence.itertuples():
if np.abs(row.student_resid) > bonf:
outliers_bonf.append(row.Index)
print(outliers_bonf)
# Now let's find all the observations for which the hat matrix diagonal is greater than p/n
outliers_hat = []
for row in df_influence.itertuples():
if row.hat_diag > h_bar:
outliers_hat.append(row.Index)
print(outliers_hat)
```
### c.
Doing this in Minitab was a mess, so I'm going to attempt it in Python. This means I need to learn some matrix algebra in Python. I'm following along with http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html.
```
from scipy import linalg
# don't forget the constant
X_new = np.array([[1], [10], [12.00], [0.05], [350000]]) # vector
X_new
X_new.T # transpose of X_new
h_new_new = X_new.T.dot(linalg.inv(X.T.dot(X))).dot(X_new) # (10.29), p. 400
h_new_new = h_new_new[0][0] # extract value from 1x1 array
h_new_new
```
To be thorough, let's get the range of leverage values:
```
(df_influence.hat_diag.min(), df_influence.hat_diag.max())
```
$h_{\text{new,new}} = 0.0529$ is well within the range of leverage values. Therefore, this estimate will _not_ involve a hidden extrapolation.
### d.
We're interested in the influence of cases 3, 8, 53 and 61 with respect to X and cases 6 and 62 with respect to Y. The DFFITS, DFBETAS and Cook's distance values can be seen in the SAS output table above or in the filtered table below.
#### DFFITS
Let's start by considering the DFFITS values. Kutner et al. suggest (p. 401) that we consider a case influential if $\left|DFFITS\right| > 2\sqrt{\frac{p}{n}}$ for large data sets. I don't know whether 81 observations counts as a large data set, but I'm going to assume it does. Let's calculate that critical value:
```
dffit_crit = 2*np.sqrt(p/n)
dffit_crit
# Let's make a list of the cases under consideration
outliers = [3, 6, 8, 53, 61, 62]
outliers = [x-1 for x in outliers] # need to subtract one from each to use df_influence.iloc and get correct cases
df_influence_outliers = df_influence.iloc[outliers]
df_influence_outliers
```
Cases 6, 53, 61 and 62 have $\left|DFFITS\right| > 0.497$, with case 6 being the most extreme. These four cases are certainly not the only ones that meet this criterion, however, so we must consider the other measures of influence.
#### Cook's Distance
Let's check the Cook's distance values against the corresponding F-distribution.
```
# First, we need the corresponding F distribution
f_cp = st.f(p, n-p) # f_cp will be F(5, 76)
# Now, find the percentile of each Cook's distance value within
# that F distribution (the CDF evaluated at D_i gives the percentile)
for row in df_influence_outliers.itertuples():
    percentile = f_cp.cdf(row.cooks_d) # percentile of D_i in F(p, n-p)
    print((row.Index, percentile))
```
Kutner et al. (p. 404) relate each Cook's distance to its percentile within the $F(p, n-p)$ distribution, which is the cumulative distribution function evaluated at $D_i$ (not the inverse CDF). Looking at the numbers above, observation 6 is by far the most influential on all fitted values.
William G. Jacoby from Michigan State University recommends (http://polisci.msu.edu/jacoby/icpsr/regress3/lectures/week3/11.Outliers.pdf) plotting the Cook's distance values, because the relative distances matter more than any comparison to a critical value. I agree with this, since there seems to be quite a bit of disagreement over what that critical value should be. So here's a plot of Cook's distance values vs. case index number.
```
sns.jointplot(x=df_influence.index, y=df_influence.cooks_d)
```
This confirms that case 6 is by far the most influential.
#### DFBETAS
Kutner et al. recommend comparing the absolute value of each DFBETAS value against $\frac{2}{\sqrt n}$.
```
dfb_crit = 2/np.sqrt(n)
dfb_crit
```
Comparing the DFBETAS against this critical value, it appears that cases 6 and 62 have the strongest influence on the greatest number of regression coefficients.
#### Conclusion
Cases 6 and 62 should be considered for removal first. It may also be worth considering cases 53 and 61, especially if the effect of $X_3$ (vacancy) is important. In light of the comments on considering groups of outlying cases together (Kutner et al. p. 406, "Some Final Comments"), cases 53, 61 and 62 should probably be considered together, since they are close to each other with respect to most of these measures of influence.
### e.
So, I already have the fitted values from the regression including all cases. I need to fit another regression without the influential cases and then add back in a fitted value for the X values for the missing case. Because this is time-intensive, I'm only going to consider the most influential case, case 6.
```
# Fit the regression without case 6
Y_reduced = df_cp.drop(6)['Y-rate'] # drop the 6th case
X_reduced = df_cp.drop(6)[['X1-age', 'X2-expenses', 'X3-vacancy', 'X4-footage']]
X_reduced = sm.add_constant(X_reduced)
res_reduced = sm.OLS(Y_reduced, X_reduced).fit() # fit an ord. least squares regression to reduced data set
res_reduced.params
# convert fits to a data type I can modify
fits_reduced = res_reduced.fittedvalues.to_dict()
# calculate the fit for the case we removed and store it with the rest of the fits
par = res_reduced.params
fits_reduced[6] = par['const'] + 15*par['X1-age'] + 9.45*par['X2-expenses'] + 0.24*par['X3-vacancy'] + 101385*par['X4-footage']
# Ensure we have a fit for the 6th case
fits_reduced[6]
# convert original fits to dictionary data type also
fits = res.fittedvalues.to_dict()
# Ensure both dicts have length n
(len(fits), len(fits_reduced))
# Calculate the average absolute percent difference
total = 0  # avoid shadowing the built-in sum()
for x in range(1,82):
    total += np.abs((fits_reduced[x] - fits[x])/fits[x])
aapd = total*100/n
aapd
```
The average absolute percent difference is only 0.56% for the most influential case. It's not even worth considering the other cases. I think the potential outlier cases we've been considering do not exert undue influence on inferences and therefore remedial action is not required.
### f.
The index plot of Cook's distance values is displayed above in part (d). As I noted then, case 6 is the most influential by far, with $D_6=0.137$. There are many cases with $0.02 < D_i < 0.10$, but as we've seen, none of them are influential enough to warrant remedial measures.
## 10.18
### a.
```
sns.pairplot(df_cp)
```
Correlation: X1-age, X2-expenses, X3-vacancy, X4-footage (cell contents: Pearson correlation, with p-value in parentheses)

| | X1-age | X2-expenses | X3-vacancy |
|---|---|---|---|
| X2-expenses | 0.389 (0.000) | | |
| X3-vacancy | -0.253 (0.023) | -0.380 (0.000) | |
| X4-footage | 0.289 (0.009) | 0.441 (0.000) | 0.081 (0.474) |
The scatter plot matrix and correlation matrix show that there are _not_ strong pairwise linear associations between any of the predictor variables. Expenses ($X_2$) and square footage ($X_4$) are the only pair with any visible linear relationship, but $r$ is only 0.441 (p-value ≈ 0). The correlation between $X_3$ and $X_4$ has both a high p-value (0.474) and a tiny $r$ (0.081), so there is no evidence of a relationship there.
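As a cross-check on Minitab's table, the same pairwise correlations and p-values can be computed with `scipy.stats.pearsonr`. This is a sketch: it uses a small synthetic frame as a stand-in for `df_cp`'s predictors (only the column names are carried over from the data set; the numbers here are made up).

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical stand-in for df_cp's four predictor columns
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(81, 4)),
                 columns=['X1-age', 'X2-expenses', 'X3-vacancy', 'X4-footage'])

# Pairwise Pearson r and p-value, mirroring Minitab's correlation table
pairs = {}
cols = list(X.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        r, p = stats.pearsonr(X[a], X[b])
        pairs[(a, b)] = (r, p)
        print(f'{a:12s} vs {b:12s}: r = {r:+.3f}, p = {p:.3f}')
```

Running this on the real `df_cp` columns should reproduce the Minitab values above.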
### b.
Regression Analysis: X1-age versus X2-expenses, X3-vacancy, X4-footage

| S | R-sq | R-sq(adj) | R-sq(pred) |
|---|---|---|---|
| 6.07049 | 19.38% | 16.24% | 12.52% |

Regression Analysis: X2-expenses versus X1-age, X3-vacancy, X4-footage

| S | R-sq | R-sq(adj) | R-sq(pred) |
|---|---|---|---|
| 2.05090 | 39.33% | 36.96% | 33.93% |

Regression Analysis: X3-vacancy versus X1-age, X2-expenses, X4-footage

| S | R-sq | R-sq(adj) | R-sq(pred) |
|---|---|---|---|
| 0.119211 | 24.45% | 21.50% | 13.45% |

Regression Analysis: X4-footage versus X1-age, X2-expenses, X3-vacancy

| S | R-sq | R-sq(adj) | R-sq(pred) |
|---|---|---|---|
| 93560.4 | 29.21% | 26.46% | 23.63% |
I'm using (10.41), $(VIF)_k = \frac{1}{1 - R_k^2}$, where $R_k^2$ is the coefficient of determination from regressing $X_k$ on the other three predictors.
```
r2s = [0.1938, 0.3933, 0.2445, 0.2921] #R^2 values from above
for index, r2 in enumerate(r2s, start=1):
print(('X{i}'.format(i=index), 1/(1-r2))) # calculate VIF values
```
All of these $(VIF)_k$ values are close to 1, meaning that multicollinearity is _not_ an issue for this data. This agrees with my finding in part (a).
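As a sanity check on (10.41), the auxiliary $R_k^2$ regressions can be run directly in NumPy rather than read off Minitab output. The sketch below uses synthetic data as a stand-in; the real computation would use the four predictor columns of `df_cp`.

```python
import numpy as np

def vif(X):
    """(VIF)_k = 1 / (1 - R_k^2), where R_k^2 comes from regressing
    column k of X on all the other columns (plus an intercept)."""
    n, p = X.shape
    out = []
    for k in range(p):
        y = X[:, k]
        A = np.column_stack([np.ones(n), np.delete(X, k, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(81, 4))       # independent columns -> VIFs near 1
print([round(v, 2) for v in vif(X)])
```

With independent columns the VIFs sit near 1, exactly as we found for this data; strongly collinear columns would push them into the hundreds or thousands.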
# Index
- Server & Client Architecture
- URL
- Get & Post
- Internet
- OSI 7 Layer
- cookie & session & cache
- Web Status Code
- Web Language & Framework
- Spider & Bot & Scraping & Crawling
#### Server & Client Architecture
- Client
    - Requests data from the server through a browser
- Server
    - When the client requests data, serves it in response (the browser reads the result as HTML, CSS, and JS)
#### URL
- Uniform Resource Locator
- Example)
- http://news.naver.com:80/main/read.nhn?mode=LSD&mid=shm&sid1=105&oid=001&aid=0009847211#da_727145
- http:// - Protocol
- news - Sub Domain
- naver.com - Domain
- 80 - Port
- /main/ - path
- read.nhn - page
- ?mode=LSD&mid=shm&sid1=105&oid=001&aid=0009847211#da_727145 - query
- #da_727145 - fragment
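These pieces can be pulled apart programmatically with Python's standard library, shown here on the example URL above:

```python
from urllib.parse import urlparse, parse_qs

u = urlparse('http://news.naver.com:80/main/read.nhn'
             '?mode=LSD&mid=shm&sid1=105&oid=001&aid=0009847211#da_727145')

print(u.scheme)    # 'http'            -> protocol
print(u.hostname)  # 'news.naver.com'  -> sub domain + domain
print(u.port)      # 80                -> port
print(u.path)      # '/main/read.nhn'  -> path + page
print(u.fragment)  # 'da_727145'       -> fragment
print(parse_qs(u.query)['sid1'])  # ['105'] -> one key from the query string
```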
#### Get & Post
- Get
    - The data is included in the URL -> the data is exposed
    - There is a limit on the length
- Post
    - The data is included in the body -> the data is hidden
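The difference is easy to see with Python's standard library. This is a sketch only; the URL is just the example from above and nothing is actually sent over the network:

```python
from urllib import parse, request

params = {'mode': 'LSD', 'sid1': '105'}
body = parse.urlencode(params)

# GET: the data rides in the URL itself (exposed, length-limited)
get_url = 'http://news.naver.com/main/read.nhn?' + body
print(get_url)

# POST: the same data travels in the request body instead (hidden from the URL)
post_req = request.Request('http://news.naver.com/main/read.nhn',
                           data=body.encode())
print(post_req.get_method(), post_req.data)
```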
#### Internet
- What is the internet?
    - The internet is a computer network that connects computers and exchanges information using the TCP/IP (Transmission Control Protocol/Internet Protocol) suite
- How do we actually use the internet?
    - We reach servers all over the world through submarine cables.
- Wireless internet
    - Uses radio frequencies as the medium instead of wires.
    - Only the devices we hold in our hands are wireless.
- Korea's submarine cables come ashore at Busan.
#### OSI 7 Layer
- Open Systems Interconnection Reference Model
- A model developed by the International Organization for Standardization (ISO) that describes computer network protocol design and communication in terms of layers.
- The payload grows toward the lower layers (each layer adds its own header)
- Layer 7: Application: network process to application (Data)
- Layer 6: Presentation: data representation and encryption (Data)
- Layer 5: Session: interhost communication (Data)
- Layer 4: Transport: end-to-end connections and reliability (Segments)
- Layer 3: Network: path determination and IP (logical addressing) (Packets)
- Layer 2: Data Link: MAC and LLC (physical addressing) (Frames)
- Layer 1: Physical: media, signal, and binary transmission (Bits)
#### Cookie & Session & Cache
- Cookie
    - String data stored on the client, kept separately per domain
    - e.g. login state, recently viewed products, "don't show this popup again"
    - Limits: about 300 cookies per client, 20 per domain, 4 KB per cookie
- Session
    - Object data stored on the server; a Session ID is created when a browser connects
    - Storing the Session ID in a cookie keeps a login session alive
    - Connecting to the same server from the same browser yields the same Session ID
    - e.g. login state, any object data you want to keep server-side
- Cache
    - Storage in client or server memory intended to serve data quickly
    - The data is held in RAM, and that storage area is called the cache
    - HTTP 3xx (redirection) responses make use of the browser cache
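On the wire, a cookie is just an HTTP header. Python's standard library can build the `Set-Cookie` header a server would send; this is a sketch with made-up values:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c['session_id'] = 'abc123'           # e.g. the Session ID kept on the client
c['session_id']['domain'] = '.example.com'
c['session_id']['max-age'] = 3600    # expire after one hour

# The header a server would send to set this cookie
print(c.output())
```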
#### HTTP Status Code
- When the server and client exchange data, the result of the exchange can be checked through the status code.
- 2xx - success
- 3xx - redirection (browser cache)
- 4xx - request (client) error
- 5xx - server error
- http://bit.ly/2nlZM8L (HTTP status codes)
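Python's standard library knows these codes and their reason phrases, which is handy when inspecting responses:

```python
from http import HTTPStatus

for code in (200, 301, 404, 500):
    s = HTTPStatus(code)
    print(code, s.phrase)   # e.g. 200 OK, 404 Not Found
```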
#### Web Language & Framework
- Client
    - HTML
    - CSS - less, sass
    - Javascript - vue.js, react.js, angular.js, backbone.js
- Server
    - Python - Django, Flask
        - Django - many packages available, but slower
        - Flask - fast, but its smaller library ecosystem makes deep development harder
    - Java - Spring
    - Ruby - Rails
    - Javascript - Node.js (Express)
    - Scala - Play
#### Scraping & Crawling & Spider & Bot
- Scraping
    - The work of collecting data
- Crawling (can be viewed as a superset of scraping)
    - Collecting and organizing specific data across multiple pages
- Spider or Web Crawler
    - Software that collects web data
- Bot
    - Software that performs automated tasks on the internet
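A minimal scraping sketch using only the standard library: pull the link targets out of an HTML snippet (the snippet here is synthetic, so no network access is needed).

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(v for k, v in attrs if k == 'href')

page = ('<html><body>'
        '<a href="/main/read.nhn">news</a>'
        '<a href="/sports">sports</a>'
        '</body></html>')
s = LinkScraper()
s.feed(page)
print(s.links)   # ['/main/read.nhn', '/sports']
```

A real crawler would fetch pages with `urllib.request` (or a third-party client) and feed each response body to the same parser.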
```
# Install TensorFlow
# !pip install -q tensorflow-gpu==2.0.0-beta1
try:
%tensorflow_version 2.x # Colab only.
except Exception:
pass
import tensorflow as tf
print(tf.__version__)
# More imports
from tensorflow.keras.layers import Input, SimpleRNN, GRU, LSTM, Dense, Flatten, GlobalMaxPool1D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD, Adam
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
### build the dataset
# This is a nonlinear AND long-distance dataset
# (Actually, we will test long-distance vs. short-distance patterns)
# Start with a small T and increase it later
T = 10
D = 1
X = []
Y = []
def get_label(x, i1, i2, i3):
# x = sequence
if x[i1] < 0 and x[i2] < 0 and x[i3] < 0:
return 1
if x[i1] < 0 and x[i2] > 0 and x[i3] > 0:
return 1
if x[i1] > 0 and x[i2] < 0 and x[i3] > 0:
return 1
if x[i1] > 0 and x[i2] > 0 and x[i3] < 0:
return 1
return 0
for t in range(5000):
x = np.random.randn(T)
X.append(x)
y = get_label(x, -1, -2, -3) # short distance
# y = get_label(x, 0, 1, 2) # long distance
Y.append(y)
X = np.array(X)
Y = np.array(Y)
N = len(X)
# Try a linear model first - note: it is classification now!
i = Input(shape=(T,))
x = Dense(1, activation='sigmoid')(i)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the network
r = model.fit(
X, Y,
epochs=100,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too - should be around 50%
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Now try a simple RNN
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 1
# x = LSTM(5)(i)
x = SimpleRNN(5)(i)
# x = GRU(5)(i)
# method 2
# x = LSTM(5, return_sequences=True)(i)
# x = GlobalMaxPool1D()(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
# optimizer='rmsprop',
# optimizer='adam',
optimizer=Adam(lr=0.01),
# optimizer=SGD(lr=0.1, momentum=0.9),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=200,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Now change to the long distance problem
# Start with a small T and increase it later
T = 10
D = 1
X = []
Y = []
for t in range(5000):
x = np.random.randn(T)
X.append(x)
y = get_label(x, 0, 1, 2) # long distance
Y.append(y)
X = np.array(X)
Y = np.array(Y)
N = len(X)
# Now test our Simple RNN again
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 1
x = SimpleRNN(5)(i)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=200,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Now test our LSTM
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 1
x = LSTM(5)(i)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=200,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Make the problem harder by making T larger
T = 20
D = 1
X = []
Y = []
for t in range(5000):
x = np.random.randn(T)
X.append(x)
y = get_label(x, 0, 1, 2) # long distance
Y.append(y)
X = np.array(X)
Y = np.array(Y)
N = len(X)
# Now test our Simple RNN again
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 1
x = SimpleRNN(5)(i)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=200,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Now test our LSTM
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 1
x = LSTM(5)(i)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=200,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Now test our GRU
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 1
x = GRU(5)(i)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=400,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Make the problem harder by making T larger
T = 30
D = 1
X = []
Y = []
for t in range(5000):
x = np.random.randn(T)
X.append(x)
y = get_label(x, 0, 1, 2) # long distance
Y.append(y)
X = np.array(X)
Y = np.array(Y)
N = len(X)
# Now test our LSTM
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 1
x = LSTM(15)(i)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=400,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Now try a LSTM with Global Max Pooling
inputs = np.expand_dims(X, -1)
# make the RNN
i = Input(shape=(T, D))
# method 2
x = LSTM(5, return_sequences=True)(i)
x = GlobalMaxPool1D()(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(i, x)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy'],
)
# train the RNN
r = model.fit(
inputs, Y,
epochs=100,
validation_split=0.5,
)
# Plot the loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
```
### Preprocessing
```
# import relevant statistical packages
import numpy as np
import pandas as pd
# import relevant data visualisation packages
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# import custom packages
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
# import data
url = "/Users/arpanganguli/Documents/Professional/Finance/ISLR/Datasets/Hitters.csv"
Hitters = pd.read_csv(url)
Hitters.head()
# clean data
print(Hitters.shape)
Hitters = Hitters.dropna()
Hitters.shape
Hitters.head()
# converting categorical data into dummy variable
Hitters_1 = pd.get_dummies(Hitters, drop_first=True, columns=['League', 'Division', 'NewLeague'])
Hitters_1.head()
```
### Principal Components Regression
```
from sklearn.preprocessing import StandardScaler, scale
import warnings
warnings.filterwarnings('ignore')
X = Hitters_1.drop(columns = ['Salary', 'Names'])
y = Hitters_1.Salary
pca = PCA()
X_scaled = pca.fit_transform(scale(X))
explained_variance_ratio = np.var(X_scaled, axis=0) / np.sum(np.var(X_scaled, axis=0))
EVR = pd.DataFrame(np.cumsum(np.round(explained_variance_ratio, decimals=4)*100), columns=['explained variance ratio'])
EVR.index = EVR.index + 1
EVR
# Plot of explained variance ratio
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.plot(EVR, '-', marker = 'o', markerfacecolor='blue', markersize=8, color='green')
plt.xlabel('number of components', fontsize=20)
plt.ylabel('explained variance ratio', fontsize=20)
plt.title('explained variance ratio', fontsize=30)
plt.xlim(xmin=-1);
```
**The explained variance ratio is the cumulative percentage of the variance in the predictors that is explained by the first $M$ principal components.**
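The same cumulative ratios come straight out of scikit-learn's `explained_variance_ratio_` attribute, avoiding the manual `np.var` computation. A sketch on synthetic data (not the Hitters predictors):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 4] = X[:, 0] + 0.1 * rng.normal(size=100)   # one nearly redundant column

pca = PCA().fit(scale(X))
cum_evr = np.cumsum(pca.explained_variance_ratio_) * 100
print(np.round(cum_evr, 1))   # cumulative % of variance explained
```

Because one column is nearly redundant, the first component captures well over its "fair share" of the variance, and the cumulative curve reaches 100% at the last component.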
```
# cross validation
from sklearn.model_selection import cross_val_score, KFold
n = len(X_scaled)
kf10 = KFold(n_splits=10, shuffle=True, random_state=1)
lm = LinearRegression()
RMSEPD = []
# Calculate RMSE with only the intercept (i.e. no principal components)
MSE = -1*cross_val_score(lm, np.ones((n,1)), y.ravel(), cv=kf10, scoring='neg_mean_squared_error').mean()
RMSEPD.append(pow(MSE, 0.5))
# Calculate RMSE using CV for 1 to 19 principal components
for i in np.arange(1, 20):
MSE = -1*cross_val_score(lm, X_scaled[:,:i], y.ravel(), cv=kf10, scoring='neg_mean_squared_error').mean()
RMSEPD.append(pow(MSE, 0.5))
RMSEdf = pd.DataFrame(data=RMSEPD, columns=['RMSE'])
RMSEdf
# Plot of PCR results
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.plot(RMSEdf, '-', marker = 'o', markerfacecolor='blue', markersize=8, color='green')
plt.xlabel('number of principal components', fontsize=20)
plt.ylabel('RMSE', fontsize=20)
plt.title('principal components regression results', fontsize=30)
plt.xlim(xmin=-1);
```
**We see that the lowest RMSE occurs at 18 principal components. This is barely smaller than the total number of variables (19), so there is little dimension reduction to be had and PCR is not especially useful here. However, the RMSE drops sharply after adding just one component and stays roughly flat thereafter, which suggests that a small number of components might suffice.**
### Split dataset into training and test dataset (and standardise them)
```
from sklearn.model_selection import train_test_split
X = Hitters_1.drop(columns = ['Salary', 'Names'])
y = Hitters_1.Salary
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
```
### Principal components regression - cross validation
```
pca2 = PCA()
X_train_scaled = pca2.fit_transform(scale(X_train))
n = len(X_train_scaled)
n
kf10 = KFold(n_splits=10, shuffle=True, random_state=1)
lm = LinearRegression()
RMSEPD = []
# Calculate RMSE with only the intercept (i.e. no principal components)
MSE = -1*cross_val_score(lm, np.ones((n,1)), y_train.ravel(), cv=kf10, scoring='neg_mean_squared_error').mean()
RMSEPD.append(pow(MSE, 0.5))
# Calculate RMSE using CV for 1 to 19 principal components
for i in np.arange(1, 20):
MSE = -1*cross_val_score(lm, X_train_scaled[:,:i], y_train.ravel(), cv=kf10, scoring='neg_mean_squared_error').mean()
RMSEPD.append(pow(MSE, 0.5))
RMSEdf = pd.DataFrame(data=RMSEPD, columns=['RMSE'])
RMSEdf
# Plot of PCR results
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.plot(RMSEdf, '-', marker = 'o', markerfacecolor='blue', markersize=8, color='green')
plt.xlabel('number of principal components', fontsize=20)
plt.ylabel('RMSE', fontsize=20)
plt.title('principal components regression results - cross validation', fontsize=30)
plt.xlim(xmin=-1);
```
**We notice that the smallest cross-validated RMSE occurs with six principal components. Therefore, we will perform principal components regression with the first six components and evaluate it on the test set.**
```
X_test_scaled = pca2.transform(scale(X_test))[:,:6] # first six principal components
lm2fit = LinearRegression().fit(X_train_scaled[:,:6], y_train)
lm2pred = lm2fit.predict(X_test_scaled)
print(np.sqrt(mean_squared_error(y_test, lm2pred)))
```
**The resulting test RMSE from principal components regression (PCR) is comparable to that of ridge regression (test MSE = 152308.55, i.e. RMSE ≈ 390.3) and lasso regression (test MSE = 150198.93, i.e. RMSE ≈ 387.6). However, because PCR does not produce coefficient estimates on the original predictors the way those methods do, it is much more difficult to interpret.**
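That said, component coefficients can always be mapped back onto the (scaled) original predictors, since the fitted values satisfy $Z_M\gamma = X V_M\gamma$. A sketch on synthetic data (not the Hitters fit itself):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import scale

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0, 0.0]) + rng.normal(size=120)

Xs = scale(X)
pca = PCA()
Z = pca.fit_transform(Xs)
M = 4                                    # number of components kept
gamma = LinearRegression().fit(Z[:, :M], y).coef_

# beta on the scaled predictors: fitted values are Z_M @ gamma = Xs @ V_M @ gamma
beta = pca.components_[:M].T @ gamma
print(np.round(beta, 2))
```

The recovered `beta` gives one coefficient per original predictor, which makes a PCR fit somewhat easier to inspect even though those coefficients are constrained to lie in the span of the retained components.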
```
explained_variance_ratio_test = np.var(X_test_scaled, axis=0) / np.sum(np.var(X_test_scaled, axis=0))
EVR6 = pd.DataFrame(np.cumsum(np.round(explained_variance_ratio_test, decimals=4)*100), columns=['Explained Variance Ratio'])
EVR6.index = EVR6.index + 1
EVR6
```
# OSMOSIS Spring
This notebook runs [GOTM](https://gotm.net/) with initial conditions and surface forcing during the spring months (Dec. 25, 2012 - Sep. 10, 2013) of the Ocean Surface Mixing, Ocean Submesoscale Interaction Study in the northeast Atlantic (OSMOSIS, 48.7$^\circ$N, 16.2$^\circ$W; [Damerell et al., 2016](https://doi.org/10.1002/2015JC011423)). See Fig. 3 of [Li et al., 2019](https://doi.org/10.1029/2019MS001810).
```
import sys
import copy
import numpy as np
import matplotlib.pyplot as plt
sys.path.append("../../../gotmtool")
from gotmtool import *
```
## Create a model
Create a model with environment file `../../.gotm_env.yaml`, which is created by `gotm_env_init.py`.
```
m = Model(name='OSMOSIS-Spring', environ='../../.gotm_env.yaml')
```
Take a look at what is defined in the environment file.
```
for key in m.environ:
print('{:>15s}: {}'.format(key, m.environ[key]) )
```
## Build the model
```
%%time
m.build()
```
## Configuration
Initialize the GOTM configuration
```
cfg = m.init_config()
```
Update the configuration
```
# setup
title = 'OSMOSIS - Spring'
nlev = 480
cfg['title'] = title
cfg['location']['name'] = 'OSMOSIS'
cfg['location']['latitude'] = 48.7
cfg['location']['longitude'] = -16.2
cfg['location']['depth'] = 480.0
cfg['time']['start'] = '2012-12-25 00:00:00'
cfg['time']['stop'] = '2013-09-10 03:00:00'
cfg['time']['dt'] = 600.0
cfg['grid']['nlev'] = nlev
# output
cfg['output'] = {}
cfg['output']['gotm_out'] = {}
cfg['output']['gotm_out']['use'] = True
cfg['output']['gotm_out']['title'] = title
cfg['output']['gotm_out']['k1_stop'] = nlev+1
cfg['output']['gotm_out']['k_stop'] = nlev
cfg['output']['gotm_out']['time_unit'] = 'hour'
cfg['output']['gotm_out']['time_step'] = 3
cfg['output']['gotm_out']['variables'] = [{}]
cfg['output']['gotm_out']['variables'][0]['source'] = '*'
# forcing
datadir = m.environ['gotmdir_data']+'/examples/OSMOSIS-Spring'
cfg['temperature']['method'] = 'file'
cfg['temperature']['file'] = datadir+'/t_prof.dat'
cfg['salinity']['method'] = 'file'
cfg['salinity']['file'] = datadir+'/s_prof.dat'
cfg['surface']['fluxes']['heat']['method'] = 'file'
cfg['surface']['fluxes']['heat']['file'] = datadir+'/heat_flux.dat'
cfg['surface']['fluxes']['tx']['method'] = 'file'
cfg['surface']['fluxes']['tx']['file'] = datadir+'/momentum_flux.dat'
cfg['surface']['fluxes']['tx']['column'] = 1
cfg['surface']['fluxes']['ty']['method'] = 'file'
cfg['surface']['fluxes']['ty']['file'] = datadir+'/momentum_flux.dat'
cfg['surface']['fluxes']['ty']['column'] = 2
cfg['surface']['swr']['method'] = 'file'
cfg['surface']['swr']['file'] = datadir+'/swr.dat'
cfg['surface']['precip']['method'] = 'file'
cfg['surface']['precip']['file'] = datadir+'/pme.dat'
cfg['waves']['stokes_drift']['us']['method'] = 'exponential'
cfg['waves']['stokes_drift']['vs']['method'] = 'exponential'
cfg['waves']['stokes_drift']['exponential']['us0']['method'] = 'file'
cfg['waves']['stokes_drift']['exponential']['us0']['file'] = datadir+'/stokes_drift.dat'
cfg['waves']['stokes_drift']['exponential']['us0']['column'] = 1
cfg['waves']['stokes_drift']['exponential']['vs0']['method'] = 'file'
cfg['waves']['stokes_drift']['exponential']['vs0']['file'] = datadir+'/stokes_drift.dat'
cfg['waves']['stokes_drift']['exponential']['vs0']['column'] = 2
cfg['waves']['stokes_drift']['exponential']['ds']['method'] = 'file'
cfg['waves']['stokes_drift']['exponential']['ds']['file'] = datadir+'/stokes_drift.dat'
cfg['waves']['stokes_drift']['exponential']['ds']['column'] = 3
# water type
cfg['light_extinction']['method'] = 'custom'
cfg['light_extinction']['A']['method'] = 'constant'
cfg['light_extinction']['A']['constant_value'] = 0.57
cfg['light_extinction']['g1']['method'] = 'constant'
cfg['light_extinction']['g1']['constant_value'] = 0.55
cfg['light_extinction']['g2']['method'] = 'constant'
cfg['light_extinction']['g2']['constant_value'] = 17.0
# EOS -- use linear
cfg['eq_state']['form'] = 'linear_custom'
cfg['eq_state']['linear']['T0'] = 10.0
cfg['eq_state']['linear']['S0'] = 35.0
cfg['eq_state']['linear']['dtr0'] = -0.17
cfg['eq_state']['linear']['dsr0'] = 0.78
# damping on velocity: relaxation to zero with a 5-day decay time
cfg['velocities']['relax']['tau'] = 432000.0
```
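The `tau` value in the last line corresponds to a 5-day e-folding time (5 × 86400 s = 432000 s). A minimal sketch of the implied exponential damping, assuming simple relaxation toward zero (the function name and `u0` here are illustrative, not part of GOTM):

```python
import numpy as np

# Relaxation of velocity toward zero with a 5-day e-folding time,
# matching tau = 432000 s used in the configuration above.
tau = 5 * 86400.0          # 5 days in seconds -> 432000.0
u0 = 0.1                   # hypothetical initial velocity (m/s)

def relaxed_velocity(t, u0=u0, tau=tau):
    """Exponential decay u(t) = u0 * exp(-t / tau)."""
    return u0 * np.exp(-t / tau)

print(tau)                        # 432000.0
print(relaxed_velocity(tau) / u0) # ~0.368, i.e. 1/e after one decay time
```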
Set the turbulence method to the generic length scale (GLS; [Umlauf and Burchard, 2003](https://doi.org/10.1357/002224003322005087)) model in the $k$-$\epsilon$ formulation with the weak-equilibrium stability function by [Canuto et al., 2001](https://doi.org/10.1175/1520-0485(2001)031%3C1413:OTPIOP%3E2.0.CO;2) (C01A).
```
cfg['turbulence']['turb_method'] = 'second_order'
cfg['turbulence']['tke_method'] = 'tke'
cfg['turbulence']['len_scale_method'] = 'gls'
cfg['turbulence']['scnd']['method'] = 'weak_eq_kb_eq'
cfg['turbulence']['scnd']['scnd_coeff'] = 'canuto-a'
cfg['turbulence']['turb_param']['length_lim'] = 'false'
cfg['turbulence']['turb_param']['compute_c3'] = 'true'
cfg['turbulence']['turb_param']['Ri_st'] = 0.25
cfg['turbulence']['generic']['gen_m'] = 1.5
cfg['turbulence']['generic']['gen_n'] = -1.0
cfg['turbulence']['generic']['gen_p'] = 3.0
cfg['turbulence']['generic']['cpsi1'] = 1.44
cfg['turbulence']['generic']['cpsi2'] = 1.92
cfg['turbulence']['generic']['cpsi3minus'] = -0.63
cfg['turbulence']['generic']['cpsi3plus'] = 1.0
cfg['turbulence']['generic']['sig_kpsi'] = 1.0
cfg['turbulence']['generic']['sig_psi'] = 1.3
```
## Run the model
```
%%time
sim = m.run(config=cfg, label='SMC-C01A')
```
## Results
Load the data into an `xarray.Dataset`.
```
data = sim.load_data()
```
Temperature
```
fig = plt.figure(figsize=[8,4])
levels = np.linspace(11.5, 18.5, 71)
data.temp.plot(levels=levels)
```
## Try different turbulence models
Now run the same model with [CVMix](http://cvmix.github.io). Try three options in CVMix:
- KPP-CVMix ([Large et al., 1994](https://doi.org/10.1029/94RG01872), [Griffies et al., 2015](https://github.com/CVMix/CVMix-description/raw/master/cvmix.pdf))
- KPPLT-VR12 ([Li et al., 2016](https://doi.org/10.1016%2Fj.ocemod.2015.07.020))
- KPPLT-LF17 ([Li and Fox-Kemper, 2017](https://doi.org/10.1175%2FJPO-D-17-0085.1))
```
cfgs = []
labels = []
cfg['turbulence']['turb_method'] = 'cvmix'
cfg['cvmix']['surface_layer']['kpp']['langmuir_method'] = 'none'
cfgs.append(copy.deepcopy(cfg))
labels.append('KPP-CVMix')
cfg['cvmix']['surface_layer']['kpp']['langmuir_method'] = 'lwf16'
cfgs.append(copy.deepcopy(cfg))
labels.append('KPPLT-VR12')
cfg['cvmix']['surface_layer']['kpp']['langmuir_method'] = 'lf17'
cfgs.append(copy.deepcopy(cfg))
labels.append('KPPLT-LF17')
```
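A note on the `copy.deepcopy` calls above: appending `cfg` itself would store three references to the same nested dictionary, so the later `langmuir_method` assignments would overwrite the earlier cases. A minimal illustration with a made-up config fragment:

```python
import copy

cfg = {'cvmix': {'langmuir_method': 'none'}}
shallow, deep = [], []

# Without a deep copy, the list entry aliases the same nested dict.
shallow.append(cfg)
deep.append(copy.deepcopy(cfg))

# Mutating cfg afterwards changes the aliased entry, not the deep copy.
cfg['cvmix']['langmuir_method'] = 'lf17'

print(shallow[0]['cvmix']['langmuir_method'])  # 'lf17' -- mutated
print(deep[0]['cvmix']['langmuir_method'])     # 'none' -- preserved
```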
Run the three cases in parallel with 3 processes
```
%%time
sims = m.run_batch(configs=cfgs, labels=labels, nproc=3)
```
Plot the temperature in `KPP-CVMix` and the differences from it in `KPPLT-VR12` and `KPPLT-LF17`
```
fig, axarr = plt.subplots(3, sharex='col')
fig.set_size_inches(8, 9)
data0 = sims[0].load_data()
levels = np.linspace(11.5, 18.5, 71)
data0.temp.plot(ax=axarr[0], levels=levels)
axarr[0].set_title(labels[0])
levels_diff = np.linspace(-2, 2, 41)
for i in np.arange(2):
j = i+1
diff = sims[j].load_data().temp - data0.temp
diff.attrs['long_name'] = '$\Delta$ '+ data0.temp.attrs['long_name']
diff.attrs['units'] = data0.temp.attrs['units']
diff.plot(ax=axarr[j], levels=levels_diff)
axarr[j].set_title(labels[j]+' $-$ '+labels[0])
```
<a href="https://colab.research.google.com/github/Serbeld/RX-COVID-19/blob/master/Detection5C_Norm_v2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install lime
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import inception_v3
from tensorflow.keras.layers import Dense,Dropout,Flatten,Input,AveragePooling2D,BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import cv2
import os
import lime
from lime import lime_image
from skimage.segmentation import mark_boundaries
import pandas as pd
plt.rcParams["figure.figsize"] = (10,5)
#Loading the dataset
!pip install h5py
import h5py
from google.colab import drive,files
drive.mount('/content/drive')
hdf5_path = '/content/drive/My Drive/Dataset5C/Dataset5C.hdf5'
dataset = h5py.File(hdf5_path, "r")
import numpy as np
import matplotlib.pylab as plt
#train
train_img = dataset["train_img"]
xt = np.array(train_img)
yt = np.array(dataset["train_labels"])
#test
testX = np.array(dataset["test_img"])
testY = np.array(dataset["test_labels"])
#Validation
xval = np.array(dataset["val_img"])
yval = np.array(dataset["val_labels"])
print("Training Shape: "+ str(xt.shape))
print("Validation Shape: "+ str(xval.shape))
print("Testing Shape: "+ str(testX.shape))
#Categorical values or OneHot
import keras
num_classes = 5
yt = keras.utils.to_categorical(yt,num_classes)
testY = keras.utils.to_categorical(testY,num_classes)
yval = keras.utils.to_categorical(yval,num_classes)
#Image
num_image = 15
print()
print('Healthy: [1 0 0 0 0]')
print('Pneumonia & Covid-19: [0 1 0 0 0]')
print('Cardiomegaly: [0 0 1 0 0]')
print('Other respiratory disease: [0 0 0 1 0]')
print('Pleural Effusion: [0 0 0 0 1]')
print()
print("Output: "+ str(yt[num_image]))
imagen = train_img[num_image]
plt.imshow(imagen)
plt.show()
## global params
INIT_LR = 1e-5 # learning rate
EPOCHS = 10 # training epochs
BS = 4 # batch size
## build network
from tensorflow.keras.models import load_model
#Inputs
inputs = Input(shape=(512, 512, 3), name='images')
inputs2 = BatchNormalization()(inputs)
#Inception Model
output1 = inception_v3.InceptionV3(include_top=False,weights= "imagenet",
input_shape=(512, 512, 3),
classes = 5)(inputs2)
#AveragePooling2D
output = AveragePooling2D(pool_size=(4, 4), strides=None,
padding='valid',name='AvgPooling')(output1)
#Flattened
output = Flatten(name='Flatten')(output)
#ReLU layer
output = Dense(1000, activation = 'relu',name='ReLU')(output)
#Dropout
output = Dropout(0.35,name='Dropout')(output)
#Dense layer
output = Dense(5, activation='softmax',name='softmax')(output)
# the actual model train)
model = Model(inputs=inputs, outputs=output)
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
model.summary()
from tensorflow.keras.callbacks import ModelCheckpoint
model_checkpoint = ModelCheckpoint(filepath="/content/drive/My Drive/Dataset5C/Model",
monitor='val_loss', save_best_only=True)
## train
print("[INFO] training head...")
H = model.fit({'images': xt},
{'softmax': yt},
batch_size = BS,
epochs = EPOCHS,
validation_data=(xval, yval),
callbacks=[model_checkpoint],
shuffle=True)
#Load the best model trained
model = load_model("/content/drive/My Drive/Dataset5C/Model")
## eval
print("[INFO] evaluating network...")
print()
print("Loss: "+ str(round(model.evaluate(testX,testY,verbose=0)[0],2))+ " Acc: "+ str(round(model.evaluate(testX,testY,verbose=1)[1],2)))
print()
predIdxs = model.predict(testX)
predIdxs = np.argmax(predIdxs, axis=1) # argmax for the predicted probability
#print(classification_report(testY.argmax(axis=1), predIdxs,target_names=lb.classes_))
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
#print(total) #60
acc = (cm[0, 0] + cm[1, 1] + cm[2, 2] + cm[3,3]+ cm[4,4]) / total
#sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
#specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])
# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
#print("sensitivity: {:.4f}".format(sensitivity))
#print("specificity: {:.4f}".format(specificity))
## explain
N = EPOCHS
plt.style.use("ggplot")
plt.figure(1)
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training loss and accuracy for COVID-19 detection")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
#plt.axis([0, EPOCHS, 0.3, 0.9])
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_cero_plot_Inception_2nd_time.png")
plt.show()
import cv2
plt.figure(2)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
plt.imshow((mask +imagen)/255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Normal"+str(ind)+".png")
plt.show()
plt.figure(3)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
plt.imshow((mask +imagen)/255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Light"+str(ind)+".png")
plt.show()
plt.figure(4)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=3, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
end = cv2.addWeighted((imagen/255), 0.7, mask/255, 0.3, 0)
plt.imshow((end))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_purple"+str(ind)+".png")
plt.show()
plt.figure(4)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=2, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
plt.imshow((end))
cv2.imwrite("/content/drive/My Drive/Maps/Heat_map"+str(ind)+".png",end*255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map"+str(ind)+".png")
plt.show()
plt.figure(5)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=1, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
deep = np.reshape(end,newshape=(512,512,3),order='C')
CHANNEL1=deep[:,:,2]
CHANNEL2=deep[:,:,0]
deep[:,:,0] = CHANNEL1
#deep[:,:,2] = CHANNEL2
plt.imshow((deep))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_ma"+str(ind)+".png")
plt.show()
```
```
#NOTE: This must be the first call in order to work properly!
from deoldify import device
from deoldify.device_id import DeviceId
#choices: CPU, GPU0...GPU7
device.set(device=DeviceId.GPU0)
from deoldify.visualize import *
plt.style.use('dark_background')
torch.backends.cudnn.benchmark=True
import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*?Your .*? set is empty.*?")
```
NOTE: Set artistic to False if you're having trouble getting a good render. Chances are it will work with the Stable model.
```
colorizer = get_image_colorizer(artistic=True)
```
# Instructions
### source_url
Type in a url to a direct link of an image. Usually that means they'll end in .png, .jpg, etc. NOTE: If you want to use your own image, you can set source_url to None and just upload the image to /test_images/ in Jupyter. Just make sure that the source_path parameter matches the file you uploaded.
### source_path
Name this whatever sensible image path (plus extension of jpg/png/ext) you want! Sensible means the path exists and the file exists if source_url=None.
### render_factor
The default value of 35 has been carefully chosen and should work -ok- for most scenarios (but probably won't be the -best-). This determines resolution at which the color portion of the image is rendered. Lower resolution will render faster, and colors also tend to look more vibrant. Older and lower quality images in particular will generally benefit by lowering the render factor. Higher render factors are often better for higher quality images, but the colors may get slightly washed out.
### result_path
Ditto- don't change.
### How to Download a Copy
Simply shift+right click on the displayed image and click "Save Image As..."!
## Pro Tips
1. You can evaluate how well the image is rendered at each render_factor by using the code at the bottom (the cell under "See how well render_factor values perform on the image here").
2. Keep in mind again that you can go up top and set artistic to False for the colorizer to use the 'Stable' model instead. This will often tend to do better on portraits, and natural landscapes.
## Troubleshooting
If you get a 'CUDA out of memory' error, you probably have the render_factor too high. The max is 45 on 11GB video cards.
## Colorize!!
```
#NOTE: Max is 45 with 11GB video cards. 35 is a good default
render_factor=35
#NOTE: Make source_url None to just read from file at ./video/source/[file_name] directly without modification
source_url='https://upload.wikimedia.org/wikipedia/commons/e/e4/Raceland_Louisiana_Beer_Drinkers_Russell_Lee.jpg'
source_path = 'test_images/image.png'
result_path = None
if source_url is not None:
result_path = colorizer.plot_transformed_image_from_url(url=source_url, path=source_path, render_factor=render_factor, compare=True)
else:
result_path = colorizer.plot_transformed_image(path=source_path, render_factor=render_factor, compare=True)
show_image_in_notebook(result_path)
```
## See how well render_factor values perform on the image here
```
#for i in range(10,46):
#colorizer.plot_transformed_image(source_path, render_factor=i, display_render_factor=True, figsize=(10,10))
```
## Naive Bayes
#### What is Naive Bayes?
Naive Bayes is one of the simplest and most powerful algorithms for classification. It is based on Bayes' Theorem with an assumption of independence among predictors. A Naive Bayes model is easy to build and particularly useful for very large datasets. There are two parts to this algorithm:
- Naive
- Bayes
The Naive Bayes classifier assumes that the presence of a feature in a class is unrelated to any other feature. Even if these features depend on each other or on the existence of the other features, all of these properties independently contribute to the probability that a particular fruit is an apple, an orange, or a banana, and that is why it is known as "Naive".
#### What is Bayes Theorem?
In Statistics and probability theory, Bayes’ theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. It serves as a way to figure out conditional probability.
Given a hypothesis H and evidence E, Bayes' Theorem states that the relationship between the probability of the hypothesis before getting the evidence, **P(H)**, and the probability of the hypothesis after getting the evidence, **P(H|E)**, is:

This relates the probability of the hypothesis before getting the evidence, **P(H)**, to the probability of the hypothesis after getting the evidence, **P(H|E)**. For this reason, **P(H)** is called the prior probability, while **P(H|E)** is called the posterior probability. The factor that relates the two, **P(E|H) / P(E)**, is called the likelihood ratio. Using these terms, Bayes' theorem can be rephrased as:
**“The posterior probability equals the prior probability times the likelihood ratio.”**
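The rephrased theorem can be written as a one-line function. A small sketch with made-up numbers (the function name and the probabilities are illustrative):

```python
def posterior(prior, p_e_given_h, p_e):
    """Bayes' theorem: P(H|E) = P(H) * P(E|H) / P(E).
    The factor P(E|H) / P(E) is the likelihood ratio."""
    return prior * p_e_given_h / p_e

# Hypothetical numbers: prior 0.3, P(E|H) = 0.6, P(E) = 0.45
print(posterior(0.3, 0.6, 0.45))  # ~0.4
```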
### Process of Table Creation


### Steps for the Algorithm
#### Step-1: First, we will create a frequency table using each attribute of the dataset.

#### Step-2: For each frequency table, we will generate a likelihood table.

#### Solution
Likelihood of **‘Yes’** given **‘Sunny‘** is:
**P(c|x) = P(Yes|Sunny) = P(Sunny|Yes)* P(Yes) / P(Sunny) = (0.3 x 0.71) /0.36 = 0.591**
Similarly Likelihood of **‘No’** given **‘Sunny‘** is:
**P(c|x) = P(No|Sunny) = P(Sunny|No)* P(No) / P(Sunny) = (0.4 x 0.36) /0.36 = 0.40**
#### Now, in the same way, we need to create the Likelihood Table for other attributes as well.

### Problem Statement:
#### Suppose we have a Day with the following values :
Outlook = Rain
Humidity = High
Wind = Weak
Play =?
So, with the data, we have to predict whether **“we can play on that day or not”**.
Likelihood of **‘Yes’** on that Day = **P(Outlook = Rain|Yes)*P(Humidity= High|Yes)* P(Wind= Weak|Yes)*P(Yes)**
= 2/9 * 3/9 * 6/9 * 9/14 = 0.0199
Likelihood of **‘No’** on that Day = **P(Outlook = Rain|No)*P(Humidity= High|No)* P(Wind= Weak|No)*P(No)**
= 2/5 * 4/5 * 2/5 * 5/14 = 0.0166
**Now we normalize the values, then**
**P(Yes)** = 0.0199 / (0.0199+ 0.0166) = 0.55
**P(No)** = 0.0166 / (0.0199+ 0.0166) = 0.45
**Our model predicts that there is a 55% chance of a game on that day.**
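Taking the likelihood values from the worked example as given, the normalization step is just:

```python
# Unnormalized likelihoods from the worked example above
likelihood_yes = 0.0199
likelihood_no = 0.0166

# Normalize so the two posteriors sum to 1
total = likelihood_yes + likelihood_no
p_yes = likelihood_yes / total
p_no = likelihood_no / total

print(round(p_yes, 2), round(p_no, 2))  # 0.55 0.45
```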
## Implementation in Python
#### Import Libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
import numpy as np
```
#### Import Data
```
link="C:/Users/comp/Desktop/Summer/ML/"
df = pd.read_csv(link+"tennis.csv")
df.head()
df.info()
```
### Check Variables
#### Check Outlook
```
outlook_count = df.groupby(['outlook', 'play']).size()
print("Checking Outlook Variable for Play Segregation: \n",outlook_count)
outlook_total = df.groupby(['outlook']).size()
print("Checking Total Outlook: ",outlook_total)
```
#### Check Temperature
```
temp_count = df.groupby(['temp', 'play']).size()
temp_total = df.groupby(['temp']).size()
print("Checking Temperature Variable for Play Segregation: \n",temp_count)
print("Checking Total Outlook",temp_total)
```
#### Check Humidity
```
humidity_count = df.groupby(['humidity', 'play']).size()
humidity_total = df.groupby(['humidity']).size()
print("Checking Humidity Variable for Play Segregation: \n",humidity_count)
print("Checking Humidity Outlook",humidity_total)
```
#### Check Windy
```
windy_count = df.groupby(['windy', 'play']).size()
windy_total = df.groupby(['windy']).size()
print("Checking Windy Variable for Play Segregation: \n",windy_count)
print("Checking Windy Outlook",windy_total)
p_over_yes = outlook_count['overcast','yes']
#p_over_no = outlook_count['overcast','no']
print("Total OVERCAST+YES: ",p_over_yes)
#print(p_over_no)
p_rainy_yes = outlook_count['rainy','yes']
print("Total RAINY+YES: ",p_rainy_yes)
p_rainy_no = outlook_count['rainy','no']
print("Total RAINY+NO: ",p_rainy_no)
```
#### Creating Data Subset for training
```
X_train = pd.get_dummies(df[['outlook', 'temp', 'humidity', 'windy']])
y_train = pd.DataFrame(df['play'])
X_train.head()
```
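`get_dummies` one-hot encodes the object-typed columns while passing boolean columns such as `windy` through unchanged, which is why the prediction cell further below supplies a 9-element indicator vector. A tiny sketch with made-up rows:

```python
import pandas as pd

# Tiny made-up frame mirroring two of the weather features
df = pd.DataFrame({
    'outlook': ['sunny', 'rainy', 'overcast'],
    'windy':   [False, True, False],
})

# Object columns become one indicator column per category;
# the boolean 'windy' column is kept as-is.
X = pd.get_dummies(df[['outlook', 'windy']])
print(sorted(X.columns))
# ['outlook_overcast', 'outlook_rainy', 'outlook_sunny', 'windy']
```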
#### Creating Model
```
model = GaussianNB()
model.fit(X_train, y_train)
```
#### Predicting Model
```
predicted= model.predict([[False,1,0,0,0,1,0,1,0]])
#print(predicted)
if predicted=='yes':
print("I will play Tennis")
else:
print("I will not play Tennis")
```
# Chaotic systems prediction using NN
## This notebook shows how well neural networks perform when presented with the task of predicting the trajectories of **chaotic systems**. It is part of the work presented in *New results for prediction of chaotic systems using Deep Recurrent Neural Networks* in the journal **Neural Processing Letters**
### In this experiment RNN-LSTM, RNN-GRU and MLP neural networks are trained and tested to predict the trajectories of the chaotic systems of Lorenz, Rabinovich-Fabrikant and Rossler.
## Description of this Notebook
* The initial conditions of the chaotic systems are defined in the `ChaosAttractors` class
* The `NNIdentifier` class is where the neural-network training and testing take place; the outputs are graphs that show the performance obtained by the trained model as well as the trajectories identified by the neural network
* In the last cells, the global parameters such as the number of epochs, layers, neurons, and batch size are defined to train and predict the chaotic systems, as well as the time-series size.
#Libraries
```
import tensorflow as tf
from keras.models import Sequential
from keras.layers import LSTM, GRU, Dense, Dropout, Masking, Embedding, Flatten
from sklearn.preprocessing import MinMaxScaler
from scipy.integrate import odeint
from google.colab import drive
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
from mpl_toolkits.mplot3d import Axes3D
fig_size = plt.rcParams["figure.figsize"]
#Save images to you Google Drive Path
drive.mount('/content/gdrive')
images_dir = '/content/gdrive/My Drive/Colab_Images'
```
#Lorenz, Rabinovich–Fabrikant and Rossler Chaotic Systems
```
class ChaosAttractors():
"""
Initial conditions for the systems to display chaotic behaviour are
defined as follows:
Lorenz 63 -> s = 10, r = 8/3, b = 28 and dt = 0.01
Fabrikant-Rabinovich -> a = 0.14, g = 0.1 and dt = 0.01
Rossler -> a = 0.2, b = 0.2, c = 6.3 and dt = 0.01
"""
def __init__(self, steps, lrz_s=10, lrz_r=28, lrz_b=8/3, lrz_dt = 0.01,
rab_fab_a = 0.14, rab_fab_g = 0.1, rab_fab_dt = 0.01,
ros_a=0.2, ros_b=0.2, ros_c=6.3, ros_dt = 0.01):
self.lrz_s = lrz_s
self.lrz_b = lrz_b
self.lrz_r = lrz_r
self.lrz_dt = lrz_dt
self.rab_fab_a = rab_fab_a
self.rab_fab_g = rab_fab_g
self.rab_fab_dt = rab_fab_dt
self.ros_a = ros_a
self.ros_b = ros_b
self.ros_c = ros_c
self.ros_dt = ros_dt
self.steps = steps
"""Lorenz 63 System"""
def lorenz63(self):
xs = np.empty((self.steps + 1,))
ys = np.empty((self.steps + 1,))
zs = np.empty((self.steps + 1,))
xs[0], ys[0], zs[0] = (1.0, 1.0, 1.0)
for i in range(self.steps):
x_dot = self.lrz_s*(ys[i] - xs[i])
y_dot = self.lrz_r*xs[i] - ys[i] - xs[i]*zs[i]
z_dot = xs[i]*ys[i] - self.lrz_b*zs[i]
xs[i + 1] = xs[i] + (x_dot * self.lrz_dt)
ys[i + 1] = ys[i] + (y_dot * self.lrz_dt)
zs[i + 1] = zs[i] + (z_dot * self.lrz_dt)
return xs, ys, zs
"""Rabinovich–Fabrikant equations"""
def rabinovich_fabrikant(self):
xs = np.zeros((self.steps))
ys = np.zeros((self.steps))
zs = np.zeros((self.steps))
xs[0] ,ys[0] ,zs[0] = (-1,0,0.5)
for i in range(1,self.steps):
x = xs[i-1]
y = ys[i-1]
z = zs[i-1]
dx = y*(z - 1 + x*x) + self.rab_fab_g*x
dy = x*(3*z + 1 - x*x) + self.rab_fab_g *y
dz = -2*z*(self.rab_fab_a + x*y)
xs[i] = x+self.rab_fab_dt*dx
ys[i] = y+self.rab_fab_dt*dy
zs[i] = z+self.rab_fab_dt*dz
return xs, ys, zs
"""Rossler Hyperchaotic System"""
def rossler(self):
xs = np.empty([self.steps + 1])
ys = np.empty([self.steps + 1])
zs = np.empty([self.steps + 1])
xs[0], ys[0], zs[0] = (1.0, 1.0, 1.0)
for i in range(self.steps):
x_dot = -ys[i] - zs[i]
y_dot = xs[i] + self.ros_a*ys[i]
z_dot = self.ros_b + xs[i]*zs[i] - self.ros_c*zs[i]
xs[i+1] = xs[i] + (x_dot * self.ros_dt)
ys[i+1] = ys[i] + (y_dot * self.ros_dt)
zs[i+1] = zs[i] + (z_dot * self.ros_dt)
return xs, ys, zs
```
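The forward-Euler scheme used in `lorenz63` can be exercised standalone. A self-contained sketch with the same parameters (s = 10, r = 28, b = 8/3, dt = 0.01):

```python
import numpy as np

def lorenz_euler(steps, s=10.0, r=28.0, b=8.0/3.0, dt=0.01):
    """Forward-Euler integration of the Lorenz 63 system,
    mirroring the scheme in ChaosAttractors.lorenz63."""
    xs = np.empty(steps + 1)
    ys = np.empty(steps + 1)
    zs = np.empty(steps + 1)
    xs[0], ys[0], zs[0] = 1.0, 1.0, 1.0
    for i in range(steps):
        x_dot = s * (ys[i] - xs[i])
        y_dot = r * xs[i] - ys[i] - xs[i] * zs[i]
        z_dot = xs[i] * ys[i] - b * zs[i]
        xs[i + 1] = xs[i] + x_dot * dt
        ys[i + 1] = ys[i] + y_dot * dt
        zs[i + 1] = zs[i] + z_dot * dt
    return xs, ys, zs

xs, ys, zs = lorenz_euler(1000)
print(xs.shape)  # (1001,)
```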
# Neural characterization models
```
class NNIdentifier():
"""
Neural network models to predict chaotic systems
The neural network models used are the RNN-LSTM, RNN-GRU and MLP
...
Attributes
----------
num_neurons : int
Number of neurons used in each layer of the NN
num_neurons : int
Number of layers in the NN
dataset : array[x ,y ,z]
Dataset used to train and test the NN model
training_epochs : int
Number of epochs for training the NN
batch_size: int
Size of the batch passed to the NN
attractor_name: string
Name of the chaotic system (Used for title of the trajectory graph)
chaos_x_series: array[x]
Time series of the chaotic system in the X variable
chaos_y_series: array[x]
Time series of the chaotic system in the Y variable
chaos_z_series: array[x]
Time series of the chaotic system in the Z variable
"""
def __init__(self, num_neurons, num_layers, dataset, training_epochs, batch_size, attractor_name, chaos_x_series, chaos_y_series, chaos_z_series):
self.num_neurons = num_neurons
self.num_layers = num_layers
self.dataset = dataset
self.training_epochs = training_epochs
self.batch_size = batch_size
self.trainX = []
self.trainY = []
self.testX = []
self.testY = []
self.attractor_name = attractor_name
self.look_back = 1
self.chaos_x_series = chaos_x_series
self.chaos_y_series = chaos_y_series
self.chaos_z_series = chaos_z_series
def predict_attractor(self):
self.normalize_dataset()
self.train_eval_models()
def create_dataset(self, dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back)]
dataX.append(a)
dataY.append(dataset[i + look_back])
return np.array(dataX), np.array(dataY)
def normalize_dataset(self):
# Normalize Uk
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(self.dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.6)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
# reshape into X=t and Y=t+1
look_back = 1
self.trainX, self.trainY = self.create_dataset(train, look_back)
self.testX, self.testY = self.create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
self.trainX = np.reshape(self.trainX, (self.trainX.shape[0], 1, self.trainX.shape[2]))
self.testX = np.reshape(self.testX, (self.testX.shape[0], 1, self.testX.shape[2]))
def gru_model(self):
"""GRU RNN"""
gru_model = Sequential()
if(self.num_layers>1):
gru_model.add(GRU(self.num_neurons, input_shape=(1,3), return_sequences = True))
for x in range(self.num_layers):
gru_model.add(GRU(self.num_neurons, return_sequences = True))
if(x == self.num_layers-1):
gru_model.add(GRU(self.num_neurons, return_sequences = False))
else:
gru_model.add(GRU(self.num_neurons, input_shape=(1,3), return_sequences = False))
gru_model.add(Dense(3))
gru_model.compile(optimizer='adam', loss='mean_squared_error', metrics =['mse','acc'])
seq_gru_model = gru_model.fit(self.trainX, self.trainY, epochs=self.training_epochs, batch_size=self.batch_size)
return gru_model, seq_gru_model
def lstm_model(self):
"""LSTM RNN"""
lstm_model = Sequential()
if(self.num_layers>1):
lstm_model.add(LSTM(self.num_neurons, input_shape=(1,3), return_sequences = True))
for x in range(self.num_layers):
lstm_model.add(LSTM(self.num_neurons, return_sequences = True))
if(x == self.num_layers-1):
lstm_model.add(LSTM(self.num_neurons, return_sequences = False))
else:
lstm_model.add(LSTM(self.num_neurons, input_shape=(1,3), return_sequences = False))
lstm_model.add(Dense(3))
lstm_model.compile(optimizer='adam', loss='mean_squared_error', metrics =['mse','acc'])
seq_lstm_model = lstm_model.fit(self.trainX, self.trainY, epochs=self.training_epochs, batch_size=self.batch_size)
return lstm_model, seq_lstm_model
def mlp_model(self):
    """MLP NN"""
    mlp_model = Sequential()
    mlp_model.add(Dense(self.num_neurons, input_shape=(1, 3)))
    if self.num_layers > 1:
        for x in range(self.num_layers):
            mlp_model.add(Dense(self.num_neurons))
    mlp_model.add(Flatten())
    mlp_model.add(Dense(3))
    mlp_model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse', 'acc'])
    seq_mlp_model = mlp_model.fit(self.trainX, self.trainY, epochs=self.training_epochs, batch_size=self.batch_size)
    return mlp_model, seq_mlp_model
def predict_eval_model(self, model, label_nn):
    scaler = MinMaxScaler(feature_range=(0, 1))
    dataset = scaler.fit_transform(self.dataset)
    # make predictions
    trainPredict = model.predict(self.trainX)
    testPredict = model.predict(self.testX)
    # invert predictions
    trainPredict = scaler.inverse_transform(trainPredict)
    self.trainY = scaler.inverse_transform(self.trainY)
    testPredict = scaler.inverse_transform(testPredict)
    self.testY = scaler.inverse_transform(self.testY)
    # shift train predictions for plotting
    trainPredictPlot = np.empty_like(dataset)
    trainPredictPlot[:, :] = np.nan
    trainPredictPlot[self.look_back:len(trainPredict)+self.look_back, :] = trainPredict
    # shift test predictions for plotting
    testPredictPlot = np.empty_like(dataset)
    testPredictPlot[:, :] = np.nan
    testPredictPlot[len(trainPredict)+(self.look_back*2)+1:len(dataset)-1, :] = testPredict
    # get values to graph
    val_xtrain = []
    val_ytrain = []
    val_ztrain = []
    for x in range(len(trainPredictPlot)):
        val_xtrain.append(trainPredictPlot[x][0])
        val_ytrain.append(trainPredictPlot[x][1])
        val_ztrain.append(trainPredictPlot[x][2])
    val_xtest = []
    val_ytest = []
    val_ztest = []
    for x in range(len(testPredictPlot)):
        val_xtest.append(testPredictPlot[x][0])
        val_ytest.append(testPredictPlot[x][1])
        val_ztest.append(testPredictPlot[x][2])
    # graph
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    ax.plot(self.chaos_x_series, self.chaos_y_series, self.chaos_z_series, lw=0.8, label=self.attractor_name)
    ax.plot(val_xtrain, val_ytrain, val_ztrain, lw=0.5, label='Train Set')
    ax.plot(val_xtest, val_ytest, val_ztest, lw=0.5, label='Test Set')
    legend = plt.legend(loc='upper left', shadow=True, fontsize='xx-large')
    fig_size = plt.rcParams["figure.figsize"]
    ax.set_xlabel("X Axis", fontsize=20)
    ax.set_ylabel("Y Axis", fontsize=20)
    ax.set_zlabel("Z Axis", fontsize=20)
    ax.set_title(self.attractor_name + ' - ' + label_nn)
    plt.rcParams["figure.figsize"] = (10, 10)
    plt.savefig(f'{images_dir}/{self.attractor_name}_{label_nn}.eps', format='eps')
    plt.show()
def graph_eval_model(self, gru_train_loss, lstm_train_loss, mlp_train_loss, gru_train_acc, lstm_train_acc, mlp_train_acc):
    """Loss evaluation and graph"""
    xc = range(self.training_epochs)
    plt.figure()
    plt.plot(xc, gru_train_loss, label='MSE - GRU')
    plt.plot(xc, lstm_train_loss, label='MSE - LSTM')
    plt.plot(xc, mlp_train_loss, label='MSE - MLP')
    plt.xlabel('Epochs')
    plt.ylabel('Error %')
    plt.yscale('log')
    plt.title('MSE for the ' + self.attractor_name)
    legend = plt.legend(loc='upper right', shadow=True, fontsize='x-large')
    plt.grid(True)
    plt.show()
    """Accuracy evaluation and graph"""
    plt.figure()
    plt.plot(xc, gru_train_acc, label='Accuracy - GRU')
    plt.plot(xc, lstm_train_acc, label='Accuracy - LSTM')
    plt.plot(xc, mlp_train_acc, label='Accuracy - MLP')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy %')
    plt.title('Accuracy for the ' + self.attractor_name + ' with ' + str(self.num_layers) + ' layers - ' + str(self.num_neurons) + ' neurons')
    legend = plt.legend(loc='lower right', shadow=True, fontsize='x-large')
    plt.grid(True)
    plt.show()
def eval_model(self, model, seqModel):
    loss_and_metrics = model.evaluate(self.testX, self.testY, batch_size=self.batch_size)
    train_loss = seqModel.history['mse']
    train_acc = seqModel.history['acc']
    return train_loss, train_acc
def train_eval_models(self):
    """Train NN models"""
    gru_model, seq_gru_model = self.gru_model()
    lstm_model, seq_lstm_model = self.lstm_model()
    mlp_model, seq_mlp_model = self.mlp_model()
    """Eval NN models"""
    gru_train_loss, gru_train_acc = self.eval_model(gru_model, seq_gru_model)
    lstm_train_loss, lstm_train_acc = self.eval_model(lstm_model, seq_lstm_model)
    mlp_train_loss, mlp_train_acc = self.eval_model(mlp_model, seq_mlp_model)
    """Graph NN eval model"""
    self.graph_eval_model(gru_train_loss, lstm_train_loss, mlp_train_loss, gru_train_acc, lstm_train_acc, mlp_train_acc)
    """Graph NN predict model"""
    self.predict_eval_model(gru_model, 'GRU')
    self.predict_eval_model(lstm_model, 'LSTM')
    self.predict_eval_model(mlp_model, 'MLP')
# Format dataset to pass it into the NN
def create_dataset(x, y, z):
    dataset = []
    for i in range(len(x)):
        dataset.append([x[i], y[i], z[i]])
    return dataset
```
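The `create_dataset(train, look_back)` method used above is not shown in this excerpt; the windowing it performs (X at time t, Y at time t+1) can be sketched in plain Python as follows. This is a simplified stand-in, assuming `look_back=1` and rows of `[x, y, z]` values, not the class's actual implementation:

```python
# Simplified stand-in for the look-back windowing: X[i] holds the
# look_back rows ending at time t, Y[i] the row one step later.
def make_xy(series, look_back=1):
    X, Y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        Y.append(series[i + look_back])
    return X, Y

make_xy([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
# -> ([[[0, 0, 0]], [[1, 1, 1]]], [[1, 1, 1], [2, 2, 2]])
```

With `look_back=1` every sample X is a single time step, which is why the reshape above produces shape `[samples, 1, features]`.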
# Global parameters
```
# Number of neurons
num_neurons = 128
# Number of layers
num_layers = 5
# Number of epochs
epochs = 10
# Batch size
batch_size = 32
```
# Predicting Lorenz, Rabinovich-Fabrikant and Rossler systems
```
# Define length of the chaotic time series
attractors_series = ChaosAttractors(10000)
# Obtain the time series for the Lorenz systems
lorenz_x, lorenz_y, lorenz_z = attractors_series.lorenz63()
# Create dataset to pass it to the NN
dataset = create_dataset(lorenz_x, lorenz_y, lorenz_z)
# Instantiate the class to evaluate the prediction with the previously defined parameters
nn_identifier = NNIdentifier(num_neurons, num_layers, dataset, epochs, batch_size, 'Lorenz Chaotic System', lorenz_x, lorenz_y, lorenz_z)
# Start evaluation
nn_identifier.predict_attractor()
# Define length of the chaotic time series
attractors_series = ChaosAttractors(50000)
# Obtain the time series for the Rabinovich-Fabrikant equations
rab_x, rab_y, rab_z = attractors_series.rabinovich_fabrikant()
# Create dataset to pass it to the NN
dataset = create_dataset(rab_x, rab_y, rab_z)
# Instantiate the class to evaluate the prediction with the previously defined parameters
nn_identifier = NNIdentifier(num_neurons, num_layers, dataset, epochs, batch_size, 'Rabinovich–Fabrikant Equations', rab_x, rab_y, rab_z)
# Start evaluation
nn_identifier.predict_attractor()
# Define length of the chaotic time series
attractors_series = ChaosAttractors(50000)
# Obtain the time series for the Rossler system
ros_x, ros_y, ros_z = attractors_series.rossler()
# Create dataset to pass it to the NN
dataset = create_dataset(ros_x, ros_y, ros_z)
# Instantiate the class to evaluate the prediction with the previously defined parameters
nn_identifier = NNIdentifier(num_neurons, num_layers, dataset, epochs, batch_size, 'Rossler System', ros_x, ros_y, ros_z)
# Start evaluation
nn_identifier.predict_attractor()
```
# Python Crash Course
Please note, this is not meant to be a comprehensive overview of Python or programming in general. If you have no programming experience, you should probably take my other course, [Complete Python Bootcamp](https://www.udemy.com/complete-python-bootcamp/?couponCode=PY20), instead.
**This notebook is just a code reference for the videos, no written explanations here**
This notebook will just go through the basic topics in order:
* Data types
* Numbers
* Strings
* Printing
* Lists
* Dictionaries
* Booleans
* Tuples
* Sets
* Comparison Operators
* if,elif, else Statements
* for Loops
* while Loops
* range()
* list comprehension
* functions
* lambda expressions
* map and filter
* methods
____
## Data types
### Numbers
```
1 + 1
1 * 3
1 / 2
2 ** 4
4 % 2
5 % 2
(2 + 3) * (5 + 5)
```
### Variable Assignment
```
# Cannot start with a number or special characters
name_of_var = 2
x = 2
y = 3
z = x + y
z
```
### Strings
```
'single quotes'
"double quotes"
" wrap lot's of other quotes"
```
### Printing
```
x = 'hello'
x
print(x)
num = 12
name = 'Sam'
print('My number is: {one}, and my name is: {two}'.format(one=num,two=name))
print('My number is: {}, and my name is: {}'.format(num,name))
```
### Lists
```
[1,2,3]
['hi',1,[1,2]]
my_list = ['a','b','c']
my_list.append('d')
my_list
my_list[0]
my_list[1]
my_list[1:]
my_list[:1]
my_list[0] = 'NEW'
my_list
nest = [1,2,3,[4,5,['target']]]
nest[3]
nest[3][2]
nest[3][2][0]
```
### Dictionaries
```
d = {'key1':'item1','key2':'item2'}
d
d['key1']
```
### Booleans
```
True
False
```
### Tuples
```
t = (1,2,3)
t[0]
t[0] = 'NEW'
```
### Sets
```
{1,2,3}
{1,2,3,1,2,1,2,3,3,3,3,2,2,2,1,1,2}
```
## Comparison Operators
```
1 > 2
1 < 2
1 >= 1
1 <= 4
1 == 1
'hi' == 'bye'
```
## Logic Operators
```
(1 > 2) and (2 < 3)
(1 > 2) or (2 < 3)
(1 == 2) or (2 == 3) or (4 == 4)
```
## if,elif, else Statements
```
if 1 < 2:
    print('Yep!')

if 1 < 2:
    print('yep!')

if 1 < 2:
    print('first')
else:
    print('last')

if 1 > 2:
    print('first')
else:
    print('last')

if 1 == 2:
    print('first')
elif 3 == 3:
    print('middle')
else:
    print('Last')
```
## for Loops
```
seq = [1,2,3,4,5]
for item in seq:
    print(item)

for item in seq:
    print('Yep')

for jelly in seq:
    print(jelly+jelly)
```
## while Loops
```
i = 1
while i < 5:
    print('i is: {}'.format(i))
    i = i+1
```
## range()
```
range(5)
for i in range(5):
    print(i)
list(range(5))
```
## list comprehension
```
x = [1,2,3,4]
out = []
for item in x:
    out.append(item**2)
print(out)
[item**2 for item in x]
```
## functions
```
def my_func(param1='default'):
    """
    Docstring goes here.
    """
    print(param1)
my_func
my_func()
my_func('new param')
my_func(param1='new param')
def square(x):
    return x**2
out = square(2)
print(out)
```
## lambda expressions
```
def times2(var):
    return var*2
times2(2)
lambda var: var*2
```
## map and filter
```
seq = [1,2,3,4,5]
map(times2,seq)
list(map(times2,seq))
list(map(lambda var: var*2,seq))
filter(lambda item: item%2 == 0,seq)
list(filter(lambda item: item%2 == 0,seq))
```
## methods
```
st = 'hello my name is Sam'
st.lower()
st.upper()
st.split()
tweet = 'Go Sports! #Sports'
tweet.split('#')
tweet.split('#')[1]
d
d.keys()
d.items()
lst = [1,2,3]
lst.pop()
lst
'x' in [1,2,3]
'x' in ['x','y','z']
```
# Great Job!
```
# This code block is for automatic testing purposes, please ignore.
try:
    import openfermion
except ImportError:
    import os
    os.chdir('../src/')
```
# Lowering qubit requirements using binary codes
## Introduction
Molecular Hamiltonians are known to have certain symmetries that are not taken into account by mappings like the Jordan-Wigner or Bravyi-Kitaev transform. The most notable of such symmetries is the conservation of the total number of particles in the system. Since those symmetries effectively reduce the degrees of freedom of the system, one is able to reduce the number of qubits required for simulation by utilizing binary codes (arXiv:1712.07067).
We can represent the symmetry-reduced Fermion basis by binary vectors of a set $\mathcal{V} \ni \boldsymbol{\nu}$, with $ \boldsymbol{\nu} = (\nu_0, \, \nu_1, \dots, \, \nu_{N-1} ) $, where every component $\nu_i \in \lbrace 0, 1 \rbrace $ and $N$ is the total number of Fermion modes. These binary vectors $ \boldsymbol{\nu}$ are related to the actual basis states by: $$
\left[\prod_{i=0}^{N-1} (a_i^{\dagger})^{\nu_i} \right] \left|{\text{vac}}\right\rangle \, ,
$$ where $ (a_i^\dagger)^{0}=1$. The qubit basis, on the other hand, can be characterized by length-$n$ binary vectors $\boldsymbol{\omega}=(\omega_0, \, \dots , \, \omega_{n-1})$, that represent an $n$-qubit basis state by:
$$ \left|{\omega_0}\right\rangle \otimes \left|\omega_1\right\rangle \otimes \dots \otimes \left|{\omega_{n-1}}\right\rangle \, . $$
Since $\mathcal{V}$ is a mere subset of the $N$-fold binary space, while the set of vectors $\boldsymbol{\omega}$ spans the entire $n$-fold binary space, we can assign every vector $\boldsymbol{\nu}$ to a vector $ \boldsymbol{\omega}$ such that $n<N$. This reduces the number of qubits required by $(N-n)$. The mapping can be done by a binary code, a classical object that consists of an encoder function $\boldsymbol{e}$ and a decoder function $\boldsymbol{d}$.
These functions relate the binary vectors $\boldsymbol{e}(\boldsymbol{\nu})=\boldsymbol{\omega}$, $\boldsymbol{d}(\boldsymbol{\omega})=\boldsymbol{\nu}$, such that $\boldsymbol{d}(\boldsymbol{e}(\boldsymbol{\nu}))=\boldsymbol{\nu}$.
At the moment, OpenFermion allows for non-linear decoders $\boldsymbol{d}$ and linear encoders $\boldsymbol{e}(\boldsymbol{\nu})=A \boldsymbol{\nu}$, where the matrix multiplication with the $(n\times N)$-binary matrix $A$ is $(\text{mod 2})$ in every component.
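As a concrete illustration of such a linear encoder, a plain-Python sketch (this is not the OpenFermion API, and the $2\times 3$ matrix below is a hypothetical example):

```python
# Linear encoder e(nu) = A.nu (mod 2), with A an n x N binary matrix.
def encode(A, nu):
    return [sum(a * v for a, v in zip(row, nu)) % 2 for row in A]

# Hypothetical 2x3 encoder matrix, for illustration only:
A = [[1, 0, 0],
     [0, 1, 0]]
encode(A, [1, 0, 1])  # -> [1, 0]
```

Note that the third mode never appears in any row of `A`: its value must be recoverable from the others by the (possibly non-linear) decoder.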
## Symbolic binary functions
The non-linear binary functions for the components of the decoder are here modeled by the $\text{BinaryPolynomial}$ class in openfermion.ops.
For initialization we can conveniently use strings ('w0 w1 + w1 +1' for the binary function $\boldsymbol{\omega} \to \omega_0 \omega_1 + \omega_1 + 1 \;\text{mod 2}$), the native data structure or symbolic addition and multiplication.
```
from openfermion.ops import BinaryPolynomial
binary_1 = BinaryPolynomial('w0 w1 + w1 + 1')
print("These three expressions are equivalent: \n", binary_1)
print(BinaryPolynomial('w0') * BinaryPolynomial('w1 + 1') + BinaryPolynomial('1'))
print(BinaryPolynomial([(1, 0), (1, ), ('one', )]))
print('The native data type structure can be seen here:')
print(binary_1.terms)
print('We can always evaluate the expression for instance by the vector (w0, w1, w2) = (1, 0, 0):',
binary_1.evaluate('100'))
```
## Binary codes
The $\text{BinaryCode}$ class bundles a decoder - a list of decoder components, which are instances of $\text{BinaryPolynomial}$ - and an encoder - the matrix $A$ as sparse numpy array - as a binary code. The constructor however admits (dense) numpy arrays, nested lists or tuples as input for $A$, and arrays, lists or tuples of $\text{BinaryPolynomial}$ objects - or valid inputs for $\text{BinaryPolynomial}$ constructors - as input for $\boldsymbol{d}$. An instance of the $\text{BinaryCode}$ class knows about the number of qubits and the number of modes in the mapping.
```
from openfermion.ops import BinaryCode
code_1 = BinaryCode([[1, 0, 0], [0, 1, 0]], ['w0', 'w1', 'w0 + w1 + 1' ])
print(code_1)
print('number of qubits: ', code_1.n_qubits, ' number of Fermion modes: ', code_1.n_modes )
print('encoding matrix: \n', code_1.encoder.toarray())
print('decoder: ', code_1.decoder)
```
The code used in the example above is in fact the (odd) checksum code, and is implemented already - along with a few other examples from arXiv:1712.07067. In addition to the $\text{checksum_code}$, the functions $\text{weight_one_segment_code}$ and $\text{weight_two_segment_code}$, which each output a subcode, as well as $\text{weight_one_binary_addressing_code}$, can be found under openfermion.transforms._code_transform_functions.
There are two other ways to construct new codes from the ones given - both of them can be done conveniently with symbolic operations between two code objects $(\boldsymbol{e}, \boldsymbol{d})$ and $(\boldsymbol{e^\prime}, \boldsymbol{d^\prime})$ to yield a new code $(\boldsymbol{e^{\prime\prime}}, \boldsymbol{d^{\prime\prime}})$:
**Appendage**
Input and output vectors of two codes are appended to each other such that:
$$ e^{\prime\prime}(\boldsymbol{\nu} \oplus \boldsymbol{\nu^{\prime} })=\boldsymbol{e}(\boldsymbol{\nu}) \oplus \boldsymbol{e^\prime}(\boldsymbol{\nu^\prime})\, , \qquad d^{\prime\prime}(\boldsymbol{\omega} \oplus \boldsymbol{\omega^{\prime} })=\boldsymbol{d}(\boldsymbol{\omega}) \oplus \boldsymbol{d^\prime}(\boldsymbol{\omega^\prime}) \, . $$
This is implemented with symbolic addition of two $\text{BinaryCode}$ objects (using + or += ) or, for appending several instances of the same code at once, multiplication of the $\text{BinaryCode}$ with an integer. Appending codes is useful when we want to obtain a segment code, or a segmented transform.
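For linear encoders, appending two codes amounts to stacking their matrices block-diagonally. A plain-Python sketch of that matrix operation (not the OpenFermion implementation; the matrices are illustrative):

```python
# Block-diagonal composition of two binary encoder matrices A and B:
# the appended code encodes the first block of modes with A and the
# second block with B, independently.
def append_encoders(A, B):
    n_a, n_b = len(A[0]), len(B[0])
    top = [row + [0] * n_b for row in A]
    bottom = [[0] * n_a + row for row in B]
    return top + bottom

append_encoders([[1, 1]], [[1, 0], [0, 1]])
# -> [[1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```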
**Concatenation**
Two codes can (if the corresponding vectors match in size) be applied consecutively, in the sense that the output of the encoder of the first code is input to the encoder of the second code. This defines an entirely new encoder, and the corresponding decoder is defined to undo this operation.
$$ \boldsymbol{e^{\prime\prime}}(\boldsymbol{\nu^{\prime\prime}})=\boldsymbol{e^\prime}\left(\boldsymbol{e}(\boldsymbol{\nu^{\prime\prime}}) \right) \, , \qquad \boldsymbol{d^{\prime\prime}}(\boldsymbol{\omega^{\prime\prime}})=\boldsymbol{d}\left(\boldsymbol{d^\prime}(\boldsymbol{\omega^{\prime\prime}}) \right)
$$
This is done by symbolic multiplication of two $\text{BinaryCode}$ instances (with \* or \*= ). One can concatenate the codes with each other such that additional qubits can be saved (e.g. checksum code \* segment code ), or to modify the resulting gates after transform (e.g. checksum code \* Bravyi-Kitaev code).
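For two linear codes, the concatenated encoder is simply the (mod 2) product of the two encoder matrices. A plain-Python sketch (not the OpenFermion implementation; the matrices are illustrative):

```python
# A'' = A' . A (mod 2): the encoder of the concatenated code applies A
# first, then A', so the matrix product composes them.
def concat_encoders(A_prime, A):
    return [[sum(A_prime[i][k] * A[k][j] for k in range(len(A))) % 2
             for j in range(len(A[0]))]
            for i in range(len(A_prime))]

# Concatenating with an identity encoder leaves a code unchanged:
concat_encoders([[1, 0], [0, 1]], [[1, 1, 0], [0, 1, 1]])
# -> [[1, 1, 0], [0, 1, 1]]
```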
A broad palette of codes is provided to help construct codes symbolically.
The $\text{jordan_wigner_code}$ can be appended to any code to fill up the number of modes, while concatenating the $\text{bravyi_kitaev_code}$ or $\text{parity_code}$ modifies the appearance of gates after the transform. The $\text{interleaved_code}$ is useful for concatenation with appended codes when, in the Hamiltonian, Fermion operators are ordered by spin with even-odd indexing (up-down-up-down-up ...). This particular instance is used in the demonstration below.
Before we turn to describe the transformation, a word of warning is in order. Controlled gates that occur in the Hamiltonian through the use of non-linear codes are decomposed into Pauli strings, e.g. $\text{CPHASE}(1,2)=\frac{1}{2}(1+Z_1+Z_2-Z_1Z_2)$. In this way the number of terms in a Hamiltonian can grow exponentially if one chooses to use strongly non-linear codes.
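The decomposition above can be checked directly on the computational basis. Writing $z_1, z_2 = \pm 1$ for the eigenvalues of $Z_1, Z_2$, we have
$$ \frac{1}{2}\left(1 + z_1 + z_2 - z_1 z_2\right) = \begin{cases} -1, & z_1 = z_2 = -1 \\ +1, & \text{otherwise,} \end{cases} $$
which is exactly the action of $\text{CPHASE}$: a sign flip only when both qubits are in the state $\left|1\right\rangle$.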
## Operator transform
The actual transform of Fermion operators into qubit operators is done with the routine $\text{binary_code_transform}$, that takes a Hamiltonian and a suitable code as inputs, outputting a qubit Hamiltonian.
Let us consider the case of a molecule with 4 modes where, due to the absence of magnetic interactions, the set of valid modes is only $$ \mathcal{V}=\lbrace (1,\, 1,\, 0,\, 0 ),\,(1,\, 0,\, 0,\, 1 ),\,(0,\, 1,\, 1,\, 0 ),\,(0,\, 0,\, 1,\, 1 )\rbrace \, .$$
One can either use an (even-weight) checksum code to save a single qubit, or an (odd-weight) checksum code on the spin-up and spin-down modes each to save two qubits. Since the ordering is even-odd, however, the latter requires concatenation with the interleaved code, which switches the spin indexing of the qubits from even-odd ordering to up-then-down. Instead of using the interleaved code, we can also use the reorder function to apply up-then-down ordering to the Hamiltonian directly.
```
from openfermion.transforms import *
from openfermion.hamiltonians import MolecularData
from openfermion.transforms import binary_code_transform
from openfermion.transforms import get_fermion_operator
from openfermion.utils import eigenspectrum, normal_ordered, up_then_down, reorder
def LiH_hamiltonian():
    geometry = [('Li', (0., 0., 0.)), ('H', (0., 0., 1.45))]
    molecule = MolecularData(geometry, 'sto-3g', 1,
                             description="1.45")
    molecule.load()
    molecular_hamiltonian = molecule.get_molecular_hamiltonian(occupied_indices=[0], active_indices=[1, 2])
    hamiltonian = normal_ordered(get_fermion_operator(molecular_hamiltonian))
    return hamiltonian
hamiltonian = LiH_hamiltonian()
print('Fermionic Hamiltonian')
print(hamiltonian)
print("The eigenspectrum")
print(eigenspectrum(hamiltonian))
print('\n-----\n')
jw = binary_code_transform(hamiltonian, jordan_wigner_code(4))
print('Jordan-Wigner transformed Hamiltonian')
print(jw)
print("the eigenspectrum of the transformed hamiltonian")
print(eigenspectrum(jw))
print('\n-----\n')
cksm_save_one = binary_code_transform(hamiltonian, checksum_code(4,0))
print('Even-weight checksum code')
print(cksm_save_one)
print("the eigenspectrum of the transformed hamiltonian")
print(eigenspectrum(cksm_save_one))
print('\n-----\n')
up_down_save_two = binary_code_transform(hamiltonian, interleaved_code(4)*(2*checksum_code(2,1)))
print('Double odd-weight checksum codes')
print(up_down_save_two )
print("the eigenspectrum of the transformed hamiltonian")
print(eigenspectrum(up_down_save_two ))
print('\n-----\n')
print('Instead of interleaving, we can apply up-then-down ordering using the reorder function:')
up_down_save_two = binary_code_transform(reorder(hamiltonian,up_then_down), 2*checksum_code(2,1))
print(up_down_save_two)
print("the eigenspectrum of the transformed hamiltonian")
print(eigenspectrum(up_down_save_two))
```
# Full experimentation pipeline
Reference: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps https://arxiv.org/abs/1312.6034
We explore the possibility of detecting the trojan using saliency.
```
from math import ceil
import logging
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import keras.backend as K
from trojan_defender import set_root_folder, datasets, set_db_conf, plot, experiment, util
from trojan_defender import models, train, evaluate
from trojan_defender.poison import patch
from trojan_defender.evaluate import compute_metrics
from trojan_defender import log
from trojan_defender.detect import saliency_ as saliency
from sklearn.metrics import classification_report, accuracy_score
from sklearn.covariance import EllipticEnvelope
from scipy import stats
# config logging
logging.basicConfig(level=logging.INFO)
# matplotlib size
plt.rcParams['figure.figsize'] = (10, 10)
# root folder (experiments will be saved here)
# set_root_folder('/home/Edu/data')
# db configuration (experiments metadata will be saved here)
set_db_conf('db.yaml')
dataset_name = 'mnist'
objective_class = 5
METRICS = [accuracy_score]
loader = datasets.cifar10 if dataset_name == 'cifar10' else datasets.mnist
clean = loader()
trainer = train.cifar10_cnn if dataset_name == 'cifar10' else train.mnist_cnn
architecture = models.cifar10_cnn if dataset_name == 'cifar10' else models.mnist_cnn
epochs = 20 if dataset_name == 'cifar10' else 2
# train baseline - model without data poisoning
baseline = trainer(clean, architecture, epochs=epochs)
# log experiment
log.experiment(baseline, clean, METRICS)
# make patch
p = patch.Patch('sparse', proportion=0.005,
                input_shape=clean.input_shape,
                dynamic_mask=False,
                dynamic_pattern=False)
objective = util.make_objective_class(objective_class, clean.num_classes)
# apply patch to clean dataset
patched = clean.poison(objective, p, fraction=0.15)
plot.image(p())
plot.grid(patched.x_test[patched.test_poisoned_idx],
patched.y_test_cat[patched.test_poisoned_idx],
suptitle_kwargs=dict(t='Some poisoned examples in the test set', fontsize=20))
model = trainer(patched, architecture, epochs=epochs)
# log experiment
log.experiment(model, patched, METRICS)
# baseline, clean, baseline_metadata = experiment.load('27-Apr-2018@03-32-38')
# model, patched, model_metadata = experiment.load('27-Apr-2018@18-32-06')
# p = patched.a_patch
```
## Evaluation
```
# compute metrics of poisoned model in poisoned
# test dataset
compute_metrics(METRICS, model, patched)
# accuracy of BASELINE model on original test data
y_pred = baseline.predict_classes(clean.x_test)
y_true = clean.y_test_cat
accuracy_score(y_true, y_pred)
```
## Saliency detector score
```
saliency.score(model, clean, random_trials=100)
saliency.score(baseline, clean, random_trials=100)
```
## Visualization
```
(sms, outs, recovered,
sample, res,
mask_prop) = saliency.detect(model, clean, random_trials=100)
(sms_base, outs_base, recovered_base,
sample_base, res_base,
mask_prop_base) = saliency.detect(baseline, clean, random_trials=100)
plot.grid(sms)
plot.grid(sms_base)
plt.rcParams['figure.figsize'] = (5, 5)
plot.image(recovered)
plot.image(recovered_base)
```
# GradientBoostingClassifier with StandardScaler
**This code template is for classification tasks using a GradientBoostingClassifier, based on the gradient boosting ensemble learning technique, together with the StandardScaler feature rescaling technique.**
### Required Packages
```
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
wr.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and use the head function to display the first few rows.
```
df=pd.read_csv(file_path) #reading file
df.head()#displaying initial entries
print('Number of rows are :',df.shape[0], ',and number of columns are :',df.shape[1])
df.columns.tolist()
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values, if any exist, and convert string-class data in the dataset by encoding it to integer classes.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and df.dtype in ["float64", "int64"]:
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique()) <= 2:
        return df
    else:
        un_EncodedT = np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df = LabelEncoder().fit_transform(df)
        EncodedT = [xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT, EncodedT))
        return df
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
plt.figure(figsize = (20, 12))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target]
x=X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
sns.countplot(Y,palette='pastel')
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123) #performing datasplitting
```
### StandardScaler
* It transforms the data so that its distribution has a mean of 0 and a standard deviation of 1
* In the case of multivariate data, this is done feature-wise
* We **fit** a StandardScaler object on the training data and transform that same data with the **fit_transform(X_train)** method, then scale the test set with the already-fitted scaler via **transform(X_test)**
```
scaler=StandardScaler() #making a object of StandardScaler
X_train=scaler.fit_transform(X_train) #fiting the data on the training set
X_test=scaler.transform(X_test) #scaling testing set
```
* Now that our data is scaled, let's train the model
## Model
**GradientBoostingClassifier**
Gradient Boosting builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions.In each stage nclasses regression trees are fit on the negative gradient of the binomial or multinomial deviance loss function.
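The stage-wise additive idea can be illustrated with a toy sketch in plain Python (this is not sklearn's implementation): each stage fits a weak learner, here reduced to a single constant, to the current residuals and adds a shrunken version of it to the ensemble prediction.

```python
# Toy stage-wise boosting on residuals with a constant "weak learner".
def boost(y, n_stages=3, learning_rate=0.5):
    pred = [0.0] * len(y)
    for _ in range(n_stages):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        step = sum(residuals) / len(residuals)  # weakest possible learner: one constant
        pred = [pi + learning_rate * step for pi in pred]
    return pred

boost([1.0, 1.0, 1.0])  # -> [0.875, 0.875, 0.875], converging toward y
```

Real gradient boosting replaces the constant with a regression tree fit to the (negative) gradient of the loss, but the shrinkage-and-accumulate loop is the same.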
#### Model Tuning Parameters
1. loss : {‘deviance’, ‘exponential’}, default=’deviance’
> The loss function to be optimized. ‘deviance’ refers to deviance (= logistic regression) for classification with probabilistic outputs. For loss ‘exponential’ gradient boosting recovers the AdaBoost algorithm.
2. learning_ratefloat, default=0.1
> Learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators.
3. n_estimators : int, default=100
> The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting, so a large number usually results in better performance.
4. criterion : {‘friedman_mse’, ‘mse’, ‘mae’}, default=’friedman_mse’
> The function to measure the quality of a split. Supported criteria are ‘friedman_mse’ for the mean squared error with improvement score by Friedman, ‘mse’ for mean squared error, and ‘mae’ for the mean absolute error. The default value of ‘friedman_mse’ is generally the best as it can provide a better approximation in some cases.
5. max_depth : int, default=3
> The maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables.
6. max_features : {‘auto’, ‘sqrt’, ‘log2’}, int or float, default=None
> The number of features to consider when looking for the best split:
7. random_state : int, RandomState instance or None, default=None
> Controls the random seed given to each tree estimator at each boosting iteration, and the splitting of the training data to obtain a validation set if `n_iter_no_change` is not None.
8. verbose : int, default=0
> Controls the verbosity when fitting and predicting.
9. n_iter_no_change : int, default=None
> n_iter_no_change is used to decide if early stopping will be used to terminate training when validation score is not improving. By default it is set to None to disable early stopping. If set to a number, it will set aside validation_fraction size of the training data as validation and terminate training when validation score is not improving in all of the previous n_iter_no_change numbers of iterations. The split is stratified.
10. tol : float, default=1e-4
> Tolerance for the early stopping. When the loss is not improving by at least tol for <code>n_iter_no_change</code> iterations (if set to a number), the training stops.
```
#training the GradientBoostingClassifier
model = GradientBoostingClassifier(random_state = 50)
model.fit(X_train, y_train)
```
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires each sample's entire label set to be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
#prediction on testing set
prediction=model.predict(X_test)
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
# plot_confusion_matrix(model, X_test, y_test, cmap=plt.cm.Blues)
cf_matrix=confusion_matrix(y_test,prediction)
plt.figure(figsize=(7,6))
sns.heatmap(cf_matrix,annot=True,fmt="d")
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions were correct and how many were not, per class.
* **where**:
 - Precision:- accuracy of the positive predictions.
 - Recall:- fraction of actual positives that were correctly identified.
 - f1-score:- harmonic mean of precision and recall.
 - support:- the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(X_test)))
```
#### Feature Importances.
The Feature importance refers to techniques that assign a score to features based on how useful they are for making the prediction.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
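Note that impurity-based importances from tree ensembles can be biased toward high-cardinality features; permutation importance is a model-agnostic alternative. A minimal sketch on synthetic data (the data and values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Illustrative synthetic data, not the notebook's dataset
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in score it causes
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)  # one importance score per feature
```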
#### Creator: Vipin Kumar , Github: [Profile](https://github.com/devVipin01)
# Features selection for multiple linear regression
Following is an example taken from the masterpiece book *An Introduction to Statistical Learning* by James, Witten, Hastie and Tibshirani. It is based on an Advertising dataset, available on the accompanying web site: http://www-bcf.usc.edu/~gareth/ISL/data.html
The dataset contains statistics about the sales of a product in 200 different markets, together with advertising budgets in each of these markets for different media channels: TV, radio and newspaper.
Imagine you are responsible for marketing and need to prepare a new advertising plan for next year.
## Import Advertising data
```
import pandas as pd
ad = pd.read_csv("../datasets/advertising.csv", index_col=0)
ad.info()
ad.describe()
ad.head()
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(ad.TV, ad.Sales, color='blue', label="TV")
plt.scatter(ad.Radio, ad.Sales, color='green', label='Radio')
plt.scatter(ad.Newspaper, ad.Sales, color='red', label='Newspaper')
plt.legend(loc="lower right")
plt.title("Sales vs. Advertising")
plt.xlabel("Advertising [1000 $]")
plt.ylabel("Sales [Thousands of units]")
plt.grid()
plt.show()
ad.corr()
plt.imshow(ad.corr(), cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
tick_marks = [i for i in range(len(ad.columns))]
plt.xticks(tick_marks, ad.columns, rotation='vertical')
plt.yticks(tick_marks, ad.columns)
```
## Is there a relationship between sales and advertising?
First of all, we fit a regression line using the Ordinary Least Square algorithm, i.e. the line that minimises the squared differences between the actual Sales and the line itself.
The multiple linear regression model takes the form:
Sales = β0 + β1\*TV + β2\*Radio + β3\*Newspaper + ε, where Beta are the regression coefficients we want to find and epsilon is the error that we want to minimise.
For this we use the statsmodels package and its *ols* function.
### Fit the LR model
```
import statsmodels.formula.api as sm
modelAll = sm.ols('Sales ~ TV + Radio + Newspaper', ad).fit()
```
These are the beta coefficients calculated:
```
modelAll.params
```
We interpret these results as follows: for a given amount of TV and newspaper advertising, spending an additional 1000 dollars on radio advertising leads to an increase in sales by approximately 189 units.
In contrast, the coefficient for newspaper represents the average effect (negligible) of increasing newspaper spending by 1000 dollars while holding TV and radio fixed.
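The coefficients returned by `ols` are the ordinary least-squares solution; as a sanity check of the idea, here is a sketch solving the same minimization with plain numpy on synthetic data (the numbers are made up, not the advertising data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # three synthetic predictors
beta_true = np.array([3.0, 0.05, 0.19, -0.001])   # intercept + three slopes
y = beta_true[0] + X @ beta_true[1:] + rng.normal(scale=0.1, size=200)

# Add an intercept column and solve min ||y - X b||^2
X1 = np.column_stack([np.ones(len(X)), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(beta_hat)  # close to beta_true
```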
## Is at least one of the features useful in predicting Sales?
We use a hypothesis test to answer this question.
The most common hypothesis test involves testing the null hypothesis of:
H0: There is **no relationship** between the media and sales versus the alternative hypothesis
Ha: There is **some relationship** between the media and sales.
Mathematically, this corresponds to testing
H0: β1 = β2 = β3 = 0
versus
Ha: at least one βi is non-zero.
This hypothesis test is performed by computing the F-statistic
### The F-statistic
We need first of all the Residual Sum of Squares (RSS), i.e. the sum of all squared errors (differences between actual sales and predictions from the regression line). Remember this is the number that the regression is trying to minimise.
```
y_pred = modelAll.predict(ad)
import numpy as np
RSS = np.sum((y_pred - ad.Sales)**2)
RSS
```
Now we need the Total Sum of Squares (TSS): the total variance in the response Y, which can be thought of as the amount of variability inherent in the response before the regression is performed.
The distance from any point in a collection of data, to the mean of the data, is the deviation.
```
y_mean = np.mean(ad.Sales) # mean of sales
TSS = np.sum((ad.Sales - y_mean)**2)
TSS
```
The F-statistic is the ratio between (TSS-RSS)/p and RSS/(n-p-1)
```
p=3 # we have three predictors: TV, Radio and Newspaper
n=200 # we have 200 data points (input samples)
F = ((TSS-RSS)/p) / (RSS/(n-p-1))
F
```
When there is no relationship between the response and predictors, one would expect the F-statistic to take on a value close to 1.
On the other hand, if Ha is true, then we expect F to be greater than 1.
In this case, F is far larger than 1: at least one of the three advertising media must be related to sales.
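The F-statistic can also be converted into a p-value with the F-distribution; a sketch using scipy (the F value of roughly 570 is approximately the one computed above for this model):

```python
from scipy import stats

p, n = 3, 200
F = 570.3  # approximately the F value computed above
p_value = stats.f.sf(F, dfn=p, dfd=n - p - 1)  # survival function = P(F' > F)
print(p_value)  # effectively zero: strong evidence against H0
```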
## How strong is the relationship?
Once we have rejected the null hypothesis in favor of the alternative hypothesis, it is natural to want to quantify the extent to which the model fits the data.
The quality of a linear regression fit is typically assessed using two related quantities: the residual standard error (RSE) and the R2 statistic (the proportion of variance explained; a value close to 1 means the model explains most of the variability in the response).
```
RSE = np.sqrt(RSS/(n-p-1))  # residual standard error with p predictors
RSE
np.mean(ad.Sales)
R2 = 1 - RSS/TSS
R2
```
RSE is about 1.7 units while the mean value for the response is 14.02, indicating a percentage error of roughly 12%.
Second, the R2 statistic records the percentage of variability in the response that is explained by the predictors.
The predictors explain almost 90% of the variance in sales.
## Summary
*statsmodels* has a handy function that provides the above metrics in one single table:
```
modelAll.summary()
```
One thing to note is that R2 (R-squared above) will always increase when more variables are added to the model, even if those variables are only weakly associated with the response.
Therefore an adjusted R2 is provided, which is R2 adjusted by the number of predictors.
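The adjustment is a simple formula; a sketch with the approximate values obtained above (R2 ≈ 0.897, n = 200, p = 3):

```python
# Adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1)
R2, n, p = 0.897, 200, 3  # approximate values from the fit above
adj_R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1)
print(round(adj_R2, 3))  # slightly below the raw R2
```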
Another thing to note is that the summary table shows also a t-statistic and a p-value for each single feature.
These provide information about whether each individual predictor is related to the response (high t-statistic or low p-value).
But be careful about relying on these individual p-values instead of the overall F-statistic. It seems plausible that if any one of the individual p-values is very small, then at least one of the predictors is related to the response. However, this logic is flawed, especially with many predictors: purely by chance, about 5% of the p-values will fall below 0.05 (the effect infamously exploited in so-called p-hacking).
The F-statistic does not suffer from this problem because it adjusts for the number of predictors.
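This false-discovery effect is easy to simulate: regress a random response on many unrelated predictors and count how many individual tests come out "significant" (a sketch; the exact fraction varies with the seed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 100, 1000                 # many predictors, none truly related to y
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# p-value of each predictor's individual correlation with y
pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(p)])
print((pvals < 0.05).mean())  # close to 0.05 despite zero real signal
```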
## Which media contribute to sales?
To answer this question, we could examine the p-values associated with each predictor’s t-statistic. In the multiple linear regression above, the p-values for TV and radio are low, but the p-value for newspaper is not. This suggests that only TV and radio are related to sales.
But as just seen, if p is large then we are likely to make some false discoveries.
The task of determining which predictors are associated with the response, in order to fit a single model involving only those predictors, is referred to as **variable/feature selection**.
Ideally, we would perform variable selection by trying out many different models, each containing a different subset of the features.
We could then select the best of all the models considered (for example, the one with the smallest RSS and the largest R2). Other commonly used metrics are Mallows's Cp, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and adjusted R2. All of them appear in the model summary.
```
def evaluateModel(model):
    print("RSS = ", ((ad.Sales - model.predict())**2).sum())
    print("R2 = ", model.rsquared)
```
Unfortunately, there are a total of 2^p models that contain subsets of p variables.
For three predictors, it would still be manageable, only 8 models to fit and evaluate but as p increases, the number of models grows exponentially.
Instead, we can use other approaches. The three classical ways are the forward selection (start with no features and add one after the other until a threshold is reached); the backward selection (start with all features and remove one by one) and the mixed selection (a combination of the two).
We try here the **forward selection**.
### Forward selection
We start with a null model (no features), we then fit three (p=3) simple linear regressions and add to the null model the variable that results in the lowest RSS.
```
modelTV = sm.ols('Sales ~ TV', ad).fit()
modelTV.summary().tables[1]
evaluateModel(modelTV)
```
The model containing only TV as a predictor had an RSS=2103 and an R2 of 0.61
```
modelRadio = sm.ols('Sales ~ Radio', ad).fit()
modelRadio.summary().tables[1]
evaluateModel(modelRadio)
modelPaper = sm.ols('Sales ~ Newspaper', ad).fit()
modelPaper.summary().tables[1]
evaluateModel(modelPaper)
```
The lowest RSS and the highest R2 are for the TV medium.
Now we have a best model M1 which contains TV advertising.
We then add to this M1 model the variable that results
in the lowest RSS for the new two-variable model.
This approach is continued until some stopping rule is satisfied.
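The loop just described can also be written generically. Here is a sketch using plain numpy least squares (rather than the statsmodels calls used in this notebook) with RSS as the greedy criterion; the data and helper function are illustrative:

```python
import numpy as np

def forward_select(X, y, n_keep):
    """Greedily add the column that most reduces RSS (illustrative sketch)."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(n_keep):
        def rss(cols):
            # Fit an OLS model with an intercept on the given columns
            A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            return np.sum((y - A @ beta) ** 2)
        best = min(remaining, key=lambda c: rss(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Synthetic check: y depends strongly on column 2, then on column 0
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = 5 * X[:, 2] + 2 * X[:, 0] + rng.normal(size=100)
print(forward_select(X, y, 2))  # [2, 0] with this seed
```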
```
modelTVRadio = sm.ols('Sales ~ TV + Radio', ad).fit()
modelTVRadio.summary().tables[1]
evaluateModel(modelTVRadio)
modelTVPaper = sm.ols('Sales ~ TV + Newspaper', ad).fit()
modelTVPaper.summary().tables[1]
evaluateModel(modelTVPaper)
```
Well, the model with TV AND Radio greatly decreased RSS and increased R2, so that will be our M2 model.
Now, we have only three variables here. We can decide to stop at M2 or use an M3 model with all three variables.
Recall that we already fitted and evaluated a model with all features, just at the beginning.
```
evaluateModel(modelAll)
```
M3 is *slightly* better than M2 (but remember that R2 always increases when adding new variables), so we call the approach completed and decide that the M2 model with TV and Radio is a good compromise. Adding newspaper could lead to overfitting on new test data.
Next year there will be no budget for newspaper advertising; that amount will be used for TV and Radio instead.
```
modelTVRadio.summary()
```
### Plotting the model
The M2 model has two variables therefore can be plotted as a plane in a 3D chart.
```
modelTVRadio.params
```
The M2 model can be described by this equation:
Sales = 0.19 * Radio + 0.05 * TV + 2.9, which we can write as:
0.19*x + 0.05*y - z + 2.9 = 0
Its normal is (0.19, 0.05, -1)
and a point on the plane is (-2.9/0.19,0,0) = (-15.26,0,0)
```
normal = np.array([0.19,0.05,-1])
point = np.array([-15.26,0,0])
# a plane is a*x + b*y +c*z + d = 0
# [a,b,c] is the normal. Thus, we have to calculate
# d and we're set
d = -np.sum(point*normal) # dot product
# create x,y
x, y = np.meshgrid(range(50), range(300))
# calculate corresponding z
z = (-normal[0]*x - normal[1]*y - d)*1./normal[2]
```
Let's plot the actual values as red points and the model predictions as a cyan plane:
```
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
fig.suptitle('Regression: Sales ~ Radio + TV Advertising')
ax = Axes3D(fig)
ax.set_xlabel('Radio')
ax.set_ylabel('TV')
ax.set_zlabel('Sales')
ax.scatter(ad.Radio, ad.TV, ad.Sales, c='red')
ax.plot_surface(x,y,z, color='cyan', alpha=0.3)
```
## Is there synergy among the advertising media?
Adding radio to the model leads to a substantial improvement in R2. This implies that a model that uses TV and radio expenditures to predict sales is substantially better than one that uses only TV advertising.
In our previous analysis of the Advertising data, we concluded that both TV and radio seem to be associated with sales. The linear models that formed the basis for this conclusion assumed that the effect on sales of increasing one advertising medium is independent of the amount spent on the other media.
For example, the linear model states that the average effect on sales of a one-unit increase in TV is always β1, regardless of the amount spent on radio.
However, this simple model may be incorrect. Suppose that spending money on radio advertising actually increases the effectiveness of TV advertising, so that the slope term for TV should increase as radio increases. In this situation, given a fixed budget of $100K spending half on radio and half on TV may increase sales more than allocating the entire amount to either TV or to radio.
In marketing, this is known as a **synergy effect**. The figure above suggests that such an effect may be present in the advertising data. Notice that when levels of either TV or radio are low, then the true sales are lower than predicted by the linear model. But when advertising is split between the two media, then the model tends to underestimate sales.
```
modelSynergy = sm.ols('Sales ~ TV + Radio + TV*Radio', ad).fit()
modelSynergy.summary().tables[1]
```
The results strongly suggest that the model that includes the interaction term is superior to the model that contains only main effects. The p-value for the interaction term, TV×radio, is extremely low, indicating that there is strong evidence for Ha : β3 not zero. In other words, it is clear that the true relationship is not additive.
```
evaluateModel(modelSynergy)
```
The R2 for this model is 96.8 %, compared to only 89.7% for the model M2 that predicts sales using TV and radio without an interaction term. This means that (96.8 − 89.7)/(100 − 89.7) = 69% of the variability in sales that remains after fitting the additive model has been explained by the interaction term.
A linear model that uses radio, TV, and an interaction between the two to predict sales takes the form:
sales = β0 + β1 × TV + β2 × radio + β3 × (radio×TV) + ε
```
modelSynergy.params
```
We can interpret β3 as the increase in the effectiveness of TV advertising for a one unit increase in radio advertising (or vice-versa).
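With the interaction term, the marginal effect of TV is no longer constant: ∂sales/∂TV = β1 + β3·radio. A sketch with approximate fitted coefficients for this model (treat the exact numbers as illustrative):

```python
# Approximate coefficients for this model: intercept, TV, radio, TV*radio
b0, b1, b2, b3 = 6.75, 0.0191, 0.0289, 0.0011

def tv_slope(radio_budget):
    # Effect on sales of one extra unit of TV advertising,
    # which now depends on the current radio budget
    return b1 + b3 * radio_budget

print(tv_slope(0), tv_slope(50))  # the TV slope grows with radio spending
```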
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import os
import sys, os
sys.path.insert(0, os.path.abspath('..'))
import data_generation.diff_utils
import data_generation.mwdiff.mwdiffs_to_tsv
import numpy as np
# load split data
out_dir = "../../data/figshare"
in_dir = "../../data/annotations/split"
splits = ["train", "dev", "test"]
dfs = []
for split in splits:
    df = pd.read_csv(os.path.join(in_dir, split, 'annotations.tsv'), sep = '\t')
    df['split'] = split
    dfs.append(df)
df = pd.concat(dfs)
df.shape
# rename workers
df_workers = df[['_worker_id']].drop_duplicates()
df_workers['anon_id'] = range(df_workers.shape[0])
df = df.merge(df_workers, how = 'inner', on = '_worker_id')
df = df.rename(columns={
'other': 'other_attack',
'quoting': 'quoting_attack',
'recipient': 'recipient_attack',
'third_party': 'third_party_attack'
})
df.shape
# save worker id mapping
df_workers.to_csv(os.path.join( "../../data/figshare", 'attack_annotations_worker_id_map.tsv'), sep = '\t', index = False)
df_workers.to_csv(os.path.join( "../../data/figshare", 'aggression_annotations_worker_id_map.tsv'), sep = '\t', index = False)
# get set of labeled comments
df_comments = df.drop_duplicates(subset = ['rev_id']).copy()
df_comments['logged_in'] = df_comments['user_id'].notnull()
df_comments['year'] = pd.to_datetime(df_comments['rev_timestamp']).apply(lambda x: x.year)
# fix legacy special token issues
df_comments['diff'] = df_comments['diff'].apply(data_generation.mwdiff.mwdiffs_to_tsv.replace_special_chars)
df_comments['diff'] = df_comments['diff'].apply(lambda x: x.replace('TAB', 'TAB_TOKEN'))
df_comments['diff'] = df_comments['diff'].apply(lambda x: x.replace('NEWLINE', 'NEWLINE_TOKEN'))
df_comments['diff'] = df_comments['diff'].apply(lambda x: x.replace('"', '`'))
# apply latest version of clean and filter
df_comments = data_generation.diff_utils.clean_and_filter(df_comments)
# clean and filter drops some comments, so drop associated labels
df = df.merge(df_comments[['rev_id']], how = 'inner', on = 'rev_id' )
df.columns
# rename some columns
df_comments = df_comments.rename(columns={
'clean_diff': 'comment',
'rev_timestamp': 'timestamp',
})
order = ['rev_id', 'comment', 'year', 'logged_in', 'ns', 'sample', 'split']
df_comments = df_comments[order]
df_comments = df_comments.sort_values('rev_id')
df_comments.shape
# get set of human labels
df_attack_labels = df[['rev_id', 'anon_id', 'quoting_attack',
'recipient_attack', 'third_party_attack', 'other_attack', 'attack']]
df_attack_labels = df_attack_labels.rename(columns={
'anon_id': 'worker_id',
})
df_attack_labels = df_attack_labels.sort_values('rev_id')
df_aggression_labels = df[['rev_id', 'anon_id', 'aggression', 'aggression_score']]
df_aggression_labels = df_aggression_labels.rename(columns={
'anon_id': 'worker_id',
})
df_aggression_labels = df_aggression_labels.sort_values('rev_id')
# save dfs
df_comments.to_csv(os.path.join( "../../data/figshare", 'attack_annotated_comments.tsv'), sep = '\t', index = False)
df_comments.to_csv(os.path.join( "../../data/figshare", 'aggression_annotated_comments.tsv'), sep = '\t', index = False)
df_attack_labels.to_csv(os.path.join( "../../data/figshare", 'attack_annotations.tsv'), sep = '\t', index = False)
df_aggression_labels.to_csv(os.path.join( "../../data/figshare", 'aggression_annotations.tsv'), sep = '\t', index = False)
pd.read_csv(os.path.join( "../../data/figshare", 'attack_annotated_comments.tsv'), sep = '\t').shape
pd.read_csv(os.path.join( "../../data/figshare", 'attack_annotations.tsv'), sep = '\t').drop_duplicates(subset = 'rev_id').shape
df_comments['logged_in'].value_counts()
df_comments.head()
df_attack_labels.head()
```
### Quickstart
To run the code below:
1. Click on the cell to select it.
2. Press `SHIFT+ENTER` on your keyboard or press the play button in the toolbar above.
Feel free to create new cells using the plus button in the toolbar, or by pressing `SHIFT+ENTER` while this cell is selected.
# Example 2 (Smooth pursuit eye movements) – interactive version based on *matplotlib*
This is an interactive version of the idealized model of the smooth pursuit reflex. This version does not explain the model itself, but shows how Brian's "runtime mode" can be used to interact with a running simulation. In this mode, the generated code based on the model descriptions is seamlessly integrated with the Python environment, and arbitrary Python code can be executed at any point during the simulation via a specially annotated function called a "network operation".
For a non-interactive version of this example which generates the article's figure see [this notebook](example_2_eye_movements.ipynb).
This notebook is based on *matplotlib* and *ipympl*, which enables quick updates of the plot in real-time. For a version based on *plotly* (as the other, non-interactive examples), see [this notebook](example_2_eye_movements_interactive.ipynb).
```
# Needs ipywidgets and ipympl
%matplotlib widget
import ipywidgets as widgets
import threading
from brian2 import *
plt.ioff()
```
The model itself (mostly identical to the [non-interactive example](example_2_eye_movements.ipynb), except that some of the constants are included as parameters in the equation and can therefore change during the simulation):
```
alpha = (1/(50*ms))**2 # characteristic relaxation time is 50 ms
beta = 1/(50*ms) # friction parameter
eqs_eye = '''
dx/dt = velocity : 1
dvelocity/dt = alpha*(x0-x)-beta*velocity : 1/second
dx0/dt = -x0/tau_muscle : 1
dx_object/dt = (noise - x_object)/tau_object: 1
dnoise/dt = -noise/tau_object + tau_object**-0.5*xi : 1
tau_object : second
tau_muscle : second
'''
eye = NeuronGroup(1, model=eqs_eye, method='euler')
taum = 20*ms
motoneurons = NeuronGroup(2, model= 'dv/dt = -v/taum : 1', threshold = 'v>1',
reset = 'v=0', refractory = 5*ms, method='exact')
motosynapses = Synapses(motoneurons, eye, model = 'w : 1', on_pre = 'x0+=w')
motosynapses.connect() # connects all motoneurons to the eye
motosynapses.w = [-0.5,0.5]
N = 20
width = 2./N # width of receptive field
gain = 4.
eqs_retina = '''
I = gain*exp(-((x_object-x_eye-x_neuron)/width)**2) : 1
x_neuron : 1 (constant)
x_object : 1 (linked) # position of the object
x_eye : 1 (linked) # position of the eye
dv/dt = (I-(1+gs)*v)/taum : 1
gs : 1 # total synaptic conductance
'''
retina = NeuronGroup(N, model = eqs_retina, threshold = 'v>1', reset = 'v=0', method='exact')
retina.v = 'rand()'
retina.x_eye = linked_var(eye, 'x')
retina.x_object = linked_var(eye, 'x_object')
retina.x_neuron = '-1.0 + 2.0*i/(N-1)'
sensorimotor_synapses = Synapses(retina, motoneurons, model = 'w : 1 (constant)', on_pre = 'v+=w')
sensorimotor_synapses.connect(j = 'int(x_neuron_pre > 0)')
sensorimotor_synapses.w = '20*abs(x_neuron_pre)/N_pre'
M = StateMonitor(eye, ('x', 'x0', 'x_object'), record = True)
S_retina = SpikeMonitor(retina)
S_motoneurons = SpikeMonitor(motoneurons)
```
We create an empty plot that will be updated during the run:
```
# Plot preparation
fig, (ax_spikes, ax_position) = plt.subplots(2, 1, gridspec_kw={'height_ratios': (2, 1)}, sharex=True)
h_retina = ax_spikes.plot([], [], '|k', markeredgecolor='k', label='retina')[0]
h_left = ax_spikes.plot([], [], '|', color='C0', markeredgecolor='C0', label='left motoneuron')[0]
h_right = ax_spikes.plot([], [], '|', color='C1', markeredgecolor='C1', label='right motoneuron')[0]
ax_spikes.set(yticks=[], ylabel='neuron index', xticks=[], xlim=(0, 10), ylim=(0, 22))
ax_spikes.spines['bottom'].set_visible(False)
ax_position.axhline(0, color='gray')
h_eye = ax_position.plot([], [], 'k', label='eye')[0]
h_object = ax_position.plot([], [], color='C2', label='object')[0]
ax_position.set(yticks=[-1, 1], yticklabels=['left', 'right'], xlabel='time (s)',
                xticks=np.arange(0, 11, 2), xticklabels=np.arange(0, 11, 2)-10,
                xlim=(0, 10), ylim=(-1, 1))
ax_position.legend(loc='upper right', bbox_to_anchor=(1.0, 2.0));
```
We now create interactive widgets that the user can use to start/stop the simulation, as well as for setting certain simulation parameters.
```
time_label = widgets.Label(value='Time: 0 s')
start_stop_button = widgets.Button(tooltip='Start simulation', icon='play')
tau_obj_slider = widgets.FloatSlider(orientation='horizontal', description='tau_object',
value=500, min=100, max=1000)
tau_muscle_slider = widgets.FloatSlider(orientation='horizontal', description='tau_muscle',
value=20, min=5, max=100)
weight_slider = widgets.FloatSlider(orientation='horizontal', description='w_muscle',
value=0.5, min=0, max=2)
sliders = widgets.VBox([widgets.HBox([time_label, start_stop_button]),
tau_obj_slider, tau_muscle_slider, weight_slider])
layout = widgets.HBox([fig.canvas, sliders])
```
We interact with the running simulation via a "network operation", a Python function that will be regularly called by Brian during the simulation run (here, every 100ms of biological time). This function can access arbitrary attributes of the model to get or set their values. We use this here to 1) update the plot with the data from the last second and 2) set parameters of the model to the values requested by the user.
```
should_stop = False

@network_operation(dt=100*ms)
def plot_output(t):
    cutoff = (t - 10*second)
    # Plot the data of the last 10 seconds
    indices = S_retina.t > cutoff
    h_retina.set_data((S_retina.t[indices] - cutoff)/second, S_retina.i[indices])
    motoneuron_trains = S_motoneurons.spike_trains()
    to_plot = motoneuron_trains[0][motoneuron_trains[0] > cutoff]
    h_left.set_data((to_plot - cutoff)/second, np.ones(len(to_plot))*N)
    to_plot = motoneuron_trains[1][motoneuron_trains[1] > cutoff]
    h_right.set_data((to_plot - cutoff)/second, np.ones(len(to_plot))*(N+1))
    indices = M.t > cutoff
    h_eye.set_data((M.t[indices] - cutoff)/second, M.x[0][indices])
    h_object.set_data((M.t[indices] - cutoff)/second, M.x_object[0][indices])
    fig.canvas.draw_idle()
    time_label.value = 'Time: {:.1f}s'.format(float(t[:]))
    # Set the simulation parameters according to user settings
    eye.tau_object = tau_obj_slider.value*ms
    eye.tau_muscle = tau_muscle_slider.value*ms
    motosynapses.w = [-weight_slider.value, weight_slider.value]
    if should_stop:
        net.stop()
```
We store the model and the "network operation" in a `Network` object, and store its current state to allow for repeated execution.
```
net = Network(collect())
net.store()
```
We now define two helper functions used to start/stop simulations. The actual simulation will be run in a background thread so that the user interface stays reactive while the simulation is running:
```
def do_run(runtime):
    net.restore()
    net.run(runtime)

running = False

def button_pressed(b):
    global running
    global should_stop
    if running:
        should_stop = True
        running = False
        start_stop_button.tooltip = 'Start simulation'
        start_stop_button.icon = 'play'
    else:
        should_stop = False
        running = True
        time_label.value = 'starting...'
        start_stop_button.tooltip = 'Stop simulation'
        start_stop_button.icon = 'stop'
        thread = threading.Thread(target=do_run, args=(100*second, ))
        thread.start()

start_stop_button.on_click(button_pressed)
```
We are now ready to display the plot and user interface, which can then be used to start the simulation and interact with the simulation parameters:
```
display(layout)
```
# Detect the best variables for each role so that we have variables to compare performance between a random player and our dataset
```
from datetime import datetime, timedelta
from functools import reduce
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 20)
import sklearn.linear_model as linear
import sklearn.tree as tree
import sklearn.ensemble as rf
import sklearn.svm as svm
import sklearn.neural_network as neural
import sklearn.feature_selection as feat
import sklearn.metrics as metrics
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
```
# Prediction using isolated player ingame statistics
```
match_info = pd.read_csv("../data/match_info.csv")
laning = pd.read_csv("../data/player_laning_stats.csv")
combat = pd.read_csv("../data/player_combat_stats.csv")
flair = pd.read_csv("../data/player_flair_stats.csv")
objectives = pd.read_csv("../data/player_objective_stats.csv")
```
**Define a function that handles the cleaning and prediction for us:**
```
def get_prediction(data: pd.DataFrame, lane, model, key_features=[], train=0.9, random_seed=12, feature_selection=0, sorted=False):
    # selecting the lane and dropping non-useful columns
    data = data.loc[data["lane"] == lane]
    data = data.drop(columns=["match_id", "account_id", "region", "champion", "lane"])
    # sorted=True indicates the data carries patch/date columns, dropped here
    try:
        data = data.drop(columns=["patch", "date_created"])
    except KeyError:
        print("sorting columns not found.")
    # defining our target and variables
    target = data["won"]
    if len(key_features) > 0:
        variables = data[key_features]
    else:
        variables = data.loc[:, data.columns != "won"]
    # creating a list of columns so that we can return the top features
    columns = variables.columns.to_list()
    # standardizing our variables
    scale = StandardScaler()
    scale.fit(variables)
    variables = scale.transform(variables)
    del(scale)
    # splitting our test and train data
    variables_train, variables_test, target_train, target_test = train_test_split(variables, target, train_size=train, random_state=random_seed)
    # training the model
    model = model()
    model.fit(variables_train, target_train)
    # implementing feature selection if needed
    try:
        if feature_selection > 0:
            # recursive feature elimination
            rfe = feat.RFE(model, n_features_to_select=feature_selection)
            rfe.fit(variables_train, target_train)
    except Exception:
        feature_selection = 0
    # returning multiple metrics
    results = {
        "accuracy": round(model.score(variables_test, target_test), 3),
        #"balanced_accuracy": round(metrics.balanced_accuracy_score(target_test, model.predict(variables_test)), 3),
        #"precision": round(metrics.precision_score(target_test, model.predict(variables_test)), 3),
        #"avg_precision": round(metrics.average_precision_score(target_train, model.predict(variables_train)), 3),
        "key_features": [columns[index] for index, ranking in enumerate(rfe.ranking_) if ranking < 4] if feature_selection > 0 else "No feature selection",
    }
    return results
```
## 1. Laning stats
```
get_prediction(laning, "TOP", tree.DecisionTreeClassifier)
```
## 2. Combat stats
```
get_prediction(combat, "TOP", tree.DecisionTreeClassifier)
```
## 3. Objective stats
```
get_prediction(objectives, "TOP", tree.DecisionTreeClassifier)
```
## 4. Flair stats
```
get_prediction(flair, "TOP", tree.DecisionTreeClassifier)
```
**From the above examples we see that we cannot get an accurate prediction using isolated statistics; we need more data. Let's therefore combine all of the player's ingame statistics.**
# Prediction with merged player ingame statistics
## 1. Merge the stats and make a prediction for each role
```
shared = ["match_id", "account_id", "region", "champion", "lane", "won"]
complete_df = (pd.merge(laning, combat, on=shared, how="left")
.merge(objectives, on=shared, how="left")
.merge(flair, on=shared, how="left")
.fillna(0))
for x in ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]:
    print(f"{x}: {get_prediction(complete_df, x, rf.RandomForestClassifier, random_seed=12)}")
del(x)
```
## 2. Let's find the best features for TOP lane
```
key_features = get_prediction(complete_df, "TOP", rf.RandomForestClassifier, random_seed=12, feature_selection=9)["key_features"]
key_features
get_prediction(complete_df, "TOP", rf.RandomForestClassifier, random_seed=12, key_features=key_features)
print(f"{len(key_features)} features selected out of {len(complete_df.drop(columns=['account_id', 'region', 'champion', 'lane']).columns)}")
```
# Using time and patch variables to see if that increases our accuracy
```
match_info["patch"] = pd.to_numeric(match_info["patch"], errors="coerce")
match_info.head()
sorted_df = complete_df.merge(match_info[["match_id", "patch", "date_created"]], on="match_id", how="left").dropna()
sorted_df.head()
```
## 1. By Patch
```
last_patch = sorted_df.loc[sorted_df["patch"] == 10.14]
patches = sorted_df.loc[sorted_df["patch"] > 10.12]
patches_3 = sorted_df.loc[sorted_df["patch"] >= 10.12]
```
**Last patch**
```
for x in ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]:
    print(f"{x}: {get_prediction(last_patch, x, rf.RandomForestClassifier, random_seed=12, sorted=True)}")
del(x)
```
**Last 2 patches**
```
for x in ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]:
    print(f"{x}: {get_prediction(patches, x, rf.RandomForestClassifier, random_seed=12, sorted=True)}")
del(x)
```
**Last 3 patches**
```
for x in ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]:
    print(f"{x}: {get_prediction(patches_3, x, rf.RandomForestClassifier, random_seed=12, sorted=True)}")
del(x)
```
## 2. By date
```
def get_date(days: int) -> str:
    since = pd.to_datetime(sorted_df["date_created"].max()).date() - timedelta(days=days)
    # zero-padded YYYY-MM-DD, same format as the original manual padding
    return since.strftime("%Y-%m-%d")
last_month = sorted_df.loc[sorted_df["date_created"] > get_date(30)]
two_weeks = sorted_df.loc[sorted_df["date_created"] > get_date(14)]
one_week = sorted_df.loc[sorted_df["date_created"] > get_date(7)]
```
**Last month**
```
for x in ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]:
    print(f"{x}: {get_prediction(last_month, x, rf.RandomForestClassifier, random_seed=12, sorted=True)}")
del(x)
```
**Last two weeks**
```
for x in ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]:
    print(f"{x}: {get_prediction(two_weeks, x, rf.RandomForestClassifier, random_seed=12, sorted=True)}")
del(x)
```
**Last week**
```
for x in ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]:
    print(f"{x}: {get_prediction(one_week, x, rf.RandomForestClassifier, random_seed=12, sorted=True)}")
del(x)
```
# Run multiple algorithms with and without features and store the results so that we can make a better analysis
```
def get_model_accuracy():
# store results on a dictionary for future analysis
model_accuracy = {
"model": ["rf_classifier", "linear_ridge", "linear_logistic", "linear_svc", "linear_stochastic", "decision_tree", "neural_network", "support_vc"],
"accuracy_avg": [],
}
# define the models to use
models = {
"rf_classifier": rf.RandomForestClassifier,
"linear_ridge": linear.RidgeClassifier,
"linear_logistic" : linear.LogisticRegression,
"linear_svc": svm.LinearSVC,
"linear_stochastic": linear.SGDClassifier,
"decision_tree": tree.DecisionTreeClassifier,
"neural_network": neural.MLPClassifier,
"support_vc": svm.SVC,
}
# define the lanes
lanes = ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]
# make predictions without features
for i, model in enumerate(models):
results = []
# return mean avg score without features
for lane in lanes:
prediction = get_prediction(last_month, lane, models[model], sorted=True)
results.append(prediction["accuracy"])
# append mean prediction result to model_accuracy
        model_accuracy["accuracy_avg"].append(round(float(np.mean(results)), 2))
print(f"Done at {i}")
print("Done without features")
return model_accuracy
model_accuracy = get_model_accuracy()
model_accuracy = pd.DataFrame(model_accuracy)
model_accuracy
```
**From the average accuracies I determined that RandomForestClassifier was the best approach, since it is not as computationally expensive as support vector classification or neural networks.**
```
model_accuracy.to_pickle("../data/model_accuracy.pkl", protocol=4)
```
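The pickled table can later be reloaded and ranked by average accuracy. A minimal sketch with made-up numbers (only the column layout matches the `model_accuracy` frame above):

```python
import pandas as pd

# Hypothetical accuracy table with the same columns as model_accuracy above
acc = pd.DataFrame({
    "model": ["rf_classifier", "linear_ridge", "linear_svc"],
    "accuracy_avg": [0.61, 0.58, 0.60],
})

# Sort models from best to worst average accuracy
ranked = acc.sort_values("accuracy_avg", ascending=False).reset_index(drop=True)
print(ranked.loc[0, "model"])  # → rf_classifier
```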
## Feature selection and period accuracy
```
def get_model_acc_period():
model_by_period = {
"period": ["complete", "last_patch", "last_2_patches", "last_3_patches", "last_month", "last_two_weeks", "last_week"],
"TOP": [],
"JUNGLE": [],
"MIDDLE": [],
"BOTTOM": [],
"SUPPORT": [],
}
lane_features = {
"TOP": [],
"JUNGLE": [],
"MIDDLE": [],
"BOTTOM": [],
"SUPPORT": [],
}
# define the iterations
periods = {
"complete": complete_df,
"last_patch": last_patch,
"last_2_patches": patches,
"last_3_patches": patches_3,
"last_month": last_month,
"last_two_weeks": two_weeks,
"last_week": one_week
}
# define the lanes
lanes = ["TOP", "JUNGLE", "MIDDLE", "BOTTOM", "SUPPORT"]
# without features
for period in periods:
for lane in lanes:
prediction = get_prediction(periods[period], lane, rf.RandomForestClassifier)
model_by_period[lane].append(prediction["accuracy"])
for lane in lane_features:
prediction = get_prediction(last_month, lane, rf.RandomForestClassifier, feature_selection=7)
lane_features[lane].append(prediction["key_features"])
return [model_by_period, lane_features]
results = get_model_acc_period()
results[0]
pd.DataFrame(results[0]).to_pickle("../data/model_by_period.pkl", protocol=4)
pd.DataFrame(results[1]).to_pickle("../data/lane_features.pkl", protocol=4)
```
# Sub-string divisibility
<p>The number, 1406357289, is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property.</p>
<p>Let <i>d</i><sub>1</sub> be the 1<sup>st</sup> digit, <i>d</i><sub>2</sub> be the 2<sup>nd</sup> digit, and so on. In this way, we note the following:</p>
<ul><li><i>d</i><sub>2</sub><i>d</i><sub>3</sub><i>d</i><sub>4</sub>=406 is divisible by 2</li>
<li><i>d</i><sub>3</sub><i>d</i><sub>4</sub><i>d</i><sub>5</sub>=063 is divisible by 3</li>
<li><i>d</i><sub>4</sub><i>d</i><sub>5</sub><i>d</i><sub>6</sub>=635 is divisible by 5</li>
<li><i>d</i><sub>5</sub><i>d</i><sub>6</sub><i>d</i><sub>7</sub>=357 is divisible by 7</li>
<li><i>d</i><sub>6</sub><i>d</i><sub>7</sub><i>d</i><sub>8</sub>=572 is divisible by 11</li>
<li><i>d</i><sub>7</sub><i>d</i><sub>8</sub><i>d</i><sub>9</sub>=728 is divisible by 13</li>
<li><i>d</i><sub>8</sub><i>d</i><sub>9</sub><i>d</i><sub>10</sub>=289 is divisible by 17</li>
</ul><p>Find the sum of all 0 to 9 pandigital numbers with this property.</p>
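The stated property of 1406357289 can be verified directly before searching (`has_property` is a helper written here for illustration):

```python
def has_property(n: int) -> bool:
    """Check the sub-string divisibility property of a 10-digit number."""
    digits = str(n)
    primes = [2, 3, 5, 7, 11, 13, 17]
    # d2d3d4 must be divisible by 2, d3d4d5 by 3, ..., d8d9d10 by 17
    return all(int(digits[i + 1:i + 4]) % p == 0 for i, p in enumerate(primes))

print(has_property(1406357289))  # → True
```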
---
### Idea
At first glance, we get:
$
\begin{cases}
d_4 \bmod 2 = 0 \\
(d_3+d_4+d_5) \bmod 3 = 0 \\
d_6 \in \{0, 5\}
\end{cases}
$
If $d_6=0$, then for $d_6d_7d_8 = 0d_7d_8$ to be divisible by 11 we would need $d_7 = d_8$, which violates the pandigital property.
So $d_6 = 5$; this is the key to the problem.
Starting from "$d_6d_7d_8$ is divisible by 11", $d_6d_7d_8$ must be one of [506, 517, 528, 539, 561, 572, 583, 594] (550 is excluded because it repeats a digit).
Then, from "$d_7d_8d_9$ is divisible by 13", $d_7d_8d_9$ must be one of [286, 390, 728, 832].
Then, from "$d_8d_9d_{10}$ is divisible by 17", $d_8d_9d_{10}$ must be one of [867, 901, 289].
Chaining these three conclusions, $d_6d_7d_8d_9d_{10}$ must be one of [52867, 53901, 57289].
Together with the first observation, this reduces the search space considerably.
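The three chained tails can be sanity-checked in a few lines (an illustrative verification, separate from the search itself):

```python
# Each tail fixes d6..d10; check the 11, 13 and 17 conditions plus distinctness
for tail in [52867, 53901, 57289]:
    s = str(tail)  # s = d6 d7 d8 d9 d10
    assert int(s[0:3]) % 11 == 0  # d6d7d8
    assert int(s[1:4]) % 13 == 0  # d7d8d9
    assert int(s[2:5]) % 17 == 0  # d8d9d10
    assert len(set(s)) == 5       # no repeated digits
print("all tails valid")
```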
---
```
from math import ceil
from itertools import permutations
from functools import reduce
# divisible by 11 (multiples of 11 in [500, 600); 550 is then dropped for its repeated digit)
[11 * i for i in range(ceil(500 / 11), ceil(600 / 11))]
# divisible by 13, starting with one of the d7d8 pairs above
# (out-of-range and digit-repeating results are dropped by hand)
[13 * ceil((i * 10) / 13) for i in [6, 17, 28, 39, 61, 72, 83, 94]]
# divisible by 17, starting with one of the d8d9 pairs above (323 is dropped for its repeated digit)
[17 * ceil((i % 100) * 10 / 17) for i in [286, 390, 728, 832]]
def get_remain_digits_permuations(last_five_digits):
remain = set(range(10)) - set(map(int, str(last_five_digits)))
return permutations(remain)
list(get_remain_digits_permuations(52867))[:10]
def is_candidate(permutation):
# divisible by 2 and 3
return permutation[3] % 2 == 0 and sum(permutation[2:5]) % 3 == 0
is_candidate((1,4,0,6,3))
def combine_digits(digits):
return reduce(lambda n, d: n*10+d, digits, 0)
combine_digits([2, 3, 4])
def is_substring_divisible(candidate):
# divisible by 7
return combine_digits(candidate[4:7]) % 7 == 0
def solve():
s = 0
for last_five_digits in [52867, 53901, 57289]:
for candidate in filter(is_candidate, get_remain_digits_permuations(last_five_digits)):
if is_substring_divisible(candidate + tuple(map(int, str(last_five_digits)))):
s += (combine_digits(candidate) * int(1e5) + last_five_digits)
return s
solve()
```
```
#imports
import os
import pandas as pd
from collections import Counter
from sklearn.model_selection import train_test_split
import torch
import torch.optim as optim
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence
from torch.nn.utils.rnn import pack_padded_sequence
from torch.utils.data import DataLoader,Dataset
import spacy
import statistics
import torchtext
import torchvision
import torchvision.transforms as transforms
import torchvision.models as models
from PIL import Image
import matplotlib.pyplot as plt
import pytorch_lightning as pl
from pytorch_lightning import loggers as pl_loggers
from pytorch_lightning.metrics import Accuracy
torch.backends.cudnn.benchmark = True
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Hyperparameters
BATCH_SIZE = 128
NUM_WORKERS = 8
SPLIT_VAL = 0.2
EMBED_SIZE = 256
HIDDEN_SIZE = 256
NUM_LAYERS = 1
LRATE = 3e-5
MAX_EPOCHS = 100
NUM_GPU=1
class Vocabulary(object):
def __init__(self, freq_threshold, spacy_eng=None):
self.start_word = "<SOS>"
self.end_word = "<EOS>"
self.pad_word = "<PAD>"
self.unk_word = "<UNK>"
        # index 0 is <PAD> so that zero-padded batches line up with the loss's ignore_index
        self.itos = {0: self.pad_word, 1: self.start_word, 2: self.end_word, 3: self.unk_word}
        self.stoi = {self.pad_word: 0, self.start_word: 1, self.end_word: 2, self.unk_word: 3}
self.freq_threshold = freq_threshold
if spacy_eng==None:
self.spacy_eng = spacy.load('en_core_web_sm')
else:
self.spacy_eng = spacy_eng
def __len__(self):
return len(self.itos)
def tokenizer_eng(self, text):
tokenizer = [tok.text.lower() for tok in self.spacy_eng.tokenizer(text)]
return tokenizer
def build_vocabulary(self, sentence_list):
frequencies = {}
idx = 4
for sentence in sentence_list:
for word in self.tokenizer_eng(sentence):
if word not in frequencies:
frequencies[word] = 1
else:
frequencies[word] += 1
if frequencies[word] == self.freq_threshold:
self.stoi[word] = idx
self.itos[idx] = word
idx += 1
def numericalize(self, text):
tokenized_text = self.tokenizer_eng(text)
return [
self.stoi[token] if token in self.stoi else self.stoi["<UNK>"]
for token in tokenized_text
]
class FlickrDataset(Dataset):
def __init__(self, root_dir, caption_file, caption_delimiter='|',
image_column='image_name', text_column='caption_text',
transform=None, freq_threshold=5,
train=True, split_val=0.2):
self.root_dir = root_dir
self.caption_file = caption_file
self.caption_delimiter = caption_delimiter
self.image_column = image_column
self.text_column = text_column
self.dataframe = pd.read_csv(caption_file, delimiter=caption_delimiter)
self.transform = transform
self.vocab = Vocabulary(freq_threshold)
self.vocab.build_vocabulary(self.dataframe[self.text_column].tolist())
self.train = train
self.split_val = split_val
self._do_split_train_valid()
def _do_split_train_valid(self):
imgs_train, imgs_valid, caps_train, caps_valid = train_test_split(
self.dataframe[self.image_column], self.dataframe[self.text_column],
test_size=self.split_val, random_state=16
)
if self.train:
self.imgs = imgs_train
self.captions = caps_train
else:
self.imgs = imgs_valid
self.captions = caps_valid
self.imgs = self.imgs.tolist()
self.captions = self.captions.tolist()
def __len__(self):
return len(self.imgs)
def _numericalized_caption(self, caption):
numericalized_caption = [self.vocab.stoi["<SOS>"]]
numericalized_caption += self.vocab.numericalize(caption)
numericalized_caption.append(self.vocab.stoi["<EOS>"])
return numericalized_caption
def __getitem__(self, index):
caption = self.captions[index]
img_id = self.imgs[index]
img = Image.open(os.path.join(self.root_dir, img_id)).convert("RGB")
if self.transform is not None:
img = self.transform(img)
ncaption = self._numericalized_caption(caption)
return img, torch.tensor(ncaption)
class CaptionCollate:
def __init__(self, pad_idx, batch_first=True):
self.pad_idx = pad_idx
self.batch_first = batch_first
def __call__(self, batch):
batch.sort(key=lambda x: len(x[1]), reverse=True)
(images, captions) = zip(*batch)
imgs = [img.unsqueeze(0) for img in images]
imgs = torch.cat(imgs, dim=0)
lengths = [len(cap) for cap in captions]
targets = torch.zeros(len(captions), max(lengths)).long()
for idx, cap in enumerate(captions):
end = lengths[idx]
targets[idx, :end] = cap[:end]
return imgs, targets, lengths
def flickr8k_dataloader(root_folder, caption_file, transform, train=True,
batch_size=32, num_workers=8, shuffle=True, pin_memory=True):
dataset = FlickrDataset(root_folder, caption_file, transform=transform, train=train)
PAD_IDX = dataset.vocab.stoi["<PAD>"]
dataloader = DataLoader(dataset=dataset, batch_size=batch_size, num_workers=num_workers,
shuffle=shuffle, pin_memory=pin_memory,
collate_fn=CaptionCollate(pad_idx=PAD_IDX))
return dataloader, dataset
train_transform = transforms.Compose([
transforms.Resize((356, 356)),
transforms.RandomCrop((224, 224)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406),(0.229, 0.224, 0.225)),
])
valid_transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406),(0.229, 0.224, 0.225)),
])
root_dir = "data/flickr30k/images/"
caption_file = "data/flickr30k/captions.txt"
train_loader, trainset = flickr8k_dataloader(root_dir, caption_file, transform=train_transform,
num_workers=NUM_WORKERS, shuffle=True, train=True)
valid_loader, validset = flickr8k_dataloader(root_dir, caption_file, transform=valid_transform,
num_workers=NUM_WORKERS, shuffle=False, train=False)
img, cap = trainset[200]
print([trainset.vocab.itos[token] for token in cap.tolist()])
len(trainset), len(validset)
imgs, caps, lengths = next(iter(valid_loader))
targets = pack_padded_sequence(caps, lengths, batch_first=True)
targets[0][0]
print([trainset.vocab.itos[token] for token in targets[0].tolist()])
print(imgs[0].shape)
class Encoder(nn.Module):
def __init__(self, embed_size, train_cnn=False, resnet_model=models.resnet50):
super(Encoder, self).__init__()
self.train_cnn = train_cnn
resnet = resnet_model(pretrained=True)
resnet = self._fine_tune(resnet)
modules = list(resnet.children())[:-1]
self.resnet = nn.Sequential(*modules)
self.embed = nn.Linear(resnet.fc.in_features, embed_size)
self.dropout = nn.Dropout(0.5)
self.bn = nn.BatchNorm1d(embed_size, momentum=0.01)
def _fine_tune(self, resnet):
if self.train_cnn:
for param in resnet.parameters():
param.requires_grad_(True)
else:
for param in resnet.parameters():
param.requires_grad_(False)
return resnet
def forward(self, images):
features = self.resnet(images)
features = features.view(features.size(0), -1)
embed = self.bn(self.embed(features))
return embed
class Decoder(nn.Module):
def __init__(self, embed_size, hidden_size, vocab_size, num_layers):
super(Decoder, self).__init__()
self.embed_size = embed_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.num_layers = num_layers
self.embed = nn.Embedding(vocab_size, embed_size)
self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
self.linear = nn.Linear(hidden_size, vocab_size)
self._init_weights()
def _init_weights(self):
torch.nn.init.xavier_uniform_(self.linear.weight)
torch.nn.init.xavier_uniform_(self.embed.weight)
def forward(self, features, captions, lengths):
features = features.unsqueeze(dim=1)
embeddings = self.embed(captions)
embeddings = torch.cat((features, embeddings), dim=1)
packed = pack_padded_sequence(embeddings, lengths, batch_first=True)
hiddens, _ = self.lstm(packed)
outputs = self.linear(hiddens[0])
return outputs
class ImageCaptionNet(nn.Module):
def __init__(self, embed_size, hidden_size, vocab_size, num_layers):
super(ImageCaptionNet, self).__init__()
self.embed_size = embed_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.num_layers = num_layers
self.encoder = Encoder(embed_size)
self.decoder = Decoder(embed_size, hidden_size, vocab_size, num_layers)
def forward(self, images, captions, lengths):
features = self.encoder(images)
outputs = self.decoder(features, captions, lengths)
return outputs
class ImageCaptionTask(pl.LightningModule):
    def __init__(self, model, optimizer, criterion, vocab_size, scheduler=None, batch_first=True):
        super().__init__()
        self.model = model
        self.optimizer = optimizer
self.criterion = criterion
self.scheduler = scheduler
self.vocab_size = vocab_size
self.batch_first = batch_first
self.metric = Accuracy()
    def forward(self, imgs, captions, lengths):
        outputs = self.model(imgs, captions, lengths)
        return outputs
def shared_step(self, batch, batch_idx):
imgs, captions, lengths = batch
packed = pack_padded_sequence(captions, lengths, batch_first=self.batch_first)
targets, _, _, _ = packed
outputs = self.model(imgs, captions, lengths)
        loss = self.criterion(outputs, targets)
acc = self.metric(outputs, targets)
return loss, acc
def training_step(self, batch, batch_idx):
loss, acc = self.shared_step(batch, batch_idx)
result = pl.TrainResult(loss)
result.log_dict({'trn_loss': loss, 'trn_acc': acc})
return result
def validation_step(self, batch, batch_idx):
loss, acc = self.shared_step(batch, batch_idx)
result = pl.EvalResult(checkpoint_on=loss)
result.log_dict({'val_loss': loss, 'val_acc': acc})
return result
def configure_optimizers(self):
if self.scheduler:
return [self.optimizer], [self.scheduler]
return self.optimizer
# initialize model, loss etc
PAD_INDEX = trainset.vocab.stoi["<PAD>"]
VOCAB_SIZE = len(trainset.vocab)
print(f'VOCAB_SIZE : {VOCAB_SIZE}')
model = ImageCaptionNet(EMBED_SIZE, HIDDEN_SIZE, VOCAB_SIZE, NUM_LAYERS)
criterion = nn.CrossEntropyLoss(ignore_index=PAD_INDEX)
optimizer = optim.Adam(model.parameters(), lr=LRATE)
task = ImageCaptionTask(model, optimizer, criterion, vocab_size=VOCAB_SIZE)
checkpoint_path = '../saved_model'
# DEFAULTS used by the Trainer
checkpoint_callback = pl.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
save_top_k=1,
verbose=True,
monitor='checkpoint_on',
mode='min',
prefix='flickr30k_net_'
)
tensorboard_logger = pl_loggers.TensorBoardLogger('../logs/flickr30k')
trainer = pl.Trainer(max_epochs=MAX_EPOCHS, gpus=NUM_GPU, logger=tensorboard_logger, checkpoint_callback=checkpoint_callback)
trainer.fit(task, train_loader, valid_loader)
saved_checkpoint_path = '../saved_model/flickr30k_net_epoch=99.ckpt'
checkpoint = torch.load(saved_checkpoint_path, map_location=lambda storage, loc: storage)
task.load_state_dict(checkpoint['state_dict'])
# model.load_state_dict(task.model.state_dict())
# task.load_from_checkpoint(saved_checkpoint_path, optimizers=optimizer, criterion=criterion, vocab_size=VOCAB_SIZE)
class ImageCaptionTest(object):
def __init__(self, model, vocab, max_len=20):
self.model = model
self.vocab = vocab
self.max_len = max_len
self.model.eval()
self.encoder = model.encoder
self.decoder = model.decoder
def process_sentence_list(self, sentences):
sentence_list = []
for sentence in sentences:
sentence_list.append(self.clean_sentence(sentence))
return sentence_list
def clean_sentence(self, sentence_index):
sentence = ""
for i in sentence_index:
word = self.vocab.itos[i]
if (word == self.vocab.start_word):
continue
elif (word == self.vocab.end_word):
break
else:
sentence = sentence + " " + word
return sentence
def sample(self, images, states=None):
with torch.no_grad():
inputs = self.encoder(images).unsqueeze(dim=1)
sampled_ids = []
for i in range(self.max_len):
hiddens, states = self.decoder.lstm(inputs, states) # hiddens: (batch_size, 1, hidden_size)
outputs = self.decoder.linear(hiddens.squeeze(1)) # outputs: (batch_size, vocab_size)
_, predicted = outputs.max(1) # predicted: (batch_size)
sampled_ids.append(predicted)
inputs = self.decoder.embed(predicted) # inputs: (batch_size, embed_size)
inputs = inputs.unsqueeze(1) # inputs: (batch_size, 1, embed_size)
sampled_ids = torch.stack(sampled_ids, 1) # sampled_ids: (batch_size, max_seq_length)
return sampled_ids.tolist()
def generate_caption(self, images):
sentences_indexs = self.sample(images)
sentences = self.process_sentence_list(sentences_indexs)
return sentences
def clean_sentence(sentence_index, vocab):
sentence = ""
for i in sentence_index:
word = vocab.itos[i]
if (word == vocab.start_word):
continue
elif (word == vocab.end_word):
break
else:
sentence = sentence + " " + word
return sentence
img, txt = validset[40]
img = img.unsqueeze(0)
image_caption = ImageCaptionTest(model, validset.vocab)
result = image_caption.generate_caption(img)
print(f'predicted : {result[0]}')
print(f'ground truth : {clean_sentence(txt.tolist(), validset.vocab)}')
plt.imshow(img.squeeze().permute(1,2,0), cmap='gray')
print(txt)
```
# GAIT RECOGNITION
## 1. Data preparation
Let's create some directories
```
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import shutil
partA = 'DatasetB-1/video/'
partB = 'DatasetB-2/video/'
silhouettes_dir = 'silhouettes_Unet22K/'
# define the path of CASIA directory
CASIA_dir = '/home/israel/Downloads/CASIA/'
conditions = np.array(['bg-01','bg-02','cl-01','cl-02','nm-01','nm-02','nm-03','nm-04','nm-05','nm-06'])
views = ['090']
subjects = 50
# plt.figure(figsize=(20, 10))
# plt.subplot(131); plt.imshow(bkgnd[:,:,::-1]);
# plt.subplot(132); plt.imshow(fr_cap[:,:,::-1]);
def create_dir(folder, force=True, verbose=False):
    '''Create a directory; if it already exists and force=True, replace it with an empty one.'''
    try:
        os.makedirs(folder)
        if verbose: print('Directory {} created successfully.'.format(folder))
    except FileExistsError:
        if force:
            if verbose: print('{} already exists. Creating a new one.'.format(folder))
            shutil.rmtree(folder)
            os.makedirs(folder)
        else:
            if verbose: print('{} already exists.'.format(folder))
def get_roi(bkgnd, frgnd, margin=10):
    '''Return the region of the frame where the subject moved (plus its position), or None.'''
    height, width, _ = frgnd.shape
    # compare the V (brightness) channels of background and foreground
    bk_hsv = cv2.cvtColor(bkgnd, cv2.COLOR_BGR2HSV)
    fr_hsv = cv2.cvtColor(frgnd, cv2.COLOR_BGR2HSV)
    diff = cv2.subtract(bk_hsv[:,:,2], fr_hsv[:,:,2])
    _, diff = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    if diff.sum() > 400000:
        # cv2.boundingRect returns (x, y, width, height)
        bx, by, bw, bh = cv2.boundingRect(diff)
        x = bx - margin if bx - margin >= 0 else 0
        y = by - margin if by - margin >= 0 else 0
        w = bw + 2*margin if bw + 2*margin <= width else width
        h = bh + 2*margin if bh + 2*margin <= height else height
        return (frgnd[y:y+h, x:x+w], [x, y, w, h])
    else:
        return None
import tensorflow as tf
# from tensorflow_examples.models.pix2pix import pix2pix
AUTOTUNE = tf.data.experimental.AUTOTUNE
import matplotlib.pyplot as plt
import numpy as np
import os
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
import sys
import cv2
import time
import colorsys
import random
# import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
def random_colors(N, bright=True):
brightness = 1.0 if bright else 0.7
hsv = [(i / N, 1, brightness) for i in range(N)]
colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
return colors
def apply_mask(image, mask, color, alpha=0.5):
# if image.shape[:2] != mask.shape:
# h, w, _ = image.shape
# mask = cv2.resize(mask, (w, h), cv2.INTER_LINEAR)
"""Apply the given mask to the image"""
for c in range(3):
image[:, :, c] = np.where(mask == 1,
image[:, :, c] *
(1 - alpha) + alpha * color[c] * 255,
image[:, :, c])
return image
name = '128x128unet_acc_0.9451_loss_0.0845_val-acc_0.9457_val-loss_0.0826_0.22M_24-10-21-DB_UCB300_Epochs_10x1E-4_5x1E-5'
model = tf.keras.models.load_model(f'../models/BioSmart/{name}')
# width, height = 160, 160
width, height = 128, 128
def inference(imgs):
    '''Run the segmentation model on a batch of ROIs and return binary masks.'''
    tf_images = []
    for img in imgs:
        tf_image = tf.image.resize(tf.convert_to_tensor(np.array(img)), (width, height)) / 255
        tf_images.append(tf_image)
    output = model.predict(tf.stack(tf_images))
    # binarize the predicted probability maps at a 0.4 threshold
    out = np.where(output[:,:,:,0] > 0.4, 1, 0)
    return out
for subject in range(1, subjects+1):
part = partA if subject < 63 else partB
subject = str(subject).zfill(3)
print(f'Processing subject {subject}')
for condition in conditions:
for view in views:
print(f'Processing subject {subject}-{condition}-{view}')
rois = []
rois_pos = []
bkgnd_path = os.path.join(CASIA_dir, part, f'{subject}-bkgrd-{view}.avi')
bk_cap = cv2.VideoCapture(bkgnd_path)
_, bkgnd = bk_cap.read()
file_path = os.path.join(CASIA_dir, part, f'{subject}-{condition}-{view}.avi')
fr_cap = cv2.VideoCapture(file_path)
directory_path = os.path.join(CASIA_dir, silhouettes_dir, subject, condition, view)
create_dir(directory_path, True)
while True:
ret, frgnd = fr_cap.read()
if ret:
roi = get_roi(bkgnd, frgnd)
if roi is not None:
rois.append(roi[0])
rois_pos.append(roi[1])
# cv2.imshow('out', roi)
if cv2.waitKey(1) == ord('q'):
break
else:
break
predictions = np.uint8(inference(rois))*255
masks = []
for i, (x, y, w, h) in enumerate(rois_pos):
try:
back = np.zeros_like(bkgnd[:,:,0])
pred = predictions[i]
pred = cv2.resize(pred, (w, h), cv2.INTER_CUBIC)
back[y:y+h,x:x+w] = pred
cv2.imwrite(os.path.join(directory_path, f'{str(i).zfill(4)}.png'), back)
except:
pass
bk_cap.release()
fr_cap.release()
cv2.destroyAllWindows()
pred = inference(rois)
color = random_colors(1)
roi_resized = cv2.resize(roi, (width, height), cv2.INTER_LINEAR)
out = apply_mask(roi_resized, pred, color[0])
plt.subplot(121); plt.imshow(roi);
plt.subplot(122); plt.imshow(out);
plt.imshow(pred[3,:,:,], 'gray')
np.uint(pred).shape
```
```
# ! unzip GaitDatasetB-silh.zip
!mkdir CASIA/
!mkdir CASIA/DatasetB
!mkdir CASIA/DatasetB/silhouettes
!mkdir GEI GEnI data_100_100
!mkdir CASIA/sil
```
```
import tarfile
from glob import glob
import shutil
import os
def create_dir(folder, force=True, verbose=False):
    '''Create a directory; if it already exists and force=True, replace it with an empty one.'''
    try:
        os.makedirs(folder)
        if verbose: print('Directory {} created successfully.'.format(folder))
    except FileExistsError:
        if force:
            if verbose: print('{} already exists. Creating a new one.'.format(folder))
            shutil.rmtree(folder)
            os.makedirs(folder)
        else:
            if verbose: print('{} already exists.'.format(folder))
partA = 'DatasetB-1/silhouettes/'
partB = 'DatasetB-2/silhouettes/'
silhouettes_dir = 'silhouettes_Unet22K/'
# define the path of CASIA directory
CASIA_dir = '/home/israel/Downloads/CASIA/'
from glob import glob
for i in range(50, 125):
part = partA if i<63 else partB
if(i%10 == 0):
print('Extracting Subject:'+str(i).zfill(3)+ '...')
    archive = tarfile.open(os.path.join(CASIA_dir, part, f'{str(i).zfill(3)}.tar.gz'))
    archive.extractall(os.path.join(CASIA_dir, silhouettes_dir))
```
## 2. FEATURE NORMALIZATION
```
import numpy as np
from os import listdir
import cv2
import os
from glob import glob
import time
import matplotlib.pyplot as plt
```
Let's define some functions to process the contours
```
def find_if_close(cnt1,cnt2):
row1,row2 = cnt1.shape[0],cnt2.shape[0]
for i in range(row1):
for j in range(row2):
dist = np.linalg.norm(cnt1[i]-cnt2[j])
if abs(dist) < 40 :
return True
elif i==row1-1 and j==row2-1:
return False
def joincontours(contours,thresh):
LENGTH = len(contours)
status = np.zeros((LENGTH,1))
for i,cnt1 in enumerate(contours):
x = i
if i != LENGTH-1:
for j,cnt2 in enumerate(contours[i+1:]):
x = x+1
dist = find_if_close(cnt1,cnt2)
if dist == True:
val = min(status[i],status[x])
status[x] = status[i] = val
else:
if status[x]==status[i]:
status[x] = i+1
unified = []
maximum = int(status.max())+1
for i in range(maximum):
pos = np.where(status==i)[0]
if pos.size != 0:
            cont = np.vstack([contours[i] for i in pos])
hull = cv2.convexHull(cont)
unified.append(hull)
cv2.drawContours(thresh,unified,-1,255,-1)
return thresh
from IPython.display import clear_output
import numpy as np
import cv2
import os
from os import listdir
from glob import glob
import time
import matplotlib.pyplot as plt
def GEI_generator(sil_file, size = 64,debug = False):
lfiles = os.listdir(sil_file)
lfiles.sort()
stack_GEI = []
if debug:
plt.figure(figsize=(20,int(len(lfiles)/10)))
for idimg, path in enumerate(lfiles):
        if debug: plt.subplot(len(lfiles)//15 + 1, 15, idimg+1)
img = cv2.imread(sil_file+path,0)
# Silhouette extraction
contours1,_ = cv2.findContours(img.copy(),cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img,contours1,-1,255,-1)
if (len(contours1)>0):
ncoun= np.concatenate(contours1)[:,0,:]
x1, y1 = np.min(ncoun,axis=0)
x2, y2 = np.max(ncoun,axis=0)
silhouette = img[y1:y2,x1:x2]
            # Normalize the silhouette
factor = size/max(silhouette.shape)
height = round(factor*silhouette.shape[0])
width = round(factor*silhouette.shape[1])
if(height>width):
nor_sil = cv2.resize(silhouette,(width,height))
# print(nor_sil.shape)
# We add a background of the shape size x size
portion_body = 0.3 # We take the upper part of the body to center the image and avoid the legs
moments = cv2.moments(nor_sil[0:int(nor_sil.shape[0]*portion_body),])
w = round(moments['m10']/moments['m00'])
background = np.zeros((size, size))
shift = round((size/2)-w)
# print('center:',w,' shift:',shift)
if(shift<0 or shift+nor_sil.shape[1]>size): shift = round((size-nor_sil.shape[1])/2)
background[:,shift:nor_sil.shape[1]+shift] = nor_sil
stack_GEI.append(background)
if debug:
plt.xticks([])
plt.yticks([])
plt.imshow(background,'gray')
# plt.subplots_adjust(wspace=0.05, hspace=0.01)
if stack_GEI == []:
GEI = np.zeros((size, size))
print('\tNo Files Found')
else:
GEI = np.mean(np.array(stack_GEI),axis=0)
return GEI, stack_GEI
GEI, stack = GEI_generator(rep_sil)
plt.imshow(GEI)
representation_dir = 'representations/'
conditions = np.array(['bg-01','bg-02','cl-01','cl-02','nm-01','nm-02','nm-03','nm-04','nm-05','nm-06'])
view = '090/'
size = 100
GEI_dir = 'GEI_UNet22K/'
create_dir(os.path.join(CASIA_dir, GEI_dir))
subjects = sorted(os.listdir(os.path.join(CASIA_dir, silhouettes_dir)))
count = 0
for subject in subjects:
print('Sujeto :', subject)
for condition in conditions:
# directory_sub = os.path.join(CASIA_dir, representation_dir, subject);
# if not os.path.exists(directory_sub):
# os.makedirs(directory_sub)
# for condition in conditions:
# directory_condition = os.path.join(CASIA_dir, representation_dir, subject, condition)
# if not os.path.exists(directory_condition):
# os.makedirs(directory_condition)
path = os.path.join(CASIA_dir, silhouettes_dir, subject, condition, view)
save_path = os.path.join(CASIA_dir, GEI_dir, f'{subject}_{condition}_{view[:-1]}.png')
GEI, _ = GEI_generator(path, size)
        GEI[int(size*0.12):int(size*0.68),:] = 0  # zero out the middle band of rows (the torso region)
cv2.imwrite(save_path, GEI)
if cv2.waitKey(1) & 0xff==27:
break
cv2.destroyAllWindows()
save_path
```
## 3. PREPROCESSING
```
import numpy as np
from numpy import ma
from os import listdir
import cv2
import os
from glob import glob
import time
# import skimage
# from skimage import transform
# Define the image directory
# data_base = 'CASIA'
# data_set = 'DatasetB'
# directorio = 'silhouettes'
train_ = np.array(['nm-01','nm-02','nm-03','nm-04'])
test_nm_ = np.array(['nm-05','nm-06'])
test_cl_ = np.array(['cl-01','cl-02'])
test_bg_ = np.array(['bg-01','bg-02'])
# view = '090'
# formato = '.png'
# slash='/'
# location = data_base+slash+data_set+slash+directorio
# subject = np.array([])
# for cosa in range(1,125):
# subject = np.append(subject,str(cosa).zfill(3))
# print(subject.shape)
# print(subject)
# # subject = subject[:60]
# rango = 1
# paso = 2
# size = 100
data = {}
data["train_"] = train_
data["test_nm_"] = test_nm_
data["test_cl_"] = test_cl_
data["test_bg_"] = test_bg_
# print(data['train_'])
for dset in data:
print(dset, ':',data[dset])
conditions = data[dset]
matriz = []
etiqueta = []
# matriz = np.zeros((len(subject)*len(condition)*rango,size*size))
# etiqueta = np.zeros((len(subject)*len(condition)*rango,1))
# c1 =0
for subject in subjects:
for condition in conditions:
GEI_path = os.path.join(CASIA_dir, GEI_dir, f'{subject}_{condition}_{view[:-1]}.png')
GEI = cv2.imread(GEI_path, 0)
matriz.append(GEI.flatten())
etiqueta.append(int(subject))
# for j in range(len(subject)):
# for i in range(len(condition)):
# # Definimos el nombre de la imagen
# slash='_'
# save_sil = 'GEI/'+data_base+slash+data_set+slash+directorio+slash+subject[j]+slash+condition[i]+slash+view+formato
# img = cv2.imread(save_sil,0)
# matriz[c1,:] = img.flatten()[np.newaxis]
# etiqueta[c1] = int(subject[j])
# c1 +=1
# cv2.imshow('frame',imgr)
if cv2.waitKey(1) & 0xff==27:
break
# matriz = matriz/255
matriz = np.array(matriz)
etiqueta = np.array(etiqueta)
np.savetxt('data_100_100/'+dset+'data.dat', matriz)
np.savetxt('data_100_100/'+dset+'target.dat', etiqueta)
print(dset,'data.dat-> ',matriz.shape)
print(dset,'data.dat-> ',etiqueta.shape)
# print(etiqueta)
cv2.destroyAllWindows()
import numpy as np
import sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn import svm
def normalizacion(data):
u = np.mean(data,axis=0)
s = np.std(data,axis=0)
data = (data - u)/s
return (data,u,s)
def normalizacion2(data,u,s):
data = (data - u) / s
return data
# Load the dataset
if(0):
carpeta = 'A/data/'
else:
carpeta = 'data_100_100/'
trainX = np.loadtxt(carpeta+'train_data.dat')
print('Max: ',trainX.max(),' min: ',trainX.min())
# Scale the data
scaler = StandardScaler()
scaler.fit(trainX)
trainX = scaler.transform(trainX)
# trainX,mean,st = normalizacion(trainX)
print('Training data: ',trainX.shape)
print('Max: ',trainX.max(),' min: ',trainX.min())
trainY = np.loadtxt(carpeta+'train_target.dat')
print('Training target: ',trainY.shape)
_ ,componentes_original = trainX.shape
# Apply PCA
pre = 0.9999
pca = PCA(pre)
pca.fit(trainX)
componentes_PCA = pca.n_components_
print('Original components: ',componentes_original,'\nPCA components: ',componentes_PCA, '\nPreserving: ',pre*100)
trainX = pca.transform(trainX)
# lda = LinearDiscriminantAnalysis(n_components=60,solver='eigen')
# trainX = lda.fit(trainX, trainY).transform(trainX)
# Define the model and fit it
# logisticRegr = LogisticRegression(solver = 'lbfgs',C=0.1,tol=0.0000001)
# logisticRegr = KNeighborsClassifier(n_neighbors=1)
# logisticRegr = svm.SVC()
logisticRegr = LinearDiscriminantAnalysis(solver = 'lsqr',shrinkage=0.2)
logisticRegr.fit(trainX, trainY)
# Compute its score
score = logisticRegr.score(trainX, trainY)
print('Dataset: Train',' shape: ',trainX.shape,' accuracy: ',np.round(score,4))
# Evaluate on the test datasets
datasets = np.array(['test_nm_','test_bg_','test_cl_'])
for i in range(len(datasets)):
testX = np.loadtxt(carpeta+datasets[i]+'data.dat')
testX = scaler.transform(testX)
# testX = normalizacion2(testX,mean,st)
testX = pca.transform(testX)
testY = np.loadtxt(carpeta+datasets[i]+'target.dat')
# Compute its score
score = logisticRegr.score(testX, testY)
certeza = logisticRegr.predict(testX)
# Show the confusion matrix
# print(confusion_matrix(certeza,testY))
print('Dataset: ',datasets[i],' test shape: ',testX.shape,' accuracy: ',np.round(score,4))
# Save the trained model
# from sklearn.externals import joblib
# joblib.dump(logisticRegr, 'LR.pkl')
# joblib.dump(pca, 'pca.pkl')
# joblib.dump(scaler, 'scaler.pkl')
from pickle import dump
dump(scaler, open('CASIA_UNet22K_scaler.pkl', 'wb'))
dump(pca, open('CASIA_UNet22K_pca.pkl', 'wb'))
dump(logisticRegr, open('CASIA_UNet22K_model.pkl', 'wb'))
```
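The `PCA(pre)` call above keeps the smallest number of components whose cumulative explained variance reaches the requested fraction. That selection rule can be sketched in plain NumPy (illustrative only; scikit-learn's internals differ slightly, and the toy data here is made up):

```python
import numpy as np

# Toy data: 10 columns with increasing variance
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 10)) * np.arange(1, 11)

# Center, then take singular values; squared singular values are
# proportional to the variance captured by each principal component
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
ratio = (s ** 2) / np.sum(s ** 2)

# Smallest number of components whose cumulative ratio reaches 99.99%
k = int(np.searchsorted(np.cumsum(ratio), 0.9999)) + 1
```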
Let's join multiple subjects to test on the OAK-D
```
import cv2
import numpy as np
import os
partA = 'DatasetB-1/'
partB = 'DatasetB-2/silhouettes/'
silhouettes_dir = 'silhouettes/'
# define the path to the CASIA directory
CASIA_dir = '/home/israel/Downloads/CASIA/'
videos_dir = 'video/'
view = '090'
conditions = ['nm-05', 'nm-06']
out_inf = None
cap = cv2.VideoCapture('/home/israel/Downloads/CASIA/DatasetB-1/video/001-nm-05-090.avi')
clip_name = 'nm_1-30_cond.avi'
codec = cv2.VideoWriter_fourcc(*'XVID')
fwidth = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
fheight = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
out = cv2.VideoWriter(clip_name, codec, fps, (fwidth, fheight))
for subject in range(1, 30):
for condition in conditions:
part = partA if subject<63 else partB
path = os.path.join(CASIA_dir, part, videos_dir, f'{str(subject).zfill(3)}-{condition}-{view}.avi')
print(path)
cap = cv2.VideoCapture(path)
ret = True
while(ret):
ret, frame = cap.read()
if ret:
out.write(frame)
# cv2.imshow('out',frame)
if cv2.waitKey(1) == ord('q'):
break
white = (np.ones((fheight, fwidth, 3))*255).astype('uint8')
for i in range(5):
out.write(white)
cap.release()
cv2.destroyAllWindows()
out.release()
```
```
frame
```
```
frame.shape
```
```
white.shape
targets = []
for i in range(1,30):
targets.append(i)
targets.append(i)
oak_pred = [[57, 1, 1, 1], [77, 1, 1, 1], [81, 56, 17, 2, 2], [51, 51, 2, 2], [87, 3, 3, 3], [93, 3, 3, 3], [4, 4, 4, 4, 4], [14, 4, 4, 4, 4], [33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 123, 30, 57, 57, 57, 57, 57], [33, 33, 33, 48, 48, 83, 83, 83, 83, 83], [120, 124, 6], [86, 6, 6], [97, 7, 7], [7, 7, 7], [8, 8, 8, 8], [8, 8, 8, 8], [81, 9, 9, 9], [93, 67, 67, 67], [123, 123, 123, 123], [51, 51, 123, 123], [30, 73, 73, 11], [30, 11, 11, 11], [12, 12, 12], [12, 12, 12, 12], [93, 61, 93, 93, 93], [26, 99, 81, 28, 28], [14, 14, 14, 14, 14], [14, 14, 14, 14, 14], [34, 47, 47, 94], [30, 30, 117, 117], [65, 123, 123, 123], [123, 123, 123], [29, 17, 17, 17], [81, 17, 17, 17], [20, 44, 18], [30, 18, 18], [19, 19, 19], [110, 19, 66], [96, 96, 96, 96], [96, 96, 96], [21, 48, 48, 21], [21, 48, 48], [22, 22, 22, 22], [22, 22, 22, 22], [51, 33, 33, 33], [51, 33, 33, 33], [57, 24, 24, 24, 24], [26, 26, 57, 24, 24], [14, 93, 87, 25], [93, 25, 25, 25], [117, 30, 26, 26], [30, 26, 26, 26], [30, 30, 30, 30, 30], [96, 30, 30, 30], [101, 123, 123, 123, 123], [123, 123, 123, 123], [110, 29, 29, 29], [29, 29, 29, 29]]
oak_pred = [[1, 1, 1, 1], [63, 1, 1, 1], [99, 99, 84, 56, 99], [87, 2, 2, 2], [115, 93, 3, 3], [120, 38, 38, 3], [73, 51, 4, 4, 4], [73, 51, 4, 4, 4], [33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 123, 123, 123, 123, 123, 123, 123], [33, 33, 33, 88, 73, 73, 73, 73, 73, 73], [99, 35, 6], [105, 114, 6], [7, 7, 7], [97, 7, 7], [110, 110, 110, 8], [55, 110, 8, 8], [81, 9, 9, 9], [81, 38, 38, 38], [51, 51, 10, 10], [55, 110, 110, 110], [11, 11, 11, 11], [11, 11, 11, 11], [55, 12, 12], [55, 55, 12, 12], [26, 61, 61, 93, 93], [26, 99, 93, 28, 28], [87, 14, 14, 14, 14], [87, 14, 14, 14, 14], [34, 94, 94, 94], [30, 30, 73, 94], [65, 55, 99, 107], [61, 99, 56], [110, 38, 38, 17], [81, 99, 17, 17], [102, 18, 18], [30, 18, 18], [19, 19, 19], [81, 19, 6], [96, 96, 96, 96], [96, 96, 96], [82, 48, 48, 48], [51, 48, 48], [55, 22, 104, 100], [61, 61, 22, 104], [93, 93, 23, 23], [93, 93, 93, 23], [30, 26, 26, 24, 24], [30, 30, 26, 26, 24], [14, 93, 14, 56], [87, 98, 25, 25], [30, 26, 26, 26], [30, 26, 26, 26], [30, 30, 30, 30, 30], [30, 30, 30, 30], [101, 101, 28, 28, 28], [113, 28, 28, 28], [29, 29, 29, 14], [110, 110, 110, 14]]
oak_pred = [[1, 1, 1, 1], [77, 1, 1, 1], [35, 17, 17, 17, 56], [61, 99, 17, 2], [46, 46, 3, 3], [87, 38, 3, 3], [4, 4, 4, 4, 4], [4, 4, 4, 4, 4], [33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 65, 101, 38, 38, 56, 56, 56], [33, 33, 33, 88, 55, 83, 83, 83, 83, 83], [99, 38, 6], [38, 6, 6], [7, 7, 7], [100, 7, 7], [8, 8, 8, 8], [8, 8, 8, 8], [93, 9, 9, 9], [93, 67, 9, 9], [33, 46, 46, 46], [123, 46, 17, 10], [30, 73, 11, 11], [11, 11, 11, 11], [12, 12, 12], [55, 12, 12, 12], [93, 61, 61, 61, 93], [99, 99, 14, 28, 28], [14, 14, 14, 14, 14], [14, 14, 14, 14, 14], [34, 47, 47, 47], [30, 30, 47, 47], [104, 104, 17, 35], [61, 51, 56], [29, 17, 17, 17], [81, 17, 17, 17], [102, 100, 18], [30, 18, 18], [19, 19, 19], [110, 87, 72], [96, 96, 96, 96], [96, 96, 96], [21, 48, 48, 21], [21, 21, 21], [22, 22, 22, 22], [22, 22, 22, 22], [51, 93, 93, 93], [51, 51, 51, 51], [26, 24, 24, 24, 24], [26, 26, 57, 24, 24], [87, 87, 38, 25], [87, 98, 25, 25], [117, 117, 117, 26], [30, 26, 26, 26], [123, 123, 123, 30, 30], [96, 96, 30, 30], [123, 123, 28, 123, 123], [123, 123, 123, 123], [29, 29, 29, 29], [110, 29, 29, 29]]
# UNET 22K Fine 0.4
oak_pred = [[26, 1, 1, 1], [86, 1, 1, 1], [99, 17, 17, 56, 56], [87, 87, 17, 2], [46, 46, 38, 3], [87, 38, 38, 3], [73, 73, 4, 4, 4], [73, 73, 4, 4, 4], [33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 5, 30, 73, 73, 73, 73, 73], [33, 33, 33, 123, 73, 73, 123, 123, 123, 123], [73, 38, 6], [73, 6, 6], [99, 99, 7], [110, 7, 7], [55, 8, 8, 8], [55, 8, 8, 8], [73, 9, 9, 9], [93, 38, 38, 9], [123, 46, 46, 46], [123, 123, 105, 123], [30, 73, 73, 73], [30, 11, 11, 11], [55, 12, 12], [73, 55, 12, 12], [26, 61, 61, 93, 93], [26, 99, 14, 28, 28], [14, 14, 14, 14, 14], [57, 14, 14, 14, 14], [30, 30, 117, 30], [30, 30, 30, 117], [123, 123, 123, 123], [123, 123, 123], [110, 38, 38, 38], [14, 17, 17, 17], [30, 102, 18], [30, 102, 18], [19, 19, 56], [42, 73, 66], [96, 96, 96, 96], [96, 96, 96], [21, 48, 48, 48], [21, 48, 48], [99, 22, 99, 99], [22, 22, 104, 104], [51, 51, 51, 51], [51, 33, 51, 51], [30, 26, 26, 24, 24], [30, 73, 26, 57, 24], [14, 87, 38, 98], [87, 98, 40, 40], [30, 30, 73, 73], [30, 30, 30, 26], [96, 30, 30, 30, 30], [96, 30, 30, 30], [123, 123, 123, 123, 123], [51, 51, 51, 123], [42, 110, 29, 14], [73, 110, 110, 14]]
# UNET 22K Fine 0.3
oak_pred = [[26, 1, 1, 1], [26, 1, 1, 1], [99, 87, 87, 56, 56], [87, 87, 2, 2], [73, 93, 93, 3], [63, 87, 93, 3], [73, 73, 73, 4, 4], [73, 73, 4, 4, 4], [33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 5, 30, 30, 73, 73, 73, 73], [33, 33, 33, 48, 73, 73, 73, 73, 73, 73], [73, 124, 6], [73, 6, 6], [7, 7, 7], [110, 7, 7], [8, 8, 8, 8], [8, 8, 8, 8], [73, 9, 9, 9], [93, 67, 67, 67], [123, 123, 123, 123], [123, 123, 123, 123], [30, 11, 11, 11], [30, 11, 11, 11], [12, 12, 12], [73, 55, 12, 12], [26, 61, 61, 93, 93], [26, 26, 26, 28, 28], [14, 14, 14, 14, 14], [57, 14, 14, 14, 14], [30, 30, 30, 30], [30, 30, 30, 30], [123, 123, 123, 123], [123, 123, 123], [110, 38, 38, 17], [14, 17, 17, 17], [30, 102, 18], [30, 102, 18], [19, 19, 19], [42, 73, 66], [96, 96, 96, 96], [96, 96, 96], [21, 48, 48, 21], [21, 48, 48], [99, 22, 22, 22], [22, 22, 22, 22], [33, 33, 33, 33], [33, 33, 33, 33], [30, 30, 26, 24, 24], [30, 73, 73, 24, 24], [14, 87, 87, 25], [87, 98, 25, 25], [30, 30, 117, 117], [30, 30, 30, 26], [96, 30, 30, 30, 96], [96, 96, 30, 30], [123, 123, 123, 123, 123], [123, 33, 33, 123], [42, 110, 29, 14], [73, 110, 110, 14]]
# UNET 59K Fine 0.4
oak_pred = [[26, 1, 1, 1], [26, 1, 1, 1], [110, 14, 87, 14, 113], [42, 99, 99, 2], [73, 38, 38, 38], [73, 14, 38, 57], [30, 73, 73, 73, 73], [73, 73, 73, 73, 73], [33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 5, 30, 30, 73, 73, 73, 73], [33, 33, 33, 48, 48, 73, 73, 73, 73, 73], [73, 73, 6], [73, 6, 6], [7, 7, 7], [7, 7, 7], [30, 55, 110, 8], [73, 55, 8, 8], [73, 73, 73, 9], [93, 14, 67, 67], [73, 73, 55, 73], [110, 110, 110, 110], [30, 73, 73, 73], [30, 11, 11, 73], [55, 55, 12], [73, 55, 55, 55], [26, 26, 93, 93, 93], [26, 26, 14, 81, 28], [14, 14, 14, 14, 14], [73, 14, 14, 14, 14], [86, 86, 1, 94], [73, 73, 73, 73], [73, 73, 73, 107], [55, 55, 42], [110, 17, 17, 17], [14, 14, 14, 17], [84, 84, 18], [81, 18, 18], [87, 19, 19], [30, 73, 73], [30, 30, 30, 30], [30, 30, 30], [21, 123, 48, 48], [48, 48, 48], [55, 55, 55, 100], [22, 110, 110, 110], [51, 93, 93, 93], [30, 51, 51, 116], [101, 26, 26, 57, 24], [30, 57, 57, 57, 57], [14, 93, 14, 25], [41, 87, 25, 25], [30, 30, 30, 26], [30, 26, 26, 26], [30, 26, 27, 27, 27], [30, 26, 93, 27], [30, 28, 28, 28, 28], [30, 26, 28, 28], [73, 73, 110, 14], [73, 110, 14, 14]]
# UNET 59K Fine 0.5
# oak_pred = [[30, 1, 1, 1], [26, 1, 1, 1], [110, 87, 87, 113, 113], [42, 87, 2, 2], [42, 38, 93, 93], [73, 42, 93, 57], [30, 73, 73, 73, 73], [73, 73, 73, 73, 73], [33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 5, 30, 30, 30, 30, 30, 30], [33, 33, 33, 48, 73, 73, 73, 73, 73, 73], [73, 73, 6], [73, 6, 6], [7, 7, 7], [7, 7, 7], [55, 110, 8, 8], [73, 55, 8, 8], [73, 73, 73, 9], [26, 67, 67, 67], [73, 55, 110, 42], [55, 110, 110, 110], [30, 11, 11, 11], [30, 11, 11, 11], [55, 12, 12], [73, 55, 55, 55], [26, 26, 26, 93, 28], [26, 26, 26, 94, 28], [87, 14, 14, 14, 14], [30, 14, 14, 14, 14], [117, 117, 117, 94], [73, 73, 73, 73], [73, 99, 99, 107], [30, 55, 41], [110, 110, 17, 17], [81, 110, 14, 17], [100, 18, 18], [86, 18, 18], [26, 19, 19], [30, 73, 73], [30, 30, 30, 30], [30, 30, 30], [117, 123, 48, 48], [55, 48, 48], [55, 55, 110, 100], [110, 22, 110, 110], [51, 51, 23, 23], [33, 33, 33, 51], [101, 26, 26, 24, 24], [30, 30, 26, 57, 24], [14, 93, 14, 113], [41, 93, 25, 25], [30, 30, 30, 30], [30, 30, 30, 26], [30, 26, 26, 94, 71], [30, 26, 41, 41], [30, 101, 28, 28, 28], [26, 28, 28, 28], [73, 73, 110, 14], [73, 73, 110, 14]]
acc = 0
corrects = 0
samples = len(targets)
for target, pred in zip(targets, oak_pred):
if (target in pred):
corrects +=1
print(corrects/samples)
```
```
# Initialize OK
from client.api.notebook import Notebook
ok = Notebook('lab08.ok')
```
# Lab 8: Multiple Linear Regression and Feature Engineering
In this lab, we will work through the process of:
1. Implementing a linear model
1. Defining loss functions
1. Feature engineering
1. Minimizing loss functions using numeric libraries and analytical methods
This lab will continue using the toy tip calculation dataset used in Lab 5.
**This assignment should be completed and submitted by Wednesday May 29, 2019 at 11:59pm**
### Collaboration Policy
Data science is a collaborative activity. While you may talk with others about the labs, we ask that you **write your solutions individually**. If you do discuss the assignments with others, please **include their names** at the top of this notebook.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
np.random.seed(42)
plt.style.use('fivethirtyeight')
sns.set()
sns.set_context("talk")
%matplotlib inline
```
# Loading the Tips Dataset
To begin, let's load the tips dataset from the `seaborn` library. This dataset contains records of tips, total bill, and information about the person who paid the bill. This is the same dataset used in Lab 5, so it should look familiar!
```
data = sns.load_dataset("tips")
print("Number of Records:", len(data))
data.head()
```
# Question 1: Defining the Model and Feature Engineering
In the previous lab, we defined a simple linear model with only one parameter. Now let's make a more complicated model that utilizes other features in our dataset. Let our prediction for tip be a combination of the following features:
$$
\text{Tip} = \theta_1 \cdot \text{total\_bill} + \theta_2 \cdot \text{sex} + \theta_3 \cdot \text{smoker} + \theta_4 \cdot \text{day} + \theta_5 \cdot \text{time} + \theta_6 \cdot \text{size}
$$
Notice that some of these features are not numbers! But our linear model will need to predict a numerical value. Let's start by converting some of these non-numerical values into numerical values. Below we split the tips and the features.
```
tips = data['tip']
X = data.drop(columns='tip')
```
## Question 1a: Feature Engineering
First, let's convert our features to numerical values. A straightforward approach is to map some of these non-numerical features into numerical ones.
For example, we can treat the day as a value from 1-7. However, one of the disadvantages in directly translating to a numeric value is that we unintentionally assign certain features disproportionate weight. Consider assigning Sunday to the numeric value of 7, and Monday to the numeric value of 1. In our linear model, Sunday will have 7 times the influence of Monday, which can lower the accuracy of our model.
Instead, let's use **one-hot encoding** to better represent these features!
One-hot encoding will produce a binary vector indicating the non-numeric feature. Sunday would be encoded as `[0 0 0 0 0 0 1]`. This assigns a more even weight across each category in non-numeric features. Complete the code below to one-hot encode our dataset, producing the transformed dataset named `one_hot_X`. This dataframe holds our "featurized" data, which is also often denoted by $\phi$.
**Hint**: Check out the `pd.get_dummies` function.
<!--
BEGIN QUESTION
name: q1a
-->
```
def one_hot_encode(data):
"""
Return the one-hot encoded dataframe of our input data.
Parameters
-----------
data: a dataframe that may include non-numerical features
Returns
-----------
A one-hot encoded dataframe that only contains numeric features
"""
...
one_hot_X = one_hot_encode(X)
one_hot_X.head()
ok.grade("q1a");
```
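As a side illustration on a toy frame (deliberately not the graded solution), `pd.get_dummies` passes numeric columns through untouched and expands each categorical column into indicator columns:

```python
import pandas as pd

toy = pd.DataFrame({"bill": [10.0, 20.0], "day": ["Sun", "Mon"]})
encoded = pd.get_dummies(toy)
# 'bill' is kept as-is; 'day' becomes two indicator columns, day_Mon and day_Sun
```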
## Question 1b: Defining the Model
Now that all of our data is numeric, we can begin to define our model function. Notice that after one-hot encoding our data, we now have 12 features instead of 6. Therefore, our linear model now looks like:
$$
\text{Tip} = \theta_1 \cdot \text{size} + \theta_2 \cdot \text{total\_bill} + \theta_3 \cdot \text{day\_Thur} + \theta_4 \cdot \text{day\_Fri} + ... + \theta_{11} \cdot \text{time\_Lunch} + \theta_{12} \cdot \text{time\_Dinner}
$$
We can represent the linear combination above as a matrix-vector product. Implement the `linear_model` function to evaluate this product.
**Hint**: You can use [np.dot](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.dot.html), [pd.DataFrame.dot](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dot.html), or the `@` operator to multiply matrices/vectors. However, while the `@` operator can be used to multiply `numpy` arrays, it generally will not work between two `pandas` objects, so keep that in mind when computing matrix-vector products!
<!--
BEGIN QUESTION
name: q1b
-->
```
def linear_model(thetas, X):
"""
Return the linear combination of thetas and features as defined above.
Parameters
-----------
thetas: a 1D vector representing the parameters of our model ([theta1, theta2, ...])
X: a 2D dataframe of numeric features
Returns
-----------
A 1D vector representing the linear combination of thetas and features as defined above.
"""
...
ok.grade("q1b");
```
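On toy arrays (values made up for illustration), the matrix-vector product computes every prediction at once:

```python
import numpy as np

X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])          # 3 observations, 2 features
thetas = np.array([0.5, -1.0])

preds = X @ thetas                # identical to np.dot(X, thetas)
# preds[i] = 0.5 * X[i, 0] - 1.0 * X[i, 1], giving [-1.5, -2.5, -3.5]
```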
# Question 2: Fitting the Model using Numeric Methods
Recall in the previous lab we defined multiple loss functions and found optimal theta using the `scipy.minimize` function. Adapt the loss functions and optimization code from the previous lab (provided below) to work with our new linear model.
<!--
BEGIN QUESTION
name: q2
-->
```
from scipy.optimize import minimize
def l1(y, y_hat):
return np.abs(y - y_hat)
def l2(y, y_hat):
return (y - y_hat)**2
def minimize_average_loss(loss_function, model, X, y):
"""
Minimize the average loss calculated from using different theta vectors, and
estimate the optimal theta for the model.
Parameters
-----------
loss_function: either the squared or absolute loss functions defined above
model: the model (as defined in Question 1b)
X: a 2D dataframe of numeric features (one-hot encoded)
y: a 1D vector of tip amounts
Returns
-----------
The estimate for the optimal theta vector that minimizes our loss
"""
## Notes on the following function call which you need to finish:
#
# 0. The first '...' should be replaced with the average loss evaluated on
# the data X, y using the model and appropriate loss function.
# 1. x0 are the initial values for THETA. Yes, this is confusing
# but optimization people like x to be the thing they are
# optimizing. Replace the second '...' with an initial value for theta,
# and remember that theta is now a vector. DO NOT hard-code the length of x0;
# it should depend on the number of features in X.
# 2. Your answer will be very similar to your answer to question 2 from lab 5.
...
return minimize(lambda theta: ..., x0=...)['x']
# Notice above that we extract the 'x' entry in the dictionary returned by `minimize`.
# This entry corresponds to the optimal theta estimated by the function.
minimize_average_loss(l2, linear_model, one_hot_X, tips)
ok.grade("q2");
```
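The same pattern on a tiny made-up system shows how `minimize` recovers a theta vector whose length matches the number of columns of `X`:

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[1., 0.], [0., 1.], [1., 1.]])
y = np.array([2., 3., 5.])

avg_l2 = lambda theta: np.mean((X @ theta - y) ** 2)
theta_hat = minimize(avg_l2, x0=np.zeros(X.shape[1]))['x']
# the system is consistent, so theta_hat approaches [2., 3.]
```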
# Question 3: Fitting the Model using Analytic Methods
Let's also fit our model analytically, for the l2 loss function. In this question we will derive an analytical solution, fit our model and compare our results with our numerical optimization results.
Recall that if we're fitting a linear model with the l2 loss function, we are performing least squares! Remember, we are solving the following optimization problem for least squares:
$$\min_{\theta} ||X\theta - y||^2$$
Let's begin by deriving the analytic solution to least squares. We begin by expanding out the l2 norm and multiplying out all of the terms.
<table style="width:75%">
<tr>
<th style="text-align: center">Math</th>
<th style="text-align: center">Explanation</th>
</tr>
<tr>
<td>$$||X\theta - y||^2 = (X\theta - y)^T (X\theta - y)$$</td>
<td>Expand l2 norm using the definition for matrices.</td>
</tr>
<tr>
<td>$$ = (\theta^T X^T - y^T) (X\theta - y)$$</td>
<td>Distribute the transpose operator. Remember that $(AB)^T = B^TA^T$.</td>
</tr>
<tr>
<td>$$ = \theta^T X^T X \theta - \theta^T X^T y - y^T X \theta + y^T y$$</td>
<td>Multiply out all of the terms (FOIL).</td>
</tr>
<tr>
<td>$$ = \theta^T X^T X \theta - 2\theta^T X^T y + y^T y$$</td>
<td>The two middle terms are both transposes of each other, and they are both scalars (since we have a 1xn row vector times an nxn matrix times an nx1 column vector). Since the transpose of a scalar is still the same scalar, we can combine the two middle terms.</td>
</tr>
</table>
Whew! Now that we have everything expanded out and somewhat simplified, let's take the gradient of the expression above and set it to the zero vector. This will allow us to solve for the optimal $\theta$ that will minimize our loss.
<table style="width:75%">
<tr>
<th style="text-align: center">Math</th>
<th style="text-align: center">Explanation</th>
</tr>
<tr>
<td>$$\nabla_\theta (\theta^T X^T X \theta) - \nabla_\theta(2\theta^TX^T y) + \nabla_\theta(y^T y) = \vec{0}$$</td>
<td>Let's take derivatives one term at a time.</td>
</tr>
<tr>
<td>$$(X^T X + (X^T X)^T)\theta - \nabla_\theta(2\theta^TX^T y) + \nabla_\theta(y^T y) = \vec{0}$$</td>
<td>For the first term, we use the identity $\frac{\partial}{\partial x} x^T A x = (A + A^T)x$.</td>
</tr>
<tr>
<td>$$(X^T X + (X^T X)^T)\theta - 2X^T y + \nabla_\theta(y^T y) = \vec{0}$$</td>
<td>For the second term, we use the identity $\frac{\partial}{\partial x} x^T A = A$.</td>
</tr>
<tr>
<td>$$(X^T X + (X^T X)^T)\theta - 2X^T y + \vec{0} = \vec{0}$$</td>
<td>The last derivative is the easiest, since $y^T y$ does not depend on $\theta$.</td>
</tr>
<tr>
<td>$$2X^T X\theta = 2X^T y$$</td>
<td>Notice that $(X^T X)^T = X^T X$, so we can combine the $X^T X$ terms into $2X^TX$. We also move $2X^Ty$ to the right side of the equation.</td>
</tr>
<tr>
<td>$$\theta = (X^T X)^{-1} X^T y$$</td>
<td>Divide by 2 on both sides, then left-multiply by $(X^T X)^{-1}$ on both sides to solve for $\theta$.</td>
</tr>
</table>
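The closed form above can be sanity-checked on a tiny full-rank system (values invented for illustration):

```python
import numpy as np

X = np.array([[1., 1.], [1., 2.], [1., 3.]])  # intercept column plus one feature
y = np.array([1., 2., 3.])                    # exactly y = 0 + 1*x

theta = np.linalg.inv(X.T @ X) @ X.T @ y
# theta is approximately [0., 1.]
```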
## Question 3a: Solving for Theta
Now that we have the analytic solution for $\theta$, let's find the optimal numerical thetas for our tips dataset. Fill out the function below.
Hints:
1. Use the `np.linalg.inv` function to compute matrix inverses
1. To compute the transpose of a matrix, you can use `X.T` or `X.transpose()`
<!--
BEGIN QUESTION
name: q3a
-->
```
def get_analytical_sol(X, y):
"""
Computes the analytical solution to our least squares problem
Parameters
-----------
X: a 2D dataframe of numeric features (one-hot encoded)
y: a 1D vector of tip amounts
Returns
-----------
The estimate for theta computed using the equation mentioned above
"""
...
analytical_thetas = get_analytical_sol(one_hot_X, tips)
print("Our analytical loss is: ", l2(linear_model(analytical_thetas, one_hot_X), tips).mean())
print("Our numerical loss is: ", l2(linear_model(minimize_average_loss(l2, linear_model, one_hot_X, tips), one_hot_X), tips).mean())
ok.grade("q3a");
```
## Question 3b: Fixing our analytical loss
Our analytical loss is surprisingly much worse than our numerical loss. Why is this?
Here is a relevant Stack Overflow post: https://stackoverflow.com/questions/31256252/why-does-numpy-linalg-solve-offer-more-precise-matrix-inversions-than-numpy-li
In summary, `np.linalg.inv` loses precision, which propagates error throughout the calculation. If you're not convinced, try `np.linalg.solve` instead of `np.linalg.inv`; you'll find that our loss is much closer to the expected numerical loss. These results are meant to demonstrate that even if our math is correct, the limits of our computational precision and machinery can lead us to poor results.
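A quick way to see the two approaches side by side on random, made-up, well-conditioned data — `solve` factorizes the system instead of forming an explicit inverse, which is both faster and numerically safer:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)

theta_inv = np.linalg.inv(X.T @ X) @ (X.T @ y)
theta_solve = np.linalg.solve(X.T @ X, X.T @ y)
# on well-conditioned data both agree; on near-singular data, solve degrades more gracefully
```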
You might also notice that `one_hot_X` has a rank of 9 even though it has 12 columns. This means that $X^TX$ will also have rank 9 despite being a 12-by-12 matrix; thus, $X^TX$ will not be invertible because it does not have full rank.
Complete the code below to one-hot-encode our dataset such that `one_hot_X_revised` has no redundant features. After this, you should see that the analytical loss and the numerical loss are similar as expected.
**Hint**: Check out the `drop_first` parameter of the `pd.get_dummies` function.
<!--
BEGIN QUESTION
name: q3b
-->
```
def one_hot_encode_revised(data):
"""
Return the one-hot encoded dataframe of our input data, removing redundancies.
Parameters
-----------
data: a dataframe that may include non-numerical features
Returns
-----------
A one-hot encoded dataframe that only contains numeric features without any redundancies.
"""
...
one_hot_X_revised = one_hot_encode_revised(X)
revised_analytical_thetas = get_analytical_sol(one_hot_X_revised, tips)
print("Our analytical loss is: ", l2(linear_model(revised_analytical_thetas, one_hot_X_revised), tips).mean())
print("Our numerical loss is: ", l2(linear_model(minimize_average_loss(l2, linear_model, one_hot_X_revised, tips), one_hot_X_revised), tips).mean())
ok.grade("q3b");
```
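The effect of `drop_first` can be seen on a toy column (illustrative only): one category becomes the implicit baseline, removing the redundant indicator.

```python
import pandas as pd

days = pd.DataFrame({"day": ["Sun", "Mon", "Sat"]})
full = pd.get_dummies(days)                      # three indicator columns
reduced = pd.get_dummies(days, drop_first=True)  # first category dropped as the baseline
```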
## Question 4: Diabetes data
### Let's take a look at the diabetes data we used from Lab 4
```
from sklearn.datasets import load_diabetes
from sklearn import linear_model
from scipy import stats
import statsmodels.api as sm
```
#### Look at a small description of the data to remind you what the data contains, and also load it.
```
diabetes_data = load_diabetes()
print(diabetes_data.DESCR)
```
#### Again, we'll divide the data into portions: the features (X) and the target (Y).
```
# Unpacking the data into new variables
diabetes_features = diabetes_data['data']
diabetes_target = diabetes_data['target']
```
#### And we will fit the model in a more traditional way.
```
model = sm.OLS(diabetes_target, diabetes_features).fit()
model.summary()
```
Using your PSTAT 126 knowledge, which of the variables below are important? Can you think of any interactions that might be worth exploring?
<!--
BEGIN QUESTION
name: q4a
-->
*Write your answer here, replacing this text.*
### Interaction term
Make a new variable named `newvar` to contain an interaction term of columns 5 and 6 in `diabetes_features`.
Create a new variable called `diabetes_features2` that appends this new variable to `diabetes_features`.
One way to do this is with `np.insert`, specifying `axis = 1`.
<!--
BEGIN QUESTION
name: q4b
-->
```
newvar = ...
diabetes_features2 = ...
ok.grade("q4b");
```
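A hedged sketch of the `np.insert` pattern on made-up arrays (the column indices here are arbitrary, not the graded answer):

```python
import numpy as np

features = np.arange(12.).reshape(4, 3)            # 4 samples, 3 columns
interaction = features[:, 1] * features[:, 2]      # elementwise product of two columns
augmented = np.insert(features, features.shape[1], interaction, axis=1)
# the interaction term is appended as a new last column
```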
### Regression model with the interaction term
Now, run a regression model with your interaction term added. Name this model `model2`.
<!--
BEGIN QUESTION
name: q4c
-->
```
model2 = ...
model2.summary()
ok.grade("q4c");
```
Is `model2` with the interaction term better or worse than `model`? Explain below.
<!--
BEGIN QUESTION
name: q4d
-->
*Write your answer here, replacing this text.*
# Submit
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output.
**Please save before submitting!**
```
# Save your notebook first, then run this cell to submit.
ok.submit()
```
# camera_calib_python
This is a Python-based camera calibration "library". Some things:
* Uses [nbdev](https://github.com/fastai/nbdev), which is an awesome and fun way to develop and tinker.
* Uses pytorch for optimization of intrinsic and extrinsic parameters. Each step in the model is modularized as its own pytorch `nn.module` in the `modules.ipynb` notebook.
* Optimization is carried out via the built-in `LBFGS` optimizer. The `LBFGS` optimizer uses only the gradient to do a quasi second-order optimization. However, I've noticed it's imperfect and can take a long time to converge in some cases.
* The use of pytorch allows the forward pass to be easily modified. It also allows the use of any differentiable loss function, although I've noticed that sum of squared errors seems to give the best results of the losses I've tried.
* The fiducial point detector for my calibration board uses a pytorch neural net under the hood (more info [here](https://github.com/justinblaber/fiducial_detect)), which is easily integrated into this library since it's Python-based.
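For intuition, the quasi-Newton behavior described above can be sketched with SciPy's `L-BFGS-B` (an analogue of, not the same code as, PyTorch's `LBFGS`): it builds a low-rank curvature estimate from gradients alone.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    # Classic curved-valley test function; global minimum at (1, 1)
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="L-BFGS-B")
# res.x converges near [1., 1.] using only (numerically estimated) gradients
```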
# Tutorial
```
import camera_calib.api as api
```
Before calibration can be done, we need the following information:
1. Images and their respective camera and pose indices
2. Calibration board geometry
3. Fiducial point detector
4. Control point refiner
### 1) Images
```
import re
from pathlib import Path
files_img = list(Path('data/dot_vision_checker').glob('*.png'))
files_img
def _parse_name(name_img):
match = re.match(r'''SERIAL_(?P<serial>.*)_
DATETIME_(?P<date>.*)_
CAM_(?P<cam>.*)_
FRAMEID_(?P<frameid>.*)_
COUNTER_(?P<counter>.*).png''',
name_img,
re.VERBOSE)
return match.groupdict()
imgs = []
for file_img in files_img:
dict_group = _parse_name(file_img.name)
img = api.File16bitImg(file_img)
img.idx_cam = int(dict_group['cam'])-1
img.idx_cb = int(dict_group['counter'])-1
imgs.append(img)
for img in imgs: print(f'{img.name} - cam: {img.idx_cam} - cb: {img.idx_cb}')
```
### 2) Calibration board geometry
The calibration board geometry specifies where fiducial markers and control points are located. For this example, my dot vision checker board is used.
```
h_cb = 50.8
w_cb = 50.8
h_f = 42.672
w_f = 42.672
num_c_h = 16
num_c_w = 16
spacing_c = 2.032
cb_geom = api.CbGeom(h_cb, w_cb,
api.CpCSRGrid(num_c_h, num_c_w, spacing_c),
api.FmCFPGrid(h_f, w_f))
cb_geom.plot()
```
### 3) Fiducial detector
```
from pathlib import Path
```
This fiducial detector will take in an image and return the locations of the fiducial markers. The detector in this example is a neural net trained specifically on my calibration board. More info available at:
* https://github.com/justinblaber/fiducial_detect
```
file_model = Path('models/dot_vision_checker.pth')
detector = api.DotVisionCheckerDLDetector(file_model)
```
### 4) Control Point Refiner
The refiner will take in an image, initial guesses for control points, and the boundaries around the control points, and return a refined point. The boundaries help determine how much neighboring info can be used to refine the control point.
```
refiner = api.OpenCVCheckerRefiner(hw_min=5, hw_max=15, cutoff_it=20, cutoff_norm=1e-3)
```
## Calibrate
Now, we can calibrate
```
calib = api.multi_calib(imgs, cb_geom, detector, refiner)
```
From Bo Li's calibration paper, we know the coordinate graph of calibration board poses and cameras forms a bipartite graph. For debugging purposes this is displayed below.
```
api.plot_bipartite(calib)
```
Plot residuals
```
api.plot_residuals(calib);
```
Plot extrinsics; note that `%matplotlib notebook` can be used to make the plot interactive
```
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(20,20))
ax = fig.add_subplot(2, 2, 1, projection='3d')
api.plot_extrinsics(calib, ax=ax)
ax.view_init(elev=90, azim=-90)
ax = fig.add_subplot(2, 2, 2, projection='3d')
api.plot_extrinsics(calib, ax=ax)
ax.view_init(elev=45, azim=-45)
ax = fig.add_subplot(2, 2, 3, projection='3d')
api.plot_extrinsics(calib, ax=ax)
ax.view_init(elev=0, azim=-90)
ax = fig.add_subplot(2, 2, 4, projection='3d')
api.plot_extrinsics(calib, ax=ax)
ax.view_init(elev=0, azim=0)
plt.subplots_adjust(wspace=0, hspace=0)
```
This matches my camera rig pretty closely.
## Save/Load
Save
```
api.save(calib, '/tmp/calib.pth')
```
Load
```
del calib
calib = api.load('/tmp/calib.pth')
```
# Build
```
from camera_calib.utils import convert_notebook
convert_notebook()
```
# Highlighting Task - Event Extraction from Text
In this tutorial, we will show how *dimensionality reduction* can be applied over *both the media units and the annotations* of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an *open-ended extraction task*, where the crowd was asked to highlight words or phrases in a text that refer to events or actions. The task was executed on [FigureEight](https://www.figure-eight.com/). For more crowdsourcing annotation task examples, click [here](https://raw.githubusercontent.com/CrowdTruth-core/tutorial/getting_started.md).
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: [template](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/templates/Event-Text-Highlight/template.html), [css](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/templates/Event-Text-Highlight/template.css), [javascript](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/templates/Event-Text-Highlight/template.js).
This is how the task looked like to the workers:

A sample dataset for this task is available in [this file](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/data/event-text-highlight.csv), containing raw output from the crowd on FigureEight. Download the file and place it in a folder named `data` that has the same root as this notebook. The answers from the crowd are stored in the `tagged_events` column.
```
import pandas as pd
test_data = pd.read_csv("../data/event-text-highlight.csv")
test_data["tagged_events"][0:30]
```
Notice the diverse behavior of the crowd workers: while most annotated each word individually, the worker on *row 2* annotated a chunk of the sentence as a single multi-word phrase. Also, when a worker picked no answer, the value in the cell is `[NONE]`.
## A basic pre-processing configuration
Our basic pre-processing configuration attempts to normalize the different ways of performing the crowd annotations.
We set `remove_empty_rows = False` to keep the empty rows from the crowd. This configuration option will set all empty cell values to correspond to a *NONE* token in the annotation vector.
We build the annotation vector to have one component for each word in the sentence. To do this, we break up multiple-word annotations into a list of single words in the `processJudgments` call:
```
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
    lambda x: str(x).replace(' ', self.annotation_separator))
```
The final configuration class `Config` is this:
```
import crowdtruth
from crowdtruth.configuration import DefaultConfig

class Config(DefaultConfig):
    inputColumns = ["doc_id", "sentence_id", "events", "events_count", "original_sententce", "processed_sentence", "tokens"]
    outputColumns = ["tagged_events"]
    open_ended_task = True
    annotation_separator = ","
    remove_empty_rows = False

    def processJudgments(self, judgments):
        # build annotation vector just from words
        judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
            lambda x: str(x).replace(' ', self.annotation_separator))
        # normalize vector elements
        judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
            lambda x: str(x).replace('[', ''))
        judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
            lambda x: str(x).replace(']', ''))
        judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
            lambda x: str(x).replace('"', ''))
        judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
            lambda x: str(x).replace(',,,', ','))
        judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
            lambda x: str(x).replace(',,', ','))
        return judgments
```
Now we can pre-process the data and run the CrowdTruth metrics:
```
data_with_stopwords, config_with_stopwords = crowdtruth.load(
file = "../data/event-text-highlight.csv",
config = Config()
)
processed_results_with_stopwords = crowdtruth.run(
data_with_stopwords,
config_with_stopwords
)
```
## Removing stopwords from Media Units and Annotations
A more complex dimensionality reduction technique involves removing the stopwords from both the *media units* and the crowd *annotations*. Stopwords (i.e. words that are very common in English) do not usually carry much useful information. Moreover, the crowd's behavior with respect to them is inconsistent: some workers omit them and others annotate them.
The first step is to build a function that removes stopwords from strings. We will use the `stopwords` corpus in the `nltk` package to get the list of words. We want to build a function that can be reused for both the text in the media units and in the annotations column. Also, we need to be careful about omitting punctuation.
The function `remove_stop_words` does all of these things:
```
import nltk
from nltk.corpus import stopwords
import string

stopword_set = set(stopwords.words('english'))
stopword_set.update(['s'])

def remove_stop_words(words_string, sep):
    '''
    words_string: string containing all words
    sep: separator character for the words in words_string
    '''
    words_list = words_string.split(sep)
    corrected_words_list = ""
    for word in words_list:
        if word not in stopword_set:
            if corrected_words_list != "":
                corrected_words_list += sep
            corrected_words_list += word
    return corrected_words_list
```
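As a quick sanity check of the filtering logic above, here is a self-contained sketch with a tiny hardcoded stopword set standing in for the `nltk` corpus (illustrative only, so it runs without downloading the `stopwords` data):

```python
# Tiny stand-in for stopwords.words('english'); illustrative only.
stopword_set = {"the", "on", "a", "is", "s"}

def remove_stop_words(words_string, sep):
    words_list = words_string.split(sep)
    # keep only the non-stopwords, re-joined with the original separator
    return sep.join(w for w in words_list if w not in stopword_set)

print(remove_stop_words("the cat sat on the mat", " "))  # cat sat mat
print(remove_stop_words("run,the,race", ","))            # run,race
```

Note that a string containing only stopwords collapses to the empty string, which is why the configuration below substitutes the `NONE` token in that case.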
In the new configuration class `ConfigDimRed`, we apply the function we just built to both the column that contains the media unit text (`inputColumns[2]`), and the column containing the crowd annotations (`outputColumns[0]`):
```
import pandas as pd

class ConfigDimRed(Config):
    def processJudgments(self, judgments):
        judgments = Config.processJudgments(self, judgments)
        # remove stopwords from input sentence
        for idx in range(len(judgments[self.inputColumns[2]])):
            judgments.at[idx, self.inputColumns[2]] = remove_stop_words(
                judgments[self.inputColumns[2]][idx], " ")
        for idx in range(len(judgments[self.outputColumns[0]])):
            judgments.at[idx, self.outputColumns[0]] = remove_stop_words(
                judgments[self.outputColumns[0]][idx], self.annotation_separator)
            if judgments[self.outputColumns[0]][idx] == "":
                judgments.at[idx, self.outputColumns[0]] = self.none_token
        return judgments
```
Now we can pre-process the data and run the CrowdTruth metrics:
```
data_without_stopwords, config_without_stopwords = crowdtruth.load(
file = "../data/event-text-highlight.csv",
config = ConfigDimRed()
)
processed_results_without_stopwords = crowdtruth.run(
data_without_stopwords,
config_without_stopwords
)
```
## Effect on CrowdTruth metrics
Finally, we can compare the effect of the stopword removal on the CrowdTruth *sentence quality score*.
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
processed_results_with_stopwords["units"]["uqs"],
processed_results_without_stopwords["units"]["uqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Sentence Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
```
The red line in the plot runs through the diagonal. All sentences above the line have a higher *sentence quality score* when the stopwords were removed.
The plot shows that removing the stopwords improved the quality for a majority of the sentences. Surprisingly though, some sentences decreased in quality. This effect can be understood when plotting the *worker quality scores*.
```
plt.scatter(
processed_results_with_stopwords["workers"]["wqs"],
processed_results_without_stopwords["workers"]["wqs"],
)
plt.plot([0, 0.8], [0, 0.8], 'red', linewidth=1)
plt.title("Worker Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
```
The quality of the majority of workers also increased in the configuration without stopwords. However, because of the interlinked nature of the CrowdTruth quality metrics, the annotations of these workers now have a greater weight when calculating the *sentence quality score*. So stopword removal removed some of the noise in the annotations, thereby increasing the quality scores, but it also *amplified the true ambiguity in the sentences*.
```
data_with_stopwords["units"]
```
| github_jupyter |
# Import necessary libraries
In this tutorial, we are going to use PyTorch, a cutting-edge deep learning framework, to complete our task.
```
import torch
import torchvision
#for reproducibility
torch.manual_seed(0)
import numpy as np
np.random.seed(0)
## Create dataloaders; in PyTorch, we feed the trainer data using a dataloader
## We create dataloaders with datasets from torchvision,
## and we don't have to download them separately; it is all done automatically
# Define the batch size: how many samples are fed to the model in one iteration
batch_size_train = 64 # We use a small batch size here for training
batch_size_test = 1024 #
# define how image transformed
image_transform = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))
])
#image datasets
train_dataset = torchvision.datasets.MNIST('/home/minh/mnist-data/',
train=True,
download=True,
transform=image_transform)
test_dataset = torchvision.datasets.MNIST('/home/minh/mnist-data/',
train=False,
download=True,
transform=image_transform)
#data loaders
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size_train,
shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=batch_size_test,
shuffle=True)
## Now we can start to build our CNN model
## We first import the pytorch nn module and optimizer
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
## Then define the model class
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # input channel 1, output channel 10
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5, stride=1)
        # input channel 10, output channel 20
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5, stride=1)
        # fully connected layers
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        x = x.view(-1, 320)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        # softmax over the class dimension (dim=1), not the batch dimension
        return F.log_softmax(x, dim=1)
## create model and optimizer
learning_rate = 0.01
momentum = 0.5
device = "cuda" if torch.cuda.is_available() else "cpu"
model = CNN().to(device)  # use the GPU when available
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=momentum)
from tqdm import tqdm_notebook as tqdm
##define train function
def train(model, device, train_loader, optimizer, epoch, log_interval=10000):
    model.train()
    tk0 = tqdm(train_loader, total=int(len(train_loader)))
    counter = 0
    for batch_idx, (data, target) in enumerate(tk0):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        counter += 1
        tk0.set_postfix(loss=(loss.item()*data.size(0) / (counter * train_loader.batch_size)))

##define test function
def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
num_epoch = 5
for epoch in range(1, num_epoch + 1):
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
# from torchsummary import summary
# summary(model, (1, 28, 28))
```
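The `320` in `self.fc1 = nn.Linear(320, 50)` comes from the convolution/pooling shape arithmetic on a 28x28 MNIST image: each 5x5 valid convolution shrinks the side by 4, and each 2x2 max-pool halves it, leaving 20 channels of 4x4 feature maps. A quick check of that arithmetic:

```python
def conv_out(size, kernel, stride=1):
    # output side length of a valid (no padding) convolution
    return (size - kernel) // stride + 1

s = 28
s = conv_out(s, 5) // 2   # conv1 (5x5): 28 -> 24, then 2x2 max-pool: 24 -> 12
s = conv_out(s, 5) // 2   # conv2 (5x5): 12 -> 8,  then 2x2 max-pool: 8 -> 4
print(20 * s * s)         # 20 channels * 4 * 4 = 320
```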
| github_jupyter |
```
import nltk
import string
import numpy as np
%matplotlib inline
from nltk import word_tokenize
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
from sklearn import metrics
import warnings
warnings.filterwarnings("ignore")
enstop = stopwords.words('english')
punct = string.punctuation
def tokenizer(sent):
    sent = sent.lower()
    tmp = word_tokenize(sent)
    res = []
    for word in tmp:
        if word not in enstop and word not in punct:
            res.append(word)
    return res
```
### AGnews data read:
```
import torch
import torch.nn as nn
from torchtext import data
from torchtext import vocab
text_field = data.Field(tokenize=tokenizer, lower=True, include_lengths=True, fix_length=256)
label_field = data.Field(sequential=False, use_vocab=False, dtype=torch.long)
train, valid, test = data.TabularDataset.splits(path='AGnews',
train='train_ag1.csv',
validation='val_ag1.csv',
test='test_ag1.csv',
format='csv', skip_header=True,
fields=[('sentence', text_field), ('label', label_field)])
```
### Read the GloVe word vector:
```
vec = vocab.Vectors(name='glove.6B.300d.txt')
text_field.build_vocab(train, valid, test, max_size=250000, vectors=vec,
unk_init=torch.Tensor.normal_)
label_field.build_vocab(train, valid, test)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iter, valid_iter, test_iter = data.BucketIterator.splits((train, valid, test), batch_sizes=(128, 128, 128),
sort_key=lambda x: len(x.sentence),
sort_within_batch=True,
repeat=False, shuffle=True,
device=device)
```
### Model training function (standard training and DropAttack adversarial training)
```
def train(model, train_iter, dev_iter, num_epoch, opt, criterion, eva, out_model_file):
    print("Training begin!")
    model.train()
    loss_list = []
    dev_acc = []
    train_acc = []
    best_dev_acc = 0.
    for epoch in range(num_epoch):
        total_loss = 0.
        for batch in train_iter:
            output = model(batch.sentence)
            loss = criterion(output, batch.label)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total_loss += loss.item()
        loss_list.append(total_loss)
        dev_acc.append(eva(model, dev_iter))
        train_acc.append(eva(model, train_iter))
        print(f"Epoch: {epoch+1}/{num_epoch}. Total loss: {total_loss:.3f}. Train Acc: {train_acc[-1]:.3%}. Validation Set Acc: {dev_acc[-1]:.3%}.")
        if dev_acc[-1] > best_dev_acc:
            best_dev_acc = dev_acc[-1]
            torch.save(model.state_dict(), out_model_file)
    return loss_list, dev_acc
# import torch
class DropAttack():
    def __init__(self, model):
        self.model = model
        self.param_backup = {}
        self.grad_backup = {}
        self.mask_backup = {}

    def attack(self, epsilon=5.0, p_attack=0.5, param_name='embed.weight', is_first_attack=False):
        # param_name should be the name of the parameter to be attacked in your model
        for name, param in self.model.named_parameters():
            if param.requires_grad and param_name == name:
                if is_first_attack:
                    self.param_backup[name] = param.data.clone()
                    mask = np.random.binomial(n=1, p=p_attack, size=param.grad.shape)
                    mask = torch.from_numpy(mask).float()  # attack mask
                    self.mask_backup['mask'] = mask.clone()
                else:
                    mask = self.mask_backup['mask']
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    r_at = epsilon * param.grad / norm
                    r_at *= mask.cuda()  # randomly attack some of the parameters
                    param.data.add_(r_at)

    def restore(self, param_name='embed.weight'):
        for name, param in self.model.named_parameters():
            if param.requires_grad and param_name == name:
                assert name in self.param_backup
                param.data = self.param_backup[name]

    def backup_grad(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad:
                self.grad_backup[name] = param.grad.clone()

    def restore_grad(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad:
                param.grad = self.grad_backup[name]
def train_DA(model, train_iter, dev_iter, num_epoch, opt, criterion, eva, out_model_file):
    K = 1
    print(f'Adversarial training begin! (DropAttack-{K})')
    model.train()
    dropattack = DropAttack(model)
    loss_list = []
    dev_acc = []
    best_dev_acc = 0.
    for epoch in range(num_epoch):
        total_loss = 0.
        model.train()
        for batch in train_iter:
            output = model(batch.sentence)
            loss = criterion(output, batch.label)
            loss.backward(retain_graph=True)  # calculate the original gradient
            dropattack.backup_grad()  # back up the initial gradient
            # Attack the embedding layer
            for t in range(K):
                dropattack.attack(5, 0.7, 'embed.weight', is_first_attack=(t == 0))  # add adversarial perturbation; back up param.data on the first attack
                output = model(batch.sentence)
                loss_adv1 = criterion(output, batch.label) / K
                loss_adv1.backward(retain_graph=True)  # backpropagate, accumulating the adversarial gradient on top of the normal grad
                loss += loss_adv1
            dropattack.restore('embed.weight')  # restore the perturbed parameters
            dropattack.restore_grad()
            # Attack the hidden layer
            for t in range(K):
                dropattack.attack(5, 0.7, 'rnn.rnn.weight_ih_l0', is_first_attack=(t == 0))  # add adversarial perturbation; back up param.data on the first attack
                output = model(batch.sentence)
                loss_adv2 = criterion(output, batch.label) / K
                loss_adv2.backward(retain_graph=True)  # backpropagate, accumulating the adversarial gradient on top of the normal grad
                loss += loss_adv2
            dropattack.restore('rnn.rnn.weight_ih_l0')  # restore the perturbed parameters
            opt.zero_grad()
            loss.backward()
            opt.step()  # update parameters
            # loss = loss + loss_adv1 + loss_adv2
            total_loss += loss.item()
            opt.zero_grad()
        loss_list.append(total_loss)
        dev_acc.append(eva(model, dev_iter))
        print(f"Epoch: {epoch+1}/{num_epoch}. Total loss: {total_loss:.3f}. Validation Set Acc: {dev_acc[-1]:.3%}.")
        if dev_acc[-1] > best_dev_acc:
            best_dev_acc = dev_acc[-1]
            torch.save(model.state_dict(), out_model_file)
    return loss_list, dev_acc
```
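The core of `attack` above is an L2-normalized gradient step gated by a Bernoulli mask, so only a random fraction `p_attack` of the parameter entries are perturbed. A minimal NumPy sketch of that update (toy values, not the model's real gradients):

```python
import numpy as np

rng = np.random.default_rng(0)
grad = rng.normal(size=(4, 3))    # stand-in for param.grad
epsilon, p_attack = 5.0, 0.5

r_at = epsilon * grad / np.linalg.norm(grad)           # scaled gradient direction
mask = rng.binomial(n=1, p=p_attack, size=grad.shape)  # Bernoulli attack mask
perturbation = r_at * mask                             # only masked entries move

print(perturbation[mask == 0])  # entries with mask 0 stay exactly 0
```

This is why the mask is backed up on the first attack of each batch: subsequent attack steps (K > 1) perturb the same random subset of entries.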
### Model evaluation function
```
def eva(model, data_iter):
    correct, count = 0, 0
    with torch.no_grad():
        for batch in data_iter:
            pred = model(batch.sentence)
            pred = torch.argmax(pred, dim=-1)
            correct += (pred == batch.label).sum().item()
            count += len(pred)
    return correct / count
```
### BiGRU model
```
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, bidirectional):
        super(LSTM, self).__init__()
        self.rnn = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
                           num_layers=num_layers, bidirectional=bidirectional)

    def forward(self, x, length):
        packed_x = nn.utils.rnn.pack_padded_sequence(x, length)
        packed_output, (hidden, cell) = self.rnn(packed_x)
        output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)
        return hidden, output

class GRU(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, bidirectional):
        super(GRU, self).__init__()
        self.rnn = nn.GRU(input_size=input_size, hidden_size=hidden_size,
                          num_layers=num_layers, bidirectional=bidirectional)

    def forward(self, x, length):
        packed_x = nn.utils.rnn.pack_padded_sequence(x, length)
        packed_output, hidden = self.rnn(packed_x)
        output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)
        return hidden, output

class TextRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, num_layers, bidirectional, out_dim,
                 pretrained_embed, use_gru=False, freeze=True,
                 random_embed=False, vocab_size=None):
        super(TextRNN, self).__init__()
        if random_embed:
            self.embed = nn.Embedding(vocab_size, embed_size)
        else:
            self.embed = nn.Embedding.from_pretrained(pretrained_embed, freeze=False)
        if use_gru:
            self.rnn = GRU(embed_size, hidden_size, num_layers, bidirectional)
        else:
            self.rnn = LSTM(embed_size, hidden_size, num_layers, bidirectional)
        self.proj = nn.Linear(2 * hidden_size, out_dim)

    def forward(self, x):
        text, text_length = x                       # text: [seq_len, bs]
        text = text.permute(1, 0)                   # text: [bs, seq_len]
        embed_x = self.embed(text)                  # embed_x: [bs, seq_len, embed_dim]
        embed_x = embed_x.permute(1, 0, 2)          # embed_x: [seq_len, bs, embed_dim]
        hidden, _ = self.rnn(embed_x, text_length)  # hidden: [2*num_layers, bs, hidden_size]
        hidden = torch.cat((hidden[-1, :, :], hidden[-2, :, :]), dim=1)
        return self.proj(hidden)
```
### Training
```
embed_size = 300
hidden_size = 300
num_layers = 2
bidirectional = True
out_dim = 4
pretrained_embed = text_field.vocab.vectors
lr = 0.001
num_epoch = 5
freeze = False
use_gru = True
random_embed = False
vocab_size = len(text_field.vocab.stoi)
out_model_file = 'textrnn_AG_DA.pt'
# ————————————————————————————————————————————————————————
use_dropattack = True # Whether to use DropAttack
# ————————————————————————————————————————————————————————
model = TextRNN(embed_size, hidden_size, num_layers, bidirectional, out_dim,
pretrained_embed, use_gru=use_gru, freeze=freeze,
random_embed=random_embed, vocab_size=None).to(device)
opt = torch.optim.Adam(model.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()
if use_dropattack:
loss_list, dev_acc_list = train_DA(model, train_iter, valid_iter, num_epoch, opt, criterion, eva, out_model_file)
else:
loss_list, dev_acc_list = train(model, train_iter, valid_iter, num_epoch, opt, criterion, eva, out_model_file)
model = TextRNN(embed_size, hidden_size, num_layers, bidirectional, out_dim,
pretrained_embed, use_gru=use_gru, freeze=freeze,
random_embed=random_embed, vocab_size=None).to(device)
model.load_state_dict(torch.load('textrnn_AG_DA.pt'))
print(f"Test set acc: {eva(model, test_iter):.3%}")
```
| github_jupyter |
# Periodic signals
In this notebook we will study periodic signals and the conditions required for periodicity.
This property of signals is tied to ***time shifting***, a transformation of the independent variable.
A continuous periodic signal is one for which the following property holds:
\begin{equation}
x(t) = x(t \pm mT_p),
\end{equation}
that is, the value of the signal at instant $t$ [s] is the same as at instant $t \pm mT_p$ [s]. The signal therefore repeats every
period $T_p$.
$T_p$ is called the fundamental period of the periodic signal. In this case, $x(t) = x(t \pm T_p) = x(t \pm 2T_p) = ... = x(t \pm kT_p)$.
For discrete signals the definition is analogous:
\begin{equation}
x[n] = x[n \pm m N_p],
\end{equation}
with $N_p$ an integer number of samples.
A signal that is not periodic is called aperiodic.
Let us look at some examples of continuous and discrete periodic signals.
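To see the defining property numerically, we can sample a 1 kHz sine (the signal used below) and verify that shifting by $T_p = 1/f$ leaves it unchanged; an illustrative check:

```python
import numpy as np

fs, f = 44100, 1000           # sample rate [Hz] and frequency [Hz]
t = np.arange(0, 0.01, 1/fs)  # 10 ms of samples
Tp = 1/f                      # fundamental period [s]
x = np.sin(2*np.pi*f*t)
x_shift = np.sin(2*np.pi*f*(t + Tp))  # x(t + Tp)

print(np.allclose(x, x_shift))  # True: the sine repeats every Tp
```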
```
# import the required libraries
import numpy as np # arrays
import matplotlib.pyplot as plt # plots
from scipy import signal # some signals
import IPython.display as ipd # to play signals
# General settings
fs = 44100
t = np.linspace(0, 1, fs) # time vector
freq = 1000 # fundamental frequency
# sine or cosine
xt = np.sin(2*np.pi*freq*t)
# Figure
plt.figure()
plt.title('Sine')
plt.plot(t, xt, '-b', linewidth = 2, label = 'sine - 1000 [Hz]')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/1000))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array
```
## A sine with 2 frequencies
If we have a signal
\begin{equation}
x(t) = \mathrm{sin}(2 \pi \ m_1 \ f t) + \mathrm{sin}(2 \pi \ m_2 \ f t),
\end{equation}
it will be periodic as long as $\frac{m_2}{m_1}$ is a rational number. Otherwise, the signal is quasi-periodic: it will look periodic, but if you inspect the details, you will notice that it never repeats exactly.
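The rational-ratio condition can be made concrete: for sinusoids with integer frequencies, the sum is periodic with fundamental period equal to the reciprocal of the gcd of the frequencies. An illustrative helper (integer frequencies for simplicity):

```python
from fractions import Fraction
from math import gcd

def common_period(f1, f2):
    # sin(2*pi*f1*t) + sin(2*pi*f2*t) repeats every 1/gcd(f1, f2) seconds
    return Fraction(1, gcd(f1, f2))

print(common_period(1000, 3000))  # 1/1000 -> the sum repeats every 1 ms
```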
```
# sine or cosine - 2 frequencies
m = 3 #1.4*np.sqrt(2)
xt = np.sin(2*np.pi*freq*t) + np.sin(2*np.pi*m*freq*t)
# Figure
plt.figure()
plt.title('Sine')
plt.plot(t, xt, '-b', linewidth = 2, label = 'sine - 2 freq')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/freq))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array

# sawtooth
xt = signal.sawtooth(2 * np.pi * freq * t)
# Figure
plt.figure()
plt.title('Sawtooth')
plt.plot(t, xt, '-b', linewidth = 2, label = 'sawtooth')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/freq))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array

# square wave
xt = signal.square(2 * np.pi * freq * t)
# Figure
plt.figure()
plt.title('Square wave')
plt.plot(t, xt, '-b', linewidth = 2, label = 'square')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/freq))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array

N = 9
n = np.arange(N)
xn = [2, 1, -1, 2, 1, -1, 2, 1, -1]
# Figure
plt.figure()
plt.title('Periodic discrete signal')
plt.stem(n, xn, '-b', basefmt=" ", use_line_collection= True)
plt.grid(linestyle = '--', which='both')
plt.xlabel('Sample [-]')
plt.ylabel('Amplitude [-]')
plt.ylim((-2.2, 2.2))
plt.tight_layout()
plt.show()
```
## Periodic discrete signals
Periodicity in discrete signals has a practical limit. To see why, consider a continuous signal $x(t) = \mathrm{cos}(\omega t)$. As the frequency $f$ of the signal increases, its oscillation rate also increases. But what happens for a signal of the form
\begin{equation}
x[n] = \mathrm{cos}(\omega n) \ ?
\end{equation}
```
N = 50
n = np.arange(N)
w = 0
xn = np.cos(w*n)
# Figure
plt.figure()
plt.title('Discrete cosine')
plt.stem(n, xn, '-b', label = r'$\omega$ = {:.3} [rad/s]'.format(float(w)), basefmt=" ", use_line_collection= True)
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Sample [-]')
plt.ylabel('Amplitude [-]')
plt.ylim((-1.2, 1.2))
plt.tight_layout()
plt.show()
```
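The answer to the question above is that $x[n] = \mathrm{cos}(\omega n)$ is periodic only when $\omega / 2\pi$ is rational; otherwise no integer $N_p$ satisfies $\omega N_p = 2\pi k$. A small search illustrates this (illustrative helper, not from the notebook):

```python
import numpy as np

def fundamental_period(w, max_N=1000, tol=1e-9):
    # smallest N with w*N = 2*pi*k for some integer k > 0, or None if no
    # period exists up to max_N samples
    for N in range(1, max_N + 1):
        k = w * N / (2 * np.pi)
        if round(k) != 0 and abs(k - round(k)) < tol:
            return N
    return None

print(fundamental_period(2 * np.pi / 5))  # 5: omega/2pi = 1/5 is rational
print(fundamental_period(1.0))            # None: 1/(2pi) is irrational
```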
| github_jupyter |

# terrainbento model Basic with variable $m$ steady-state solution
This model shows example usage of the Basic model from the TerrainBento package with a variable drainage-area exponent, $m$:
$\frac{\partial \eta}{\partial t} = - KQ^m S + D\nabla^2 \eta$
where $K$ and $D$ are constants, $Q$ is discharge, $S$ is local slope, $m$ is the drainage area exponent, and $\eta$ is the topography.
Note that the units of $K$ depend on $m$, so the value of $K$ used in Basic cannot be meaningfully compared with other values of $K$ unless the values of $m$ are the same.
Refer to [Barnhart et al. (2019)](https://www.geosci-model-dev.net/12/1267/2019/) for further explanation. For detailed information about creating a Basic model, see [the detailed documentation](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.derived_models.model_basic.html).
This notebook (a) shows the initialization and running of this model, (b) saves a NetCDF file of the topography, which we will use to make an oblique Paraview image of the landscape, and (c) creates a slope-area plot at steady state.
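Before running the model, it helps to know the analytical expectation: at steady state, uplift balances stream-power erosion, $U = K A^m S^n$, so $S = (U/K)^{1/n} A^{-m/n}$. A quick sketch using the parameter values from the dictionary below (and assuming $Q \propto A$):

```python
U = 0.001        # magnitude of the baselevel lowering rate
K = 0.001        # water_erodibility
m, n = 0.25, 1.0 # m_sp, n_sp

def steady_slope(A):
    # detachment-limited steady state: U = K * A**m * S**n
    return (U / K) ** (1.0 / n) * A ** (-m / n)

print(steady_slope(1e4))  # 0.1 at a drainage area of 10^4 m^2
```

This power law with exponent $-m/n = -0.25$ is the straight line the slope-area plot at the end of the notebook should follow.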
```
# import required modules
import os
import numpy as np
from terrainbento import Basic
from landlab import imshow_grid
from landlab.io.netcdf import write_netcdf
import matplotlib.pyplot as plt
import matplotlib
np.random.seed(42)
#Ignore warnings
import warnings
warnings.filterwarnings('ignore')
# create the parameter dictionary needed to instantiate the model
params = {
# create the Clock.
"clock": {
"start": 0,
"step": 10,
"stop": 1e7
},
# Create the Grid
"grid": {
"RasterModelGrid": [
(25, 40),
{
"xy_spacing": 40
},
{
"fields": {
"node": {
"topographic__elevation": {
"random": [{
"where": "CORE_NODE"
}]
}
}
}
},
]
},
# Set up Boundary Handlers
"boundary_handlers": {
"NotCoreNodeBaselevelHandler": {
"modify_core_nodes": True,
"lowering_rate": -0.001
}
},
# Parameters that control output.
"output_interval": 1e4,
"save_first_timestep": True,
"output_prefix": "output/basicVm",
"fields": ["topographic__elevation"],
# Parameters that control process and rates.
"water_erodibility": 0.001,
"m_sp": 0.25,
"n_sp": 1.0,
"regolith_transport_parameter": 0.01,
}
# the tolerance here is high, so that this can run on binder and for tests. (recommended value = 0.001 or lower).
tolerance = .001
# we can use an output writer to run until the model reaches steady state.
class run_to_steady(object):
    def __init__(self, model):
        self.model = model
        self.last_z = self.model.z.copy()
        self.tolerance = tolerance

    def run_one_step(self):
        if model.model_time > 0:
            diff = (self.model.z[model.grid.core_nodes] -
                    self.last_z[model.grid.core_nodes])
            if max(abs(diff)) <= self.tolerance:
                self.model.clock.stop = model._model_time
                print("Model reached steady state in " +
                      str(model._model_time) + " time units\n")
            else:
                self.last_z = self.model.z.copy()
                if model._model_time <= self.model.clock.stop - self.model.output_interval:
                    self.model.clock.stop += self.model.output_interval
# initialize the model using the Model.from_dict() constructor.
# We also pass the output writer here.
model = Basic.from_dict(params, output_writers={"class": [run_to_steady]})
# to run the model as specified, we execute the following line:
model.run()
#MAKE SLOPE-AREA PLOT
# plot nodes that are not on the boundary or adjacent to it
core_not_boundary = np.array(
model.grid.node_has_boundary_neighbor(model.grid.core_nodes)) == False
plotting_nodes = model.grid.core_nodes[core_not_boundary]
# assign area_array and slope_array
area_array = model.grid.at_node["drainage_area"][plotting_nodes]
slope_array = model.grid.at_node["topographic__steepest_slope"][plotting_nodes]
# instantiate figure and plot
fig = plt.figure(figsize=(6, 3.75))
slope_area = plt.subplot()
slope_area.scatter(area_array,
slope_array,
marker="o",
c="k",
label="Model Basic (m=0.25)")
# make axes log and set limits
slope_area.set_xscale("log")
slope_area.set_yscale("log")
slope_area.set_xlim(9 * 10**1, 3 * 10**5)
slope_area.set_ylim(1e-2, 1e0)
# set x and y labels
slope_area.set_xlabel(r"Drainage area [m$^2$]")
slope_area.set_ylabel("Channel slope [-]")
slope_area.legend(scatterpoints=1, prop={"size": 12})
slope_area.tick_params(axis="x", which="major", pad=7)
plt.show()
# Save stack of all netcdfs for Paraview to use.
# model.save_to_xarray_dataset(filename="basicVm.nc",
# time_unit='years',
# reference_time='model start',
# space_unit='meters')
# remove temporary netcdfs
model.remove_output_netcdfs()
# make a plot of the final steady state topography
plt.figure()
imshow_grid(model.grid, "topographic__elevation",cmap ='terrain',
grid_units=("m", "m"),var_name="Elevation (m)")
plt.show()
```
## Next Steps
- [Welcome page](../Welcome_to_TerrainBento.ipynb)
- There are three additional introductory tutorials:
1) [Introduction terrainbento](../example_usage/Introduction_to_terrainbento.ipynb)
2) [Introduction to boundary conditions in terrainbento](../example_usage/introduction_to_boundary_conditions.ipynb)
3) [Introduction to output writers in terrainbento](../example_usage/introduction_to_output_writers.ipynb).
- Five examples of steady state behavior in coupled process models can be found in the following notebooks:
1) [Basic](model_basic_steady_solution.ipynb) the simplest landscape evolution model in the terrainbento package.
2) **This Notebook**: [BasicVm](model_basic_var_m_steady_solution.ipynb) which permits the drainage area exponent to change
3) [BasicCh](model_basicCh_steady_solution.ipynb) which uses a non-linear hillslope erosion and transport law
4) [BasicVs](model_basicVs_steady_solution.ipynb) which uses variable source area hydrology
5) [BasicRt](model_basicRt_steady_solution.ipynb) which allows for two lithologies with different K values
6) [RealDEM](model_basic_realDEM.ipynb) Run the basic terrainbento model with a real DEM as initial condition.
| github_jupyter |
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/linreg/svi_linear_regression_1d_tfp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Stochastic variational inference for 1d linear regression using TFP.
Code derived from:
https://colab.sandbox.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb#scrollTo=5zCEYpzu7bDX
```
# Tensorflow
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
print("tf version {}".format(tf.__version__))
tf.config.list_physical_devices('GPU')
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
import os
#figdir = "../figures"
#def save_fig(fname): plt.savefig(os.path.join(figdir, fname))
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
tfd = tfp.distributions
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
    np.random.seed(43)
    def s(x):
        g = (x - x_range[0]) / (x_range[1] - x_range[0])
        return 3 * (0.25 + g**2.)
    x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
    eps = np.random.randn(n) * s(x)
    y = (w0 * x * (1. + np.sin(x)) + b0) + eps
    x = x[..., np.newaxis]
    x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
    x_tst = x_tst[..., np.newaxis]
    return y, x, x_tst
y, x, x_tst = load_dataset()
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
"""### Case 1: No Uncertainty"""
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
#save_fig('svi_regression_1d_mean.pdf')
plt.show()
#plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
"""### Case 2: Aleatoric Uncertainty"""
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
#plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
#save_fig('svi_regression_1d_mean_var.pdf')
plt.show()
"""### Case 3: Epistemic Uncertainty"""
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
#plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
#save_fig('svi_regression_1d_post_mean.pdf')
plt.show()
#Both Aleatoric & Epistemic Uncertainty
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
#plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
#save_fig('svi_regression_1d_post_mean_var.pdf')
plt.show()
```
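One detail of `posterior_mean_field` above worth unpacking: the constant `c = np.log(np.expm1(1.))` is chosen so that `softplus(c) == 1`, meaning the surrogate posterior's standard deviation starts near 1 while the raw scale parameters are near 0. A quick check:

```python
import numpy as np

def softplus(x):
    # log(1 + exp(x)), the same transform applied to the scale parameters
    return np.log1p(np.exp(x))

c = np.log(np.expm1(1.0))
print(softplus(c))  # softplus(log(e - 1)) = log(1 + e - 1) = 1
```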
# Getting Names and Email IDs
```
from bs4 import BeautifulSoup
import urllib.request
import pandas as pd
import re
df=pd.DataFrame(columns=['Name','Email','Department'])
```
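Every department page below obfuscates addresses (`[at]`, `[dot]`, ` AT `), so each scraper normalises the text and then pulls the address out with a regex. A minimal sketch of that shared pattern (note that the trailing `[.iitd.ac.in]+` in the notebook's regexes is a character class rather than a literal suffix; `(?:\.\w+)+` below matches multi-part domains directly):

```python
import re

def extract_email(text):
    # undo common obfuscations, then match user@domain.tld(.tld ...)
    text = text.replace('[at]', '@').replace(' AT ', '@').replace('[dot]', '.')
    match = re.search(r'\w+@\w+(?:\.\w+)+', text)
    return match.group(0) if match else None

print(extract_email('Prof. X, xprof[at]am.iitd.ac.in, Room 101'))  # xprof@am.iitd.ac.in
```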
## For Applied Mechanics
```
source = urllib.request.urlopen('https://am.iitd.ac.in/?q=node/24').read()
soup=BeautifulSoup(source,'lxml')
for table in soup.find_all('table'):
names=table.find('td',class_="rwspeopletitle")
#entries=table.find_all('td')
txt = table.text
txt=txt.replace('[at]','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", txt)
if not len(email):
email=re.findall("\w+@\w+\.{1}\w+", txt)
df.loc[len(df.index)] = [names.text,email[0],"Department of Applied Mechanics"]
```
## For Biochemical Engineering and Biotechnology
```
source = urllib.request.urlopen('https://beb.iitd.ac.in/faculty.html').read()
soup=BeautifulSoup(source,'lxml')
try:
for card in soup.find_all('div',class_="card"):
l=card.a.get('href')
link = urllib.request.urlopen('https://beb.iitd.ac.in/'+l).read()
page=BeautifulSoup(link,'lxml')
name=page.find('font',class_='name')
email=page.p.a
df.loc[len(df.index)] = [name.text,email.text,"Biochemical Engineering and Biotechnology"]
except:
pass
```
## For Chemical Engineering
```
source = urllib.request.urlopen('http://chemical.iitd.ac.in/people/').read()
soup=BeautifulSoup(source,'lxml')
for faculty in soup.find_all('div',class_='each'):
name=faculty.find('div',class_="faculty_responsive_val ml5").text
name=name.strip()
email=faculty.find('div',class_="Contact_div table_middle").a.text
df.loc[len(df.index)] = [name,email,"Chemical Engineering "]
```
## For Chemistry
```
source = urllib.request.urlopen('https://chemistry.iitd.ac.in/faculty.html').read()
soup=BeautifulSoup(source,'lxml')
name="None"
try:
for people in soup.find_all('div',class_="row"):
if people.find('div',class_="col-sm-7"):
if people.find('div',class_="col-sm-7").h3.text.strip()==name:
continue
name= people.find('div',class_="col-sm-7").h3.text.strip()
txt=people.text
txt=txt.replace('(AT)','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", txt)
df.loc[len(df.index)] = [name,email[0],"Department of Chemistry"]
except:
pass
```
## For Civil Engineering
```
source = urllib.request.urlopen('https://civil.iitd.ac.in/index.php?lmenuid=faculty').read()
soup=BeautifulSoup(source,'lxml')
for td in soup.find_all('td'):
if td.b:
email=td.text
email=email.replace('[at]','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", email)
#print(td.b.text,email[0])
df.loc[len(df.index)] = [td.b.text,email[0],"Civil Engg, IIT Delhi"]
```
## For Computer Science and Engineering
```
source = urllib.request.urlopen('http://www.cse.iitd.ac.in/index.php/2011-12-29-23-14-30/faculty').read()
soup=BeautifulSoup(source,'lxml')
for tr in soup.find_all('tr'):
if tr.a:
email=tr.text
email=email.replace(' AT ','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", email)
if not email:
email=re.findall("\w+@\w+\.{1}\w+",tr.text.replace(' AT ','@'))
if not email:
continue
df.loc[len(df.index)] = [tr.a.text,email[0],"Department of Computer Science and Engineering"]
#print(tr.a.text,email[0])
```
## For Electrical Engineering
```
source = urllib.request.urlopen('https://ee.iitd.ac.in/people/faculty.html').read()
soup=BeautifulSoup(source,'lxml')
for tags in soup.find_all('td'):
if tags.span:
txt=tags.text
txt=txt.replace('[AT]','@')
txt=txt.replace('[at]','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", txt)
if not email:
continue
df.loc[len(df.index)] = [tags.strong.text,email[0],"Department of Electrical Engineering"]
```
## For Management
```
source = urllib.request.urlopen('https://dms.iitd.ac.in/all-faculty-members/').read()
soup=BeautifulSoup(source,'lxml')
for people in soup.find_all('div',class_="item-content"):
name=people.h3.text
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+",str(people))
if not email:
email=re.findall("\w+@\w+\.{1}\w+",str(people))
if not email:
email=["none"]
df.loc[len(df.index)] = [name,email[0],"Department of Management Studies"]
```
## For Mathematics
```
source = urllib.request.urlopen('https://maths.iitd.ac.in/drupal/faculty').read()
soup=BeautifulSoup(source,'lxml')
for people in soup.find_all('tr'):
if people.p:
if not people.a:
continue
name=people.a.text
txt=people.text
txt=txt.replace('[at]','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", txt)
if not email:
email=re.findall("\w+@\w+\.{1}\w+",txt)
df.loc[len(df.index)] = [name,email[0],"Department of Mathematics"]
```
## For Mechanical Engineering
```
source = urllib.request.urlopen('https://mech.iitd.ac.in/faculty').read()
soup=BeautifulSoup(source,'lxml')
for t in soup.find_all('address'):
email=t.text.replace('[at]','@')
t=t.text.split('\n')
name=t[1]
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", email)
if not email:
continue
df.loc[len(df.index)] = [name,email[0],"Department of Mechanical Engineering"]
```
## For Physics
```
source = urllib.request.urlopen('https://physics.iitd.ac.in/faculty').read()
soup=BeautifulSoup(source,'lxml')
for faculty in soup.find_all('div',class_="team-details ar-team-ht"):
email=faculty.find('span',class_="spamspan").text
email=email.replace('[dot]','.')
email=email.replace('[at]','@')
name=faculty.h4.text
df.loc[len(df.index)] = [name,email,"Department of Physics"]
```
## For Textile and Fibre
```
source = urllib.request.urlopen('https://textile.iitd.ac.in/faculty.php').read()
soup=BeautifulSoup(source,'lxml')
for faculty in soup.find_all('div',class_="faculty-info"):
link=faculty.a.get('href')
try:
conpage=urllib.request.urlopen('https://textile.iitd.ac.in/'+link+"#contact-details").read()
except:
continue
conpage_html=BeautifulSoup(conpage,'lxml')
em=conpage_html.find('ul',class_="faculty-contact").a.text
em=em.replace('[at]','@')
em=em.replace('[AT]','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", em)
if not email:
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ernet.in]+", em)
if not email:
email=["None"]
name=conpage_html.h2.text
df.loc[len(df.index)] = [name,email[0],"Department of Textile Technology"]
```
## For Design
```
source = urllib.request.urlopen('https://design.iitd.ac.in/faculty-contacts.html').read()
soup=BeautifulSoup(source,'lxml')
name="None"
for people in soup.find_all('div',class_="clearfix grpelem"):
if people.find('p',class_="General"):
if name==people.find('p',class_="General").text:
continue
txt=people.text
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", txt)
if not len(email):
email=re.findall("\w+@\w+\.{1}\w+", txt)
name=people.find('p',class_="General").text
df.loc[len(df.index)] = [name,email[0],"Department of Design"]
```
## For Humanities
```
source = urllib.request.urlopen('https://hss.iitd.ac.in/faculty').read()
soup=BeautifulSoup(source,'lxml')
name="None"
for people in soup.find_all('div'):
if people.find('div',class_="views-field views-field-title"):
txt=people.text
txt=txt.replace('[at]','@')
txt=txt.replace(' [dot] ','.')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", txt)
if not email:
txt=txt.replace(' @ ','@')
email=re.findall("\w+@\w+\.{1}\w+[.iitd.ac.in]+", txt)
if not email:
email=["None"]
if name==people.find('div',class_="views-field views-field-title").text:
continue
name=people.find('div',class_="views-field views-field-title").text
df.loc[len(df.index)] = [name,email[0],"Humanities & Social Sciences"]
df.tail()
df.to_json('Faculty_info.json',orient='table',index=False)
pdf=pd.read_json('Faculty_info.json',orient='table')
pdf
df.to_csv('Faculty_info.csv',index=False)
```
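The `orient='table'` round trip at the end embeds a JSON Table Schema, so column names and their order survive re-loading. A toy check (the row values here are hypothetical):

```python
import io
import pandas as pd

df = pd.DataFrame({'Name': ['A. Prof'], 'Email': ['aprof@iitd.ac.in'], 'Department': ['Physics']})
js = df.to_json(orient='table', index=False)
back = pd.read_json(io.StringIO(js), orient='table')
print(list(back.columns))  # column names and order survive the round trip
```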
# Universal Sentence Encoder Baseline for IDAT
In this notebook, we will walk you through the process of reproducing the Universal Sentence Encoder baseline for the IDAT Irony detection task.
## Loading Required Modules
We start by loading the needed Python libraries.
```
import os
import tensorflow as tf
from tensorflow import keras
import tensorflow_hub as hub
import pandas as pd
import tensorflow_text
from sklearn.metrics import f1_score
```
## Loading Data
Using pandas, we can load and inspect the training and testing datasets as follows:
```
df_train = pd.read_csv("../../data/idat/IDAT_training_text.csv")
df_test = pd.read_csv("../../data/idat/IDAT_test_text.csv")
```
Below we list the first 5 entries in the training data.
```
df_train.head()
```
Below we list the first 5 entries in the testing data.
```
df_test.head()
```
## Model Preparation
We start by setting the randomisation seed:
```
tf.random.set_seed(123)
```
Next we load the Universal Sentence Encoder (WARNING: This will download and cache a huge model of around 1 GB in size)
```
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")
```
Then we define the input and output to the model:
```
sentence_input = keras.Input(shape=(512,), name='sentence')
label = keras.Input(shape=(1,), name='label')
```
This is followed by defining the structure of the network:
```
logits = keras.layers.Dense(512, activation=tf.nn.tanh)(sentence_input)
logits = keras.layers.Dense(512, activation=tf.nn.tanh)(logits)
logits = keras.layers.Dense(512, activation=tf.nn.tanh)(logits)
logits = keras.layers.Dense(1, activation=tf.nn.sigmoid)(logits)
```
Then we construct and compile the model:
```
model = keras.Model(inputs=sentence_input, outputs=logits)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
## Model Training
First we prepare the inputs and outputs to be fed to the model during training:
```
X_train = embed(df_train["text"])
Y_train = df_train["label"]
```
Next we fit the data:
```
model.fit(X_train, Y_train, epochs=5, batch_size=32)
```
## Submission Preparation
We prepare the features for each test set instance as follows:
```
X_test = embed(df_test["text"])
Y_test = df_test["label"]
```
We predict and evaluate the prediction as follows:
```
predictions = (model.predict(X_test)>0.5).astype(int)
f1_score(Y_test, predictions, average="macro")
```
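`average="macro"` computes the F1 of each class separately and takes their unweighted mean, so the minority class counts as much as the majority one. A small pure-Python sketch of the same quantity (toy labels, not the IDAT data):

```python
def macro_f1(y_true, y_pred):
    # unweighted mean of per-class F1 scores
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

print(macro_f1([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))
```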
We prepare the predictions as a pandas DataFrame.
```
df_preds = pd.DataFrame(data=predictions, columns=["prediction"], index=df_test["id"])
df_preds.reset_index(inplace=True)
if not os.path.exists("predictions"):
os.mkdir("predictions")
df_preds.to_csv("./predictions/irony.tsv", index=False, sep="\t")
```
```
import os
import random
pos_texts = os.listdir('fix_pos')
neg_texts = os.listdir('fix_neg')
print('positive samples %d' % len(pos_texts))
print('negative samples %d' % len(neg_texts))
print('total samples %d' % (len(pos_texts) + len(neg_texts)))
```
A sample of **pos** reviews
```python
pos_samples = random.sample(pos_texts, 10)
for item in pos_samples:
with open(os.path.join('fix_pos', item), 'r') as f:
s = f.read()
print(item)
print(s)
```
*output*
```
pos.2486.txt
酒店设施较好,地理位置靠近中关村,周边饮食餐饮很方便。自助早餐很丰富,但是价钱比较贵。这次正好赶上国际柔道比赛的代表团入住,酒店对普通住客的服务有所下降。酒店大堂秩序比较乱。
补充点评 2007年11月15日 : 携程定的酒店房间价格比较贵,我到北京后朋友说他定的价格比我便宜100元,而且还含早餐。希望携程网有所改进。不要每次我我们这些会员当成冤大头!!!
pos.892.txt
我是8月1日入住该酒店的,来香港主要为了SHOPPING,所以从地点来讲,酒店算是在较偏的位置,但觉得对得起我订的价格HKD550元/晚.
房间很干净,而且我所入住的半海景房真的能看到很宽阔的海景,感觉很不错,卫生间很阔落,非常舒服.唯一欠缺的就是电视机,安置在床的侧面,看起来不是很方便,不过还好啦,反正不是为了看电视.
交通方面,我没有遇到多大问题,只是那天来回上环,中环,酒店很多回,觉得酒店确是离购物点远了点.一般来说我从酒店出门就是看准它提供的SHUTTLEBUS的时间出发到港铁站(即中环站)转乘, 回来则是搭到上环再转电车(俗称叮叮).因为沿路都有海景,感觉很是挺惬意的.
对于订房我还有一点要补充:就是一定要提早定啊,我是一直在比价才选了8月1日出发(当然那几天还是动漫节则更一举两得),后来我接近8/1再查看时,价格已经变成1千多了,酒店是海鲜价因此深有体会.
pos.461.txt
帮朋友订的,在上海来说价格是比较实惠的了。不过朋友住了后没有特别说很好也没有不满,呵呵!
pos.1510.txt
中餐厅绝对不是五星:晚8点多时去的2楼中餐厅,没什么人用餐,餐厅环境犹如一般小店,餐具也很一般,桌上没有餐巾纸,没有牙签。上菜也慢。
pos.1501.txt
我是7月29日入住,5楼服务员有5星水准,赞。管家部经理和蔼可亲,也有5颗星,赞。只是冲淋水笼头高度太低,我要弯腰洗头,算美中不足吧。
pos.661.txt
预订之后才看的别人点评,立刻感觉有点紧张,甚至都想取消预订。真正入住以后,才发现没有网友反映那么不好。房间设施还是可以的,灯光设计比较舒服,宽带上网、免费长话、房间服务都还不错,早餐也还可以。店前确实在修路,但没有那么恐怖,我住的临街房,双层玻璃完全可以隔音的,当然我是住在15层。打车也还没那么困难,只要不在早晚高峰,还是比较方便的。周遭餐饮也还丰富,但要走对方向。368的房价,相比安徽消费水平来说,的确有点不低,但总体还是可以选择入住的。
pos.2334.txt
我住的是豪华房,酒店环境很不错,服务满好,饭菜也过得去,按此级别的酒店早餐算满好了,位置也还方便,唯一的不便就是上网按30元收费,觉得很不便,下次再去如果只是路过一天还是住这里,如果住几天就肯定换酒店了,查询旅游信息和上网联络很不划算
pos.329.txt
环境安静,离火车站近。房间布置不错,早餐以中餐为主,丰富。下次接着住
pos.126.txt
我在上海几天里住的最好的一个酒店,同为四星级,整体感觉是最有品质的,订房台给的是转角的2715房,非常舒服,儿子玩得非常高兴,但酒店的服务人员态度比较冷漠,对待内外宾还是有区别的,同样拦出租车,门僮居然对我说,请让我们的客人,我回头一看是一对外国人,同住这家酒店, 难道我不是客人?出于礼貌我让了,但心里很不舒服,希望携程的价格能更低点,
pos.2208.txt
定了两个标准房,总体还满意。早餐还不错。前台服务也不错。
```
A sample of **neg** reviews
```python
neg_samples = random.sample(neg_texts, 10)
for item in neg_samples:
with open(os.path.join('fix_neg', item), 'r') as f:
s = f.read()
print(item)
print(s)
```
*output*
```
neg.2426.txt
虽然说是4星级,不过除了大厅感觉够气派以外,其余仅仅只能说一般~~
房间里面居然有好多黑色的硬壳小虫,让我们觉得很是恶心阿...
当然也有可能因为我们住的是3号楼,是不是有点成旧的关系?早餐虽然是免费的,但是看到票价也要20呢,菜品实在是.....
不过门口那部劳斯莱斯幻影倒是的确让人震撼!看得出,老板还是很有实力的!
neg.174.txt
对天津这个城市,对天津的老饭店,是很有感情的,但这一次确实让我失望了。曾经住过利顺德,就对老饭店有一种美好的憧憬,但是这一家的确有太多要改进的!!要不然怎么能让顾客再次登门呢?第一饭店的地理位置很好,晚上很安静,房间很高,但是走廊里就有很浓的烟味,换了两个房间,依然如此,从晚上9点进房一直开窗开门换气到夜里12点,关上门窗睡觉,依然觉得难受,可能我不抽烟,对烟味非常反感的缘故吧,俺只好把随身带的香水放在枕边,这样才睡着。气味不好,让我对这家饭店的洁净程度的印象大打折扣。
服务员,一点都不服务,换房时,让俺自己拖着行李跑上跑下,跟我说,前台没人了,是在走不开,我下去一看,三个姐姐呢,咋说“没人”?我说房间有味,她跟我说“不可能!”难道我自己折腾自己不成?
早餐呢,的确实很贵,太对不住它的价钱。总之,很失望,是不会再去住了。
neg.998.txt
我们去盐城的时候那里的最低气温只有4度,晚上冷得要死,居然还不开空调,投诉到酒店客房部,得到的答复是现在还没有领导指示需要开暖气,如果冷到话可以多给一床被子,太可怜了。。。
neg.911.txt
房间很一般,小,且让人感觉脏,隔音效果差,能听到走廊的人讲话,走廊光线昏暗,旁边没有什么可吃
neg.389.txt
怎么说呢。以北京这种地方的房价以及房间质量来说。这价格已经算便宜的了。因为先前住的几个北京的宾馆,都是又贵服务又差而且房间相当小。平安府的房间也不大。特别是厕所,太狭小了,房间窗户对着墙壁和暖水设备,根本开不了窗太吵了,也晒不进太阳,而且宾馆里没有电梯的,得自己提着拉箱抬上三楼房间,中间服务人员也没人来帮忙。去结帐时,前台送了张VIP卡,说下次您来直接给我们打电话预定,我们的价要比携程要便宜!
neg.2254.txt
酒店不开夜床,我大约晚上11到酒店,发现房间和出门时一样的乱,没人来打扫,浴袍是坏的(有两处破损的地方)也没人检查,早上用餐时被告之只含单早,后来把大堂经理叫来才弄明白,要登记的时候出示两张身份证才能含双早,(但问题是我在办理入住时是给了两张身份证,但大堂的登记那位说只要一张身份证就可),堂堂的世界一流酒店管理集团希尔顿酒店会发生这样的气愤事件可让人无奈。。。。。。。。。
neg.2303.txt
由于飞机晚点,我到达酒店的时候已经凌晨2点,酒店的服务生好象根本就没有想到那个时候还会有客人入住,(也许是一个临时的员工),一切的手续办理都显得不熟练.
neg.1032.txt
1酒店比较偏 打车不方便 2酒店楼上有KTV,晚上非常吵,特别是对住在五楼的房客。3前台接待,态度生硬。不太欢迎通过携程网预订的人。接待我的小姐,还给我一张名片,让我订房直接打宾馆电话就可以了。说携程发传真来还要回传比较烦。那位小姐好像姓冯。4房价水份太多。房价标价为516元,但一般预订只要讲价的话通常会以318元成交。5在入住后的第二天,我退房后,因为有事还要入住,前台一听是通过携程订的直接回答没房预订(该市当天并没有什么重大活动,也不是旅游旺季)6房间里的价格比外面高许多,一瓶外面一元钱的矿泉水,房间里标价为5元。而且酒店附近没有购物的地方。7服务意识太差,我询问前台工作人员当地天气预报,回答说是不知道。8早餐较差,部分菜还是冷的 该酒店有好的设施但缺乏有效的管理,服务意识较差,甚至有点店大欺客的味道,虽然,它并不是什么大店
neg.2453.txt
对这个酒店的失望是从踏入酒店后的第一分钟开始,一直持续到离开这个酒店后的一个半月,
1)1月22日入住,下雪,下了车拿着行李,门童视而不见;
2)大堂暖气不足,温度很低
3)入住花了差不多二十分钟,其实前台客人并不多,主要是工作人员明显是生手,而且态度还很生硬
4)半夜一点洗澡,没热水,打N多次电话,过了差不多半小时才有一个工程人员上来帮助解决问题
5)上网,网络是坏的,同样需要工作人员搬开家具来维修
6)终于要睡觉了,才发觉过道的等无法关掉,接着又让维修人员上来维修,结果是维修人员也无法解决问题,只能拔掉灯泡
7)真正的噩梦是1月23日离开店结帐,首先我要找大堂经理对昨晚这些问题问个所以然,结果过了半小时才见到大堂经理,又过了半小时才决定给我打8折.结帐时,涮了第一次卡号称POS机坏了,我明明听到是连通了,但工作人员硬说没连通,结果又刷了第2次,结果我打电话到银行问,证明酒店的确是刷了我2次672元.大堂经理答应马上传真通知银行取消一次,结果离开武汉后酒店一直没退款,又打了无数次长途,终于退了款.我去银行核查,酒店是2月20日把手续发到银行的,款是2月28日到我帐的.
真的是不明白,这种酒店怎么能开业接客.可不可思议的是这种酒店管理公司怎么还有人用?居然还五星标准,我看是4星的硬件,招待所的服务水准,连快捷酒店都比不上
neg.697.txt
此酒店涉嫌欺骗:
ctrip上显示行政房免费上网,但是实事并非如此,只有去行政酒廊去上网才是免费,房间内上网要收费,每分钟2元,而且如果去行政酒廊上网,前提要你在房间先上网激活帐号才可以。也就是说你最少要花两元才能上网,而且不再房间内,这个不是欺骗是啥。早知如此说啥也不会定在这里,1400多的房费,住那里不比这里好,就是想上网。其他服务实在一般。距离市区较远,出行很不方便。周围设施少,同等价位酒店市里多了去了,下次不会住了,还有早餐实在一般。check in速度慢,有老外提问题,就放下和我check in的事情过去搭理人家,这点让人感觉很差。check out也是拖拉。
```
```
import numpy as np
from itertools import product
radius = 1737400
alt = 2000000
ground = 7.5
exposure = 0.005
samples = 1000
lines = 1000
sensor_rad = radius + alt
angle_per_line = ground / radius
angle_per_samp = angle_per_line
angle_per_second = angle_per_line / exposure
line_vec = np.arange(0, lines+0.00000001)
sample_vec = np.arange(0, samples+0.00000001)
# From here on, matrix indexing is [line, sample, (xyz)]
sample_mat, line_mat = np.meshgrid(line_vec, sample_vec)
positions = sensor_rad * np.vstack([np.cos(-angle_per_line * line_vec), np.zeros(line_vec.shape), np.sin(-angle_per_line * line_vec)]).T
# Note: The chain rule results in an extra negative on the velocity calculations
velocities = sensor_rad * np.vstack([np.sin(-angle_per_line * line_vec), np.zeros(line_vec.shape), -np.cos(-angle_per_line * line_vec)]).T
print('Positions')
for pos in positions[::int(0.25/exposure)]:
print(list(pos))
print('Velocities')
for vel in velocities[::int(0.25/exposure)]:
print(list(vel))
lat = -angle_per_line * line_mat
# Image is a right look, so the longitude goes negative
lon = -angle_per_samp * sample_mat
ground_points = radius * np.stack([np.multiply(np.cos(lat), np.cos(lon)), np.multiply(np.cos(lat), np.sin(lon)), np.sin(lat)], axis=-1)
# print("Ground point at line: 500, sample: 500")
# print(ground_points[500, 500])
slant_range = np.array([[np.linalg.norm(point) for point in row] for row in ground_points - positions[:, None, :]])
ground_range = radius * np.abs(lon)
# Start with a crude linear approximations
starting_ground = ground_range[0,0]
starting_slant = slant_range[0,0]
ending_ground = ground_range[0, -1]
ending_slant = slant_range[0, -1]
guess_slope = (ending_slant - starting_slant) / (ending_ground - starting_ground)
guess_intercept = starting_slant - guess_slope * starting_ground
guess_coeffs = np.array([guess_intercept, guess_slope, 0.0, 0.0])
print("Ground range to slant range polynomial coefficients")
print(list(guess_coeffs))
test_line, test_sample = (500, 500)
test_ground_range = test_sample * ground
test_slant_range = np.polynomial.polynomial.polyval(test_ground_range, guess_coeffs)
v_hat = velocities[test_line] / np.linalg.norm(velocities[test_line])
t_hat = positions[test_line] - np.dot(positions[test_line], v_hat) * v_hat
t_hat = t_hat / np.linalg.norm(t_hat)
u_hat = np.cross(v_hat, t_hat)
ct = np.dot(positions[test_line], t_hat)
cv = np.dot(positions[test_line], v_hat)
c = np.linalg.norm(positions[test_line])
alpha = (radius * radius - test_slant_range * test_slant_range - c * c) / (2 * ct)
beta = np.sqrt(test_slant_range * test_slant_range - alpha * alpha)
test_ground_pt = alpha * t_hat + beta * u_hat + positions[test_line]
print("Test image point:", test_line, test_sample)
print("Test ground point:", list(test_ground_pt))
print(test_slant_range)
print(test_slant_range - guess_coeffs[0])
sc_pos = positions[test_line]
sc_vel = velocities[test_line]
off_ground_pt = test_ground_pt - np.array([100, 100, 100])
look_vec = off_ground_pt - sc_pos
zero_doppler_look_vec = look_vec - np.dot(look_vec, sc_vel) / np.dot(sc_vel, sc_vel) * sc_vel
locus_point = sc_pos + np.linalg.norm(test_ground_pt - sc_pos) / np.linalg.norm(zero_doppler_look_vec) * zero_doppler_look_vec
# Image is a right look, so do look X velocity
locus_direction = np.cross(zero_doppler_look_vec, sc_vel)
locus_direction = 1.0 / np.linalg.norm(locus_direction) * locus_direction
print("Input point:", list(off_ground_pt))
print("Locus point:", list(locus_point))
print("Locus direction:", list(locus_direction))
remote_look_vec = -np.linalg.norm(test_ground_pt - sc_pos) / sensor_rad * sc_pos
remote_zero_doppler_look_vec = remote_look_vec - np.dot(remote_look_vec, sc_vel) / np.dot(sc_vel, sc_vel) * sc_vel
remote_locus_point = sc_pos + remote_zero_doppler_look_vec
remote_locus_direction = np.cross(remote_zero_doppler_look_vec, sc_vel)
remote_locus_direction = 1.0 / np.linalg.norm(remote_locus_direction) * remote_locus_direction
print("Remote locus point:", list(remote_locus_point))
print("Remote locus direction:", list(remote_locus_direction))
```
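One property of the synthesized geometry above worth checking: for circular motion about the body's center, every velocity vector should be orthogonal to its position vector, and every position should sit at `sensor_rad` from the origin. A self-contained sketch with the same parameterization (it reproduces the construction above rather than importing it):

```python
import numpy as np

radius, alt, ground, exposure = 1737400, 2000000, 7.5, 0.005
sensor_rad = radius + alt
angle_per_line = ground / radius
line_vec = np.arange(0, 1000 + 0.00000001)

positions = sensor_rad * np.vstack([np.cos(-angle_per_line * line_vec),
                                    np.zeros(line_vec.shape),
                                    np.sin(-angle_per_line * line_vec)]).T
velocities = sensor_rad * np.vstack([np.sin(-angle_per_line * line_vec),
                                     np.zeros(line_vec.shape),
                                     -np.cos(-angle_per_line * line_vec)]).T

# pos . vel = R^2 (cos*sin - sin*cos) = 0 for every line
dots = np.einsum('ij,ij->i', positions, velocities)
radii = np.linalg.norm(positions, axis=1)
print(np.max(np.abs(dots)) / sensor_rad ** 2, np.allclose(radii, sensor_rad))
```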
```
from netCDF4 import Dataset, num2date
import numpy as np
import json
# import data
dataset = Dataset('netcdf/echam_daily.nc')
# interrogate dimensions
print(dataset.dimensions.keys())
# interrogate variable structure
print(dataset.variables['u10'])
# interrogate variables
# find the u and v wind data
print("Check variables:")
print(dataset.variables.keys())
# USER input names for u and v wind variables
u_var = 'u10'
v_var = 'v10'
temp_var = 'tsurf'
print("Check units:")
print(dataset.variables[u_var].units)
print(dataset.variables[temp_var].units)
print("Check dimensions:")
print(dataset.variables[u_var].dimensions, dataset.variables[u_var].shape)
print(dataset.variables[temp_var].dimensions, dataset.variables[temp_var].shape)
# set header variables for wind
nx = dataset.variables[u_var].shape[1]
ny = dataset.variables[u_var].shape[2]
dx = 360 / nx
dy = 180 / ny
tot = nx * ny
# get data for u wind
# get data for u wind (flatten the 2-D grid to a 1-D list)
u_data = np.asarray(dataset.variables[u_var][:][0]).flatten().tolist()
# get data for v wind
# get data for v wind
v_data = np.asarray(dataset.variables[v_var][:][0]).flatten().tolist()
# format JSON
wind_data = [{
"header": {
"parameterNumberName": "eastward_wind",
"parameterUnit": "m.s-1",
"parameterNumber": 2,
"parameterCategory": 2,
"nx": nx,
"ny": ny,
"numberPoints": tot,
"dx": dx,
"dy": dy,
"la1": 90.0,
"lo1": 0.0,
"la2": -90.0,
"lo2": 360.0,
"refTime": "2017-02-01 23:00:00"
},
"data": u_data
}, {
"header": {
"parameterNumberName": "northward_wind",
"parameterUnit": "m.s-1",
"parameterNumber": 3,
"parameterCategory": 2,
"nx": nx,
"ny": ny,
"numberPoints": tot,
"dx": dx,
"dy": dy,
"la1": 90.0,
"lo1": 0.0,
"la2": -90.0,
"lo2": 360.0,
"refTime": "2017-02-01 23:00:00"
},
"data": v_data
}]
# write JSON for leaflet-velocity input
with open('wind.json', 'w') as outfile:
json.dump(wind_data, outfile, separators=(',', ':'))
# get data for temp from netCDF
temps = dataset.variables[temp_var][:][0]
# get data for lat and lon
lats = dataset.variables['lat'][:]
lons = dataset.variables['lon'][:]
# loop through and create array
# temp is converted from Kelvin to degrees Celsius (a 0-1 scaling over the 200-350 K range is left commented out below)
# USER can edit display options
temp_data = [[0,0,0] for i in range(len(lats) * len(lons))]
for i in range(len(lats)):
for j in range(len(lons)):
temp_data[j + (i * len(lons))][0] = lats[i]
temp_data[j + (i * len(lons))][1] = lons[j]
temp_data[j + (i * len(lons))][2] = (temps[i,j] - 273.15) # + 273.15 for K
#temp_data[j + (i * len(lons))][2] = str((temps[i,j] - 200)/150) if string is necessary
# apply non-overlapping moving window average to reduce data size by factor of 144
# USER can edit grouping parameter
# number of points should not be more than several hundred for best performance
group = 12
lats_sm = lats.reshape(-1, group).mean(axis=1)
lons_sm = lons.reshape(-1, group).mean(axis=1)
# create new smaller temperature array
temp_array = [[0] for i in range(len(lats) * len(lons))]
for i in range(len(temp_data)):
temp_array[i] = temp_data[i][2]
temps_sm = np.array(temp_array).reshape(-1, group * group).mean(axis=1)
# reformat array to [lat, lon, temp]
temp_data_sm = [[0,0,0] for i in range(len(lats_sm) * len(lons_sm))]
for i in range(len(lats_sm)):
for j in range(len(lons_sm)):
temp_data_sm[j + (i * len(lons_sm))][0] = lats_sm[i]
temp_data_sm[j + (i * len(lons_sm))][1] = lons_sm[j] -180
temp_data_sm[j + (i * len(lons_sm))][2] = temps_sm[j + (i * len(lons_sm))] + 40
# write Javascript file for Leaflet.idw input
with open('temps_sm.js', 'w') as filehandle:
filehandle.write('var addressPoints = ' + str(temp_data_sm))
```
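The `reshape(-1, group).mean(axis=1)` trick above averages non-overlapping windows of `group` consecutive values (so the array length must be divisible by `group`). A small sketch with made-up values; note that applied to the *flattened* 2-D temperature field it averages `group * group` consecutive values in row-major order, which is a crude approximation of true 12x12 block averaging:

```python
import numpy as np

group = 3
vals = np.arange(12, dtype=float)          # 12 points -> 4 window means
vals_sm = vals.reshape(-1, group).mean(axis=1)
print(vals_sm)  # window means: 1, 4, 7, 10
```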
# Collecting VerbNet Terms
This notebook parses all the VerbNet XML definitions, extracting all the possible PREDicates in the FRAME SEMANTICS and the ARG type-value tuples. This will allow DNA to understand and account for all the semantics that can be expressed.
An example XML structure is:
```
<VNCLASS xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="dedicate-79" ...>
<MEMBERS>
<MEMBER name="dedicate" wn="dedicate%2:32:00" grouping="dedicate.01"/>
<MEMBER name="devote" wn="devote%2:32:00" grouping="devote.01"/>
<MEMBER name="commit" wn="commit%2:32:01 commit%2:40:00" grouping="commit.02"/>
</MEMBERS>
<THEMROLES>
...
</THEMROLES>
<FRAMES>
<FRAME>
<DESCRIPTION descriptionNumber="8.1" primary="NP V NP S_ING" secondary="NP-P-ING-SC; to-PP" .../>
<EXAMPLES>
<EXAMPLE>I dedicated myself to the cause.</EXAMPLE>
</EXAMPLES>
<SYNTAX>
<NP value="Agent">
<SYNRESTRS/>
</NP>
<VERB/>
<NP value="Theme">
<SYNRESTRS/>
</NP>
<PREP value="to">
<SYNRESTRS/>
</PREP>
<NP value="Goal">
<SYNRESTRS/>
</NP>
</SYNTAX>
<SEMANTICS>
<PRED value="dedicate">
<ARGS>
<ARG type="Event" value="during(E)"/>
<ARG type="ThemRole" value="Agent"/>
<ARG type="ThemRole" value="Theme"/>
<ARG type="ThemRole" value="Goal"/>
</ARGS>
</PRED>
</SEMANTICS>
</FRAME>
<FRAME>
<DESCRIPTION descriptionNumber="0.2" primary="NP V NP PP.goal" secondary="NP-PP; to-PP" .../>
<EXAMPLES>
<EXAMPLE>I dedicated myself to the cause.</EXAMPLE>
</EXAMPLES>
<SYNTAX>
<NP value="Agent">
<SYNRESTRS/>
</NP>
<VERB/>
<NP value="Theme">
<SYNRESTRS/>
</NP>
<PREP value="to">
<SELRESTRS/>
</PREP>
<NP value="Goal">
<SYNRESTRS>
<SYNRESTR Value="-" type="sentential"/>
</SYNRESTRS>
</NP>
</SYNTAX>
<SEMANTICS>
<PRED value="dedicate">
<ARGS>
<ARG type="Event" value="during(E)"/>
<ARG type="ThemRole" value="Agent"/>
<ARG type="ThemRole" value="Theme"/>
<ARG type="ThemRole" value="Goal"/>
</ARGS>
</PRED>
</SEMANTICS>
</FRAME>
</FRAMES>
<SUBCLASSES/>
</VNCLASS>
```
The above results in capturing the following detail:
* The possible PREDicates in the FRAME SEMANTICS => 'dedicate'
* The ARG type-value tuples =>
* 'Event', 'during(E)'
* 'ThemRole', 'Agent'
* 'ThemRole', 'Theme'
* 'ThemRole', 'Goal'
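The same XPath queries used in the code below can be checked against a tiny inline fragment of the example above (namespace attributes omitted to keep `findall` simple):

```python
import xml.etree.ElementTree as ET

xml_in = """
<VNCLASS ID="dedicate-79">
  <FRAMES>
    <FRAME>
      <SEMANTICS>
        <PRED value="dedicate">
          <ARGS>
            <ARG type="Event" value="during(E)"/>
            <ARG type="ThemRole" value="Agent"/>
          </ARGS>
        </PRED>
      </SEMANTICS>
    </FRAME>
  </FRAMES>
  <SUBCLASSES/>
</VNCLASS>
"""

vn_class = ET.fromstring(xml_in)
preds = {p.attrib["value"] for p in vn_class.findall('./FRAMES/FRAME/SEMANTICS/PRED')}
args = {(a.attrib["type"], a.attrib["value"])
        for a in vn_class.findall('./FRAMES/FRAME/SEMANTICS/PRED/ARGS/ARG')}
print(preds, args)
```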
```
# Imports
from pathlib import Path
import xml.etree.ElementTree as ET
# Constants
verbnet_dir = '/Users/andreaw/Documents/VerbNet3.3'
preds = set()
args = set()
def get_arg_details(etree):
for arg in etree.findall('./FRAMES/FRAME/SEMANTICS/PRED/ARGS/ARG'):
args.add((arg.attrib["type"], arg.attrib["value"]))
# Recursively process the subclasses
for subclass in etree.findall('./SUBCLASSES/VNSUBCLASS'):
get_arg_details(subclass)
def get_pred_details(etree):
for pred in etree.findall('./FRAMES/FRAME/SEMANTICS/PRED'):
preds.add(pred.attrib["value"])
# Recursively process the subclasses
for subclass in etree.findall('./SUBCLASSES/VNSUBCLASS'):
get_pred_details(subclass)
# Process each of the VerbNet files in a directory, top down and recursively
def process_verbnet_dir(verbnet_dir):
    for file_path in Path(verbnet_dir).glob('**/*.xml'):
        with open(str(file_path), 'r') as xml_file:
            xml_in = xml_file.read()
        # Create the tree and process it from the top down, recursively
        vn_class = ET.fromstring(xml_in)
        get_pred_details(vn_class)
        get_arg_details(vn_class)

# Process VerbNet 3.3
process_verbnet_dir(verbnet_dir)
print(sorted(preds))
print()
print(sorted(args))
# Process again for VerbNet 3.4 (preds/args keep accumulating)
process_verbnet_dir('/Users/andreaw/Documents/VerbNet3.4')
print(sorted(preds))
print()
print(sorted(args))
```
```
import pandas as pd
import json
import requests
import plotly.express as px
import plotly.graph_objects as go
urlPersonsJson = 'https://findmentor.network/persons.json'
requestData = requests.get(urlPersonsJson)
dataJson = json.loads(requestData.content)
personsDF = pd.DataFrame(dataJson)
list(personsDF.columns)
sumPeople = personsDF.mentor.index.value_counts().sum()
# "İkisi de" (Turkish for "both of them") = both roles
bothRolePeople = personsDF['mentor'].str.contains("İkisi de")
bothRoleSum = personsDF[bothRolePeople].index.value_counts().sum()
mentorPeople = personsDF['mentor'].str.contains("Mentor")
mentorSum = personsDF[mentorPeople].index.value_counts().sum()
menteePeople = personsDF['mentor'].str.contains("Mentee")
menteeSum = personsDF[menteePeople].index.value_counts().sum()
personsDF['mentor'].value_counts()
roleLabels = ['Mentee','Both Role','Mentor']
roleValues = [menteeSum, bothRoleSum, mentorSum]
titleText = "Participants Role in Community"
#The next line is the graph-objects version of the same chart.
#figPie = go.Figure(data=[go.Pie(labels=roleLabels, values=roleValues)], layout_title_text=titleText)
figPie = px.pie(values=roleValues,names=roleLabels,title=titleText)
figPie.update_traces(textposition='outside', textinfo='label+value+percent')
figPie.show()
#After reviewing participants' entries in the interests section of persons.json,
#the keywords below were grouped into per-field lists.
frontendKeywords = 'frontend|front-end|front end|fe|html|css|vue|vuejs|nuxtjs|nuxt|Angular'
backendKeywords = 'backend|beckend|back-end|back end|php|.net|spring|django|rails|node|nodejs|node.js|node js|flask|laravel|symfony'
fullstackKeywords = 'web|full-stack|fullstack|full stack|fullstack-web-developer'
mobileKeywords = 'andorid|ionic|swift ui|swiftui|swift-ui|android|ios|react native|reactnative|react-native|flutter|kotlin|swift|mobile|mobil'
cyberKeywords = 'network|secops|cyber|cyberr|security'
aiKeywords = 'ml|dl|nlp|r lang|data|r-lang|AI|artificial intelligence|computer vision|artificialintelligence|artificial-intelligence|deep-learning|deeplearning|deep learning|data science|sql|tableau|datascience|data-science|machinelearning|machine-learning|machine learning'
devopsKeywords = 'sysadmin|continuous delivery|ci-cd|microservices|scalability|scaling|distributed systems|aws|cloud|gcp|go|golang|go lang|go-lang|rust|devops|dev ops|dev-ops|docker|k8s|kubernetes|serverless'
#Filtering keywords in interests from keywords lists.
frontendPeople = personsDF['interests'].str.contains(frontendKeywords)
backendPeople = personsDF['interests'].str.contains(backendKeywords)
fullstackPeople = personsDF['interests'].str.contains(fullstackKeywords)
mobilePeople = personsDF['interests'].str.contains(mobileKeywords)
cyberPeople = personsDF['interests'].str.contains(cyberKeywords)
aiPeople = personsDF['interests'].str.contains(aiKeywords)
devopsPeople = personsDF['interests'].str.contains(devopsKeywords)
#Each boolean mask is used to filter the dataframe and count the matching rows.
frontendPeopleSum = personsDF[frontendPeople].index.value_counts().sum()
backendPeopleSum = personsDF[backendPeople].index.value_counts().sum()
#Some people list a full-stack interest as separate frontend and backend keywords,
#so the frontend AND backend masks are OR-ed into the full-stack count as well.
fullstackPeopleSum = personsDF[(fullstackPeople)|((frontendPeople)&(backendPeople))].index.value_counts().sum()
mobilePeopleSum = personsDF[mobilePeople].index.value_counts().sum()
cyberPeopleSum = personsDF[cyberPeople].index.value_counts().sum()
aiPeopleSum = personsDF[aiPeople].index.value_counts().sum()
devopsPeopleSum = personsDF[devopsPeople].index.value_counts().sum()
#Filtering frontend's keywords and getting sum for mentor and mentees.
frontMentor = personsDF[frontendPeople & mentorPeople]
frontMentorSum = frontMentor.index.value_counts().sum()
frontMentee = personsDF[frontendPeople & menteePeople]
frontMenteeSum = frontMentee.index.value_counts().sum()
#Filtering backend's keywords and getting sum for mentor and mentees.
backMentor = personsDF[backendPeople & mentorPeople]
backMentorSum = backMentor.index.value_counts().sum()
backMentee = personsDF[backendPeople & menteePeople]
backMenteeSum = backMentee.index.value_counts().sum()
#Filtering devops' keywords and getting sum for mentor and mentees.
devMentor = personsDF[devopsPeople & mentorPeople]
devMentorSum = devMentor.index.value_counts().sum()
devMentee = personsDF[devopsPeople & menteePeople]
devMenteeSum = devMentee.index.value_counts().sum()
#Filtering AI's keywords and getting sum for mentor and mentees.
aiMentor = personsDF[aiPeople & mentorPeople]
aiMentorSum = aiMentor.index.value_counts().sum()
aiMentee = personsDF[aiPeople & menteePeople]
aiMenteeSum = aiMentee.index.value_counts().sum()
#Filtering cyber security's keywords and getting sum for mentor and mentees.
cyberMentor = personsDF[cyberPeople & mentorPeople]
cyberMentorSum = cyberMentor.index.value_counts().sum()
cyberMentee = personsDF[cyberPeople & menteePeople]
cyberMenteeSum = cyberMentee.index.value_counts().sum()
#Filtering fullstack's keywords and getting sum for mentor and mentees.
fullstackMentor = personsDF[fullstackPeople & mentorPeople]
fullstackMentorSum = fullstackMentor.index.value_counts().sum()
fullstackMentee = personsDF[fullstackPeople & menteePeople]
fullstackMenteeSum = fullstackMentee.index.value_counts().sum()
#Filtering mobile dev's keywords and getting sum for mentor and mentees.
mobileMentor = personsDF[mobilePeople & mentorPeople]
mobileMentorSum = mobileMentor.index.value_counts().sum()
mobileMentee = personsDF[mobilePeople & menteePeople]
mobileMenteeSum = mobileMentee.index.value_counts().sum()
#This list's items are used as the value labels in the plots below.
labelsAll = ['Devops','AI','Mobile','Fullstack','Frontend','Backend','Cyber Security']
valuesParticipant = [
devopsPeopleSum,
aiPeopleSum,
mobilePeopleSum,
fullstackPeopleSum,
frontendPeopleSum,
backendPeopleSum,
cyberPeopleSum,
]
figBar = px.bar(
x=labelsAll,
y=valuesParticipant,
color=labelsAll,
text=valuesParticipant,
title="Interests of All Participants",
)
figBar.update_traces(textposition="outside")
figBar.update_layout(
xaxis={"categoryorder": "total descending"},
yaxis={"categoryorder": "total descending"},
legend_title="",
)
# labelsMentor = ['Mobile','Fullstack','Cyber Security','AI','Devops','Backend','Frontend']
valuesMentor = [
devMentorSum,
aiMentorSum,
mobileMentorSum,
fullstackMentorSum,
frontMentorSum,
backMentorSum,
cyberMentorSum,
]
mentorTitle = "Interests of Mentors"
figMentorPie = go.Figure(
data=[go.Pie(labels=labelsAll, values=valuesMentor, hole=0.3, sort=True)],
layout_title_text=mentorTitle,
)
figMentorPie.update_traces(textposition="outside", textinfo="label+value+percent")
figMentorPie.show()
valuesMentee = [
devMenteeSum,
aiMenteeSum,
mobileMenteeSum,
fullstackMenteeSum,
frontMenteeSum,
backMenteeSum,
cyberMenteeSum,
]
menteeTitle = "Interests of Mentees"
figMenteePie = go.Figure(
data=[go.Pie(labels=labelsAll, values=valuesMentee, hole=0.3, sort=True)],
layout_title_text=menteeTitle,
)
figMenteePie.update_traces(textposition="outside", textinfo="label+value+percent")
figMenteePie.show()
comparisonTitle = "Mentor-Mentees' Comparison Based on Interests"
figComparison = go.Figure()
figComparison.add_trace(
go.Bar(
x=labelsAll,
y=valuesMentee,
name="Mentee",
texttemplate=valuesMentee,
textposition="outside",
textfont_color="black",
marker_color="#4361ee",
)
)
figComparison.add_trace(
go.Bar(
x=labelsAll,
y=valuesMentor,
name="Mentor",
texttemplate=valuesMentor,
textposition="outside",
textfont_color="blue",
marker_color="#f72585",
)
)
figComparison.update_layout(
title_text=comparisonTitle,
xaxis={"categoryorder": "total descending"},
yaxis={"categoryorder": "total descending"},
legend_title="",
barmode="group",
bargap=0.15,
)
```
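As an aside, the `personsDF[mask].index.value_counts().sum()` idiom used throughout simply counts the rows matched by a boolean mask; `mask.sum()` is equivalent and more direct. A minimal sketch with a toy frame (the data is made up, mirroring the `mentor` column above):

```python
import pandas as pd

# toy frame mirroring the 'mentor' column used above
df = pd.DataFrame({'mentor': ['Mentor', 'Mentee', 'İkisi de', 'Mentee']})
mask = df['mentor'].str.contains('Mentee')

# the notebook's idiom and the direct sum agree
assert df[mask].index.value_counts().sum() == mask.sum() == 2
```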
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
    set_random_seed(seed)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
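The point of `seed_everything` is reproducibility: after reseeding, every pseudo-random stream replays identically. A quick stdlib-only illustration of the same idea:

```python
import random

random.seed(0)
first = [random.random() for _ in range(3)]
random.seed(0)   # reseed with the same value...
second = [random.random() for _ in range(3)]
assert first == second  # ...and the stream replays exactly
```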
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-split-oldnew/hold-out_5.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
display(X_train.head())
```
# Model parameters
```
# Model parameters
FACTOR = 4
BATCH_SIZE = 8 * FACTOR
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4 * FACTOR
WARMUP_LEARNING_RATE = 1e-3 * FACTOR
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
LR_WARMUP_EPOCHS_1st = 2
LR_WARMUP_EPOCHS_2nd = 5
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS_1st = WARMUP_EPOCHS * STEP_SIZE
TOTAL_STEPS_2nd = EPOCHS * STEP_SIZE
WARMUP_STEPS_1st = LR_WARMUP_EPOCHS_1st * STEP_SIZE
WARMUP_STEPS_2nd = LR_WARMUP_EPOCHS_2nd * STEP_SIZE
```
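The step-based schedule lengths above follow directly from the training-set size; with a hypothetical 1,000-sample training set (`n_train` is illustrative, not a value from this notebook):

```python
FACTOR = 4
BATCH_SIZE = 8 * FACTOR          # 32, as above
EPOCHS, WARMUP_EPOCHS = 20, 5
n_train = 1000                   # hypothetical len(X_train)

STEP_SIZE = n_train // BATCH_SIZE
assert STEP_SIZE == 31
assert WARMUP_EPOCHS * STEP_SIZE == 155   # TOTAL_STEPS_1st
assert EPOCHS * STEP_SIZE == 620          # TOTAL_STEPS_2nd
```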
# Pre-process images
```
old_data_base_path = '../input/diabetic-retinopathy-resized/resized_train/resized_train/'
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
# image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['id_code']
item_set = item['set']
item_data = item['data']
if item_set == 'train':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, train_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, train_dest_path)
if item_set == 'validation':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, validation_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
image_id = df.iloc[i]['id_code']
preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process the train set
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process the validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process the test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
```
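`crop_image` trims away rows and columns that are entirely at or below the tolerance, keeping only the informative region. A small NumPy sketch of the same masking trick (toy array, not a real fundus image):

```python
import numpy as np

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 2:4] = 50              # a bright 3x2 region on a dark background
tol = 7

mask = img > tol
# keep only rows/columns containing at least one above-tolerance pixel
cropped = img[np.ix_(mask.any(1), mask.any(0))]
assert cropped.shape == (3, 2)
```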
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
    :Returns: a float representing the learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
    :param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
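To sanity-check the schedule's boundary behavior, a compact re-statement (omitting the hold period — a sketch, not the class above) can be evaluated at a few key steps:

```python
import numpy as np

def cosine_lr(step, base, total, warm_base=0.0, warm=0):
    # cosine decay after warmup; linear ramp during warmup (no hold period)
    lr = 0.5 * base * (1 + np.cos(np.pi * (step - warm) / float(total - warm)))
    if warm > 0:
        slope = (base - warm_base) / warm
        lr = np.where(step < warm, slope * step + warm_base, lr)
    return float(np.where(step > total, 0.0, lr))

assert abs(cosine_lr(100, 1e-3, 1000, warm=100) - 1e-3) < 1e-9  # peak at end of warmup
assert cosine_lr(1000, 1e-3, 1000, warm=100) < 1e-9             # decayed to ~0 at total_steps
assert abs(cosine_lr(50, 1e-3, 1000, warm=100) - 5e-4) < 1e-9   # halfway up the ramp
```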
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB5(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = False
for i in range(-2, 0):
model.layers[i].trainable = True
cosine_lr_1st = WarmUpCosineDecayScheduler(learning_rate_base=WARMUP_LEARNING_RATE,
total_steps=TOTAL_STEPS_1st,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_1st,
hold_base_rate_steps=(2 * STEP_SIZE))
metric_list = ["accuracy"]
callback_list = [cosine_lr_1st]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
callbacks=callback_list,
verbose=2).history
```
# Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr_2nd = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS_2nd,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_2nd,
hold_base_rate_steps=(3 * STEP_SIZE))
callback_list = [es, cosine_lr_2nd]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 6))
ax1.plot(cosine_lr_1st.learning_rates)
ax1.set_title('Warm up learning rates')
ax2.plot(cosine_lr_2nd.learning_rates)
ax2.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Create an empty dataframe to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
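`classify` rounds the continuous regression output into the five ordinal classes at the half-way thresholds. An equivalent, more compact formulation (a sketch, not the notebook's function):

```python
def classify(x):
    # count how many half-way thresholds the prediction clears
    return sum(x >= t for t in (0.5, 1.5, 2.5, 3.5))

assert [classify(v) for v in (-0.2, 0.49, 1.5, 3.2, 9.9)] == [0, 0, 2, 3, 4]
```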
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
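The row normalization used above turns raw counts into per-class recall, so each row of the heatmap sums to 1. With toy counts:

```python
import numpy as np

cm = np.array([[8, 2],
               [1, 9]])
cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
assert np.allclose(cm_norm.sum(axis=1), 1.0)
assert np.allclose(cm_norm[0], [0.8, 0.2])
```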
## Quadratic Weighted Kappa
```
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Apply model to test set and output predictions
```
def apply_tta(model, generator, steps=10):
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
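`apply_tta` relies on averaging several stochastic prediction passes (the randomness comes from the generator's random flips); the aggregation itself is just a mean over passes:

```python
import numpy as np

# three hypothetical prediction passes over the same two samples
passes = [np.array([0.2, 0.8]),
          np.array([0.4, 0.6]),
          np.array([0.3, 0.7])]
tta_preds = np.mean(passes, axis=0)
assert np.allclose(tta_preds, [0.3, 0.7])
```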
# Predictions class distribution
```
fig, ax = plt.subplots(figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
## Save model
```
model.save_weights('../working/effNetB5_img224.h5')
```
# Homework 3
## 1. Implement L1 norm regularization as a custom loss function
```
import torch
def lasso_reg(params, l1_lambda):
    # L1 penalty: l1_lambda * sum of absolute values of every parameter
    # (torch.nn.L1Loss needs an (input, target) pair, so it cannot be
    # called on a single tensor; summing |param| directly is simpler)
    reg_loss = 0.0
    for param in params:
        reg_loss = reg_loss + param.abs().sum()
    return l1_lambda * reg_loss
```
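The penalty is just $\lambda \sum_i |w_i|$. With plain Python lists standing in for parameter tensors (hypothetical values, for arithmetic only):

```python
# hypothetical parameter values standing in for tensors
params = [[0.5, -1.5], [2.0]]
l1_lambda = 0.01

penalty = l1_lambda * sum(abs(w) for p in params for w in p)
assert abs(penalty - 0.04) < 1e-12   # 0.01 * (0.5 + 1.5 + 2.0)
```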
## 2. The third-to-last paragraph in the notebook concerns early stopping, an "old" regularization technique that stops training earlier than the epoch budget would suggest. Read the paragraph and download the paper by Prechelt et al.
### a. Implement early stopping in the $E_{opt}$ specification
In the paper, the value $E_{opt}$ is defined to be the lowest validation set error obtained in epochs up to $t$: $$E_{opt}(t) = \min_{t' \le t} E_{va}(t')$$ where $E_{va}$ is the validation error, i.e. the corresponding error on the validation set. As per instructions, I'm going to use the test data as validation.
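Concretely, $E_{opt}$ is a running minimum over the validation-error history. With made-up per-epoch errors:

```python
# hypothetical per-epoch validation errors
E_va = [0.30, 0.25, 0.28, 0.22, 0.26]

E_opt, best = [], float('inf')
for e in E_va:
    best = min(best, e)      # lowest error seen so far
    E_opt.append(best)
assert E_opt == [0.30, 0.25, 0.25, 0.22, 0.22]
```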
```
# import in Colab
import sys
sys.path.append('/content/mnist.py')
sys.path.append('/content/train_utils.py')
import mnist
from train_utils import accuracy, AverageMeter
from torch import nn
class MLP(nn.Module):
def __init__(self):
super().__init__()
self.flat = nn.Flatten()
self.h1 = nn.Linear(28*28, 16)
self.h2 = nn.Linear(16, 32)
self.h3 = nn.Linear(32, 24)
self.out = nn.Linear(24, 10)
def forward(self, X, activ_hidden=nn.functional.relu):
out = self.flat(X)
out = activ_hidden(self.h1(out))
out = activ_hidden(self.h2(out))
out = activ_hidden(self.h3(out))
out = self.out(out)
return out
def train_epoch(model, dataloader, loss_fn, optimizer, loss_meter, performance_meter, performance):
for X, y in dataloader:
optimizer.zero_grad()
y_hat = model(X)
loss = loss_fn(y_hat, y)
loss.backward()
optimizer.step()
acc = performance(y_hat, y)
loss_meter.update(val=loss.item(), n=X.shape[0])
performance_meter.update(val=acc, n=X.shape[0])
def train_model(model, dataloader1, dataloader2, loss_fn, optimizer, num_epochs, performance=accuracy):
model.train()
E = {
"epoch": [],"training perf": [], "validation perf": [], "parameters": [], "optimizer": []
}
for epoch in range(num_epochs):
loss_meter = AverageMeter()
performance_meter = AverageMeter()
train_epoch(model, dataloader1, loss_fn, optimizer, loss_meter, performance_meter, performance)
fin_loss, fin_perf = test_model(model, dataloader2, loss_fn=loss_fn)
E["epoch"].append(epoch)
E["training perf"].append(performance_meter)
E["validation perf"].append(fin_perf)
        # state_dict() returns live references, so snapshot tensors per epoch
        E["parameters"].append({k: v.clone() for k, v in model.state_dict().items()})
E["optimizer"].append(optimizer.state_dict())
return loss_meter.sum, performance_meter.avg, E
def test_model(model, dataloader, performance=accuracy, loss_fn=None):
# create an AverageMeter for the loss if passed
if loss_fn is not None:
loss_meter = AverageMeter()
performance_meter = AverageMeter()
model.eval()
with torch.no_grad():
for X, y in dataloader:
y_hat = model(X)
loss = loss_fn(y_hat, y) if loss_fn is not None else None
acc = performance(y_hat, y)
if loss_fn is not None:
loss_meter.update(loss.item(), X.shape[0])
performance_meter.update(acc, X.shape[0])
# get final performances
fin_loss = loss_meter.sum if loss_fn is not None else None
fin_perf = performance_meter.avg
return fin_loss, fin_perf
minibatch_size_train = 256
minibatch_size_test = 512
trainloader, testloader, trainset, testset = mnist.get_data(batch_size_train=minibatch_size_train, batch_size_test=minibatch_size_test)
learn_rate = 0.1
num_epochs = 30
model = MLP()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate)
train_loss, train_acc, E = train_model(model, trainloader, testloader, loss_fn, optimizer, num_epochs)
```
Since `Validation_error = 1 - Validation_performance`, minimizing the error is equivalent to maximizing the performance.
```
from matplotlib import pyplot as plt
val_list = list(E["validation perf"])
maxval = max(E["validation perf"])
index = val_list.index(max(val_list)) + 1
plt.plot(E["epoch"], E["validation perf"] )
print(f"The best validation performance is {maxval}, obtained at epoch no. {index} out of {num_epochs}.")
```
### b$^*$. Implement early stopping in one of the additional specifications
A stopping criterion described in the paper is based on the *generalization loss*: $$ GL (t) = 100 * \big( \frac{E_{va}(t)}{E_{opt}(t)} -1 \big)$$ that is, the validation error over the minimum so far in percent. We should stop as soon as this value exceeds a certain threshold $\alpha$.
As reported in the paper, this criterion is used to maximize the probability to find a good solution, as opposed to maximizing the average quality of the solutions.
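The criterion reduces to tracking the running-minimum error and stopping once the relative excess crosses $\alpha$. With made-up errors and an illustrative threshold:

```python
# hypothetical per-epoch validation errors
E_va = [0.30, 0.25, 0.28, 0.22, 0.26]
alpha = 10  # stop when validation error exceeds the best so far by 10%

E_opt, stop_epoch = float('inf'), None
for t, e in enumerate(E_va, start=1):
    E_opt = min(E_opt, e)
    GL = 100 * (e / E_opt - 1)
    if GL > alpha:
        stop_epoch = t
        break
assert stop_epoch == 3   # 100 * (0.28 / 0.25 - 1) = 12 > alpha
```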
```
alpha = 1
E_opt = 1 - val_list[0]
for i in range(num_epochs):
E_va = 1 - val_list[i]
if E_va < E_opt:
E_opt = E_va
GL = 100 * (E_va/E_opt - 1)
if GL > alpha:
print(f"This stopping criterion halts the computation at epoch {i+1}")
break
```
As we can see, this criterion stops very early, at the first epoch with lower performance. A solution is to add momentum to SGD to minimize oscillations:
```
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate, momentum=0.9)
num_epochs = 15
train_loss_m, train_acc_m, E_m = train_model(model, trainloader, testloader, loss_fn, optimizer, num_epochs)
from matplotlib import pyplot as plt
val_list = list(E_m["validation perf"])
maxval = max(E_m["validation perf"])
index = val_list.index(max(val_list)) + 1
plt.plot(E_m["epoch"], E_m["validation perf"] )
print(f"The best validation performance is {maxval}, obtained at epoch no. {index} out of {num_epochs}.")
alpha = 2
E_opt = 1 - val_list[0]
for i in range(num_epochs):
E_va = 1 - val_list[i]
if E_va < E_opt:
E_opt = E_va
GL = 100 * (E_va/E_opt - 1)
if GL > alpha:
print(f"This stopping criterion halts the computation at epoch {i+1}")
break
```
From the plot we can see that SGD with momentum reduces oscillations and performs much better than plain SGD. Nevertheless, this criterion still stops very early.
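A criterion that is less sensitive to oscillations is a patience-based rule, which tolerates a few non-improving epochs before halting. This helper is our own sketch, not one of the paper's named criteria:

```python
def patience_stopping_epoch(val_perf, patience=3):
    """Stop once validation performance has not improved for `patience` epochs.

    Returns the 1-based epoch at which training would halt, or None if it
    never triggers. `val_perf` holds per-epoch validation performances.
    """
    best, since_best = -float("inf"), 0
    for t, perf in enumerate(val_perf, start=1):
        if perf > best:
            best, since_best = perf, 0  # new best: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return t
    return None
```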
# 07_model_descriptive_statistics
## Build model and generate data
```
import numpy as np
import pyopencl as cl
import nengo
import nengo_ocl
from srnn_pfc.lmu import make_lmu_dms
srate = 1000
model_kwargs = {
'n_trials_per_cond': 2,
'seed': 1337, # ensemble seed
'trial_seed': 1337,
'out_transform': None,
'q': 6,
'theta': 7.0,
'tau': 0.2, # Ignored if hetero_tau is True
'n_neurons': 1200,
'max_rates': ['default', 'uniform_low', 'data'][0],
'dales_law': False,
'hetero_tau': False,
'ssp_dim': 0
}
# cl context
cl_plat = [_ for _ in cl.get_platforms() if _.vendor.upper().startswith('NVIDIA')][0]
cl_ctx = cl.Context(dev_type=cl.device_type.ALL,
properties=[(cl.context_properties.PLATFORM, cl_plat)])
# Generate training data
model, probes = make_lmu_dms(**model_kwargs)
n_train_trials = model_kwargs['n_trials_per_cond'] * 8 * 2 # 16 conditions
with nengo_ocl.Simulator(model, context=cl_ctx) as sim:
sim.run(6 * n_train_trials) # 6 seconds per trial
#filt = nengo.synapses.Lowpass(0.01)
filt = nengo.Alpha(0.05)
spikes = sim.data[probes['ensemble']]
rates = filt.filt(spikes)
tvec = sim.trange()
```
Take a look at the spike trains.
```
import matplotlib.pyplot as plt
from nengo.utils.matplotlib import rasterplot
from nengo.utils.ensemble import sorted_neurons, tuning_curves
sort_idx = sorted_neurons(model.ensembles[0], sim)
rasterplot(tvec[tvec < 18], spikes[:, sort_idx][tvec < 18, ::10])
```
## Calculate statistics of ensemble spiking
```
import quantities as pq
from neo.core import SpikeTrain
from elephant.statistics import isi, cv, mean_firing_rate
from elephant.conversion import BinnedSpikeTrain
from elephant.spike_train_correlation import spike_train_timescale
from pingouin import circ_corrcl
bin_size = 1 * pq.ms
max_tau = 500 * pq.ms
t_stop = (tvec[-1] + 0.001) * pq.s
n_neur = model_kwargs['n_neurons']
cv_out = np.nan * np.ones((n_neur, 1))
rate_out = np.nan * np.ones((n_neur, 1))
tscale_out = np.nan * np.ones((n_neur, 1))
for n_ix in range(n_neur):
spike_inds = np.where(spikes[:, n_ix])[0]
if len(spike_inds) > 1:
spike_times = tvec[spike_inds]
spiketrain = SpikeTrain(spike_times * pq.s, t_stop)
bin_spiketrain = BinnedSpikeTrain(spiketrain, bin_size=bin_size, t_stop=t_stop, tolerance=None)
cv_out[n_ix] = cv(isi(spiketrain))
rate_out[n_ix] = mean_firing_rate(spiketrain)
tscale_out[n_ix] = spike_train_timescale(bin_spiketrain, max_tau).rescale(pq.s).magnitude
```
#### Distribution of CVs
```
tmp = cv_out[np.logical_and(~np.isnan(cv_out), cv_out > 0.1)]
plt.hist(tmp, bins=50)
plt.xlabel('CV')
plt.ylabel('# Neurons')
plt.title(f"CV = {np.nanmean(tmp):.2f} +/- {np.nanstd(tmp):.2f}")
plt.show()
```
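For reference, the CV statistic plotted above reduces to a one-liner over inter-spike intervals; this plain-NumPy sketch mirrors what `cv(isi(spiketrain))` computes (our own helper, ignoring Elephant's unit handling):

```python
import numpy as np

def cv_of_isis(spike_times):
    """Coefficient of variation of inter-spike intervals: std(ISI) / mean(ISI).

    A Poisson spike train has CV near 1; a perfectly regular one has CV = 0.
    """
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std() / isis.mean()
```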
#### Distribution of firing rates. (Note the log-scale in the plot.)
```
avg_rate, sigma_rate = 10**np.nanmean(np.log10(rate_out)), 10**np.nanstd(np.log10(rate_out))
plt.hist(rate_out, bins=10 ** np.linspace(np.log10(0.01), np.log10(100), 50))
plt.xlabel('Firing Rate (Hz)')
plt.ylabel('# Neurons')
plt.gca().set_xscale("log")
plt.axvline(avg_rate, ls='--', color='k')
plt.title(f"Average rate = {avg_rate:.2f} +/- {sigma_rate:.2f} Hz; Max={np.nanmax(rate_out):.2f} Hz")
plt.show()
```
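The `10**np.nanmean(np.log10(...))` pattern used above is just the geometric mean, the natural summary for right-skewed, log-plotted data; as a standalone sketch (our own helper):

```python
import numpy as np

def geometric_mean(x):
    """Mean taken in log space: 10 ** mean(log10(x)).

    Matches the firing-rate summary used in the plot above.
    """
    return 10 ** np.mean(np.log10(np.asarray(x, dtype=float)))
```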
#### Distribution of correlation timescales
```
tmp = np.sort(tscale_out[~np.isnan(tscale_out)])[::-1]
tmp = tmp[int(0.1 * tmp.size):int(0.9*tmp.size)]
plt.hist(tmp, bins=10 ** np.linspace(np.log10(0.1), np.log10(200), 50))
plt.xlabel('Timescale (s)')
plt.ylabel('# Neurons')
plt.gca().set_xscale("log")
plt.axvline(np.nanmean(tmp), ls='--', color='k')
plt.title(f"TS = {np.nanmean(tmp):.2f} +/- {np.nanstd(tmp):.2f} s")
plt.show()
```
## Tuning Curves
```
from srnn_pfc.lmu import make_ldn_B_A
ldn, B_full, A_full = make_ldn_B_A(theta=model_kwargs['theta'], q=model_kwargs['q'],
size_in=2)
x = np.random.uniform(-1, 1, size=(500, 2))
x_converted, a = nengo.utils.ensemble.tuning_curves(model.ensembles[0], sim,
x @ B_full.T)
plt.figure(figsize=(8,6))
for i in range(20):
plt.subplot(4, 5, i+1)
plt.tricontourf(x[:,0], x[:,1], a[:,i], cmap='gray_r')
plt.xticks([])
plt.yticks([])
plt.tight_layout()
```
The above method determines the tuning curves when there's no recurrent connection. Let's try a different approach that will give tuning curves with the recurrence.
The idea is to run the model, give just a single pulse of input at the beginning, and then see what the responses of the neurons are over time. This is basically a peristimulus time histogram. We'll do this twice, once for each dimension of the input. In theory, we could also do this for different combinations of the input, but the internal representation is supposed to be at least approximately linear, so that shouldn't matter too much.
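The histogramming step described here can be sketched in plain NumPy; this is our own illustration, assuming a binary (time, neurons) spike array rather than Nengo's `1/dt`-scaled spike impulses, and the names and defaults are ours:

```python
import numpy as np

def psth(spike_matrix, dt=0.001, bin_width=0.01):
    """Peristimulus time histogram from a binary (time, neurons) spike array.

    Bins spikes along the time axis and converts counts to rates in Hz.
    `dt` is the simulation timestep.
    """
    steps_per_bin = int(round(bin_width / dt))
    n_bins = spike_matrix.shape[0] // steps_per_bin
    trimmed = spike_matrix[:n_bins * steps_per_bin]  # drop an incomplete tail bin
    counts = trimmed.reshape(n_bins, steps_per_bin, -1).sum(axis=1)
    return counts / bin_width  # spikes per second, one row per bin
```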
```
from srnn_pfc.lmu import make_ldn_B_A
ldn, B_full, A_full = make_ldn_B_A(theta=model_kwargs['theta'], q=model_kwargs['q'],
size_in=2)
model2 = nengo.Network()
ens = model.ensembles[0] # grab the ensemble out of the old model
model2.ensembles.append(ens) # and put it in the new one
model2.connections.append(model.connections[2]) # and include the recurrent connection
with model2:
stim = nengo.Node(lambda t: [100,0] if 0<t<0.02 else [0,0])
nengo.Connection(stim, ens, synapse=0.2, transform=B_full)
p = nengo.Probe(ens.neurons)
sim = nengo.Simulator(model2)
with sim:
sim.run(10.0)
data_dim1 = sim.data[p]
# now change the stimulus and run it again
stim.output = lambda t: [0,100] if 0<t<0.02 else [0,0]
sim = nengo.Simulator(model2)
with sim:
sim.run(10.0)
data_dim2 = sim.data[p]
plt.figure(figsize=(12,5))
W = 4
H = 5
N = W*H
filt = nengo.synapses.Lowpass(0.01) # just to smooth the spike data a bit, rather than binning it
start = 20
t = sim.trange()[start:]
for i in range(N):
plt.subplot(H, W, i+1)
plt.plot(t, filt.filtfilt(data_dim1[start:,i]), label='stim 1')
plt.plot(t, filt.filtfilt(data_dim2[start:,i]), label='stim 2')
if i == 0:
plt.legend()
if i<N-W:
plt.xticks([])
else:
plt.xlabel('time (s)')
plt.ylim(0,400)
if i%W == 0:
plt.ylabel('Hz')
else:
plt.yticks([])
plt.tight_layout()
plt.show()
```
This shows the responses of 20 different neurons, given just a pulse of input at the beginning. The blue line is the response to a pulse on the first dimension of the input, and the orange line is the response on the second dimension. If the network is given a pulse that is some combination of the two, the response should be approximately the combination of the two individual responses. (It won't be a perfect sum, because of the neuron non-linearity, but it should be equivalent to adding the input currents of the two cases.)
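The superposition claim can be quantified. The hypothetical helper below (names are ours) measures how far a combined response deviates from the sum of the two individual responses, for rate traces shaped like the filtered `data_dim1` and `data_dim2` above:

```python
import numpy as np

def relative_additivity_error(resp_a, resp_b, resp_ab):
    """Deviation of the combined response from the sum of individual ones.

    Returns ||resp_ab - (resp_a + resp_b)|| / ||resp_ab||; values near zero
    indicate approximately linear superposition.
    """
    resid = np.asarray(resp_ab) - (np.asarray(resp_a) + np.asarray(resp_b))
    return np.linalg.norm(resid) / np.linalg.norm(resp_ab)
```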
```
import numpy as np
import random
import keras
import keras.backend as K
from keras import Model
from keras.layers import Dense, Input, Flatten, Conv1D, Reshape
from keras import optimizers
from keras import losses
import tensorflow as tf
import matplotlib
import matplotlib.pyplot as plt
import os
os.environ["CUDA_VISIBLE_DEVICES"]='5,6,7'
#PARAMS
%matplotlib notebook
#WINDOW_SIZE = 100
CLIP_LEN=1000
NUM_BLIPS = 2000
BLIP_LEN = 200
MAX_FREQ=BLIP_LEN//20
#MODEL
inp = Input(shape=(CLIP_LEN,))
inpr = Reshape(target_shape=(CLIP_LEN,1,))(inp)
h1 = Conv1D(CLIP_LEN//2,100,activation='relu')(inpr)
h2 = Conv1D(CLIP_LEN//4,200,activation='relu')(h1)
h3 = Conv1D(CLIP_LEN//8,100,activation='relu')(h2)
h4 = Conv1D(CLIP_LEN//16,200,activation='relu')(h3)
#h1 = Dense(CLIP_LEN//2,activation='relu')(inp)
h4flat = Flatten()(h4)
h5 = Dense(CLIP_LEN//32,activation='relu')(h4flat)
#h3 = Flatten()(h2)
out = Dense(4,activation='relu')(h5)
model = Model(inputs=inp,outputs=out)
def absolute_to_relative_params(params):
rel_params = np.zeros_like(params).astype(float)
for i in range(len(params)):
if i%2 == 0:
rel_params[i]=float(params[i])/float(CLIP_LEN-BLIP_LEN)
else:
rel_params[i]=float(params[i]-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)
return rel_params
def relative_to_absolute_params(params):
abs_params = np.zeros_like(params).astype(int)
for i in range(len(params)):
if i%2 == 0:
abs_params[i]=int(float(params[i])*float(CLIP_LEN-BLIP_LEN))
else:
abs_params[i]=int(float(params[i])*float(BLIP_LEN-MAX_FREQ)+MAX_FREQ)
return abs_params
%matplotlib notebook
#DATAGEN
def generate_blip(start,length,half_wavlen,clip_size):
clip = np.zeros(clip_size)
osc = -1.
gaus=np.exp(-((np.arange(length)-length//2)/(length/4))**2)
#gaus=np.ones(length)
for i in range(start,start+length):
if (i-start)%(half_wavlen) == 0:
osc *= -1.
clip[i] = gaus[i-start]*np.sin(float(2*i*np.pi)/half_wavlen)
return clip
def generate_multi_blip(num_blips):
blip_sum = np.zeros(CLIP_LEN)
for j in range(num_blips):
s = np.random.randint(0,CLIP_LEN-BLIP_LEN)
w = np.random.randint(MAX_FREQ,BLIP_LEN)
#print(s,w)
blip_clip = generate_blip(s,BLIP_LEN,w,CLIP_LEN)+generate_noise(CLIP_LEN)
#blip_clip = vec_gen_blip(np.array([s,w]))+generate_noise(CLIP_LEN)
blip_sum+=blip_clip
#y.append([float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)])
#params[2*j] = float(s)/float(CLIP_LEN-BLIP_LEN)
#params[2*j+1] = float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)
return blip_sum
def generate_multi_blip_ordered(num_blips):
blip_sum = np.zeros(CLIP_LEN)
s=0
for j in range(num_blips):
s = np.random.randint(s,CLIP_LEN-BLIP_LEN)
w = np.random.randint(MAX_FREQ,BLIP_LEN)
#print(s,w)
blip_clip = generate_blip(s,BLIP_LEN,w,CLIP_LEN)+generate_noise(CLIP_LEN)
#blip_clip = vec_gen_blip(np.array([s,w]))+generate_noise(CLIP_LEN)
blip_sum+=blip_clip
#y.append([float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)])
#params[2*j] = s#float(s)/float(CLIP_LEN-BLIP_LEN)
#params[2*j+1] = w#float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)
return blip_sum
def generate_specific_multi_blip(params,num_blips):
abs_params = relative_to_absolute_params(params)
clip=np.zeros(CLIP_LEN)
for j in range(num_blips):
clip+=generate_blip(int(abs_params[2*j]),BLIP_LEN,int(abs_params[2*j+1]),CLIP_LEN)
return clip
def generate_noise(clip_size,lvl=0.05):
return (2*np.random.random(clip_size)-1)*lvl
def get_window(blip_clips,params,batch_size=32):
while True:
x=[]
y=[]
for i in range(batch_size):
ind = np.random.choice(range(len(blip_clips)))
blip_clip = blip_clips[ind]
s = np.random.randint(0,CLIP_LEN-WINDOW_SIZE)
f = s + WINDOW_SIZE
x.append(blip_clip[s:f])
y.append((params[ind,0],params[ind,2]))
yield (np.array(x), np.array(y))
def randomized_blip_generator(batch_size=32):
while True:
x=[]
y=[]
for i in range(batch_size):
s = np.random.randint(0,CLIP_LEN-BLIP_LEN)
w = np.random.randint(MAX_FREQ,BLIP_LEN)
#print(s,w)
blip_clip = generate_blip(s,BLIP_LEN,w,CLIP_LEN)+generate_noise(CLIP_LEN)
#blip_clip = vec_gen_blip(np.array([s,w]))+generate_noise(CLIP_LEN)
x.append(blip_clip)
#y.append([float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)])
y.append([float(s)/float(CLIP_LEN-BLIP_LEN),float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)])
yield (np.array(x), np.array(y))
def randomized_multi_blip_generator(batch_size=32,num_blips=1):
while True:
x=[]
y=[]
for i in range(batch_size):
blip_sum = np.zeros(CLIP_LEN)
params = np.zeros(num_blips*2)
for j in range(num_blips):
s = np.random.randint(0,CLIP_LEN-BLIP_LEN)
w = np.random.randint(MAX_FREQ,BLIP_LEN)
#print(s,w)
blip_clip = generate_blip(s,BLIP_LEN,w,CLIP_LEN)+generate_noise(CLIP_LEN)
#blip_clip = vec_gen_blip(np.array([s,w]))+generate_noise(CLIP_LEN)
blip_sum+=blip_clip
#y.append([float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)])
params[2*j] = s#float(s)/float(CLIP_LEN-BLIP_LEN)
params[2*j+1] = w#float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)
x.append(blip_sum)
y.append(absolute_to_relative_params(params))
yield (np.array(x), np.array(y))
def randomized_multi_blip_generator_ordered(batch_size=32,num_blips=1):
while True:
x=[]
y=[]
for i in range(batch_size):
blip_sum = np.zeros(CLIP_LEN)
params = np.zeros(num_blips*2)
s=0
for j in range(num_blips):
s = np.random.randint(s,CLIP_LEN-BLIP_LEN)
w = np.random.randint(MAX_FREQ,BLIP_LEN)
#print(s,w)
blip_clip = generate_blip(s,BLIP_LEN,w,CLIP_LEN)+generate_noise(CLIP_LEN)
#blip_clip = vec_gen_blip(np.array([s,w]))+generate_noise(CLIP_LEN)
blip_sum+=blip_clip
#y.append([float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)])
params[2*j] = s#float(s)/float(CLIP_LEN-BLIP_LEN)
params[2*j+1] = w#float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)
x.append(blip_sum)
y.append(absolute_to_relative_params(params))
yield (np.array(x), np.array(y))
for x,y in randomized_multi_blip_generator(batch_size=1,num_blips=2):
plt.plot(np.arange(x[0].shape[0]),x[0])
plt.show()
#plt.xlim(y[0,0],y[0,0]+BLIP_LEN)
break
def vec_gen_blip(vec, length=BLIP_LEN,clip_size=CLIP_LEN,batch_size=32):
clip=K.zeros(clip_size)
xarg=K.arange(length,dtype='float')
gaus = K.exp(-4*((np.arange(length,dtype='float32')-length//2)/length))
blip_freq = 2*np.pi*xarg/vec[...,1]
blip_sub = K.sin(blip_freq)
#blip = K.expand_dims(gaus,0)*blip_sub
#print(K.dtype(gaus),K.dtype(blip_sub))
blip = gaus*blip_sub
print(blip.get_shape())
K.print_tensor(K.shape(blip))
clip_slice = tf.slice(clip, [K.cast(vec[...,0],dtype='int32')], [K.cast(vec[...,0]+length,dtype='int32')])
clip_slice.assign(blip)
return clip
def analysis_loss(y_true, y_pred):
diff = vec_gen_blip(y_pred)
diff -= vec_gen_blip(y_true)
return np.linalg.norm(diff,axis=-1)
def loss(y,yhat):
return K.sum((y-yhat)**2)
def loss_non_graph(y,yhat):
return sum((y-yhat)**2)
model.summary()
model.compile(optimizers.adam(lr=10e-8),loss=loss)
model.fit_generator(randomized_multi_blip_generator_ordered(batch_size=32,num_blips=2),steps_per_epoch=1000, epochs=1000)
%matplotlib notebook
other=random.random()
params=[other,random.random(),(random.random())*(1.-other)+other,random.random()]
#params=[0.2,.5,0.6,0.2]
noise=generate_noise(CLIP_LEN)
blip_clip = generate_specific_multi_blip(params,2)+noise
output = model.predict([[blip_clip]])
prediction=output[0]
print(np.stack([params,prediction,np.abs(params-prediction)]))
#print(prediction)
blip_clip_recon = generate_specific_multi_blip(prediction[0:2],1)+noise
blip_clip_recon2 = generate_specific_multi_blip(prediction[2:4],1)+noise
blip_clip_recon3 = generate_specific_multi_blip(prediction,2)+noise
print("LOSS: ", loss_non_graph(params,prediction))
plt.plot(blip_clip+4)
plt.plot(blip_clip_recon-1)
plt.plot(blip_clip_recon2-3)
plt.plot(blip_clip_recon3-5)
abs_params = relative_to_absolute_params(params)
abs_prediction = relative_to_absolute_params(prediction)
# plt.axvline(abs_params[0],c='r')
# plt.axvline(abs_params[2],c='r')
# plt.axvline(abs_prediction[0])
# plt.axvline(abs_prediction[2])
plt.show()
%matplotlib notebook
plt.plot(blip_clip+4)
plt.plot(blip_clip_recon-1)
plt.plot(blip_clip_recon2-3)
plt.plot(blip_clip_recon3-6)
plt.show()
%matplotlib notebook
s = np.random.randint(0,CLIP_LEN-BLIP_LEN)
w = np.random.randint(MAX_FREQ,BLIP_LEN)
ssc = float(s)/float(CLIP_LEN)
wsc = float(w-MAX_FREQ)/float(BLIP_LEN-MAX_FREQ)
noise=generate_noise(CLIP_LEN)
blip_clip = generate_blip(s,BLIP_LEN,w,CLIP_LEN)+noise
prediction = model.predict([[blip_clip]])
#print(prediction)
s_hat=prediction[0,0]
w_hat=prediction[0,1]
print(np.array([[s,ssc,w,wsc],
[(CLIP_LEN-BLIP_LEN)*s_hat,s_hat,w_hat*(BLIP_LEN-MAX_FREQ)+MAX_FREQ,w_hat]]))
#blip_clip_recon = generate_blip(0,BLIP_LEN,w_hat*(BLIP_LEN-MAX_FREQ)+MAX_FREQ,CLIP_LEN)+noise
blip_clip_recon = generate_blip(int(s_hat*(CLIP_LEN-BLIP_LEN)),BLIP_LEN,w_hat*(BLIP_LEN-MAX_FREQ)+MAX_FREQ,CLIP_LEN)+noise
print("LOSS: ", loss_non_graph(np.array([ssc,wsc]),np.array([s_hat,w_hat])))
plt.plot(blip_clip+1)
plt.plot(blip_clip_recon-1)
plt.show()
```
# Barren Plateaus
<em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>
## Overview
In the training of classical neural networks, gradient-based optimization methods encounter the problems of local minima and saddle points. Correspondingly, the barren plateau phenomenon can block us from efficiently training quantum neural networks. This peculiar phenomenon was first discovered by McClean et al. in 2018 [[arXiv:1803.11173]](https://arxiv.org/abs/1803.11173). In short, once a randomly initialized random circuit structure reaches a certain degree of complexity, the optimization landscape becomes very flat, which makes it difficult for gradient-descent-based methods to find the global minimum. For most variational quantum algorithms (VQE, etc.), this phenomenon means that as the number of qubits increases, randomly choosing a circuit ansatz and randomly initializing its parameters may not be a good idea. It turns the optimization landscape of the loss function into a huge plateau, which makes training the quantum neural network much harder: the randomly initialized starting point is very likely to lie inside this plateau, and the convergence time of gradient descent is prolonged.

The figure is generated through [Gradient Descent Viz](https://github.com/lilipads/gradient_descent_viz)
This tutorial mainly discusses how to demonstrate the barren plateau phenomenon with Paddle Quantum. Although it does not involve any algorithmic innovation, it can improve readers' understanding of gradient-based training for quantum neural networks. We first import the necessary libraries and packages:
```
import time
import numpy as np
from matplotlib import pyplot as plt
import paddle
from paddle import matmul
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import dagger
from paddle_quantum.state import density_op
```
## Random network structure
Here we follow the original method mentioned in the paper by McClean (2018) and build the following random circuit (the built-in controlled-Z gate is not currently supported, so we use CNOT instead):

First, we rotate all the qubits around the $y$-axis of the Bloch sphere with rotation gates $R_y(\pi/4)$.
The remaining structure forms a block, each block can be further divided into two layers:
- Build a layer of random rotation gates on all the qubits, where $R_{\ell,n} \in \{R_x, R_y, R_z\}$. The subscript $\ell$ means the gate is in the $\ell$-th repeated block. In the figure above, $\ell =1$. The second subscript $n$ indicates which qubit it acts on.
- The second layer is composed of CNOT gates, which act on adjacent qubits.
In Paddle Quantum, we can build this circuit with the following code:
```
def rand_circuit(theta, target, num_qubits):
# We need to convert Numpy array to Tensor in PaddlePaddle
const = paddle.to_tensor(np.array([np.pi/4]))
# Initialize the quantum circuit
cir = UAnsatz(num_qubits)
# ============== First layer ==============
# Fixed-angle Ry rotation gates
for i in range(num_qubits):
cir.ry(const, i)
# ============== Second layer ==============
# Target is a random array help determine rotation gates
for i in range(num_qubits):
if target[i] == 0:
cir.rz(theta[i], i)
elif target[i] == 1:
cir.ry(theta[i], i)
else:
cir.rx(theta[i], i)
# ============== Third layer ==============
# Build adjacent CNOT gates
for i in range(num_qubits - 1):
cir.cnot([i, i + 1])
return cir.U
```
## Loss function and optimization landscape
After determining the circuit structure, we also need to define a loss function to determine the optimization landscape. Following the same set up with McClean (2018), we take the loss function from VQE:
$$
\mathcal{L}(\boldsymbol{\theta})= \langle0| U^{\dagger}(\boldsymbol{\theta})H U(\boldsymbol{\theta}) |0\rangle,
\tag{1}
$$
The unitary matrix $U(\boldsymbol{\theta})$ is the quantum neural network with the random structure we built in the last section. For the Hamiltonian $H$, we also take the simplest form $H = |00\cdots 0\rangle\langle00\cdots 0|$. After that, we can start sampling gradients in the two-qubit case: generate 300 sets of random network structures with different random initial parameters $\{\theta_{\ell,n}^{(i)}\}_{i=1}^{300}$. Each time, the partial derivative with respect to the **first parameter $\theta_{1,1}$** is calculated according to the analytical gradient formula from VQE. Then we analyze the mean and variance of these 300 sampled partial derivatives. The formula for the analytical gradient is:
$$
\partial \theta_{j}
\equiv \frac{\partial \mathcal{L}}{\partial \theta_j}
= \frac{1}{2} \big[\mathcal{L}(\theta_j + \frac{\pi}{2}) - \mathcal{L}(\theta_j - \frac{\pi}{2})\big].
\tag{2}
$$
For a detailed derivation, see [arXiv:1803.00745](https://arxiv.org/abs/1803.00745).
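Formula (2) can be checked on a toy single-qubit circuit. The NumPy sketch below is our own illustration (not Paddle Quantum code): with $U(\theta) = R_y(\theta)$ and $H = |0\rangle\langle 0|$, the cost is $\cos^2(\theta/2)$ and the shift rule reproduces the analytic derivative $-\sin(\theta)/2$:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y-axis (real-valued matrix)."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def cost(theta):
    """L(theta) = <0| Ry(theta)^dag H Ry(theta) |0> with H = |0><0|."""
    H = np.array([[1.0, 0.0], [0.0, 0.0]])
    psi = ry(theta) @ np.array([1.0, 0.0])
    return float(psi @ H @ psi)  # amplitudes are real, so no conjugation needed

def param_shift_grad(theta):
    """Analytical gradient via formula (2): [L(t + pi/2) - L(t - pi/2)] / 2."""
    return 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))
```

A finite-difference check confirms the two gradients agree.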
```
# Hyper parameter settings
np.random.seed(42) # Fixed Numpy random seed
N = 2 # Set the number of qubits
samples = 300 # Set the number of sampled random network structures
THETA_SIZE = N # Set the size of the parameter theta
ITR = 1 # Set the number of iterations
LR = 0.2 # Set the learning rate
SEED = 1 # Fixed the randomly initialized seed in the optimizer
# Initialize the register for the gradient value
grad_info = []
paddle.seed(SEED)
class manual_gradient(paddle.nn.Layer):
# Initialize a list of learnable parameters and fill the initial value with a uniform distribution of [0, 2*pi]
def __init__(self, shape, param_attr= paddle.nn.initializer.Uniform(
low=0.0, high=2 * np.pi),dtype='float64'):
super(manual_gradient, self).__init__()
# Convert Numpy array to Tensor in PaddlePaddle
self.H = paddle.to_tensor(density_op(N))
# Define loss function and forward propagation mechanism
def forward(self):
# Initialize three theta parameter lists
theta_np = np.random.uniform(low=0., high= 2 * np.pi, size=(THETA_SIZE))
theta_plus_np = np.copy(theta_np)
theta_minus_np = np.copy(theta_np)
# Modified to calculate analytical gradient
theta_plus_np[0] += np.pi/2
theta_minus_np[0] -= np.pi/2
# Convert Numpy array to Tensor in PaddlePaddle
theta = paddle.to_tensor(theta_np)
theta_plus = paddle.to_tensor(theta_plus_np)
theta_minus = paddle.to_tensor(theta_minus_np)
# Generate random targets, randomly select circuit gates in rand_circuit
target = np.random.choice(3, N)
U = rand_circuit(theta, target, N)
U_dagger = dagger(U)
U_plus = rand_circuit(theta_plus, target, N)
U_plus_dagger = dagger(U_plus)
U_minus = rand_circuit(theta_minus, target, N)
U_minus_dagger = dagger(U_minus)
# Calculate the analytical gradient
grad = (paddle.real(matmul(matmul(U_plus_dagger, self.H), U_plus))[0][0]
- paddle.real(matmul(matmul(U_minus_dagger, self.H), U_minus))[0][0])/2
return grad
# Define the main block
def main():
# Set the dimension of QNN
sampling = manual_gradient(shape=[THETA_SIZE])
# Sampling to obtain gradient information
grad = sampling()
return grad.numpy()
# Record running time
time_start = time.time()
# Start sampling
for i in range(samples):
if __name__ == '__main__':
grad = main()
grad_info.append(grad)
time_span = time.time() - time_start
print('The main program segment has run in total ', time_span, ' seconds')
print("Use ", samples, " samples to get the mean value of the gradient of the random network's first parameter, and we have:", np.mean(grad_info))
print("Use ", samples, "samples to get the variance of the gradient of the random network's first parameter, and we have:", np.var(grad_info))
```
## Visualization of the Optimization landscape
Next, we use Matplotlib to visualize the optimization landscape. In the case of **two qubits**, we only have two parameters $\theta_1$ and $\theta_2$, and there are 9 possibilities for the random circuit structure in the second layer.

The flat landscape of the $R_z$-$R_z$ layer in the figure above is something we should avoid: in this case, it's nearly impossible to converge to the theoretical minimum. If you want to try to draw some optimization landscapes yourself, please refer to the following code:
```
# Introduce the necessary packages
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.ticker import LinearLocator, FormatStrFormatter
time_start = time.time()
N = 2
# Set the image ratio Vertical: Horizontal = 0.3
fig = plt.figure(figsize=plt.figaspect(0.3))
# Generate points on the x, y axis
X = np.linspace(0, 2 * np.pi, 80)
Y = np.linspace(0, 2 * np.pi, 80)
# Generate 2D mesh
xx, yy = np.meshgrid(X, Y)
# Define the necessary logic gates
def rx(theta):
mat = np.array([[np.cos(theta/2), -1j * np.sin(theta/2)],
[-1j * np.sin(theta/2), np.cos(theta/2)]])
return mat
def ry(theta):
mat = np.array([[np.cos(theta/2), -1 * np.sin(theta/2)],
[np.sin(theta/2), np.cos(theta/2)]])
return mat
def rz(theta):
mat = np.array([[np.exp(-1j * theta/2), 0],
[0, np.exp(1j * theta/2)]])
return mat
def CNOT():
mat = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])
return mat
# ============= The first figure =============
# We visualize the case where the second layer is kron(Ry, Ry)
ax = fig.add_subplot(1, 2, 1, projection='3d')
# Forward propagation to calculate loss function:
def cost_yy(para):
L1 = np.kron(ry(np.pi/4), ry(np.pi/4))
L2 = np.kron(ry(para[0]), ry(para[1]))
U = np.matmul(np.matmul(L1, L2), CNOT())
H = np.zeros((2 ** N, 2 ** N))
H[0, 0] = 1
val = (U.conj().T @ H@ U).real[0][0]
return val
# Draw an image
Z = np.array([[cost_yy([x, y]) for x in X] for y in Y]).reshape(len(Y), len(X))
surf = ax.plot_surface(xx, yy, Z, cmap='plasma')
ax.set_xlabel(r"$\theta_1$")
ax.set_ylabel(r"$\theta_2$")
ax.set_title("Optimization Landscape for Ry-Ry Layer")
# ============= The second figure =============
# We visualize the case where the second layer is kron(Rx, Rz)
ax = fig.add_subplot(1, 2, 2, projection='3d')
def cost_xz(para):
L1 = np.kron(ry(np.pi/4), ry(np.pi/4))
L2 = np.kron(rx(para[0]), rz(para[1]))
U = np.matmul(np.matmul(L1, L2), CNOT())
H = np.zeros((2 ** N, 2 ** N))
H[0, 0] = 1
val = (U.conj().T @ H @ U).real[0][0]
return val
Z = np.array([[cost_xz([x, y]) for x in X] for y in Y]).reshape(len(Y), len(X))
surf = ax.plot_surface(xx, yy, Z, cmap='viridis')
ax.set_xlabel(r"$\theta_1$")
ax.set_ylabel(r"$\theta_2$")
ax.set_title("Optimization Landscape for Rx-Rz Layer")
plt.show()
time_span = time.time() - time_start
print('The main program segment has run in total ', time_span, ' seconds')
```
## More qubits
Next, let's see what happens to the sampled gradients as we increase the number of qubits.
```
# Hyper parameter settings
selected_qubit = [2, 4, 6, 8]
samples = 300
grad_val = []
means, variances = [], []
# Record operation time
time_start = time.time()
# Keep increasing the number of qubits
for N in selected_qubit:
grad_info = []
THETA_SIZE = N
for i in range(samples):
class manual_gradient(paddle.nn.Layer):
# Initialize a list of learnable parameters of length THETA_SIZE
def __init__(self, shape, param_attr=paddle.nn.initializer.Uniform(low=0.0, high=2 * np.pi),dtype='float64'):
super(manual_gradient, self).__init__()
# Convert to Tensor in PaddlePaddle
self.H = paddle.to_tensor(density_op(N))
# Define loss function and forward propagation mechanism
def forward(self):
# Initialize three theta parameter lists
theta_np = np.random.uniform(low=0., high= 2 * np.pi, size=(THETA_SIZE))
theta_plus_np = np.copy(theta_np)
theta_minus_np = np.copy(theta_np)
# Modify to calculate analytical gradient
theta_plus_np[0] += np.pi/2
theta_minus_np[0] -= np.pi/2
# Convert to Tensor in PaddlePaddle
theta = paddle.to_tensor(theta_np)
theta_plus = paddle.to_tensor(theta_plus_np)
theta_minus = paddle.to_tensor(theta_minus_np)
# Generate random targets, randomly select circuit gates in rand_circuit
target = np.random.choice(3, N)
U = rand_circuit(theta, target, N)
U_dagger = dagger(U)
U_plus = rand_circuit(theta_plus, target, N)
U_plus_dagger = dagger(U_plus)
U_minus = rand_circuit(theta_minus, target, N)
U_minus_dagger = dagger(U_minus)
# Calculate analytical gradient
grad = (paddle.real(matmul(matmul(U_plus_dagger, self.H), U_plus))[0][0]
- paddle.real(matmul(matmul(U_minus_dagger, self.H), U_minus))[0][0])/2
return grad
# Define the main program segment
def main():
# Set the dimension of QNN
sampling = manual_gradient(shape=[THETA_SIZE])
# Sampling to obtain gradient information
grad = sampling()
return grad.numpy()
if __name__ == '__main__':
grad = main()
grad_info.append(grad)
# Record sampling information
grad_val.append(grad_info)
means.append(np.mean(grad_info))
variances.append(np.var(grad_info))
time_span = time.time() - time_start
print('The main program segment has run in total ', time_span, ' seconds')
grad = np.array(grad_val)
means = np.array(means)
variances = np.array(variances)
n = np.array(selected_qubit)
print("We then draw the statistical results of this sampled gradient:")
fig = plt.figure(figsize=plt.figaspect(0.3))
# ============= The first figure =============
# Calculate the relationship between the average gradient of random sampling and the number of qubits
plt.subplot(1, 2, 1)
plt.plot(n, means, "o-.")
plt.xlabel(r"Qubit #")
plt.ylabel(r"$ \partial \theta_{i} \langle 0|H |0\rangle$ Mean")
plt.title("Mean of {} sampled gradient".format(samples))
plt.xlim([1,9])
plt.ylim([-0.06, 0.06])
plt.grid()
# ============= The second figure =============
# Calculate the relationship between the variance of the randomly sampled gradient and the number of qubits
plt.subplot(1, 2, 2)
plt.semilogy(n, variances, "v")
# Polynomial fitting
fit = np.polyfit(n, np.log(variances), 1)
slope = fit[0]
intercept = fit[1]
plt.semilogy(n, np.exp(n * slope + intercept), "r--", label="Slope {:03.4f}".format(slope))
plt.xlabel(r"Qubit #")
plt.ylabel(r"$ \partial \theta_{i} \langle 0|H |0\rangle$ Variance")
plt.title("Variance of {} sampled gradient".format(samples))
plt.legend()
plt.xlim([1,9])
plt.ylim([0.0001, 0.1])
plt.grid()
plt.show()
```
It should be noted that, in theory, this effect is only guaranteed to appear when the chosen network structure and loss function meet certain conditions (the circuit ensemble forms a unitary 2-design); see the paper [[1]](https://arxiv.org/abs/1803.11173). We might as well visualize the influence of the number of qubits on the optimization landscape:
 Optimization landscape sampled for 2,4,and 6 qubits from left to right in different z-axis scale. (b) Same landscape in a fixed z-axis scale.")
<div style="text-align:center">(a) Optimization landscape sampled for 2,4,and 6 qubits from left to right in different z-axis scale. (b) Same landscape in a fixed z-axis scale. </div>
$\theta_1$ and $\theta_2$ are the first two circuit parameters, and the remaining parameters are all fixed to $\pi$; this helps us visualize the shape of this high-dimensional manifold. Unsurprisingly, the landscape becomes flatter as $n$ increases. **Notice the rapidly decreasing scale on the $z$-axis**: compared with the 2-qubit case, the optimization landscape for 6 qubits is very flat.
_______
## References
[1] McClean, J. R., Boixo, S., Smelyanskiy, V. N., Babbush, R. & Neven, H. Barren plateaus in quantum neural network training landscapes. [Nat. Commun. 9, 4812 (2018).](https://www.nature.com/articles/s41467-018-07090-4)
[2] Cerezo, M., Sone, A., Volkoff, T., Cincio, L. & Coles, P. J. Cost-Function-Dependent Barren Plateaus in Shallow Quantum Neural Networks. [arXiv:2001.00550 (2020).](https://arxiv.org/abs/2001.00550)
<a href="https://colab.research.google.com/github/dcastf01/creating_adversarial_images/blob/main/extract_data_from_models_to_adversarial_experiments.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Imports
```
import os, sys, math
import numpy as np
from matplotlib import pyplot as plt
if 'google.colab' in sys.modules: # Colab-only Tensorflow version selector
%tensorflow_version 2.x
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
AUTO = tf.data.experimental.AUTOTUNE
!nvidia-smi
```
#download dataset from drive
```
import time
start_time = time.time()
```
##first approach
pending: add the correct files
```
import requests

def download_file_from_google_drive(id, destination):
    def get_confirm_token(response):
        for key, value in response.cookies.items():
            if key.startswith('download_warning'):
                return value
        return None

    def save_response_content(response, destination):
        CHUNK_SIZE = 32768
        with open(destination, "wb") as f:
            for chunk in response.iter_content(CHUNK_SIZE):
                if chunk:  # filter out keep-alive new chunks
                    f.write(chunk)

    URL = "https://docs.google.com/uc?export=download"
    # URL = "https://docs.google.com/open?export=download"
    session = requests.Session()
    response = session.get(URL, params={'id': id}, stream=True)
    token = get_confirm_token(response)
    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)
    save_response_content(response, destination)
# %%capture
class datasets_active:
    imagenet_validation = False  #@param {type:"boolean"}
    Cesar_image_perturbedpertBig_round1 = False  #@param {type:"boolean"}
    Cesar_image_perturbedpertBig_round2 = False  #@param {type:"boolean"}
    Cesar_image_perturbedpertBig_round3 = False  #@param {type:"boolean"}
    Cesar_image_perturbedpertBig_round4 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round4 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round5_075 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round5_05 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round5_025 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round5_01 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round5_005 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round5_002 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round6_075 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round6_05 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round6_025 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round6_01 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round6_005 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_round6_002 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_mod_3_round1_002 = False  #@param {type:"boolean"}
    David_image_perturbedpertBig_mod_3_round2_0025 = True  #@param {type:"boolean"}

destination = "pertBig.zip"

def get_dictionary_file_id_and_destination(file_id, destination=destination):
    return {
        "file_id": file_id,
        "destination": destination,
    }
files_id_destination={
"imagenet_validation":get_dictionary_file_id_and_destination("1QA2HeOxusHicdbUxg54M8PG2GorB8ukG",
"ILSVRC2012_img_val.tar"),
"Cesar_image_perturbedpertBig_round1": get_dictionary_file_id_and_destination("1LLMgbtz1ENbZR1GFNvKIGEudMceqM4ZV" ),
# "Cesar_image_perturbedpertBig_round2":get_dictionary_file_id_and_destination("1lKXviEtQsXA481VLO-fPs5w723d186_D"),
# "Cesar_image_perturbedpertBig_round3":get_dictionary_file_id_and_destination("1LLMgbtz1ENbZR1GFNvKIGEudMceqM4ZV"),
# "Cesar_image_perturbedpertBig_round4":get_dictionary_file_id_and_destination("1-00I6LoDq9EOj8OMC7RU80XPSKF6LLbE"),
# "David_image_perturbedpertBig_round4":get_dictionary_file_id_and_destination("12TXiuSTKVqTgTa_xmK2akLtTGbp82vuO"),
"David_image_perturbedpertBig_round5_075":get_dictionary_file_id_and_destination("10MR8Q9BZhAiLpTtO8eqkO0dTlJMU0ry5"),
"David_image_perturbedpertBig_round5_05":get_dictionary_file_id_and_destination("1uDyCu2igIpITT8Isy1FW9n49sM1-oeDT"),
"David_image_perturbedpertBig_round5_025":get_dictionary_file_id_and_destination("1kog15Dmyj_d6t6omQudbsA2GU3SwFalN"),
"David_image_perturbedpertBig_round5_01":get_dictionary_file_id_and_destination("1l0DqIr76nJI63J7yzgxvyhyTjMIZqmMy"),
"David_image_perturbedpertBig_round5_005":get_dictionary_file_id_and_destination("1IVsB5QwGt-5GfefbrvSVGFI2cGW7yIGx"),
"David_image_perturbedpertBig_round5_002":get_dictionary_file_id_and_destination("1MuJfPRm14dWzDXAV_JjEyRjZ3iZOh6Zw"),
"David_image_perturbedpertBig_round6_075":get_dictionary_file_id_and_destination("1__vhYkd-FpNNCIYuLSyyS2huvTOfsAZT"),
"David_image_perturbedpertBig_round6_05":get_dictionary_file_id_and_destination("19szuYAsFPVA922UpOlypz_0PFX0H7IPa"),
"David_image_perturbedpertBig_round6_025":get_dictionary_file_id_and_destination("1RZ-VC5hx4HV9pYrvGKQ5IRAbTbzRsiBI"),
"David_image_perturbedpertBig_round6_01":get_dictionary_file_id_and_destination("1VxSr6rZvpDxVVAmMeV7najEeiAPrtq1Z"),
"David_image_perturbedpertBig_round6_005":get_dictionary_file_id_and_destination("1lXkFZT7rMu2urCS3U3kwy0Oubp2yfGUc"),
"David_image_perturbedpertBig_round6_002":get_dictionary_file_id_and_destination("1x4-pVseLzzAusfUo7lFkA5PEsV28U38p"),
"David_image_perturbedpertBig_mod_3_round1_002":get_dictionary_file_id_and_destination("1Ce8mcLqTU9ogoByCZw65FyjJxKwCVR8j"),
"David_image_perturbedpertBig_mod_3_round2_0025":get_dictionary_file_id_and_destination("1aRXJOZZf0BOUBmQEuJtY7NkK83Y-2j7H"),
}
for dataset_name, dictionary_fileid_destination in files_id_destination.items():
    is_active = getattr(datasets_active, dataset_name)
    if is_active:
        destination = dictionary_fileid_destination["destination"]
        file_id = dictionary_fileid_destination["file_id"]
        download_file_from_google_drive(file_id, destination)
        !unzip -q /content/pertBig.zip
        print("the dataset", dataset_name, "was downloaded and unzipped")
label_id="1PKH4QWZe_VCOhu19oOhbV9z-YKrKACO7"
destination_label_id="/content/label.txt"
download_file_from_google_drive(label_id, destination_label_id)
label_real_name="1sxe3eunq5U4EwsHlLaeRmwcnaZjCEcnh"
destination_label_real_name="/content/name_real_label.txt"
download_file_from_google_drive(label_real_name, destination_label_real_name)
# verify that the download completed correctly: if such a large file has been downloaded several times, Drive may stop serving it
```
##Mount Drive to store the results
```
from google.colab import drive
drive.mount('/content/drive')
```
The time required for the setup is
#Device detection
```
#@title Detect hardware
try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # TPU detection
except ValueError:
    tpu = None
gpus = tf.config.experimental.list_logical_devices("GPU")

# Select the appropriate distribution strategy for the hardware
if tpu:
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.experimental.TPUStrategy(tpu)
    print('Running on TPU ', tpu.master())
elif len(gpus) > 0:
    # strategy = tf.distribute.MirroredStrategy(gpus)  # works for 1 to multiple GPUs
    print('Running on ', len(gpus), ' GPU(s) ')
    strategy = tf.distribute.OneDeviceStrategy(gpus[0])
else:
    strategy = tf.distribute.get_strategy()  # default strategy for CPU and single GPU
    print('Running on CPU')

# How many accelerators do we have?
print("Number of accelerators: ", strategy.num_replicas_in_sync)
```
#Global functions
```
#@title functions to create model
from tensorflow.keras.models import Model
import os
import pandas as pd
def show_batch(image_batch, label_batch):
    plt.figure(figsize=(10, 10))
    for n in range(25):
        ax = plt.subplot(5, 5, n + 1)
        plt.imshow(image_batch[n])
        # plt.title(real_class[(CLASS_NAMES[label_batch[n]==1][0])].title())
        plt.title(real_class[np.argmax([label_batch[n]])].title())
        plt.axis('off')

def get_models_made():
    path_base = "/content/drive/My Drive/AI/Ejemplos adversariales/resultados2"
    files = os.listdir(path_base)
    models_made = [model[:-4].lower() for model in files]
    return models_made

def create_model(name_model, classes=1000):
    with strategy.scope():
        # input shape omitted so the model's default size is used
        model = getattr(tf.keras.applications, name_model)(include_top=True, weights='imagenet', classes=classes)
        opt = tf.keras.optimizers.RMSprop(learning_rate=0.0005, decay=1e-6)
        METRICS = [
            tf.keras.metrics.CategoricalAccuracy(name="accuracy"),
            tf.keras.metrics.Accuracy(name="accuracy_binary"),
            tf.keras.metrics.TruePositives(name='tp'),
            tf.keras.metrics.FalsePositives(name='fp'),
            tf.keras.metrics.TrueNegatives(name='tn'),
            tf.keras.metrics.FalseNegatives(name='fn'),
            tf.keras.metrics.Precision(name='precision'),
            tf.keras.metrics.Recall(name='recall'),
            tf.keras.metrics.AUC(name='auc', num_thresholds=500),  # TODO: revisit the AUC settings
            tf.keras.metrics.TopKCategoricalAccuracy(k=3, name='top_3_categorical_accuracy'),
            tf.keras.metrics.TopKCategoricalAccuracy(k=5, name='top_5_categorical_accuracy'),
            tf.keras.metrics.TopKCategoricalAccuracy(k=10, name='top_10_categorical_accuracy'),
        ]
        # model.compile(loss='categorical_crossentropy',
        #               optimizer=opt,
        #               metrics=METRICS)
    return model

def train_model(model, train_data, validation_data=None, callbacks=None):
    model.fit(train_data,
              # steps_per_epoch=(train_generator.samples // BATCH_SIZE),
              validation_data=validation_data,
              # validation_steps=(validation_generator.samples // BATCH_SIZE),
              epochs=100,
              # callbacks=callbacks
              )
```
#Models from the TensorFlow API
##Necessary functions
```
import numpy as np
import pathlib
import time
```
```
#@title creating functions to create model and extract preprocessing input
def clean_list_of_models_names(lists_of_private_object):
    # all model names start with an uppercase letter and are not private,
    # i.e. they cannot start with "_"
    prev_clean_list = list()
    for element in lists_of_private_object:
        if element[0] == "_" or not element[0].isupper():
            pass
        else:
            prev_clean_list.append(element)
    return prev_clean_list

def clean_list_of_familys(lists_of_private_object):
    # all family (module) names start with a lowercase letter and are not private
    prev_clean_list = list()
    for element in lists_of_private_object:
        if element[0] == "_" or not element[0].islower() or element == "imagenet_utils":
            pass
        else:
            prev_clean_list.append(element)
    return prev_clean_list

def get_preprocesing_input(ALL_FAMILYS, model):
    def check_which_is_family_is_the_family_to_which_it_belongs(ALL_FAMILYS, model):
        for family in ALL_FAMILYS:
            family_module = getattr(tf.keras.applications, family)
            if model in dir(family_module):
                return family_module
        return "ERROR: MODEL NOT FOUND IN ANY FAMILY"

    def get_function_preprocess_input(module_family_net):
        return getattr(module_family_net, "preprocess_input")

    family_module = check_which_is_family_is_the_family_to_which_it_belongs(ALL_FAMILYS, model)
    preprocess_input = get_function_preprocess_input(family_module)
    return preprocess_input

def check_is_csv_is_created(model_name):
    path_to_check = path_result + "/" + model_name + ".csv"
    if os.path.isfile(path_to_check):
        return True
    return False
def create_model(name_model, classes=1000):
    with strategy.scope():
        # input shape omitted so the model's default size is used
        model = getattr(tf.keras.applications, name_model)(include_top=True, weights='imagenet', classes=classes)
        opt = tf.keras.optimizers.RMSprop(learning_rate=0.0005, decay=1e-6)
        METRICS = [
            tf.keras.metrics.CategoricalAccuracy(name="accuracy"),
            tf.keras.metrics.Accuracy(name="accuracy_binary"),
            tf.keras.metrics.TruePositives(name='tp'),
            tf.keras.metrics.FalsePositives(name='fp'),
            tf.keras.metrics.TrueNegatives(name='tn'),
            tf.keras.metrics.FalseNegatives(name='fn'),
            tf.keras.metrics.Precision(name='precision'),
            tf.keras.metrics.Recall(name='recall'),
            tf.keras.metrics.AUC(name='auc', num_thresholds=500),  # TODO: revisit the AUC settings
            tf.keras.metrics.TopKCategoricalAccuracy(k=3, name='top_3_categorical_accuracy'),
            tf.keras.metrics.TopKCategoricalAccuracy(k=5, name='top_5_categorical_accuracy'),
            tf.keras.metrics.TopKCategoricalAccuracy(k=10, name='top_10_categorical_accuracy'),
        ]
        # model.compile(loss='categorical_crossentropy',
        #               optimizer=opt,
        #               metrics=METRICS)
    return model
#@title functions to create lists and predict
def show_batch(image_batch, label_batch):
    plt.figure(figsize=(10, 10))
    for n in range(25):
        ax = plt.subplot(5, 5, n + 1)
        plt.imshow(image_batch[n])
        plt.title(CLASS_NAMES[label_batch[n] == 1][0].title())
        plt.axis('off')

def create_list_file_ds(path):
    Imagenet_root = pathlib.Path(path)
    list_ds = tf.data.Dataset.list_files(str(Imagenet_root/'*/*'))
    return list_ds

def create_list_of_dataset_with_preprocess(filename):
    parts = tf.strings.split(filename, os.sep)
    label = parts[-2]
    image = tf.io.read_file(filename)
    # TODO: try tf.cast as in Cesar's version; try converting to PIL;
    # try using tf.io.decode
    image = tf.io.decode_jpeg(image, channels=3)
    print(image)
    # image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_WIDTH, IMG_HEIGHT])
    filename_string = parts[-1]
    try:
        image = function_to_preprocess(image)
    except:
        print("error")
        print(filename)
        print(image.shape)
        print(filename_string)
    return image, label, filename_string
def predict_model(model, map_de_dataset, df, model_name):
    # testing the list-based method; the previous method is duplicated
    # under another name further below
    start_time = time.time()
    init_batch = time.time()
    count_batch = 0
    count_image = 0
    list_filename = list()
    list_code_predict = list()
    list_probability_class = list()
    list_class_name_predict = list()
    list_class_code_real = list()
    list_class_name_real = list()
    list_probability_class_real = list()
    list_model = list()
    list_id_image = list()
    list_count = list()
    for image_batch, label_batch, filename_batch in map_de_dataset.batch(nbatch).take(niteration):
        count_batch += 1
        prediction = model.predict(image_batch.numpy())
        for image_prediction, label, filename_like_bytes in zip(prediction, label_batch, filename_batch):
            count_image += 1
            class_code_predict = int(np.argmax(image_prediction))
            list_code_predict.append(class_code_predict)
            probability_class = image_prediction[class_code_predict]
            list_probability_class.append(probability_class)
            class_name_predict = real_class[class_code_predict]
            list_class_name_predict.append(class_name_predict)
            class_code_real = int(label.numpy().decode("UTF-8"))
            list_class_code_real.append(class_code_real)
            class_name_real = real_class[class_code_real]
            list_class_name_real.append(class_name_real)
            probability_class_real = image_prediction[class_code_real]
            list_probability_class_real.append(probability_class_real)
            filename = filename_like_bytes.numpy().decode('UTF-8')
            list_filename.append(filename)
            id_image = filename[-13:-5]
            list_model.append(model_name)
            list_id_image.append(id_image)
            list_count.append(count_image)
        time_batch = (time.time() - init_batch)
        if count_batch % 250 == 0:
            print("--- %s seconds to predict this batch ---" % (time_batch))
            print("--- %s minutes estimated to predict the whole image set ---" % (time_batch * niteration / 60))
            print("--- %s%% of batches analyzed ---" % (round(count_batch / niteration * 100, 2)))
        init_batch = time.time()
    time_create_dataframe = time.time()
    df = pd.DataFrame(list(zip(list_filename, list_code_predict, list_probability_class, list_class_name_predict,
                               list_class_code_real, list_class_name_real, list_probability_class_real, list_model, list_id_image)),
                      columns=["filename", "code_predict", "probability_class", "class_name_predict", "class_code_real",
                               "class_name_real", "probability_class_real", "model", "id_image"], index=[list_count])
    duration_creation_dataframe = (time.time() - time_create_dataframe)
    print("--- %s seconds to build the dataframe ---" % (duration_creation_dataframe))
    time_total = (time.time() - start_time)
    print("------ %s minutes to predict the whole image set ------" % (time_total / 60))
    return df
```
#Study of perturbed images
##Setup
The data must already be downloaded; you then have a log.txt file with the image information and a folder with the images.
```
#@title get labels
import pandas as pd
import pathlib
import glob
import json
import math
import time

def get_dictionary_with_real_class():
    path_json_real_label = "/content/name_real_label.txt"  # the json file is the correct one
    real_class_prev = json.loads(open(path_json_real_label).read())
    real_class = dict()
    for k in real_class_prev:
        real_class[int(k)] = real_class_prev[k][1]  # dictionary key: number, value: string
    return real_class

def change_key_per_value_in_dictionary(dictionary):
    new_dict = dict()
    for k, v in dictionary.items():
        if v in new_dict.keys():
            print(k)
        new_dict[v] = k
    return new_dict

def get_code_name_prediction(name):
    return real_class_inverted[name]

real_class = get_dictionary_with_real_class()
real_class_format_to_generator = {str(k): v for k, v in real_class.items()}
real_class_inverted = change_key_per_value_in_dictionary(real_class)
#@title for when we test various experiments
logs=[file_txt[:-4] for file_txt in glob.glob("*.txt") if file_txt.split("_")[0]=="log"]
epsilons_available=[log.split("_")[-1] for log in logs]
#@title predict all images with all the models
for epsilon in epsilons_available:
    print("doing", epsilon, "experiment")
    data_dir_raw = "/content/result_adversarial"  #@param {type:"string"}
    data_dir_raw = data_dir_raw + epsilon
    validation_labels_file = "/content/label.txt"  #@param {type:"string"}
    data_dir = "/content/perturbed_order"  #@param {type:"string"}
    data_dir = data_dir + epsilon
    log_dir = "/content/log_e_{}.txt".format(epsilon)
    mod1 = False  #@param {type:"boolean"}
    mod3 = True  #@param {type:"boolean"}
    if mod1:
        data = pd.read_csv(log_dir, sep=",", index_col=False, header=None,
                           names=["originalfilename", "original_prediction", "confidence_original", "epsilon", "pertubed_prediction", "confidence_pertubed", "name_new_file"])
        data["pertubed_code"] = data["pertubed_prediction"].apply(get_code_name_prediction)
    elif mod3:
        data = pd.read_csv(log_dir, sep=",", index_col=False, header=None,
                           names=["model_name",
                                  "target",
                                  "originalfilename",
                                  "original_prediction",
                                  "confidence_original",
                                  "epsilon",
                                  "all_models_had_exist_creating_adversarial",
                                  "confidence_pertubed",
                                  "name_new_file"])
        data["original_code"] = data["original_prediction"].apply(get_code_name_prediction)
    result_dir = pathlib.Path(data_dir_raw)
    labels = data["original_code"].unique().tolist()
    unique_labels = set(labels)
    for label in unique_labels:
        labeled_data_dir = os.path.join(data_dir, str(label))
        # Catch error if sub-directory exists
        try:
            os.makedirs(labeled_data_dir)
        except OSError as e:
            print(e)
    for index, row in data.iterrows():
        try:
            perturbed_filename = row["name_new_file"]
            original_code = row["original_code"]
            perturbed_path = os.path.join(data_dir_raw, perturbed_filename)
            new_filename = os.path.join(data_dir, str(original_code), perturbed_filename)
            os.rename(perturbed_path, new_filename)
        except OSError as e:
            print(e)

    ## Prediction
    path_result = "/content/drive/MyDrive/AI/Ejemplos adversariales/mod3/round2/"  #@param {type:"string"}
    path_result = path_result + "{}/CSV individuales".format("0" + str(epsilon.split(".")[-1]))
    path_width_data = data_dir
    AUTOTUNE = tf.data.experimental.AUTOTUNE
    nbatch = 50
    nImages = data.shape[0]
    niteration = math.ceil(nImages / nbatch)
    all_models_in_keras_without_clean = dir(tf.keras.applications)
    ALL_MODELS = clean_list_of_models_names(all_models_in_keras_without_clean)
    ALL_MODELS = ALL_MODELS[:15] + ALL_MODELS[17:]
    ALL_FAMILYS = clean_list_of_familys(all_models_in_keras_without_clean)
    for model_name in ALL_MODELS:
        print(model_name)
        init_model_time = time.time()
        if check_is_csv_is_created(model_name) == False:
            preprocess_input = get_preprocesing_input(ALL_FAMILYS, model_name)
            function_to_preprocess = preprocess_input
            model = create_model(model_name)
            IMG_WIDTH = model.input.shape[1]
            IMG_HEIGHT = model.input.shape[2]
            list_ds = create_list_file_ds(path_width_data)
            labeled_ds = list_ds.map(create_list_of_dataset_with_preprocess, num_parallel_calls=AUTOTUNE)
            df = pd.DataFrame()
            df = predict_model(model, labeled_ds, df, model_name)
            if df.shape[0] > 0:
                df.to_csv(path_result + "/" + model_name + "perturbed.csv")
            else:
                print("There was an error with model " + model_name + "; check what happened")
            del (list_ds)
            del (labeled_ds)
            tf.keras.backend.clear_session()  # possibly frees RAM
            print("------ %s minutes to run the whole model ------" % ((time.time() - init_model_time) / 60))
    # result_Directory = "/content/drive/My Drive/AI/Ejemplos adversariales/resultados_api_tensorflow/prediction_models_v2"
    result_Directory = path_result
    result_dir = pathlib.Path(result_Directory)
    # CLASS_NAMES = np.array([item.name for item in result_dir.glob('.csv')])
    filenames = [item.name for item in result_dir.glob('*.csv')]
    path_result_join = "/content/drive/MyDrive/AI/Ejemplos adversariales/mod3/round2/"  #@param {type:"string"}
    path_result_join = path_result_join + "{}".format("0" + str(epsilon.split(".")[-1]))
```
##Prediction
##Analysis of perturbed images
```
# # result_Directory="/content/drive/My Drive/AI/Ejemplos adversariales/resultados_api_tensorflow/prediction_models_v2"
# result_Directory=path_result
# result_dir = pathlib.Path(result_Directory)
# # CLASS_NAMES = np.array([item.name for item in result_dir.glob('.csv') ]
# filenames=[item.name for item in result_dir.glob('*.csv') ]
# # path_result_join= "/content/drive/My Drive/AI/Ejemplos adversariales/Round 5/002" #@param {type:"string"}
#@title threshold
threshold = 0.1  #@param {type:"slider", min:0, max:1, step:0.01}
df_global = pd.DataFrame()
path_base = result_Directory + "/"
for filename in filenames:
    df_aux = pd.read_csv(path_base + filename)
    df_aux.drop(columns="Unnamed: 0", inplace=True)
    df_aux["model"] = filename[:-4]  # only this time
    df_global = pd.concat([df_global, df_aux], ignore_index=True)
condition_class_predict_is_real = df_global["code_predict"] == df_global["class_code_real"]
condition_prediction_is_more_than_threshold = df_global["probability_class_real"] >= threshold
conditions = condition_class_predict_is_real & condition_prediction_is_more_than_threshold
df_global["epsilon"] = [text[9:14] for text in df_global["filename"]]  # careful here with the number of digits
df_global["filename"] = [text[15:] for text in df_global["filename"]]
df_global["model"] = [text[:-9] for text in df_global["model"]]
df_global["isright"] = np.where(conditions, 1, 0)
df_global["ispertubed"] = True
```
Build a multilevel index with image and epsilon
```
df_global.to_csv(path_result_join+"/perturbed_complete_df.csv")
df_global.pivot_table(index=['filename', 'epsilon'],columns="model",values="isright")
df_global.pivot_table(index=['filename', 'epsilon'],columns="model",values="isright").to_csv(path_result_join+"/matriz_bool_ejemplos_adversariales_perturbadas.csv")
df_global.pivot_table(index=['filename', 'epsilon'],columns="model",values="probability_class_real").to_csv(path_result_join+"/matriz_porcentaje_ejemplos_adversariales_perturbadas.csv")
df_global.pivot_table(index=['filename', 'epsilon'],columns="model",values="isright")
print("MobileNet (expected to classify everything correctly) accuracy:", df_global.pivot_table(index=['filename', 'epsilon'], columns="model", values="isright").MobileNet.mean())
print("ResNet50V2 (expected to misclassify everything) accuracy:", df_global.pivot_table(index=['filename', 'epsilon'], columns="model", values="isright").ResNet50V2.mean())
print("DenseNet121 (expected to be right on 50% and wrong on 50%) accuracy:", df_global.pivot_table(index=['filename', 'epsilon'], columns="model", values="isright").DenseNet121.mean())
#@title threshold
threshold = 0.1  #@param {type:"slider", min:0, max:1, step:0.01}
df_global = pd.DataFrame()
path_base = result_Directory + "/"
for filename in filenames:
    df_aux = pd.read_csv(path_base + filename)
    df_aux.drop(columns="Unnamed: 0", inplace=True)
    df_aux["model"] = filename[:-4]  # only this time
    df_global = pd.concat([df_global, df_aux], ignore_index=True)
# note: this condition is trivially true, so "isright" below only reflects
# whether the predicted-class probability clears the threshold
condition_class_predict_is_real = df_global["class_code_real"] == df_global["class_code_real"]
condition_prediction_is_more_than_threshold = df_global["probability_class"] >= threshold
conditions = condition_class_predict_is_real & condition_prediction_is_more_than_threshold
df_global["epsilon"] = [text[9:14] for text in df_global["filename"]]  # careful here with the number of digits
df_global["filename"] = [text[15:] for text in df_global["filename"]]
df_global["model"] = [text[:-9] for text in df_global["model"]]
df_global["isright"] = np.where(conditions, 1, 0)
df_global["ispertubed"] = True
df_global
df_global.pivot_table(index=['filename', 'epsilon'],columns="model",values="isright").mean()
```
# Lab Work No. 4
## Development of software based on an algorithm for the group choice of alternatives.
### Key theoretical background
Group choice combines subjective and objective aspects. The preferences of each individual decision maker are subjective and depend on that person's system of values. At the same time, the aggregation of several individual preferences into one collective preference should be carried out as objectively as possible, using explicitly defined formal procedures.
The collective choice problem is how to aggregate individual preferences into integral estimates of decision quality, on the basis of which the most preferable decision is sought.
### The preference method
Suppose there are $m$ experts $E_{1}, E_{2}, ..., E_{m}$ and $n$ goals $Z_{1}, Z_{2}, ..., Z_{n}$.
Each expert ranks the goals using natural numbers: the most important goal is assigned 1, the next most important 2, and so on. Under these conditions the goal weights are determined as follows:
1. Construct the initial preference matrix:
$E_{j}/Z_{i}$ | $Z_{1}$ | $Z_{2}$ | ... | $Z_{n}$
-- | ------ | -------- | --- | -------
$E_{1}$ | $k_{11}$ | $k_{12}$ | ... | $k_{1n}$
$E_{2}$ | $k_{21}$ | $k_{22}$ | ... | $k_{2n}$
... | ... | ... | ... | ...
$E_{m}$ | $k_{m1}$ | $k_{m2}$ | ... | $k_{mn}$
$1 \leqslant k_{ji} \leqslant n \quad (j = \overline{1,m},\; i = \overline{1,n})$
2. Construct the modified preference matrix with entries
$K_{ji} = n - k_{ji} \quad (1 \leqslant j \leqslant m;\; 1 \leqslant i \leqslant n)$
3. Compute the total preference score for each goal:
$K_{i} = \sum_{j=1}^{m} K_{ji} \quad (i = \overline{1,n})$
4. Compute the goal weights:
$w_{i} = K_{i} \Big/ \sum_{i=1}^{n} K_{i} \quad (i = \overline{1,n})$, where $\sum_{i=1}^{n} w_{i} = 1$
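The four steps above can be sketched directly in NumPy; the preference matrix below is arbitrary illustrative data, not the task data:

```python
import numpy as np

# Preference matrix k[j, i]: m experts rank n goals, 1 = most important.
k = np.array([[1, 3, 2, 4],
              [2, 1, 4, 3],
              [1, 2, 3, 4]])
m, n = k.shape
K = n - k                    # step 2: modified preference matrix
K_total = K.sum(axis=0)      # step 3: total preference score per goal
w = K_total / K_total.sum()  # step 4: goal weights, summing to 1
print(w.round(3))
```

The goal with the smallest ranks across experts ends up with the largest weight.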
#### Task
A group of three experts was assembled to choose an investment target. The following options were proposed:
- Minsk Automobile Plant
- Minsk refrigerator plant "Atlant"
- Confectionery factory "Vitba"
- OAO "Naftan"
- "Belkommunmash"
- Minsk garment factory "Elema"
The experts' estimates of the enterprises' profitability are given in the matrix:
$E_{j}/Z_{i}$ | $Z_{1}$ | $Z_{2}$ | $Z_{3}$ | $Z_{4}$ | $Z_{5}$ | $Z_{6}$
------------- | ------- | ------- | ------- | ------- | ------- | -------
$E_{1}$ | 1 | 5 | 4 | 2 | 6 | 3
$E_{2}$ | 3 | 4 | 1 | 6 | 5 | 2
$E_{3}$ | 5 | 2 | 4 | 6 | 3 | 1
where $E_{1} \ldots E_{m}$ are the experts and $Z_{1} \ldots Z_{n}$ are the projects.
Determine the most promising investment target.
##### Solution
*Input data:*
```
import json
import numpy as np

with open("data/preference.json") as json_file:
    raw_data = json.load(json_file)
value = np.array(raw_data['value'], dtype=int)
print(f'Expert scores:\n {value}')
```
*Solution source code:*
```
# modified preference matrix
mod = len(value[0]) - value
# total scores per goal
sumMod = mod.sum(axis=0)
# sum of all scores
sumV = sumMod.sum()
# goal weights
w = np.array([])
for el in sumMod:
    w = np.append(w, (el / sumV))
# list the alternatives in decreasing order of optimality
result = ""
w_c = w.copy()
for el in w:
    indMax = w_c.argmax()
    result = result + f'Z{indMax+1} > '
    w_c[indMax] = 0
```
*Execution result:*
```
print(f'Modified matrix:\n{mod}')
print(f'Total scores per goal: {sumMod}')
print(f'Goal weights: {w}')
print(f'Alternatives in decreasing order of optimality: {result}')
```
### The rank method
Suppose there are $m$ experts $E_{1}, E_{2}, ..., E_{m}$ and $n$ goals $Z_{1}, Z_{2}, ..., Z_{n}$.
Each expert scores the goals on a 10-point scale; the scores may be integer or fractional. Under these conditions the goal weights are determined as follows:
1. Construct the matrix of expert scores:
$E_{j}/Z_{i}$ | $Z_{1}$ | $Z_{2}$ | ... | $Z_{n}$
-- | ------ | -------- | --- | -------
$E_{1}$ | $S_{11}$ | $S_{12}$ | ... | $S_{1n}$
$E_{2}$ | $S_{21}$ | $S_{22}$ | ... | $S_{2n}$
... | ... | ... | ... | ...
$E_{m}$ | $S_{m1}$ | $S_{m2}$ | ... | $S_{mn}$
$0 \leqslant S_{ji} \leqslant 10 \quad (j = \overline{1,m},\; i = \overline{1,n})$
2. Construct the matrix of normalized scores:
$w_{ji} = S_{ji} \Big/ \sum_{i=1}^{n} S_{ji} \quad (j = \overline{1,m},\; i = \overline{1,n})$
3. Compute the goal weights:
$w_{i} = \sum_{j=1}^{m} w_{ji} \Big/ \sum_{i=1}^{n} \sum_{j=1}^{m} w_{ji} \quad (i = \overline{1,n}), \qquad \sum_{i=1}^{n} w_{i} = 1$
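A minimal NumPy sketch of these three steps, with made-up scores:

```python
import numpy as np

# S[j, i]: scores given by m experts to n goals on a 10-point scale.
S = np.array([[8.0, 5.0, 7.0],
              [6.0, 9.0, 5.0],
              [7.0, 6.0, 7.0]])
w_norm = S / S.sum(axis=1, keepdims=True)  # step 2: normalize each expert's row
w = w_norm.sum(axis=0) / w_norm.sum()      # step 3: goal weights, summing to 1
print(w.round(3))
```

Row-wise normalization makes each expert contribute equally regardless of how generous their raw scores are.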
#### Task
An analysis of the company's economic performance showed that it is unable to continue operating in the market. Four experts were invited to help management decide how to get out of the situation. The following options are under consideration:
- Liquidate the company
- Put it up for sale
- Declare bankruptcy
- Carry out financial rehabilitation
The experts' scores of the options are given in the matrix:
$E_{j}/Z_{i}$ | $Z_{1}$ | $Z_{2}$ | $Z_{3}$ | $Z_{4}$
------------- | ------- | ------- | ------- | -------
$E_{1}$ | 2 | 3 | 4 | 1
$E_{2}$ | 3 | 1 | 2 | 4
$E_{3}$ | 1 | 4 | 3 | 2
$E_{4}$ | 1 | 3 | 4 | 2
where $E_{1} \ldots E_{m}$ are the experts and $Z_{1} \ldots Z_{n}$ are the projects.
Determine the optimal path for the company's further development.
##### Solution
*Input data:*
```
with open("data/rank.json") as json_file:
    raw_data = json.load(json_file)
value = np.array(raw_data['value'], dtype=float)
print(f'Expert scores:\n {value}')
```
*Solution source code:*
```
# normalized score matrix (each expert's row divided by its row sum)
i = 0
j = 0
norm = np.copy(value)
for row in value:
    s = row.sum()
    for el in row:
        norm[i, j] = value[i, j] / s
        j = j + 1
    j = 0
    i = i + 1
# goal weights
w = norm.sum(axis=0)
w = w / norm.shape[0]
# list the alternatives in decreasing order of optimality
result = ""
w_c = w.copy()
for el in w:
    indMax = w_c.argmax()
    result = result + f'Z{indMax+1} > '
    w_c[indMax] = 0
```
*Execution result:*
```
print(f'Matrix of normalized scores:\n{norm}')
print(f'Goal weights: {w}')
print(f'Alternatives in decreasing order of optimality: {result}')
```
A useful link: http://victor-safronov.ru/systems-analysis/lectures/zhivickaya/29.html
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
## Extractive Summarization on CNN/DM Dataset using Transformer Version of BertSum
### Summary
This notebook demonstrates how to fine tune Transformers for extractive text summarization. Utility functions and classes in the NLP Best Practices repo are used to facilitate data preprocessing, model training, model scoring, result postprocessing, and model evaluation.
BertSum refers to [Fine-tune BERT for Extractive Summarization](https://arxiv.org/pdf/1903.10318.pdf), with a [published example](https://github.com/nlpyang/BertSum/). The Transformer version of BertSum refers to our modification of BertSum; the source code can be accessed at https://github.com/daden-ms/BertSum/.
Extractive summarization is usually used in document summarization, where each input document consists of multiple sentences. Preprocessing the training data involves assigning label 0 or 1 to each document sentence based on the given summary. The summarization problem is thereby simplified to classifying whether a document sentence should be included in the summary.
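As a toy illustration of that labeling step, a greedy word-overlap heuristic might look like the following (this is our own simplification for exposition, not the preprocessing used in the repo, which scores candidate sentences by ROUGE):

```python
def label_sentences(doc_sentences, summary, n_select=2):
    # Assign 1 to the n_select sentences sharing the most words with the
    # reference summary, 0 to the rest (greedy approximation).
    summary_words = set(summary.lower().split())
    overlaps = [len(set(s.lower().split()) & summary_words) for s in doc_sentences]
    top = sorted(range(len(doc_sentences)), key=lambda i: overlaps[i], reverse=True)[:n_select]
    return [1 if i in top else 0 for i in range(len(doc_sentences))]

doc = ["the storm hit the coast overnight",
       "officials opened emergency shelters",
       "local sports results were delayed"]
print(label_sentences(doc, "a storm hit the coast and shelters were opened"))  # → [1, 1, 0]
```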
The figure below illustrates how BertSum can be fine-tuned for the extractive summarization task. A [CLS] token is inserted at the beginning of each sentence and a [SEP] token at its end. Interval segment embeddings and positional embeddings are added to the token embeddings as the input of the BERT model. Each [CLS] token representation is used as the sentence embedding, and only the [CLS] tokens are used as input to the summarization model, which predicts the probability of each sentence being included in the summary. Techniques like trigram blocking can be used to improve model accuracy.
<img src="https://nlpbp.blob.core.windows.net/images/BertSum.PNG">
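As a small illustration of the trigram-blocking idea mentioned above — skip a candidate sentence if it repeats a trigram already present in the selected summary — here is a minimal sketch (the function names and toy sentences are our own, not part of the BertSum code):

```python
def trigrams(tokens):
    # Set of word trigrams in a token list.
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def select_sentences(ranked_sentences, max_sentences=3):
    # Greedy selection with trigram blocking over sentences sorted
    # by predicted score, best first.
    chosen, seen = [], set()
    for sent in ranked_sentences:
        toks = sent.split()
        if trigrams(toks) & seen:
            continue  # blocked: would repeat an existing trigram
        chosen.append(sent)
        seen |= trigrams(toks)
        if len(chosen) == max_sentences:
            break
    return chosen

ranked = [
    "the cat sat on the mat",
    "a cat sat on the mat today",   # shares "cat sat on" -> blocked
    "dogs are friendly animals",
]
print(select_sentences(ranked, max_sentences=2))
```

Blocking redundant trigrams is a cheap way to reduce repetition in the extracted summary without retraining the model.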
### Before You Start
The running time shown in this notebook is on a Standard_NC24s_v3 Azure Ubuntu Virtual Machine with 4 NVIDIA Tesla V100 GPUs.
> **Tip**: If you want to run through the notebook quickly, you can set the **`QUICK_RUN`** flag in the cell below to **`True`** to run the notebook on a small subset of the data and a smaller number of epochs.
Using a single NVIDIA Tesla V100 GPU with 16GB of GPU memory:
- for data preprocessing, it takes around 1 minute for a quick run. Otherwise it takes ~2 hours to finish. This estimate assumes that the chosen transformer model is "distilbert-base-uncased" and the sentence selection method is "greedy", which is the default. Preprocessing can take significantly longer if the sentence selection method is "combination", which can achieve better model performance.
- for model fine-tuning, it takes around 10 minutes for a quick run. Otherwise, it takes around 3 hours to finish. This estimate assumes the chosen encoder method is "transformer". Fine-tuning can be faster if another encoder method is chosen, though that may result in worse model performance.
```
## Set QUICK_RUN = True to run the notebook on a small subset of data and a smaller number of epochs.
QUICK_RUN = True
## Set USE_PREPROCSSED_DATA = True to skip the data preprocessing
USE_PREPROCSSED_DATA = False
```
### Configuration
```
%load_ext autoreload
%autoreload 2
import os
import shutil
import sys
from tempfile import TemporaryDirectory
import torch
nlp_path = os.path.abspath("../../")
if nlp_path not in sys.path:
sys.path.insert(0, nlp_path)
from utils_nlp.dataset.cnndm import CNNDMBertSumProcessedData, CNNDMSummarizationDataset
from utils_nlp.eval.evaluate_summarization import get_rouge
from utils_nlp.models.transformers.extractive_summarization import (
ExtractiveSummarizer,
ExtSumProcessedData,
ExtSumProcessor,
)
import pandas as pd
import scrapbook as sb
```
### Configuration: choose the transformer model to be used
Several pretrained models have been made available by [Hugging Face](https://github.com/huggingface/transformers). For extractive summarization, the following pretrained models are supported.
```
pd.DataFrame({"model_name": ExtractiveSummarizer.list_supported_models()})
# Transformer model being used
MODEL_NAME = "distilbert-base-uncased"
```
We also need to install the dependencies for pyrouge, which wraps the ROUGE-1.5.5.pl script.
Run the following commands in your terminal to install the Expat XML parsing C library.
1. sudo apt-get update
1. sudo apt-get install expat
1. sudo apt-get install libexpat-dev -y
Run the following commands in your terminal to install the other prerequisites for using pyrouge.
1. sudo cpan install XML::Parser
1. sudo cpan install XML::Parser::PerlSAX
1. sudo cpan install XML::DOM
Download ROUGE-1.5.5 from https://github.com/andersjo/pyrouge/tree/master/tools/ROUGE-1.5.5.
Run the following command in your terminal.
* pyrouge_set_rouge_path $ABSOLUTE_DIRECTORY_TO_ROUGE-1.5.5.pl
### Data Preprocessing
The dataset used in this notebook is the CNN/DM dataset, which contains documents and accompanying questions from CNN and Daily Mail news articles. The highlights in each article are used as the summary. The dataset consists of ~289K training examples, ~11K validation examples and ~11K test examples. You can choose [Option 1] below to preprocess the data, or [Option 2] to use the preprocessed version from the [BERTSum published example](https://github.com/nlpyang/BertSum/). You don't need to manually download either dataset, as the code below handles the downloading. The functions defined in [cnndm.py](../../utils_nlp/dataset/cnndm.py) are specific to the CNN/DM dataset as preprocessed by harvardnlp, but they provide a skeleton for preprocessing text into the format the model preprocessor takes: sentence tokenization followed by word tokenization.
##### Details of Data Preprocessing
The purpose of preprocessing is to convert the input articles into the format that model fine-tuning requires. Assuming you have (1) all articles and (2) target summaries, each in its own file with one example per line, the steps to preprocess the data are:
1. sentence tokenization
2. word tokenization
3. **label** the sentences in the article, with 1 meaning the sentence is selected and 0 meaning it is not. The sentence selection algorithms, "greedy" and "combination", can be found in [sentence_selection.py](../../utils_nlp/dataset/sentence_selection.py)
4. convert each example to the desired format for extractive summarization
- filter the sentences in the example based on the min_src_ntokens argument. If the remaining number of sentences is less than min_nsents, the example is discarded.
- truncate the sentences in the example if the length is greater than max_src_ntokens
- truncate the sentences in the example and the labels if the total number of sentences is greater than max_nsents
- [CLS] and [SEP] are inserted before and after each sentence
- WordPiece tokenization or Byte Pair Encoding (BPE) subword tokenization is applied
- truncate the example to 512 tokens
- convert the tokens into token indices corresponding to the transformer tokenizer's vocabulary.
- segment ids are generated and added
- [CLS] token positions are logged
- [CLS] token labels whose positions fall beyond 512 tokens are dropped, since 512 is the maximum input length the transformer model can take.
Note that the original BERTSum paper uses Stanford CoreNLP for data preprocessing; here we use NLTK.
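The "greedy" labeling in step 3 can be sketched as follows. This is a toy version: the real implementation in [sentence_selection.py](../../utils_nlp/dataset/sentence_selection.py) scores candidate sentences with ROUGE, whereas here plain unigram overlap with the reference summary stands in for the score.

```python
# Greedy oracle sentence selection sketch: repeatedly pick the sentence that
# adds the most not-yet-covered summary words, then mark it with label 1.
def greedy_label(doc_sentences, summary, max_selected=3):
    summary_words = set(summary.split())
    selected, covered = set(), set()
    labels = [0] * len(doc_sentences)
    for _ in range(max_selected):
        best_i, best_gain = None, 0
        for i, sent in enumerate(doc_sentences):
            if i in selected:
                continue
            gain = len((set(sent.split()) & summary_words) - covered)
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:  # no sentence improves coverage any further
            break
        selected.add(best_i)
        covered |= set(doc_sentences[best_i].split()) & summary_words
        labels[best_i] = 1
    return labels

labels = greedy_label(["a b c", "x y z", "a b"], "a b c x")
```

The first and second sentences together cover the summary, so the third gets label 0.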
##### [Option 1] Preprocess data (please skip this part if you choose to use preprocessed data)
The code in the following cell will download the CNN/DM dataset listed at https://github.com/harvardnlp/sent-summary/.
```
# the data path used to save the downloaded data file
DATA_PATH = TemporaryDirectory().name
# The number of lines at the head of data file used for preprocessing. -1 means all the lines.
TOP_N = 1000
CHUNK_SIZE=200
if not QUICK_RUN:
TOP_N = -1
CHUNK_SIZE = 2000
train_dataset, test_dataset = CNNDMSummarizationDataset(top_n=TOP_N, local_cache_path=DATA_PATH)
```
Preprocess the data and save the data to disk.
```
processor = ExtSumProcessor(model_name=MODEL_NAME)
ext_sum_train = processor.preprocess(train_dataset, train_dataset.get_target(), oracle_mode="greedy")
ext_sum_test = processor.preprocess(test_dataset, test_dataset.get_target(),oracle_mode="greedy")
save_path = os.path.join(DATA_PATH, "processed")
train_files = ExtSumProcessedData.save_data(
ext_sum_train, is_test=False, save_path=save_path, chunk_size=CHUNK_SIZE
)
test_files = ExtSumProcessedData.save_data(
ext_sum_test, is_test=True, save_path=save_path, chunk_size=CHUNK_SIZE
)
train_files
test_files
train_dataset, test_dataset = ExtSumProcessedData().splits(root=save_path)
```
#### Inspect Data
```
import torch
bert_format_data = torch.load(train_files[0])
print(len(bert_format_data))
bert_format_data[0].keys()
bert_format_data[0]['labels']
```
##### [Option 2] Reuse Preprocessed data from [BERTSUM Repo](https://github.com/nlpyang/BertSum)
```
# the data path used to download the preprocessed data from the BERTSUM repo.
# if you have already downloaded the dataset, change the code to use that path instead.
PROCESSED_DATA_PATH = TemporaryDirectory().name
#data_path = "./temp_data5/"
#PROCESSED_DATA_PATH = data_path
if USE_PREPROCSSED_DATA:
CNNDMBertSumProcessedData.download(local_path=PROCESSED_DATA_PATH)
train_dataset, test_dataset = ExtSumProcessedData().splits(root=PROCESSED_DATA_PATH)
```
### Model training
To start model training, we need to create an instance of ExtractiveSummarizer.
#### Choose the transformer model.
Currently ExtractiveSummarizer supports two models:
- distilbert-base-uncased
- bert-base-uncased
Potentially, RoBERTa-based models and XLNet could also be supported, but this needs to be tested.
#### Choose the encoder algorithm.
There are four options:
- baseline: uses a smaller transformer model in place of the BERT model, together with a transformer summarization layer
- classifier: uses pretrained BERT and fine-tunes it with a **simple logistic classification** summarization layer
- transformer: uses pretrained BERT and fine-tunes it with a **transformer** summarization layer
- RNN: uses pretrained BERT and fine-tunes it with an **LSTM** summarization layer
```
# notebook parameters
# the cache data path during fine tuning
CACHE_DIR = TemporaryDirectory().name
# batch size, unit is the number of tokens
BATCH_SIZE = 3000
# number of GPUs used for training
NUM_GPUS = 2
# Encoder name. Options are: baseline, classifier, transformer, rnn.
ENCODER = "transformer"
# Learning rate
LEARNING_RATE=2e-3
# How often the statistics reports show up in training, unit is step.
REPORT_EVERY=100
# total number of steps for training
MAX_STEPS=1e3
# number of steps for warm up
WARMUP_STEPS=5e2
if not QUICK_RUN:
MAX_STEPS=5e4
WARMUP_STEPS=5e3
summarizer = ExtractiveSummarizer(MODEL_NAME, ENCODER, CACHE_DIR)
summarizer.fit(
train_dataset,
num_gpus=NUM_GPUS,
batch_size=BATCH_SIZE,
gradient_accumulation_steps=2,
max_steps=MAX_STEPS,
learning_rate=LEARNING_RATE,
warmup_steps=WARMUP_STEPS,
verbose=True,
report_every=REPORT_EVERY,
clip_grad_norm=False,
)
summarizer.save_model("extsum_modelname_{0}_usepreprocess{1}_steps_{2}.pt".format(MODEL_NAME, USE_PREPROCSSED_DATA, MAX_STEPS))
# for loading a previous saved model
# import torch
# summarizer.model = torch.load("cnndm_transformersum_distilbert-base-uncased_bertsum_processed_data.pt")
```
### Model Evaluation
[ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric)), or Recall-Oriented Understudy for Gisting Evaluation, is commonly used for evaluating text summarization.
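To build intuition for what the scores below mean, here is a toy ROUGE-2 recall (bigram overlap) computation. This notebook itself uses the official ROUGE-1.5.5 Perl implementation via `pyrouge`, which handles many details this sketch ignores.

```python
# Toy ROUGE-2 recall: the fraction of reference bigrams that also appear
# in the candidate summary. For intuition only.
def bigrams(text):
    words = text.split()
    return [tuple(words[i:i + 2]) for i in range(len(words) - 1)]

def rouge2_recall(candidate, reference):
    ref = bigrams(reference)
    if not ref:
        return 0.0
    cand = set(bigrams(candidate))
    overlap = sum(1 for bg in ref if bg in cand)
    return overlap / len(ref)

score = rouge2_recall("the cat sat on the mat", "the cat sat down")
```

Two of the reference's three bigrams appear in the candidate, so the recall is 2/3.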
```
target = [i['tgt_txt'] for i in test_dataset]
len(target)
test_dataset[0].keys()
%%time
prediction = summarizer.predict(test_dataset, num_gpus=NUM_GPUS, batch_size=128)
len(prediction)
RESULT_DIR = TemporaryDirectory().name
rouge_score = get_rouge(prediction, target, RESULT_DIR)
test_dataset[0]['tgt_txt']
prediction[0]
test_dataset[0]['src_txt']
# for testing
sb.glue("rouge_2_f_score", rouge_score['rouge_2_f_score'])
```
## Clean up temporary folders
```
if os.path.exists(DATA_PATH):
shutil.rmtree(DATA_PATH, ignore_errors=True)
if os.path.exists(PROCESSED_DATA_PATH):
shutil.rmtree(PROCESSED_DATA_PATH, ignore_errors=True)
if os.path.exists(CACHE_DIR):
shutil.rmtree(CACHE_DIR, ignore_errors=True)
if os.path.exists(RESULT_DIR):
shutil.rmtree(RESULT_DIR, ignore_errors=True)
```
```
import speech_recognition as sr
from pydub import AudioSegment
import os
from pydub import AudioSegment
from pydub.silence import split_on_silence
# convert mp3 file to wav
# src=("C:\\Users\\pyjpa\\Desktop\\22.mp3")
# sound = AudioSegment.from_mp3(src)
# sound.export("C:\\Users\\pyjpa\\Desktop\\22.flac", format="flac")
file_audio = sr.AudioFile("C:\\Users\\pyjpa\\Desktop\\22.flac")
# use the audio file as the audio source
r = sr.Recognizer()
with file_audio as source:
audio_text = r.record(source)
audio_text
a = r.recognize_sphinx(audio_text, language="zh-CN")
a
def silence_based_conversion(path="alice-medium.wav"):
song = AudioSegment.from_wav(path)
fh = open("recognized.txt", "w")
chunks = split_on_silence(song,
# must be silent for at least 0.5 seconds
# or 500 ms. adjust this value based on user
# requirement. if the speaker stays silent for
# longer, increase this value. else, decrease it.
min_silence_len = 500,
# consider it silent if quieter than -25 dBFS
# adjust this per requirement
silence_thresh = -25
)
# create a directory to store the audio chunks.
try:
os.mkdir('audio_chunks')
except(FileExistsError):
pass
os.chdir('audio_chunks')
print(chunks)
i = 0
# process each chunk
for chunk in chunks:
# create a short silence chunk (10 ms, matching the
# duration below)
chunk_silent = AudioSegment.silent(duration = 10)
# pad the beginning and end of the audio chunk with
# silence so it doesn't sound abruptly sliced
audio_chunk = chunk_silent + chunk + chunk_silent
# export audio chunk and save it in
# the current directory.
print("saving chunk{0}.wav".format(i))
# specify the bitrate to be 192 k
audio_chunk.export("./chunk{0}.wav".format(i), bitrate ='192k', format ="wav")
# the name of the newly created chunk
filename = 'chunk'+str(i)+'.wav'
print("Processing chunk "+str(i))
# get the name of the newly created chunk
# in the AUDIO_FILE variable for later use.
file = filename
# create a speech recognition object
r = sr.Recognizer()
# recognize the chunk
with sr.AudioFile(file) as source:
# remove this if it is not working
# correctly.
r.adjust_for_ambient_noise(source)
audio_listened = r.listen(source)
try:
# try converting it to text
rec = r.recognize_sphinx(audio_listened)
# write the output to the file.
fh.write(rec+". ")
# catch any errors.
except sr.UnknownValueError:
print("Could not understand audio")
except sr.RequestError as e:
print("Could not request results. check your internet connection")
i += 1
os.chdir('..')
path = "C:\\Users\\pyjpa\\Desktop\\examples_chinese.flac"
silence_based_conversion(path)
```
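The split-on-silence idea above can be reduced to a pure-Python sketch over per-frame loudness values (in dBFS): contiguous runs of frames louder than the threshold become chunks. pydub's `split_on_silence` does this on real audio; the frame values here are made up.

```python
# Sketch of silence-based splitting: scan loudness values and emit runs of
# consecutive frames louder than silence_thresh.
def split_loud_runs(frames_dbfs, silence_thresh=-25):
    runs, current = [], []
    for level in frames_dbfs:
        if level > silence_thresh:
            current.append(level)   # still inside a loud run
        elif current:
            runs.append(current)    # silence ends the current run
            current = []
    if current:
        runs.append(current)
    return runs

runs = split_loud_runs([-40, -10, -12, -40, -40, -8, -40])
```

The silent -40 dBFS frames separate the input into two loud chunks.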
# Variational Autoencoders (Toy dataset)
Skeleton code from https://github.com/tudor-berariu/ann2018
## 1. Miscellaneous
```
import torch
from torch import Tensor
assert torch.cuda.is_available()
import matplotlib.pyplot as plt
from math import ceil
def show_images(X: torch.Tensor, nrows=3):
ncols = int(ceil(len(X) / nrows))
ratio = nrows / ncols
fig, axs = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 10 * ratio))
for idx, img in enumerate(X):
r, c = idx // ncols, idx % ncols
axs[r][c].imshow(img[0].numpy(), aspect='equal', vmin=0, vmax=1, cmap='binary')
for row_axs in axs:
for ax in row_axs:
ax.set_aspect('equal', 'box')
ax.set_yticklabels([])
ax.set_xticklabels([])
fig.tight_layout()
```
## 2. Our dataset
```
def get_dataset(n, idxs):
X = torch.randn(n * 16) * .1
X[idxs] += 1
X = (X - X.min()) / (X.max() - X.min())
X.clamp_(0, 1)
X = X.reshape(n, 1, 4, 4)
return X
n = 15
idxs = [2, 6, 8, 9, 10, 11, 14, 17, 21, 24, 25, 26, 27, 29, 35, 39, 43, 44, 45,
46, 47, 48, 49, 50, 51, 52, 56, 60, 64, 68, 69, 70, 71, 72, 76, 80, 81,
82, 83, 84, 88, 92, 98, 102, 104, 105, 106, 107, 110, 112, 113, 114,
115, 116, 120, 124, 131, 135, 139, 140, 141, 142, 143, 147, 151, 155,
156, 157, 158, 159, 162, 166, 168, 169, 170, 171, 174, 178, 182, 186,
188, 189, 190, 191, 193, 196, 197, 198, 199, 201, 205, 209, 212, 213,
214, 215, 217, 221, 225, 228, 229, 230, 231, 233, 237]
X = get_dataset(n, idxs)
show_images(X)
print(X.shape)
```
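`get_dataset` above rescales the noisy tensor into [0, 1] by min-max normalization before clamping. The same operation on a plain list makes the behavior easy to check:

```python
# Min-max normalization to [0, 1]: subtract the minimum, divide by the range.
def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

scaled = min_max([2.0, 4.0, 6.0])
```

The smallest value maps to 0, the largest to 1, and the rest land proportionally in between.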
## 3. The Variational Auto-encoder
The encoder computes $q_{\phi}\left(z \mid x\right)$ predicting:
- $\mu_{\phi}\left(x\right)$ and
- $\log \sigma_{\phi}^2\left(x\right)$.
The decoder computes $p_{\theta}\left(x \mid z\right)$.
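The reparametrization trick needed for TODO 1 in the cell below can be shown in isolation. Plain Python with `random.gauss` stands in for `torch.randn_like` here; the point is that a sample $z \sim \mathcal{N}(\mu, \sigma^2)$ is rewritten as $z = \mu + \sigma \epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$, so gradients can flow through $\mu$ and $\log \sigma^2$.

```python
import math
import random

# Reparametrization trick: sample eps ~ N(0, 1), then shift and scale it.
def reparametrize(mean, log_var, rng):
    std = math.exp(log_var / 2)   # log-variance -> standard deviation
    eps = rng.gauss(0.0, 1.0)
    return mean + eps * std

# With a vanishingly small variance, the sample collapses onto the mean.
z = reparametrize(mean=1.0, log_var=-100.0, rng=random.Random(0))
```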
```
import torch.nn as nn
import torch.nn.functional as F
class VAE(nn.Module):
def __init__(self, nz: int = 1) -> None:
super(VAE, self).__init__()
self.nz = nz # The number of dimensions in the latent space
self.encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU())
self.mean = nn.Linear(64, nz) # predicts the mean of p(z|x)
self.log_var = nn.Linear(64, nz) # predicts the log-variance of p(z|x)
self.decoder = nn.Sequential(nn.Linear(nz, 64), nn.ReLU(),
nn.Linear(64, 16))
def forward(self, x):
x = x.view(-1, 16) # Drop this if you use convolutional encoders
# Encoding x into mu, and log-var of p(z|x)
x = self.encoder(x)
mean = self.mean(x)
log_var = self.log_var(x)
# ----------------------------------------------------------------
# TODO 1: compute z = (eps * std) + mean (reparametrization trick)
std = torch.exp(log_var / 2)
eps = torch.randn_like(std)
noise = eps * std + mean
# ----------------------------------------------------------------
# Decoding z into p(x|z)
x = self.decoder(noise)
x = torch.sigmoid(x)
return x.view(-1, 1, 4, 4), mean, log_var
def generate(self, nsamples: int = None, noise: Tensor = None) -> Tensor:
# Generate some data
with torch.no_grad():
if noise is None:
noise = torch.randn(nsamples, self.nz)
x = self.decoder(noise)
x = torch.sigmoid(x)
return x.view(-1, 1, 4, 4)
```
## 4. Training the model
The optimization criterion has two components.
- the KL divergence between $q_{\phi}\left(z \mid x\right)$ and $p\left(z\right)$
* both are diagonal gaussians, therefore we have a simple formula for the KL divergence: [wiki](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Examples)
- the reconstruction loss computed using the [binary cross entropy](https://pytorch.org/docs/stable/nn.html#binary-cross-entropy)
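Because both distributions in the KL term are diagonal Gaussians, with $p(z) = \mathcal{N}(0, I)$ and $n_z$ latent dimensions the divergence has the closed form

```latex
D_{KL}\!\left(q_{\phi}(z \mid x) \,\|\, p(z)\right)
  = \frac{1}{2} \sum_{j=1}^{n_z} \left( \sigma_j^2 + \mu_j^2 - \log \sigma_j^2 - 1 \right)
```

which is exactly the `kl` expression computed in the training loop below.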
```
import torch.optim as optim
import numpy as np
def train(vae: VAE, X: torch.Tensor, nsteps: int = 200000):
bce_trace, kl_trace = [], []
optimizer = optim.Adam(vae.parameters(), lr=.001)
for step in range(nsteps):
optimizer.zero_grad()
rec, mean, log_var = vae(X + torch.randn_like(X) * .05)
# -----------------------------------------------
# TODO 2: compute the two losses (do not average)
std = torch.exp(log_var / 2)
bce = F.binary_cross_entropy(rec, X, reduction='sum')
kl = 0.5 * torch.sum(std ** 2 + mean ** 2 - log_var - 1)
# -----------------------------------------------
(bce + kl).backward()
optimizer.step()
# bookkeeping for progress display
bce_trace.append(bce.item())
kl_trace.append(kl.item())
if (step + 1) % 100 == 0:
print(f"\rStep {step + 1:d}: BCE={np.mean(bce_trace):7.5f} "
f"KL={np.mean(kl_trace):7.5f}", end="")
bce_trace.clear()
kl_trace.clear()
if (step + 1) % 2500 == 0:
print("")
%%time
vae = VAE()
train(vae, X)
```
## 5. Evaluating the model
### 5.1 Reconstructions
```
with torch.no_grad():
recon, _, _ = vae(X)
show_images(recon)
```
### 5.2 Samples from the model
```
X_gen = vae.generate(nsamples=15)
show_images(X_gen)
```
### 5.3 Walk the latent space :)
```
N = 36
noise = torch.linspace(-2, 2, N).unsqueeze(1)
X_gen = vae.generate(noise=noise)
show_images(X_gen, nrows=6)
```
# Parameter identification example
Here is a simple toy model that we use to demonstrate the working of the inference package
$\emptyset \xrightarrow[]{k_1(I)} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$
$ k_1(I) = \frac{k_1 I^2}{K_R^2 + I^2}$
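With the parameter values used later in this notebook (k1 = 50, KR = 20, n = 2), the Hill production rate can be checked directly:

```python
# Hill activation rate k1(I) = k1 * I^n / (KR^n + I^n), with the notebook's
# parameter values as defaults.
def hill_rate(I, k1=50.0, KR=20.0, n=2):
    return k1 * I**n / (KR**n + I**n)

rate = hill_rate(20.0)  # at I = KR the rate is half-maximal
```

The rate is 0 with no inducer, half-maximal at I = KR, and saturates toward k1 for large I.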
```
%matplotlib inline
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
%matplotlib inline
import bioscrape as bs
from bioscrape.types import Model
from bioscrape.simulator import py_simulate_model
import numpy as np
import pylab as plt
import pandas as pd
species = ['I','X']
reactions = [(['X'], [], 'massaction', {'k':'d1'}), ([], ['X'], 'hillpositive', {'s1':'I', 'k':'k1', 'K':'KR', 'n':2})]
k1 = 50.0
d1 = 0.5
params = [('k1', k1), ('d1', d1), ('KR', 20)]
initial_condition = {'X':0, 'I':0}
M = Model(species = species, reactions = reactions, parameters = params,
initial_condition_dict = initial_condition)
```
# Generate experimental data for multiple initial conditions
1. Simulate bioscrape model
2. Add Gaussian noise of non-zero mean and non-zero variance to the simulation
3. Create appropriate Pandas dataframes
4. Write the data to a CSV file
```
num_trajectories = 4 # each with different initial condition
initial_condition_list = [{'I':5},{'I':10},{'I':15},{'I':20}]
timepoints = np.linspace(0,5,100)
result_list = []
for init_cond in initial_condition_list:
M.set_species(init_cond)
result = py_simulate_model(timepoints, Model = M)['X']
result_list.append(result)
plt.plot(timepoints, result, label = 'I =' + str(list(init_cond.values())[0]))
plt.xlabel('Time')
plt.ylabel('[X]')
plt.legend()
plt.show()
exp_data = pd.DataFrame()
exp_data['timepoints'] = timepoints
for i in range(num_trajectories):
exp_data['X' + str(i)] = result_list[i] + np.random.normal(5, 1, size = np.shape(result))
plt.plot(timepoints, exp_data['X' + str(i)], 'r', alpha = 0.3)
plt.plot(timepoints, result_list[i], 'k', linewidth = 3)
plt.xlabel('Time')
plt.ylabel('[X]')
plt.show()
```
## CSV looks like:
```
exp_data.to_csv('../data/birth_death_data_multiple_conditions.csv')
exp_data
```
# Run the bioscrape MCMC algorithm to identify parameters from the experimental data
```
from bioscrape.inference import py_inference
# Import data from CSV
# Import a CSV file for each experiment run
exp_data = []
for i in range(num_trajectories):
df = pd.read_csv('../data/birth_death_data_multiple_conditions.csv', usecols = ['timepoints', 'X'+str(i)])
df.columns = ['timepoints', 'X']
exp_data.append(df)
prior = {'k1' : ['uniform', 0, 100]}
sampler, pid = py_inference(Model = M, exp_data = exp_data, measurements = ['X'], time_column = ['timepoints'],
nwalkers = 15, init_seed = 0.15, nsteps = 5000, sim_type = 'stochastic',
params_to_estimate = ['k1'], prior = prior, plot_show = False, convergence_check = False)
pid.plot_mcmc_results(sampler, convergence_check = False);
```
### Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis.
# OR
### You can also plot the results as follows
```
M_fit = M
timepoints = pid.timepoints[0]
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
inds = np.random.randint(len(flat_samples), size=200)
for init_cond in initial_condition_list:
for ind in inds:
sample = flat_samples[ind]
for pi, pi_val in zip(pid.params_to_estimate, sample):
M_fit.set_parameter(pi, pi_val)
M_fit.set_species(init_cond)
plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.6)
# plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0)
for i in range(num_trajectories):
plt.plot(timepoints, list(pid.exp_data[i]['X']), 'b', alpha = 0.1)
plt.plot(timepoints, result, "k", label="original model")
plt.legend(fontsize=14)
plt.xlabel("Time")
plt.ylabel("[X]");
```
## Let us now try to fit all three parameters to see if results improve:
```
# prior = {'d1' : ['gaussian', 0, 10, 1e-3], 'k1' : ['gaussian', 0, 50, 1e-4]}
prior = {'d1' : ['uniform', 0.1, 10],'k1' : ['uniform',0,100],'KR' : ['uniform',0,100]}
sampler, pid = py_inference(Model = M, exp_data = exp_data, measurements = ['X'], time_column = ['timepoints'],
nwalkers = 15, init_seed = 0.15, nsteps = 10000, sim_type = 'stochastic',
params_to_estimate = ['d1','k1','KR'], prior = prior, plot_show = True, convergence_check = False)
M_fit = M
timepoints = pid.timepoints[0]
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
inds = np.random.randint(len(flat_samples), size=200)
for init_cond in initial_condition_list:
for ind in inds:
sample = flat_samples[ind]
for pi, pi_val in zip(pid.params_to_estimate, sample):
M_fit.set_parameter(pi, pi_val)
M_fit.set_species(init_cond)
plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.6)
# plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0)
for i in range(num_trajectories):
plt.plot(timepoints, list(pid.exp_data[i]['X']), 'b', alpha = 0.2)
plt.plot(timepoints, result_list[i], "k")
# plt.legend(fontsize=14)
plt.xlabel("Time")
plt.ylabel("[X]");
```
## All the methods above have other advanced options that you can use. Refer to the Parameter Identification Tools and Advanced Examples notebook for more details. Many other tools are available, such as support for multiple initial conditions and per-trajectory timepoints, and various options for the estimator.
##### Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# How to build a simple text classifier with TF-Hub
<table align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/text_classification_with_tf_hub.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab
</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/text_classification_with_tf_hub.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td></table>
TF-Hub is a platform to share machine learning expertise packaged in reusable resources, notably pre-trained **modules**. This tutorial is organized into two main parts.
**_Introduction:_ Training a text classifier with TF-Hub**
We will use a TF-Hub text embedding module to train a simple sentiment classifier with a reasonable baseline accuracy. We will then analyze the predictions to make sure our model is reasonable and propose improvements to increase the accuracy.
**_Advanced:_ Transfer learning analysis**
In this section, we will use various TF-Hub modules to compare their effect on the accuracy of the estimator and demonstrate advantages and pitfalls of transfer learning.
## Optional prerequisites
* Basic understanding of Tensorflow [premade estimator framework](https://www.tensorflow.org/get_started/premade_estimators).
* Familiarity with [Pandas](https://pandas.pydata.org/) library.
## Preparing the environment
```
# Install the latest Tensorflow version.
!pip install --quiet "tensorflow>=1.7"
# Install TF-Hub.
!pip install tensorflow-hub
!pip install seaborn
```
More detailed information about installing Tensorflow can be found at [https://www.tensorflow.org/install/](https://www.tensorflow.org/install/).
```
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
```
# Getting started
## Data
We will try to solve the [Large Movie Review Dataset v1.0](http://ai.stanford.edu/~amaas/data/sentiment/) task from Maas et al. The dataset consists of IMDB movie reviews labeled by positivity from 1 to 10. The task is to label the reviews as **negative** or **positive**.
```
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
# Reduce logging output.
tf.logging.set_verbosity(tf.logging.ERROR)
train_df, test_df = download_and_load_datasets()
train_df.head()
```
## Model
### Input functions
[Estimator framework](https://www.tensorflow.org/get_started/premade_estimators#overview_of_programming_with_estimators) provides [input functions](https://www.tensorflow.org/api_docs/python/tf/estimator/inputs/pandas_input_fn) that wrap Pandas dataframes.
```
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], num_epochs=None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], shuffle=False)
# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(
test_df, test_df["polarity"], shuffle=False)
```
### Feature columns
TF-Hub provides a [feature column](https://github.com/tensorflow/hub/blob/master/docs/api_docs/python/hub/text_embedding_column.md) that applies a module on the given text feature and passes further the outputs of the module. In this tutorial we will be using the [nnlm-en-dim128 module](https://tfhub.dev/google/nnlm-en-dim128/1). For the purpose of this tutorial, the most important facts are:
* The module takes **a batch of sentences in a 1-D tensor of strings** as input.
* The module is responsible for **preprocessing of sentences** (e.g. removal of punctuation and splitting on spaces).
* The module works with any input (e.g. **nnlm-en-dim128** hashes words not present in the vocabulary into ~20,000 buckets).
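The out-of-vocabulary hashing mentioned above can be illustrated with a toy sketch: unknown words are deterministically mapped into a fixed number of buckets, so any input string gets an embedding. The hash function here is made up, not TF-Hub's actual one.

```python
# Toy illustration of the OOV hashing trick: a deterministic string hash
# maps any word into one of a fixed number of embedding buckets.
NUM_BUCKETS = 20000

def oov_bucket(word, num_buckets=NUM_BUCKETS):
    h = 0
    for ch in word:
        h = (h * 31 + ord(ch)) % num_buckets
    return h

bucket = oov_bucket("flibbertigibbet")
```

Because the hash is deterministic, the same unseen word always lands in the same bucket and therefore always gets the same embedding.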
```
embedded_text_feature_column = hub.text_embedding_column(
key="sentence",
module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")
```
### Estimator
For classification we can use a [DNN Classifier](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier) (note further remarks about different modelling of the label function at the end of the tutorial).
```
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.003))
```
### Training
Train the estimator for a reasonable amount of steps.
```
# Training for 1,000 steps means 128,000 training examples with the default
# batch size. This is roughly equivalent to 5 epochs since the training dataset
# contains 25,000 examples.
estimator.train(input_fn=train_input_fn, steps=1000);
```
# Prediction
Run predictions for both training and test set.
```
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
print("Training set accuracy: {accuracy}".format(**train_eval_result))
print("Test set accuracy: {accuracy}".format(**test_eval_result))
```
## Confusion matrix
We can visually check the confusion matrix to understand the distribution of misclassifications.
```
def get_predictions(estimator, input_fn):
return [x["class_ids"][0] for x in estimator.predict(input_fn=input_fn)]
LABELS = [
"negative", "positive"
]
# Create a confusion matrix on training data.
with tf.Graph().as_default():
cm = tf.confusion_matrix(train_df["polarity"],
get_predictions(estimator, predict_train_input_fn))
with tf.Session() as session:
cm_out = session.run(cm)
# Normalize the confusion matrix so that each row sums to 1.
cm_out = cm_out.astype(float) / cm_out.sum(axis=1)[:, np.newaxis]
sns.heatmap(cm_out, annot=True, xticklabels=LABELS, yticklabels=LABELS);
plt.xlabel("Predicted");
plt.ylabel("True");
```
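The row normalization used above generalizes to any confusion matrix; on plain nested lists it looks like this:

```python
# Normalize each row of a matrix so it sums to 1, turning raw counts into
# per-class fractions (rows with a zero sum are left unchanged).
def normalize_rows(matrix):
    out = []
    for row in matrix:
        total = sum(row)
        out.append([v / total for v in row] if total else list(row))
    return out

cm = normalize_rows([[8, 2], [1, 3]])
```

Each row now reads as the fraction of that true class assigned to each predicted class.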
# Further improvements
1. **Regression on sentiment**: we used a classifier to assign each example into a polarity class. But we actually have another categorical feature at our disposal - sentiment. Here classes actually represent a scale and the underlying value (positive/negative) could be well mapped into a continuous range. We could make use of this property by computing a regression ([DNN Regressor](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/DNNRegressor)) instead of a classification ([DNN Classifier](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/DNNClassifier)).
2. **Larger module**: for the purposes of this tutorial we used a small module to restrict the memory use. There are modules with larger vocabularies and larger embedding space that could give additional accuracy points.
3. **Parameter tuning**: we can improve the accuracy by tuning the meta-parameters like the learning rate or the number of steps, especially if we use a different module. A validation set is very important if we want to get any reasonable results, because it is very easy to set-up a model that learns to predict the training data without generalizing well to the test set.
4. **More complex model**: we used a module that computes a sentence embedding by embedding each individual word and then combining them with average. One could also use a sequential module (e.g. [Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/1) module) to better capture the nature of sentences. Or an ensemble of two or more TF-Hub modules.
5. **Regularization**: to prevent overfitting, we could try to use an optimizer that does some sort of regularization, for example [Proximal Adagrad Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/ProximalAdagradOptimizer).
# Advanced: Transfer learning analysis
Transfer learning makes it possible to **save training resources** and to achieve good model generalization even when **training on a small dataset**. In this part, we will demonstrate this by training with two different TF-Hub modules:
* **[nnlm-en-dim128](https://tfhub.dev/google/nnlm-en-dim128/1)** - pretrained text embedding module,
* **[random-nnlm-en-dim128](https://tfhub.dev/google/random-nnlm-en-dim128/1)** - text embedding module that has same vocabulary and network as **nnlm-en-dim128**, but the weights were just randomly initialized and never trained on real data.
And by training in two modes:
* training **only the classifier** (i.e. freezing the module), and
* training the **classifier together with the module**.
Let's run a couple of trainings and evaluations to see how using various modules can affect the accuracy.
```
def train_and_evaluate_with_module(hub_module, train_module=False):
embedded_text_feature_column = hub.text_embedding_column(
key="sentence", module_spec=hub_module, trainable=train_module)
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.003))
estimator.train(input_fn=train_input_fn, steps=1000)
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
training_set_accuracy = train_eval_result["accuracy"]
test_set_accuracy = test_eval_result["accuracy"]
return {
"Training accuracy": training_set_accuracy,
"Test accuracy": test_set_accuracy
}
results = {}
results["nnlm-en-dim128"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/nnlm-en-dim128/1")
results["nnlm-en-dim128-with-module-training"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/nnlm-en-dim128/1", True)
results["random-nnlm-en-dim128"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/random-nnlm-en-dim128/1")
results["random-nnlm-en-dim128-with-module-training"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/random-nnlm-en-dim128/1", True)
```
Let's look at the results.
```
pd.DataFrame.from_dict(results, orient="index")
```
We can already see some patterns, but first we should establish the baseline accuracy of the test set - the lower bound that can be achieved by outputting only the label of the most represented class:
```
estimator.evaluate(input_fn=predict_test_input_fn)["accuracy_baseline"]
```
Assigning the most represented class will give us an accuracy of **50%**. There are a couple of things to notice here:
1. Maybe surprisingly, **a model can still be learned on top of fixed, random embeddings**. The reason is that even if every word in the dictionary is mapped to a random vector, the estimator can separate the space purely using its fully connected layers.
2. Allowing training of the module with **random embeddings** increases both training and test accuracy as opposed to training just the classifier.
3. Training of the module with **pre-trained embeddings** also increases both accuracies. Note however the overfitting on the training set. Training a pre-trained module can be dangerous even with regularization in the sense that the embedding weights no longer represent the language model trained on diverse data, instead they converge to the ideal representation of the new dataset.
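The majority-class baseline used above can also be computed by hand. This is an illustrative plain-Python sketch (the label list below is made up, not the IMDB data):

```python
from collections import Counter

def accuracy_baseline(labels):
    """Accuracy obtained by always predicting the most common label."""
    counts = Counter(labels)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(labels)

# A balanced binary test set gives a 50% baseline, matching the output above.
labels = [0, 1] * 500
print(accuracy_baseline(labels))  # -> 0.5
```

For an imbalanced set the baseline rises accordingly, e.g. `accuracy_baseline([0, 0, 0, 1])` returns `0.75`, which is why establishing this lower bound matters before interpreting model accuracy.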
# Datasets - Reduced data, IRFs, models
## Introduction
`gammapy.datasets` are a crucial part of the Gammapy API. `datasets` constitute `DL4` data - binned counts, IRFs, models and the associated likelihoods. `Datasets` form the end product of the `makers` stage, see [makers notebook](makers.ipynb), and are passed on to the `Fit` or estimator classes for modelling and fitting purposes.
To find the different types of `Dataset` that are supported see [Datasets home](../../datasets/index.rst#Types-of-supported-datasets)
## Setup
```
import numpy as np
import astropy.units as u
from astropy.time import Time
from regions import CircleSkyRegion
from astropy.coordinates import SkyCoord
from gammapy.datasets import (
MapDataset,
SpectrumDataset,
SpectrumDatasetOnOff,
Datasets,
FluxPointsDataset,
)
from gammapy.data import DataStore, GTI
from gammapy.maps import WcsGeom, RegionGeom, MapAxes, MapAxis, Map
from gammapy.modeling.models import (
SkyModel,
PowerLawSpectralModel,
FoVBackgroundModel,
)
from gammapy.estimators import FluxPoints
from gammapy.utils.scripts import make_path
%matplotlib inline
```
## MapDataset
The counts, exposure, background, masks, and IRF maps are bundled together in a data structure named `MapDataset`. While the `counts` and `background` maps are binned in reconstructed energy and must have the same geometry, the IRF maps can have a different spatial (coarsely binned and larger) geometry and spectral range (binned in true energies). It is usually recommended that the true energy axis be larger and more finely sampled than the reco energy axis.
### Creating an empty dataset
An empty `MapDataset` can be directly instantiated from any `WcsGeom` object:
```
energy_axis = MapAxis.from_energy_bounds(
1, 10, nbin=11, name="energy", unit="TeV"
)
geom = WcsGeom.create(
skydir=(83.63, 22.01),
axes=[energy_axis],
width=5 * u.deg,
binsz=0.05 * u.deg,
frame="icrs",
)
dataset_empty = MapDataset.create(geom=geom, name="my-dataset")
```
It is good practice to define a name for the dataset, such that you can identify it later by name. However if you define a name it **must** be unique. Now we can already print the dataset:
```
print(dataset_empty)
```
The printout shows the key summary information of the dataset, such as total counts, fit statistics, model information etc.
`MapDataset.from_geom` has additional keywords, that can be used to define the binning of the IRF related maps:
```
# choose a different true energy binning for the exposure, psf and edisp
energy_axis_true = MapAxis.from_energy_bounds(
0.1, 100, nbin=11, name="energy_true", unit="TeV", per_decade=True
)
# choose a different rad axis binning for the psf
rad_axis = MapAxis.from_bounds(0, 5, nbin=50, unit="deg", name="rad")
gti = GTI.create(0 * u.s, 1000 * u.s)
dataset_empty = MapDataset.create(
geom=geom,
energy_axis_true=energy_axis_true,
rad_axis=rad_axis,
binsz_irf=0.1,
gti=gti,
name="dataset-empty",
)
```
To see the geometry of each map, we can use:
```
dataset_empty.geoms
```
Another way to create a `MapDataset` is to just read an existing one from a FITS file:
```
dataset_cta = MapDataset.read(
"$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz", name="dataset-cta"
)
print(dataset_cta)
```
## Accessing contents of a dataset
To further explore the contents of a `Dataset`, you can use e.g. `.info_dict()`
```
# For a quick info, use
dataset_cta.info_dict()
# For a quick view, use
dataset_cta.peek()
```
And access individual maps like:
```
counts_image = dataset_cta.counts.sum_over_axes()
counts_image.smooth("0.1 deg").plot()
```
Of course you can also access IRF related maps, e.g. the psf as `PSFMap`:
```
dataset_cta.psf
```
And use any method on the `PSFMap` object:
```
dataset_cta.psf.plot_containment_radius_vs_energy()
edisp_kernel = dataset_cta.edisp.get_edisp_kernel()
edisp_kernel.plot_matrix()
```
The `MapDataset` typically also contains the information on the residual hadronic background, stored in `MapDataset.background` as a map:
```
dataset_cta.background
```
As a next step we define a minimal model on the dataset using the `.models` setter:
```
model = SkyModel.create("pl", "point", name="gc")
model.spatial_model.position = SkyCoord("0d", "0d", frame="galactic")
model_bkg = FoVBackgroundModel(dataset_name="dataset-cta")
dataset_cta.models = [model, model_bkg]
```
Assigning models to datasets is covered in more detail in the [Modeling notebook](model_management.ipynb). Printing the dataset will now show the model components:
```
print(dataset_cta)
```
Now we can use `.npred()` to get a map of the total predicted counts of the model:
```
npred = dataset_cta.npred()
npred.sum_over_axes().plot()
```
To get the predicted counts from an individual model component we can use:
```
npred_source = dataset_cta.npred_signal(model_name="gc")
npred_source.sum_over_axes().plot()
```
`MapDataset.background` contains the background map computed from the IRF. Internally it will be combined with a `FoVBackgroundModel`, to allow for adjusting the background model during a fit. To get the model-corrected background, one can use `dataset.npred_background()`.
```
npred_background = dataset_cta.npred_background()
npred_background.sum_over_axes().plot()
```
### Using masks
There are two masks that can be set on a `Dataset`, `mask_safe` and `mask_fit`.
- The `mask_safe` is computed during the data reduction process according to the specified selection cuts, and should not be changed by the user.
- During modelling and fitting, the user might want to additionally ignore some parts of a reduced dataset, e.g. to restrict the fit to a specific energy range or to ignore parts of the region of interest. This should be done by applying the `mask_fit`. To see details of applying masks, please refer to [Masks-for-fitting](mask_maps.ipynb#Masks-for-fitting:-mask_fit)
Both the `mask_fit` and `mask_safe` must have the same `geom` as the `counts` and `background` maps.
```
# eg: to see the safe data range
dataset_cta.mask_safe.plot_grid();
```
In addition it is possible to define a custom `mask_fit`:
```
# To apply a mask fit - in energy and space
region = CircleSkyRegion(SkyCoord("0d", "0d", frame="galactic"), 1.5 * u.deg)
geom = dataset_cta.counts.geom
mask_space = geom.region_mask([region])
mask_energy = geom.energy_mask(0.3 * u.TeV, 8 * u.TeV)
dataset_cta.mask_fit = mask_space & mask_energy
dataset_cta.mask_fit.plot_grid(vmin=0, vmax=1, add_cbar=True);
```
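The `&` used above is ordinary element-wise boolean logic. A toy numpy sketch of the idea (the arrays are made up; real masks are `Map` objects sharing the counts geometry):

```python
import numpy as np

# Toy 1D stand-ins for the spatial and energy masks.
mask_space = np.array([True, True, True, False])
mask_energy = np.array([True, False, True, True])

# Only bins that pass both masks enter the likelihood computation.
mask_fit = mask_space & mask_energy
print(mask_fit)  # [ True False  True False]
```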
To access the energy range defined by the mask you can use:
- `dataset.energy_range_safe` : energy range defined by the `mask_safe`
- `dataset.energy_range_fit` : energy range defined by the `mask_fit`
- `dataset.energy_range` : the final energy range used in likelihood computation
These methods return two maps, with the `min` and `max` energy values at each spatial pixel.
```
e_min, e_max = dataset_cta.energy_range
# To see the lower energy threshold at each point
e_min.plot(add_cbar=True)
# To see the upper energy threshold at each point
e_max.plot(add_cbar=True)
```
Just as for `Map` objects it is possible to cutout a whole `MapDataset`, which will perform the cutout for all maps in parallel. Optionally one can provide a new name to the resulting dataset:
```
cutout = dataset_cta.cutout(
position=SkyCoord("0d", "0d", frame="galactic"),
width=2 * u.deg,
name="cta-cutout",
)
cutout.counts.sum_over_axes().plot()
```
It is also possible to slice a `MapDataset` in energy:
```
sliced = dataset_cta.slice_by_energy(
energy_min=1 * u.TeV, energy_max=5 * u.TeV, name="slice-energy"
)
sliced.counts.plot_grid();
```
The same operation will be applied to all other maps contained in the datasets such as `mask_fit`:
```
sliced.mask_fit.plot_grid();
```
### Resampling datasets
It can often be useful to coarsely rebin an initially computed dataset by a specified factor. This can be done in either the spatial or energy axes:
```
downsampled = dataset_cta.downsample(factor=8)
downsampled.counts.sum_over_axes().plot()
```
And the same downsampling process is possible along the energy axis:
```
downsampled_energy = dataset_cta.downsample(
factor=5, axis_name="energy", name="downsampled-energy"
)
downsampled_energy.counts.plot_grid();
```
In the printout one can see that the actual number of counts is preserved during the downsampling:
```
print(downsampled_energy, dataset_cta)
```
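Count conservation under downsampling is just block-wise summation. A minimal numpy sketch of the idea (the array and factor are made up):

```python
import numpy as np

counts = np.arange(16).reshape(4, 4)

# Downsample by a factor of 2 by summing 2x2 blocks.
factor = 2
downsampled = counts.reshape(2, factor, 2, factor).sum(axis=(1, 3))

# The total number of counts is preserved, only the binning changes.
print(counts.sum(), downsampled.sum())  # 120 120
```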
We can also resample the finer binned datasets to an arbitrary coarser energy binning using:
```
energy_axis_new = MapAxis.from_energy_edges([0.1, 0.3, 1, 3, 10] * u.TeV)
resampled = dataset_cta.resample_energy_axis(energy_axis=energy_axis_new)
resampled.counts.plot_grid(ncols=2);
```
To squash the whole dataset into a single energy bin there is the `.to_image()` convenience method:
```
dataset_image = dataset_cta.to_image()
dataset_image.counts.plot()
```
## SpectrumDataset
`SpectrumDataset` inherits from `MapDataset` and is specially adapted for 1D spectral analysis: it uses a `RegionGeom` instead of a `WcsGeom`.
A `MapDataset` can be converted to a `SpectrumDataset` by summing the `counts` and `background` inside the `on_region`, which can then be used for classical spectral analysis. Containment correction is feasible only for circular regions.
```
region = CircleSkyRegion(
SkyCoord(0, 0, unit="deg", frame="galactic"), 0.5 * u.deg
)
spectrum_dataset = dataset_cta.to_spectrum_dataset(
region, containment_correction=True, name="spectrum-dataset"
)
# For a quick look
spectrum_dataset.peek();
```
A `MapDataset` can also be integrated over the `on_region` to create a `MapDataset` with a `RegionGeom`. Complex regions can be handled and since the full IRFs are used, containment correction is not required.
```
reg_dataset = dataset_cta.to_region_map_dataset(
region, name="region-map-dataset"
)
print(reg_dataset)
```
## FluxPointsDataset
`FluxPointsDataset` is a `Dataset` container for precomputed flux points, which can then be used in fitting.
`FluxPointsDataset` cannot be read directly, but should be read through `FluxPoints`, with an additional `SkyModel`. Similarly, `FluxPointsDataset.write` only saves the `data` component to disk.
```
flux_points = FluxPoints.read(
"$GAMMAPY_DATA/tests/spectrum/flux_points/diff_flux_points.fits"
)
model = SkyModel(spectral_model=PowerLawSpectralModel(index=2.3))
fp_dataset = FluxPointsDataset(data=flux_points, models=model)
fp_dataset.plot_spectrum()
```
The masks on `FluxPointsDataset` are `np.array` objects and the data is a `FluxPoints` object. The `mask_safe`, by default, masks the upper limit points.
```
fp_dataset.mask_safe # Note: the mask here is simply a numpy array
fp_dataset.data # is a `FluxPoints` object
fp_dataset.data_shape() # number of data points
```
For an example of fitting `FluxPoints`, see [flux point fitting](../analysis/1D/sed_fitting.ipynb); this can also be used for catalog objects, e.g. see the [catalog notebook](catalog.ipynb).
## Datasets
`Datasets` are a collection of `Dataset` objects. They can be of the same type, or of different types, e.g. a mix of `FluxPointsDataset`, `MapDataset` and `SpectrumDataset`.
For modelling and fitting of a list of `Dataset` objects, you can either
- Do a joint fitting of all the datasets together
- Stack the datasets together, and then fit them.
`Datasets` is a convenient tool to handle joint fitting of simultaneous datasets. As an example, please see the [joint fitting tutorial](../analysis/3D/analysis_mwl.ipynb)
To see how stacking is performed, please see [Implementation of stacking](../../datasets/index.html#stacking-multiple-datasets)
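The core idea of joint fitting can be sketched abstractly: the total fit statistic is the sum over datasets, evaluated with shared model parameters. A toy plain-Python illustration (the classes and statistic here are made up for illustration, not the Gammapy API):

```python
# Toy stand-in: each "dataset" exposes a stat_sum(params) -> float.
class ToyDataset:
    def __init__(self, data):
        self.data = data

    def stat_sum(self, mu):
        # Toy chi-square-like statistic against a single model parameter mu.
        return sum((d - mu) ** 2 for d in self.data)

def joint_stat(datasets, mu):
    # Joint fitting sums the fit statistics of all datasets,
    # evaluated with the same (shared) model parameters.
    return sum(d.stat_sum(mu) for d in datasets)

datasets = [ToyDataset([1.0, 2.0]), ToyDataset([3.0])]
print(joint_stat(datasets, 2.0))  # (1 + 0) + 1 = 2.0
```

A fit would then minimize `joint_stat` over `mu`, which is exactly what passing a `Datasets` collection to `Fit` achieves with real likelihoods.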
To create a `Datasets` object, pass a list of `Dataset` objects on init, e.g.
```
datasets = Datasets([dataset_empty, dataset_cta])
print(datasets)
```
If all the datasets have the same type we can also print an info table, collecting all the information from the individual calls to `Dataset.info_dict()`:
```
datasets.info_table() # quick info of all datasets
datasets.names # unique name of each dataset
```
We can access individual datasets in `Datasets` object by name:
```
datasets["dataset-empty"]  # access a dataset by its name
```
Or by index:
```
datasets[0]
```
Other list type operations work as well such as:
```
# Use python list convention to remove/add datasets, eg:
datasets.remove("dataset-empty")
datasets.names
```
Or
```
datasets.append(spectrum_dataset)
datasets.names
```
Let's create a list of spectrum datasets to illustrate some more functionality:
```
datasets = Datasets()
path = make_path("$GAMMAPY_DATA/joint-crab/spectra/hess")
for filename in path.glob("pha_*.fits"):
dataset = SpectrumDatasetOnOff.read(filename)
datasets.append(dataset)
print(datasets)
```
Now we can stack all datasets using `.stack_reduce()`:
```
stacked = datasets.stack_reduce(name="stacked")
print(stacked)
```
Or slice all datasets by a given energy range:
```
datasets_sliced = datasets.slice_by_energy(
energy_min="1 TeV", energy_max="10 TeV"
)
print(datasets_sliced.energy_ranges)
```
# Lesson 2 Exercise 1 Solution: Creating Normalized Tables
<img src="images/postgresSQLlogo.png" width="250" height="250">
## In this exercise we are going to walk through the basics of modeling data in normalized form. We will create tables in PostgreSQL, insert rows of data, and do simple JOIN SQL queries to show how these multiple tables can work together.
#### Import the library
Note: An error might pop up after this command has executed. If it does, read it carefully before ignoring it.
```
import psycopg2
```
#### Create a connection to the database, get a cursor, and set autocommit to true
```
try:
conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
except psycopg2.Error as e:
print("Error: Could not make connection to the Postgres database")
print(e)
try:
cur = conn.cursor()
except psycopg2.Error as e:
print("Error: Could not get cursor to the Database")
print(e)
conn.set_session(autocommit=True)
```
#### Let's imagine we have a table called Music Store.
`Table Name: music_store
column 0: Transaction Id
column 1: Customer Name
column 2: Cashier Name
column 3: Year
column 4: Albums Purchased`
## Now to translate this information into a Create Table Statement and insert the data
<img src="images/table12.png" width="650" height="650">
```
# Create the table and insert the data into it
try:
cur.execute("CREATE TABLE IF NOT EXISTS music_store (transaction_id int, \
customer_name varchar, cashier_name varchar, \
year int, albums_purchased text[]);")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
try:
cur.execute("INSERT INTO music_store (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(1, "Amanda", "Sam", 2000, ["Rubber Soul", "Let it Be"]))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO music_store (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(2, "Toby", "Sam", 2000, ["My Generation"]))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO music_store (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(3, "Max", "Bob", 2018, ["Meet the Beatles", "Help!"]))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("SELECT * FROM music_store;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
```
#### Moving to 1st Normal Form (1NF)
This data has not been normalized. To get this data into 1st normal form, we need to remove any collections or lists of data. We need to break up the list of albums into individual rows.
`Table Name: music_store
column 0: Transaction Id
column 1: Customer Name
column 2: Cashier Name
column 3: Year
column 4: Albums Purchased`
<img src="images/table13.png" width="650" height="650">
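Before doing this in SQL, the same 1NF transformation can be sketched in plain Python: each row holding a list of albums is expanded into one row per album (the data is copied from the table above):

```python
rows = [
    (1, "Amanda", "Sam", 2000, ["Rubber Soul", "Let it Be"]),
    (2, "Toby", "Sam", 2000, ["My Generation"]),
    (3, "Max", "Bob", 2018, ["Meet the Beatles", "Help!"]),
]

# 1NF: no column may hold a collection, so emit one row per album.
rows_1nf = [
    (tid, customer, cashier, year, album)
    for tid, customer, cashier, year, albums in rows
    for album in albums
]

for row in rows_1nf:
    print(row)
# 5 rows, each with a single album value
```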
```
try:
cur.execute("CREATE TABLE IF NOT EXISTS music_store2 (transaction_id int, \
customer_name varchar, cashier_name varchar, \
year int, albums_purchased text);")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
try:
cur.execute("INSERT INTO music_store2 (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(1, "Amanda", "Sam", 2000, "Rubber Soul"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO music_store2 (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(1, "Amanda", "Sam", 2000, "Let it Be"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO music_store2 (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(2, "Toby", "Sam", 2000, "My Generation"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO music_store2 (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(3, "Max", "Bob", 2018, "Help!"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO music_store2 (transaction_id, customer_name, cashier_name, year, albums_purchased) \
VALUES (%s, %s, %s, %s, %s)", \
(3, "Max", "Bob", 2018, "Meet the Beatles"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("SELECT * FROM music_store2;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
```
#### Moving to 2nd Normal Form (2NF)
We have moved our data to be in 1NF which is the first step in moving to 2nd Normal Form. Our table is not yet in 2nd Normal Form. While each of our records in our table is unique, our Primary key (transaction id) is not unique. We need to break this up into two tables, transactions and albums sold.
`Table Name: transactions
column 0: Transaction ID
column 1: Customer Name
column 2: Cashier Name
column 3: Year `
`Table Name: albums_sold
column 0: Album Id
column 1: Transaction Id
column 2: Album Name`
<img src="images/table14.png" width="450" height="450"> <img src="images/table15.png" width="450" height="450">
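The split into two tables can likewise be sketched in plain Python: facts determined by the transaction alone go into `transactions`, and each album gets its own surrogate key (data taken from the tables above):

```python
rows_1nf = [
    (1, "Amanda", "Sam", 2000, "Rubber Soul"),
    (1, "Amanda", "Sam", 2000, "Let it Be"),
    (2, "Toby", "Sam", 2000, "My Generation"),
    (3, "Max", "Bob", 2018, "Meet the Beatles"),
    (3, "Max", "Bob", 2018, "Help!"),
]

# Transaction-level facts, deduplicated on the transaction id.
transactions = sorted({(tid, cust, cash, year)
                       for tid, cust, cash, year, _ in rows_1nf})

# One row per album, with a new surrogate key album_id.
albums_sold = [(album_id, tid, album)
               for album_id, (tid, _, _, _, album)
               in enumerate(rows_1nf, start=1)]

print(transactions)  # 3 unique transactions
print(albums_sold)   # 5 albums, each pointing at a transaction
```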
```
# We create two new tables transactions and albums sold and insert data into these tables
try:
cur.execute("CREATE TABLE IF NOT EXISTS transactions (transaction_id int, \
customer_name varchar, cashier_name varchar, \
year int);")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
try:
cur.execute("CREATE TABLE IF NOT EXISTS albums_sold (album_id int, transaction_id int, \
album_name varchar);")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
try:
cur.execute("INSERT INTO transactions (transaction_id, customer_name, cashier_name, year) \
VALUES (%s, %s, %s, %s)", \
(1, "Amanda", "Sam", 2000))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO transactions (transaction_id, customer_name, cashier_name, year) \
VALUES (%s, %s, %s, %s)", \
(2, "Toby", "Sam", 2000))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO transactions (transaction_id, customer_name, cashier_name, year) \
VALUES (%s, %s, %s, %s)", \
(3, "Max", "Bob", 2018))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \
VALUES (%s, %s, %s)", \
(1, 1, "Rubber Soul"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \
VALUES (%s, %s, %s)", \
(2, 1, "Let it Be"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \
VALUES (%s, %s, %s)", \
(3, 2, "My Generation"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \
VALUES (%s, %s, %s)", \
(4, 3, "Meet the Beatles"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \
VALUES (%s, %s, %s)", \
(5, 3, "Help!"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
print("Table: transactions\n")
try:
cur.execute("SELECT * FROM transactions;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
print("\nTable: albums_sold\n")
try:
cur.execute("SELECT * FROM albums_sold;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
```
#### Let's do a `JOIN` on this table so we can get all the information we had in our first Table.
```
# We complete the join on the transactions and album_sold tables
try:
cur.execute("SELECT * FROM transactions JOIN albums_sold ON transactions.transaction_id = albums_sold.transaction_id ;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
```
#### Moving to 3rd Normal Form (3NF)
Let's check our tables for any transitive dependencies. In the transactions table, Cashier Name can be moved out into its own table, called employees, which will leave us with 3 tables.
`Table Name: transactions2
column 0: transaction Id
column 1: Customer Name
column 2: Cashier Id
column 3: Year `
`Table Name: albums_sold
column 0: Album Id
column 1: Transaction Id
column 2: Album Name`
`Table Name: employees
column 0: Employee Id
column 1: Employee Name `
<img src="images/table16.png" width="450" height="450"> <img src="images/table15.png" width="450" height="450"> <img src="images/table17.png" width="350" height="350">
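Removing the transitive dependency can be sketched the same way: cashier names are replaced by an `employees` lookup table plus a foreign key (data from the tables above; the id assignment by order of appearance is illustrative):

```python
transactions = [
    (1, "Amanda", "Sam", 2000),
    (2, "Toby", "Sam", 2000),
    (3, "Max", "Bob", 2018),
]

# Assign each distinct cashier an employee id, in order of appearance.
employees = {}
for _, _, cashier, _ in transactions:
    employees.setdefault(cashier, len(employees) + 1)

# Replace the cashier name with the foreign key cashier_id.
transactions2 = [(tid, cust, employees[cashier], year)
                 for tid, cust, cashier, year in transactions]

print(sorted((eid, name) for name, eid in employees.items()))
print(transactions2)
```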
```
try:
cur.execute("CREATE TABLE IF NOT EXISTS transactions2 (transaction_id int, \
customer_name varchar, cashier_id int, \
year int);")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
try:
cur.execute("CREATE TABLE IF NOT EXISTS employees (employee_id int, \
employee_name varchar);")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
try:
cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \
VALUES (%s, %s, %s, %s)", \
(1, "Amanda", 1, 2000))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \
VALUES (%s, %s, %s, %s)", \
(2, "Toby", 1, 2000))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \
VALUES (%s, %s, %s, %s)", \
(3, "Max", 2, 2018))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO employees (employee_id, employee_name) \
VALUES (%s, %s)", \
(1, "Sam"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO employees (employee_id, employee_name) \
VALUES (%s, %s)", \
(2, "Bob"))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
print("Table: transactions2\n")
try:
cur.execute("SELECT * FROM transactions2;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
print("\nTable: albums_sold\n")
try:
cur.execute("SELECT * FROM albums_sold;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
print("\nTable: employees\n")
try:
cur.execute("SELECT * FROM employees;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
```
#### Let's do two `JOIN`s on these 3 tables so we can get all the information we had in our first table.
```
try:
cur.execute("SELECT * FROM (transactions2 JOIN albums_sold ON \
transactions2.transaction_id = albums_sold.transaction_id) JOIN \
employees ON transactions2.cashier_id=employees.employee_id;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
```
### DONE! We have Normalized our dataset!
### For the sake of the demo, let's drop the tables.
```
try:
cur.execute("DROP table music_store")
except psycopg2.Error as e:
print("Error: Dropping table")
print (e)
try:
cur.execute("DROP table music_store2")
except psycopg2.Error as e:
print("Error: Dropping table")
print (e)
try:
cur.execute("DROP table albums_sold")
except psycopg2.Error as e:
print("Error: Dropping table")
print (e)
try:
cur.execute("DROP table employees")
except psycopg2.Error as e:
print("Error: Dropping table")
print (e)
try:
cur.execute("DROP table transactions")
except psycopg2.Error as e:
print("Error: Dropping table")
print (e)
try:
cur.execute("DROP table transactions2")
except psycopg2.Error as e:
print("Error: Dropping table")
print (e)
```
### And finally close the cursor and connection.
```
cur.close()
conn.close()
```
# Parallelizing code with dask.delayed
In this notebook we parallelize simple for-loop style code with `dask.delayed`. Often, this is the only function you will need to convert code for use with Dask.
It is a simple way to use Dask to parallelize existing codebases or build complex systems.
**Related Documentation**
* [Delayed documentation](https://docs.dask.org/en/latest/delayed.html)
* [Delayed screencast](https://www.youtube.com/watch?v=SHqFmynRxVU)
* [Delayed API](https://docs.dask.org/en/latest/delayed-api.html)
* [Delayed examples](https://examples.dask.org/delayed.html)
* [Delayed best practices](https://docs.dask.org/en/latest/delayed-best-practices.html)
Dask has several ways of executing code in parallel. Here we will use the distributed scheduler by creating a `dask.distributed.Client`, which provides some nice diagnostics. We will discuss schedulers in more detail later.
```
from dask.distributed import Client
client = Client()
client
print(client)
```
Because Bokeh is already installed in this repo's environment, you can open the Dashboard URL shown above to view the diagnostic dashboard.
Documentation for the diagnostic dashboard can be found here: https://docs.dask.org/en/latest/diagnostics-distributed.html
## Basics
A simple example: `inc` and `add` functions that sleep for a while to simulate work.
```
from time import sleep
def inc(x):
sleep(1)
return x + 1
def add(x, y):
sleep(1)
return x + y
```
We time the execution of this code using the `%%time` magic.
```
%%time
# This takes three seconds to run because we call each
# function sequentially, one after the other
x = inc(1)
y = inc(2)
z = add(x, y)
```
### Parallelize with the dask.delayed decorator
The two `inc` calls can run in parallel, because they are completely independent of one another.
We transform the `inc` and `add` functions using `dask.delayed`. When we call the delayed version by passing the arguments, the function isn't actually called yet - which is why the cell execution finishes very quickly. Instead, a delayed object is created, which keeps track of the function to call and the arguments to pass to it.
```
from dask import delayed
%%time
# This runs immediately, all it does is build a graph
x = delayed(inc)(1)
y = delayed(inc)(2)
z = delayed(add)(x, y)
```
This finished very quickly, since nothing has really been executed yet.
To get the result, call `compute`. Notice that this runs faster than the original code.
```
%%time
# This actually runs our computation using a local thread pool
z.compute()
```
## What just happened?
The `z` object is a lazy `Delayed` object. It holds everything needed to compute the final result, including references to all of the required functions, their inputs, and their relationships to one another. We can evaluate the result with `.compute()` as above, or we can visualize the task graph for this value with `.visualize()`.
```
z
# Look at the task graph for `z`
z.visualize()
```
Notice that this includes the names of the functions from before, and the logical flow of the outputs of the `inc` functions into the inputs of `add`.
On the diagnostic dashboard each row corresponds to one thread. You can see the two `inc` calls spread across two rows; once they finish, `add` is executed, for a total of about 1 + 1.01 s.
### Some questions to consider
- Why did we go from 3s to 2s? Why weren't we able to parallelize down to 1s?
- What would have happened if the `inc` and `add` functions didn't include the `sleep(1)`? Would Dask still be able to speed up this code?
- What if we have multiple outputs, or also want access to `x` or `y`?
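The first question can be answered empirically without Dask: the two `inc` calls run concurrently, but `add` must wait for both results, so the floor is two sleeps rather than one. A standard-library sketch of the same dependency structure (with shortened sleeps so it runs quickly):

```python
import time
from concurrent.futures import ThreadPoolExecutor

DELAY = 0.2  # shortened stand-in for the tutorial's sleep(1)

def inc(x):
    time.sleep(DELAY)
    return x + 1

def add(x, y):
    time.sleep(DELAY)
    return x + y

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    fx = pool.submit(inc, 1)  # runs concurrently with the next submit
    fy = pool.submit(inc, 2)
    z = add(fx.result(), fy.result())  # must wait for both inc results
elapsed = time.perf_counter() - start

print(z)                    # 5
print(elapsed < 3 * DELAY)  # about 2 delays, not 3
```

This mirrors what the task graph encodes: `add` depends on both `inc` outputs, so no schedule can finish in a single delay.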
## Parallelize a for loop
`for` loops are one of the most common things we want to parallelize. Use `dask.delayed` on `inc` and `sum` to parallelize the computation below:
```
data = [1, 2, 3, 4, 5, 6, 7, 8]
%%time
# Sequential code
results = []
for x in data:
y = inc(x)
results.append(y)
total = sum(results)
total
%%time
results = []
for x in data:
y = delayed(inc)(x)
results.append(y)
total = delayed(sum)(results)
print("Before computing:", total) # Let's see what type of thing total is
result = total.compute()
print("After computing :", result) # After it's computed
```
## Exercise: parallelize for-loop code with control flow
Often we want to delay only some functions, running a few of them immediately. This is especially helpful when those functions are fast and help us determine which other, slower functions we should call. This decision, to delay or not to delay, is usually where we need to be thoughtful when using `dask.delayed`.
In the example below we iterate through a list of inputs. If the input is even, we want to call `inc`. If the input is odd, we want to call `double`. The `is_even` decision to call `inc` or `double` has to be made immediately (not lazily) in order for our graph-building Python code to proceed.
```
def double(x):
    sleep(1)
    return 2 * x

def is_even(x):
    return not x % 2

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
%%time
# Sequential code
results = []
for x in data:
    if is_even(x):
        y = double(x)
    else:
        y = inc(x)
    results.append(y)
total = sum(results)
print(total)

results = []
for x in data:
    # the branching decision has to be made immediately (not delayed)
    if is_even(x):  # even
        y = delayed(double)(x)
    else:  # odd
        y = delayed(inc)(x)
    results.append(y)
total = delayed(sum)(results)
%time total.compute()
total.visualize()
```
Even without summing, we can call `delayed` directly on the list and compute it; the result is the same.
```
results = []
for x in data:
    # the branching decision has to be made immediately (not delayed)
    if is_even(x):  # even
        y = delayed(double)(x)
    else:  # odd
        y = delayed(inc)(x)
    results.append(y)
%time delayed(results).compute()
```
## Can complex functions be delayed?
Suppose a function was written by someone else and installed via pip or conda, makes many other calls internally, and has no `delayed` wrapping inside. Can we at least wrap the outermost call in `delayed`?
```
def funcs(a, b):
    c = func1(a)
    d = func2(b)
    e = c * func3(d)
    f = func4(c) * e
    return f

def func1(v1):
    sleep(0.5)
    return v1**2

def func2(v2):
    sleep(0.5)
    return v2/2

def func3(v3):
    sleep(0.5)
    return v3*3+1

def func4(v4):
    sleep(0.5)
    return v4*2

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
%%time
# Sequential code
results = []
for x, y in zip(data[1:], data[:-1]):
    z = funcs(x, y)
    results.append(z)
total = sum(results)
print(total)

# Delayed version
results = []
for x, y in zip(data[1:], data[:-1]):
    z = delayed(funcs)(x, y)
    results.append(z)
total = delayed(sum)(results)
print(total)
%time total.compute()
```
As you can see, this still gives a speedup; the diagnostic dashboard shows in more detail how the parallel execution proceeded.
```
client.close()
```
Before leaving, close the client that was opened earlier.
| github_jupyter |
⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠
# Disclaimer
👮🚨This notebook is sort of like my personal notes on this subject. It will be changed and updated whenever I have time to work on it. It is not meant to replace a thorough fluid substitution workflow. The intent here is to make the major assumptions underlying the process of evaluating the effect of fluid fill on seismic response a little clearer, as well as to provide references and background literature for further study.🚨
At some point I will probably generalize this better so it can be used with real curves. For now it creates some fake blocked logs you can edit, just to get a feel for how fluid sub works and how the different fluid fills might look in seismic. Also, currently the rocks are monomineralic.
#### Important Note:
The proper conditioning of logs, calibration of water saturations, reservoir selection for substitution, and rock and mineral parameter selection and calibration are extremely important to the reliability of a fluid substitution's output. These are good candidates for additional tutorials.
This tutorial is focused on the basic workflow from the geophysical perspective and therefore assumes the labor intensive petrophysical work mentioned above is both completed and reliable.
##### Notes for future:
* Incorporate a tuning section
* Put the whole thing in a function and see if I can get interact working so I can just use sliders to change parameters
* Generalize so real .las files can be loaded
* Complete the implementation of the B&W fluid property equations
* Fix a few of the hard-coded parts
* ~~Figure out why fill_betweenx isn't working~~
##### Come up and ask me questions on 7 if anything appears to be amiss! -Thomas
⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠
[](https://mybinder.org/v2/gh/tccw/geotools/master?filepath=tutorials%2FFluidSubstitution.ipynb)
```
from collections import namedtuple
from scipy.stats import linregress
from matplotlib.gridspec import GridSpec
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import bruges as b
from IPython.display import HTML
%matplotlib inline
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<font size="6" color="red">The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.</font>''')
```
# Porosity and Saturation effects on AVO
### Gassmann's Equations
Gassmann's equations (seen below) describe how the bulk modulus (the ratio of pressure change to volume change) of a saturated rock changes as the saturating fluid changes. They provide a useful means of modeling how the seismic response of a formation may change for different filling fluids.
For a discussion of the origin and derivation of Gassmann's equations, see Berryman, 2000 (https://doi.org/10.1190/1.1444667)
$$\textbf{Gassmann Equations}$$
$$\frac{K_{sat}}{K_{mineral} - K_{sat}} = \frac{K_{dry}}{K_{mineral} - K_{dry}} + \frac{K_{fluid}}{\phi(K_{mineral} - K_{fluid})}$$
$$\mu_{sat} = \mu_{dry}$$
$K_{dry} = \text{Dry rock bulk modulus}$
$K_{mineral} = \text{Mineral bulk modulus}$
$K_{sat} = \text{Saturated rock bulk modulus}$
$K_{fluid} = \text{Fluid bulk modulus}$
$\mu_{sat} = \text{Shear modulus of the saturated rock}$
$\mu_{dry} = \text{Shear modulus of the dry rock}$
### Assumptions
1. Porous material is isotropic, elastic, monomineralic, and homogeneous
2. Pore space is well connected and in pressure equilibrium
3. Medium is a closed system with no pore fluid movement across boundaries
4. No chemical interaction between fluids and rock frame (i.e. no diagenetic processes)
5. Frequency effects are negligible when considering the measurements. Gassmann's equations are valid only for seismic frequencies (<100 Hz from Mavko, 1998).
These assumptions are often violated in real reservoirs. However, Gassmann's model is still generally the preferred model, as it can be easily parameterized. A number of publications suggest ways to modify the inputs or assumptions to make these relationships applicable to more variable rocks. A good general discussion of this can be found in Rob Simm's 2007 article "Practical Gassmann fluid substitution in sand/shale sequences" [DOI: 10.3997//1365-2387.2007030](http://dreamcell-dev.co.uk/rpa/papers_downloads/RPA_simm_2007.pdf).
Below we will look at the Avseth et al. (2006) fluid substitution workflow, which is used in this notebook.
#### Gassmann fluid substitution recipe from Avseth, 2006$^{[1]}$
$\textbf{Step 1:}$ Extract the dynamic bulk and shear moduli from $V_{p}^{(1)}$, $V_{s}^{(1)}$ , and $\rho^{(1)}$:
$K^{(1)}\ =\ \rho((V_{p}^{(1)})^2 - \frac{4}{3}(V_{s}^{(1)})^2)\\ \mu^{(1)}\ =\ \rho(V_{s}^{(1)})^2$
$\textbf{Step 2:}$ Apply Gassmann's relation to transform the bulk modulus:
$\frac{K_{sat}^{(2)}}{K_{mineral}\ -\ K_{sat}^{(2)}}\ -\ \frac{K_{fluid}^{(2)}}{\phi(K_{mineral}\ -\ K_{fluid}^{(2)})}\ =\ \frac{K_{sat}^{(1)}}{K_{mineral}\ -\ K_{sat}^{(1)}}\ -\ \frac{K_{fluid}^{(1)}}{\phi(K_{mineral}\ -\ K_{fluid}^{(1)})}$
$\textbf{Step 3:}$ Leave the shear modulus unchanged:
$\mu_{sat}^{(1)} = \mu_{sat}^{(2)}$
$\textbf{Step 4:}$ Remember to correct the bulk density for the fluid change:
$\rho^{(2)} = \rho^{(1)} + \phi(\rho_{fluid}^{(2)} - \rho_{fluid}^{(1)})$
$\textbf{Step 5:}$ Reassemble the velocities:
$V_p^{(2)} = \sqrt{\frac{K_{sat}^{(2)} + \frac{4}{3} \mu_{sat}^{(2)}}{\rho^{(2)}}}$
$V_s^{(2)} = \sqrt{\frac{\mu_{sat}^{(2)}}{\rho^{(2)}}}$
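The five steps above can be collected into one small Python function (a hedged sketch; the function name, units, and the brine-to-gas example values below are mine, not from the notebook):

```python
import math

def gassmann_substitute(vp1, vs1, rho1, phi, k_min, k_fl1, k_fl2, rho_fl1, rho_fl2):
    """Avseth (2006) recipe, steps 1-5, in SI units (m/s, kg/m^3, Pa)."""
    # Step 1: dynamic moduli from the measured velocities
    k_sat1 = rho1 * (vp1**2 - (4.0 / 3.0) * vs1**2)
    mu = rho1 * vs1**2  # steps 1 and 3: the shear modulus is unchanged
    # Step 2: Gassmann transform of the bulk modulus
    a = (k_sat1 / (k_min - k_sat1)
         - k_fl1 / (phi * (k_min - k_fl1))
         + k_fl2 / (phi * (k_min - k_fl2)))
    k_sat2 = a * k_min / (1.0 + a)
    # Step 4: correct the bulk density for the fluid change
    rho2 = rho1 + phi * (rho_fl2 - rho_fl1)
    # Step 5: reassemble the velocities
    vp2 = math.sqrt((k_sat2 + (4.0 / 3.0) * mu) / rho2)
    vs2 = math.sqrt(mu / rho2)
    return vp2, vs2, rho2

# Example: brine (K ~2.8 GPa, 1030 kg/m^3) replaced by gas (0.374 GPa, 338 kg/m^3)
vp2, vs2, rho2 = gassmann_substitute(3550., 1900., 2240., 0.2, 37e9,
                                     2.8e9, 0.374e9, 1030., 338.)
print(round(vp2), round(vs2), round(rho2))  # Vp drops, Vs rises slightly
```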
Below is a basic, blocked-log example of Gassmann fluid substitution to help explore the effects of different fluids on the seismic response.
$^{[1]}$Avseth, Per; Mukerji, Tapan; Mavko, Gary. Quantitative Seismic Interpretation: Applying Rock Physics Tools to Reduce Interpretation Risk (Kindle Locations 582-584). Cambridge University Press. Kindle Edition.
```
HTML('<font color="red">The B&W implementation is incomplete and needs more testing for verification</font>')
```
### Batzle and Wang fluid calculations
The most common, and likely most useful, method for calculating the properties of fluids of varying composition, temperature, and pressure is the set of empirical fluid equations from Batzle & Wang (1992).
These functions take pressure in MPa and temperature in degrees Celsius, and output density in g/cc, velocity in m/s, and bulk modulus (K) in GPa.
$\textbf{Equations for dead oil:}$
$API = \frac{141.5}{\rho_0} - 131.5$
$\rho_P = \rho_0 + (0.00277P - 1.71 \times 10^{-7}P^3)(\rho_0 - 1.15)^2 + 3.49 \times 10^{-4}P$
$\rho = \rho_P / [0.972 + 3.81 \times 10^{-4}(T + 17.78)^{1.175}]$
$V = 15450(77.1 + API)^{-1/2} - 3.7T + 4.64P + 0.0115(0.36API^{1/2} - 1)TP$
```
def bwOil(temp, pressure, API, gasGravity, live=False):
    # Pressure in MPa, Temp in C
    P = pressure
    T = temp
    G = gasGravity
    rho0 = 141.5 / (API + 131.5)
    rhoP = rho0 + (0.00277*P - 1.71e-7 * P**3)*(rho0 - 1.15)**2 + 3.49e-4 * P
    #Rg = 0.02123*G*(P*np.exp(4.072/rho0 - 0.00377*T))**1.205 # Eqtn 21a
    Rg = 2.03*G*(P*np.exp(0.02878*API - 0.00377*T))**1.205 # Eqtn 21b
    Bo = 0.972 + 0.00038*(2.4 * Rg * np.sqrt(G/rho0) + T + 17.8)**1.175 # Eqtn 23
    rhoPprime = (rho0/Bo) * (1 + 0.001*Rg)**(-1) # Eqtn 22
    if live == False:
        rho = rhoP / (0.972 + 3.81e-4 * (T + 17.78)**1.175) # Eqtn 20
        vp = 15450*(77.1 + API)**(-1/2) - 3.7*T + 4.64*P + 0.0115*(0.36*API**(1/2) - 1)*T*P
    elif live == True:
        rho = (rho0 + 0.0012*G*Rg)/Bo
        vp = 2096 * np.sqrt(rhoPprime/(2.6 - rhoPprime)
                            ) - 3.7*T + 4.64*P + 0.0115*(4.12 * (1.08 * rhoPprime**-1 - 1) - 1)*T*P
    K = (rho * vp**2)/1e6
    return K, rho * 1000
def bwBrine(temp, pressure, salinity):
    '''
    Pressure in MPa, Temp in C, salinity is weight fraction (i.e. ppm/1e6)
    The velocity is not agreeing with the FPE from CREWES but I can't figure out why
    '''
    S = salinity
    P = pressure
    T = temp
    # eqtn 27 - 29
    rhow = 1 + 1e-6 * (-80*T - 3.3*T**2 + 0.00175*T**3 + 489*P -
                       2*T*P + 0.016*T**2 * P - 1.3e-5 * T**3 * P -
                       0.333*P**2 - 0.002*T*P**2)
    rhobr = rhow + S*(0.668 + 0.44*S + 1e-6 * (300*P - 2400*P*S +
                      T*(80 + 3*T - 3300*S - 13*P + 47*P*S)))
    w = np.array([[1402.85, 1.524, 3.437e-3, -1.197e-5],
                  [4.871, -0.0111, 1.739e-4, -1.628e-6],
                  [-0.04783, 2.747e-4, -2.135e-6, 1.237e-8],
                  [1.487e-4, -6.503e-7, -1.455e-8, 1.327e-10],
                  [-2.197e-7, 7.987e-10, 5.230e-11, -4.614e-13]], dtype=float)
    vpW = np.sum(w[i][j]*np.power(P, [i])*np.power(T, [j]) for i in range(0, 4) for j in range(0, 3))
    vpB = vpW + S*(1170 - 9.6*T + 0.055*T**2 - 8.5e-5 * T**3 + 2.6*P
                   - 0.0029*T*P - 0.0476*P**2) + S**1.5 * (780 - 10*P + 0.16*P**2) - 820*S**2
    K = (rhobr * vpB**2)/1e6
    rhobr = np.array(rhobr)
    vpB = np.array(vpB)
    K = np.array(K)
    return K, rhobr * 1000
```
## Input data
```
# Pressure (P), Temperature (T), API, Gas Gravity (G), Salinity weight fraction (S)
# Deepwater GOM pressures and temperatures
P = 100 # MPa
T = 85.5 # degrees C
API = 35
G = 0.6
S = 0.088 # ppm/1e6
# In situ parameters are GOM clean sand 100% brine saturated values
vpInSitu = 3550. # m/s
vsInSitu = 1900. # m/s
rhobInSitu = 2240. # kg/m^3
top_depth = 400
base_depth = 500
resThickness = 100. # thickness in meters
KflInitial, rhoflInitial = bwBrine(P,T,0.025) # Initial brine (this was taken from some GOM well data)
KflBrine, rhoflBrine = bwBrine(P,T,S)
KflOil, rhoflOil = bwOil(P,T,API,G,live = False)
KflGas, rhoflGas = 0.374 * 1e9, 338  # gas: 0.374 GPa converted to Pa, density in kg/m^3
Kmineral = 37.0 * 1e9  # quartz from tables: 37 GPa converted to Pa
# Convert bulk moduli from GPa to Pa
KflInitial = KflInitial * 1e9
KflBrine = KflBrine * 1e9
KflOil = KflOil * 1e9
# encasing rock properties
vpEncase, vsEncase, rhobEncase = 3300.,1500.,2400.
phi = np.round((2650 - rhobInSitu)/(2650 - rhoflInitial),2) # SS density porosity
bwOil(P,T,API,G,live = False)
bwBrine(P,T,0.025)
```
#### Make the wavelet (currently only supports Ricker and Ormsby wavelets)
* Here I am using a 1 ms sample interval even though most seismic is sampled at 2 ms
* This allows me to make a smooth synthetic without having to interpolate later
```
# wavelet parameters
f = 35 #frequency of ricker wavelet
f_arr = [8,12,50,65]
duration = 0.128 # length of wavelet in seconds
dt = 0.001 # time sample interval (s)
dz = 1 # for later (should probably be located somewhere else)
wvlt, t_basis = b.filters.ricker(duration, dt, f, return_t=True)
wvlt_orm, t_basis_orm = b.filters.ormsby(duration, dt, f_arr,return_t=True)
sns.set_style(style="ticks")
plt.figure(figsize=(10,7))
plt.plot(t_basis * 1e3, wvlt_orm, label = f'Ormsby $f$: {f_arr}', linewidth=4)
plt.plot(t_basis * 1e3, wvlt, label = f'Ricker peak $f$: {f}', linewidth=4)
plt.xlabel('Time (ms)', size=13)
plt.ylabel('Amplitude', size=13)
plt.title('Two possible wavelets we can use for the synthetic angle gathers', size=17)
plt.xlim(t_basis.min() * 1e3,t_basis.max() * 1e3)
plt.grid(alpha=0.3)
plt.legend()
```
#### Create in situ block curves
```
shape = (1000,)
block_vp, block_vs, block_rhob = np.zeros(shape), np.zeros(shape), np.zeros(shape)
block_vp[:], block_vs[:], block_rhob[:] = vpEncase, vsEncase, rhobEncase
block_vp[top_depth:base_depth], block_vs[top_depth:base_depth], block_rhob[top_depth:base_depth] = vpInSitu, vsInSitu, rhobInSitu
```
#### Naive fluid sub from Avseth, 2006
```
rhofl = np.array([rhoflInitial,rhoflBrine, rhoflOil, rhoflGas])
Kfls = np.array([KflInitial,KflBrine, KflOil, KflGas])
names = ['Initial', 'Brine', 'Oil', 'Gas']
# Order is initial fluid, user defined brine, user defined oil, user defined gas
subs_depth = [b.rockphysics.avseth_fluidsub(
block_vp,block_vs,block_rhob,phi,rhofl[0], rhofl[i],
Kmineral,Kfls[0], Kfls[i]) for i in range(len(Kfls))]
subs_depth = {k:v for k,v in zip(names,subs_depth)}
# Resubbing in the old velocities for the encasing rock.
# There must be a better way to approach this. Will have to think about it more later.
for key in names:
    getattr(subs_depth[key], 'Vp')[:top_depth] = vpEncase
    getattr(subs_depth[key], 'Vp')[base_depth:] = vpEncase
    getattr(subs_depth[key], 'Vs')[:top_depth] = vsEncase
    getattr(subs_depth[key], 'Vs')[base_depth:] = vsEncase
    getattr(subs_depth[key], 'rho')[:top_depth] = rhobEncase
    getattr(subs_depth[key], 'rho')[base_depth:] = rhobEncase
```
### Convert all the curves from depth to time
```
curves=['Vp', 'Vs', 'rho']
twt_tmp = [b.transform.time_to_depth(
getattr(subs_depth[n],c),getattr(subs_depth[n],'Vp'), dt, dz) for n in names for c in curves]
```
### Do some organization to make it easier to plot
* Make sure to use the updated Vp curve for each fluid subbed case for correct timing
* Create the different TWT arrays for plotting
```
twt_tmp_composite = [twt_tmp[x:x+3] for x in range(0, len(twt_tmp),3)]
twt_curves = namedtuple('TWTResults',('Vp','Vs','rho'))
subs_twt = [twt_curves(*twt_tmp_composite[i]) for i in range(len(names))]
subs_twt = {k:v for k,v in zip(names,subs_twt)}
twts = {key:np.linspace(0,len(getattr(subs_twt[key],'Vp')) * dt,
len(getattr(subs_twt[key],'Vp'))) for key in names}
```
### Make the pre-stack synthetics
```
theta = np.arange(0,51,1)
reflectivity = {key:b.reflection.reflectivity(getattr(subs_twt[key],'Vp'),
getattr(subs_twt[key],'Vs'),
getattr(subs_twt[key],'rho'),theta=theta) for key in names}
prstk_gaths = {key:np.apply_along_axis(lambda x: np.convolve(wvlt, x, mode='same'),axis=1,arr=reflectivity[key]) for key in names}
# Get the index of the top of the reservoir in time
top_twt_index = np.argmax(reflectivity['Initial']!=0)
```
#### Calc intercept and gradient
* I am only going to use the first 30 degrees of the reflectivity series: beyond ~30 degrees, reflectivity stops behaving linearly in reflectivity vs. $\sin^2(\theta)$ space, so a linear approximation (like the one used for gradient/intercept) is not a helpful regression.
```
theta_grad = 30
refl = {k:reflectivity[k][:theta_grad,top_twt_index] for k in names}
sintheta = np.sin(np.radians(np.arange(0, theta_grad)))**2
int_grad = {k:linregress(sintheta,refl[k][:]) for k in names}
```
### Plot everything up (the hardest part!)
```
sns.set_style('ticks')
# Some useful stuff to initialize
depth = np.linspace(0,1000,1000)
gain = 45
colors=['k','b','g','r']
titles = [r'Vp $\frac{km}{s}$', r'Vs $\frac{km}{s}$', r'Density $\frac{g}{cc}$',
          'Angle Gather (Initial)', 'Angle Gather (100% Brine)', 'Angle Gather (100% Oil)', 'Angle Gather (100% Gas)']
curve_buffer_twt = 0.1
def format_axes(fig):
    titles = [r'Vp $\frac{km}{s}$', r'Vs $\frac{km}{s}$', r'Density $\frac{g}{cc}$',
              'Angle Gather (Initial)', 'Angle Gather (100% Brine)', 'Angle Gather (100% Oil)',
              'Angle Gather (100% Gas)', 'Zoeppritz Reflectivity vs Angle (Upper Interface)',
              'Intercept vs. Gradient Crossplot (Upper Interface)']
    axes_label_size = 12
    for i, ax in enumerate(fig.axes):
        ax.set_title(titles[i], y=1.01)
        ax.tick_params(labelbottom=True, labelleft=True)
        ax.grid(alpha=0.5, linestyle='--')
    # labels
    for ax in (ax4, ax5, ax6, ax7):
        ax.set_xlabel(r'Angle $(\theta)$', size=axes_label_size)
    ax1.set_ylabel('TWT (s)', size=axes_label_size)
    ax8.set_ylabel('Reflectivity', size=axes_label_size)
    ax8.set_xlabel(r'Angle $(\theta)$', size=axes_label_size)
    ax9.set_ylabel('Gradient $(G)$', size=axes_label_size)
    ax9.set_xlabel('Intercept $(R0)$', size=axes_label_size)
    # limits
    ax1.set_ylim(0.6, 0.9)
    ax3.set_xlim(1.65, 2.65)
    ax8.set_xlim(0, theta.max())
    ax9.set_xlim(np.real(getattr(int_grad['Initial'], 'intercept')) - 0.2,
                 np.real(getattr(int_grad['Initial'], 'intercept')) + 0.2)
    ax9.set_ylim(np.real(getattr(int_grad['Initial'], 'slope')) - 0.2,
                 np.real(getattr(int_grad['Initial'], 'slope')) + 0.2)
    ax1.invert_yaxis()
fig = plt.figure(constrained_layout=True, figsize=(17,14))
gs = GridSpec(nrows=4, ncols=7, figure=fig)
ax1 = fig.add_subplot(gs[:2, 0])
ax2 = fig.add_subplot(gs[:2, 1], sharey=ax1)
ax3 = fig.add_subplot(gs[:2, 2], sharey=ax1)
ax4 = fig.add_subplot(gs[:2, 3], sharey=ax1)
ax5 = fig.add_subplot(gs[:2, 4], sharey=ax1, sharex=ax4)
ax6 = fig.add_subplot(gs[:2, 5], sharey=ax1, sharex=ax4)
ax7 = fig.add_subplot(gs[:2, 6], sharey=ax1, sharex=ax4)
ax8 = fig.add_subplot(gs[2:,:4])
ax9 = fig.add_subplot(gs[2:,4:])
for key, c in zip(names, colors):
    ax1.plot(getattr(subs_twt[key], 'Vp') / 1e3, twts[key], label=f'100% {key}', color=c)
    ax2.plot(getattr(subs_twt[key], 'Vs') / 1e3, twts[key], label=f'100% {key}', color=c)
    ax3.plot(getattr(subs_twt[key], 'rho') / 1e3, twts[key], label=f'100% {key}', color=c)
for key, ax in zip(names, (ax4, ax5, ax6, ax7)):
    for i in range(0, theta.max(), 3):
        ax.plot(np.real(prstk_gaths[key][i, :] * gain + i), twts[key][:-1], color='k')
        ax.fill_betweenx(twts[key][:-1], i, np.real(prstk_gaths[key][i, :]) * gain + i, color='k', alpha=0.5,
                         where=np.real(prstk_gaths[key][i, :]) * gain + i > i, interpolate=True)
        ax.fill_betweenx(twts[key][:-1], i, np.real(prstk_gaths[key][i, :]) * gain + i, color='r', alpha=0.5,
                         where=np.real(prstk_gaths[key][i, :]) * gain + i < i, interpolate=True)
# np.argmax(reflectivity['Initial']!=0)
for k, c in zip(names, colors):
    ax8.plot(np.real(reflectivity[k][:, top_twt_index]), color=c, label=f'100% {k}')
    ax9.scatter(np.real(getattr(int_grad[k], 'intercept')), np.real(getattr(int_grad[k], 'slope')), color=c, label=f'100% {k}')
ax8.axhline(0, color='k', alpha=0.5)
ax9.axhline(color='k')
ax9.axvline(color='k')
ax1.legend()
ax8.legend()
ax9.legend()
fig.suptitle('Gassmann Fluid Substitution Overview', size = 20, y = 1)
format_axes(fig)
# Uncomment the line below to save the figure. You may need to change the filepath.
# plt.savefig('GassmannFluidSubOverview.png', dpi=350,bbox_inches='tight')
plt.show()
```
# Loading Data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm_notebook, tnrange
import os
from sklearn.preprocessing import LabelEncoder
#os.chdir("/content/drive/My Drive/Chartbusters/ChartbustersParticipantsData")
%matplotlib inline
train = pd.read_csv("Data_Train.csv")
test = pd.read_csv("Data_Test.csv")
print(train.shape)
print(test.shape)
print(train.info())
print(test.info())
```
# Data Cleaning
```
train.Likes = (train.Likes.replace(r'[KM]+$', '', regex=True).astype(float) *
               train.Likes.str.extract(r'[\d\.]+([KM]+)', expand=False)
               .fillna(1)
               .replace(['K', 'M'], [10**3, 10**6]).astype(int))
train.Popularity = (train.Popularity.replace(r'[KM]+$', '', regex=True).astype(float) *
                    train.Popularity.str.extract(r'[\d\.]+([KM]+)', expand=False)
                    .fillna(1)
                    .replace(['K', 'M'], [10**3, 10**6]).astype(int))
test.Likes = (test.Likes.replace(r'[KM]+$', '', regex=True).astype(float) *
              test.Likes.str.extract(r'[\d\.]+([KM]+)', expand=False)
              .fillna(1)
              .replace(['K', 'M'], [10**3, 10**6]).astype(int))
test.Popularity = (test.Popularity.replace(r'[KM]+$', '', regex=True).astype(float) *
                   test.Popularity.str.extract(r'[\d\.]+([KM]+)', expand=False)
                   .fillna(1)
                   .replace(['K', 'M'], [10**3, 10**6]).astype(int))
# training Data
train['Likes'] = train['Likes'].astype(int)
train['Popularity'] = train['Popularity'].astype(int)
train['Name'] = train['Name'].astype(str)
train['Genre'] = train['Genre'].astype(str)
train['Country'] = train['Country'].astype(str)
# testing Data
test['Likes'] = test['Likes'].astype(int)
test['Popularity'] = test['Popularity'].astype(int)
test['Name'] = test['Name'].astype(str)
test['Genre'] = test['Genre'].astype(str)
test['Country'] = test['Country'].astype(str)
train['Timestamp'] = pd.to_datetime(train['Timestamp'])
test['Timestamp'] = pd.to_datetime(test['Timestamp'])
# Converting into Datetime format both training and testing data
#train['Timestamp'] = pd.to_datetime(train['Timestamp'])
#test['Timestamp'] = pd.to_datetime(test['Timestamp'])
# Add columns to training and testing data
train['year'] = pd.DatetimeIndex(train['Timestamp']).year
train['month'] = pd.DatetimeIndex(train['Timestamp']).month
train['day'] = pd.DatetimeIndex(train['Timestamp']).day
train['hour'] = pd.DatetimeIndex(train['Timestamp']).hour
test['year'] = pd.DatetimeIndex(test['Timestamp']).year
test['month'] = pd.DatetimeIndex(test['Timestamp']).month
test['day'] = pd.DatetimeIndex(test['Timestamp']).day
test['hour'] = pd.DatetimeIndex(test['Timestamp']).hour
target = "Views"
target_value = train[target]
train.drop(["Views", "Unique_ID", "Song_Name", "Timestamp", "Name"], axis=1, inplace=True)
test.drop(["Unique_ID", "Song_Name", "Timestamp", "Name"], axis=1, inplace=True)
print(train.head())
print(test.head())
target_value.head()
target_value = pd.DataFrame(target_value, columns=['Views'])
print("Total number of unique entries are {}".format(train['year'].nunique()))
print(train['year'].unique())
print(train['Country'].nunique())
print(train['Country'].unique())
train.drop(["Country"], axis=1, inplace=True)
test.drop(["Country"], axis=1, inplace=True)
print(train['Genre'].nunique())
print(train['Genre'].unique())
print(train.head())
print(test.head())
train.drop(['day', 'hour'], axis=1, inplace=True)
test.drop(['day', 'hour'], axis=1, inplace=True)
print(train.head())
print(test.head())
print(target_value.head())
```
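The `K`/`M` suffix conversion at the top of this section can also be expressed as a plain-Python helper, which may make the logic easier to follow (`parse_suffixed` is a hypothetical name, not part of the original notebook):

```python
def parse_suffixed(value: str) -> float:
    """Convert a string like '1.2K' or '3M' into a plain number."""
    multipliers = {'K': 10**3, 'M': 10**6}
    if value and value[-1] in multipliers:
        return float(value[:-1]) * multipliers[value[-1]]
    return float(value)

print([parse_suffixed(v) for v in ['1.2K', '3M', '45']])  # → [1200.0, 3000000.0, 45.0]
```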
# Data Preprocessing
## Preprocessing only for non-tree based algorithm
```
#fig, axs = plt.subplots(ncols=2, nrows=2)
sns.distplot(train['Comments'], hist=False, rug=True)
sns.distplot(train['Likes'], hist=False, rug=True)
sns.distplot(train['Popularity'], hist=False, rug=True)
sns.distplot(train['Followers'], hist=False, rug=True)
import matplotlib.pyplot as pl

def distribution(data, transformed=False):
    """
    Visualization code for displaying skewed distributions of features
    """
    # Create figure
    fig = pl.figure(figsize=(11, 5))
    # Skewed feature plotting
    for i, feature in enumerate(['Comments', 'Likes', 'Popularity', 'Followers']):
        ax = fig.add_subplot(1, 4, i + 1)
        ax.hist(data[feature], bins=25, color='#00A0A0')
        ax.set_title("'%s' Feature Dist" % (feature), fontsize=14)
        ax.set_xlabel("Value")
        ax.set_ylabel("Number of Records")
        ax.set_ylim((0, 2000))
        ax.set_yticks([0, 500, 1000, 1500, 2000])
        ax.set_yticklabels([0, 500, 1000, 1500, ">2000"])
    # Plot aesthetics
    if transformed:
        fig.suptitle("Log-transformed Distributions of Continuous Features",
                     fontsize=16, y=1.03)
    else:
        fig.suptitle("Skewed Distributions of Continuous Features",
                     fontsize=16, y=1.03)
    fig.tight_layout()
    fig.show()
distribution(train)
skewed = ['Comments', 'Likes', 'Popularity', 'Followers']
train[skewed] = train[skewed].apply(lambda x: np.log(x + 1))
test[skewed] = test[skewed].apply(lambda x: np.log(x+ 1 ))
# Visualize the new log distributions
distribution(train, transformed = True)
import matplotlib.pyplot as pl

def distribution(data, transformed=False):
    """
    Visualization code for displaying skewed distributions of features
    """
    # Create figure
    fig = pl.figure(figsize=(11, 5))
    # Skewed feature plotting
    for i, feature in enumerate(['Views']):
        ax = fig.add_subplot(1, 2, i + 1)
        ax.hist(data[feature], bins=25, color='#00A0A0')
        ax.set_title("'%s' Feature Dist" % (feature), fontsize=14)
        ax.set_xlabel("Value")
        ax.set_ylabel("Number of Records")
        ax.set_ylim((0, 2000))
        ax.set_yticks([0, 500, 1000, 1500, 2000])
        ax.set_yticklabels([0, 500, 1000, 1500, ">2000"])
    # Plot aesthetics
    if transformed:
        fig.suptitle("Log-transformed Distributions of Continuous Features",
                     fontsize=16, y=1.03)
    else:
        fig.suptitle("Skewed Distributions of Continuous Features",
                     fontsize=16, y=1.03)
    fig.tight_layout()
    fig.show()
distribution(target_value)
skewed = ['Views']
target_value[skewed] = target_value[skewed].apply(lambda x: np.log(x + 1))
distribution(target_value, transformed=True)
print(train.head())
print(test.head())
print(target_value.head())
```
## Preprocessing for tree based algorithm
```
label_encoding = LabelEncoder()
#catg = ['Genre', 'year', 'month']
train['Genre'] = label_encoding.fit_transform(train['Genre'])
#train['year'] = label_encoding.fit_transform(train['year'])
#train['month'] = label_encoding.fit_transform(train['month'])
# NB: fit_transform here refits the encoder on the test genres, so a genre may
# map to a different integer than in train; using transform() with an encoder
# fitted on the combined genres would keep the mapping consistent.
test['Genre'] = label_encoding.fit_transform(test['Genre'])
#test['year'] = label_encoding.fit_transform(test['year'])
#test['month'] = label_encoding.fit_transform(test['month'])
#train['Genre'] = label_encoding.transform(train['Genre'])
print(train.head(n=3))
print(test.head(n=3))
from sklearn.model_selection import train_test_split
# Split the features and the target into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(train,
target_value,
test_size = 0.2,
random_state = 123)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
```
# Applying Different algorithm
## sub_3_bs model
```
import xgboost as xgb
xg_reg = xgb.XGBRegressor(colsample_bytree=1, max_depth=5, min_child_weight=1, n_estimators=400)
xg_reg.fit(X_train,y_train)
preds = xg_reg.predict(X_test)
preds.shape
import sklearn
rmse = np.sqrt(sklearn.metrics.mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))
y_out = xg_reg.predict(test)
y_out = np.reshape(y_out, (-1,1))
```
## sub_12 best model
```
import xgboost as xgb
xg_reg = xgb.XGBRegressor(colsample_bytree=1, max_depth=5, min_child_weight=1, n_estimators=500)
xg_reg.fit(X_train,y_train)
preds = xg_reg.predict(X_test)
preds.shape
import sklearn
rmse = np.sqrt(sklearn.metrics.mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))
y_out = xg_reg.predict(test)
y_out = np.reshape(y_out, (-1,1))
y_out = pd.DataFrame(data=y_out, columns=['Views'])
t1 = pd.read_csv('Data_Test.csv')
fin = pd.concat([t1['Unique_ID'], y_out['Views']], axis=1, names=['Unique_ID', 'Views'])
fin.to_csv("result.csv")
res = pd.read_csv("result.csv", index_col=['Unique_ID'])
res.drop(['Unnamed: 0'], axis=1, inplace=True)
res.to_csv("result.csv")
```
## sub_11 2 best model
```
import xgboost as xgb
xg_reg = xgb.XGBRegressor(colsample_bytree=1, max_depth=5)
xg_reg.fit(X_train,y_train)
preds = xg_reg.predict(X_test)
preds.shape
import sklearn
rmse = np.sqrt(sklearn.metrics.mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))
y_out = xg_reg.predict(test)
y_out = np.reshape(y_out, (-1,1))
```
## RandomForest Algorithm
```
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_test)
predictions = model.predict(test)
import sklearn
rmse = np.sqrt(sklearn.metrics.mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))
y_out = pd.DataFrame(data=predictions, columns=['Views'])
```
## GradientBoostingAlgorithm
```
from sklearn.ensemble import GradientBoostingRegressor
grb = GradientBoostingRegressor(learning_rate=0.1, n_estimators=200, max_depth=5,
                                subsample=0.8, verbose=True, random_state=10)
grb.fit(X_train, y_train)
preds = grb.predict(X_test)
preds.shape
import sklearn
rmse = np.sqrt(sklearn.metrics.mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))
y_out = grb.predict(test)  # predict with the gradient boosting model, not xg_reg
y_out = np.reshape(y_out, (-1, 1))
```
# Submission
```
y_out = pd.DataFrame(data=y_out, columns=['Views'])
t1 = pd.read_csv('Data_Test.csv')
fin = pd.concat([t1['Unique_ID'], y_out['Views']], axis=1, names=['Unique_ID', 'Views'])
fin.to_csv("result.csv")
res = pd.read_csv("result.csv", index_col=['Unique_ID'])
res.drop(['Unnamed: 0'], axis=1, inplace=True)
res.to_csv("result.csv")
```
### Converting a `Functional` model to a `Sequential` model during `Transfer` Learning.
* This notebook walks through how to convert from the `Functional` API to a `Sequential` model using transfer learning.
```
import tensorflow as tf
```
### Data Augmentation using the `keras` API
```
from tensorflow.keras.preprocessing.image import img_to_array, load_img, ImageDataGenerator
train_path = "bees_v_ant/train"
validation_path = "bees_v_ant/validation"
test_path = '.'
test_gen = ImageDataGenerator(rescale=1./255)
valid_gen = ImageDataGenerator(rescale=1./255)
train_gen = ImageDataGenerator(rescale=1./255)
test_data = test_gen.flow_from_directory(
test_path,
target_size=(224, 224),
classes=["test"]
)
train_data = train_gen.flow_from_directory(
train_path,
target_size=(224, 224),
classes=["ant", 'bee'],
class_mode='categorical',
batch_size=8,
)
valid_data = valid_gen.flow_from_directory(
validation_path,
target_size=(224, 224),
classes=["ant", 'bee'],
class_mode='categorical',
batch_size=8,
)
test_data[0]
```
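Note that the generators above only rescale the images; no actual augmentation is applied. If augmentation were wanted, `ImageDataGenerator` accepts transform arguments such as the following (illustrative values; this cell is not part of the original notebook):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Example augmentation settings: random rotations, shifts, and flips.
augmented_train_gen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=20,       # rotate up to 20 degrees either way
    width_shift_range=0.1,   # shift horizontally up to 10% of the width
    height_shift_range=0.1,  # shift vertically up to 10% of the height
    horizontal_flip=True,    # randomly mirror images left-right
)
print(augmented_train_gen.rotation_range)
```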
> Select the `base` model: `VGG16`, trained on ImageNet with 1000 [class names](https://image-net.org/challenges/LSVRC/2014/browse-synsets)
```
vgg_model = tf.keras.applications.vgg16.VGG16()
print(type(vgg_model)) # Functional Model
vgg_model.summary()
```
### `VGG16` model architecture
<p align="center">
<img src="https://miro.medium.com/max/237/1*Z5jNPTu8Xexp9rRs7RNKbA.png"/>
</p>
It can be plotted using the `plot_model` function from keras as follows:
```python
from keras.applications.vgg16 import VGG16
from keras.utils import plot_model
model = VGG16()
plot_model(model)
```
> Create a `Sequential` model instance
```
model = tf.keras.Sequential()
```
> Loop through all the `base` model layers and add them to the newly created `model`, except for the output layer.
```
for layer in vgg_model.layers[0:-1]:
    model.add(layer)
model.summary()
```
> Set `trainable=False` for all the model layers; this is because we don't want to train them again.
```
for layer in model.layers:
    layer.trainable = False
model.summary()
```
> Add the final `Dense` classification layer with the number of `classes` that we have.
```
output_layer = tf.keras.layers.Dense(2, activation='softmax')
model.add(output_layer)
model.summary()
```
> Compile the ``model``.
```
model.compile(
loss = tf.keras.losses.categorical_crossentropy,
optimizer = tf.keras.optimizers.Adam(),
metrics = ['acc']
)
```
> Train the model with your own data by calling `model.fit()`
```
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_loss',
patience=2,
verbose=0,
)
history = model.fit(
train_data,
epochs = 10,
batch_size = 8,
validation_data = valid_data,
verbose = 1,
callbacks=[early_stopping]
)
```
> Plotting the model `history`.
```
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
pd.DataFrame(history.history).plot(title="Model History", xlabel="epochs")
plt.show()
```
> Evaluating the `model`.
```
model.evaluate(test_data, verbose=1)
```
> Making ``predictions``.
```
predictions = tf.argmax(model.predict(test_data), axis=1).numpy()
predictions
class_names = np.array(["bee", "ant"])
images = [image for image in test_data[0][0]]
def plot_predictions_images(images_and_classes, labels_pred, cols=5):
rows = 3
fig = plt.figure()
fig.set_size_inches(cols * 2, rows * 2)
for i, (image, label_pred) in enumerate(zip(images_and_classes, labels_pred)):
plt.subplot(rows, cols, i + 1)
plt.axis('off')
plt.imshow(image)
plt.title(class_names[label_pred], color ='g', fontsize=16 )
plot_predictions_images(images[:], predictions[:])
```
# Word2Vec
**Learning Objectives**
1. Learn how to build a Word2Vec model
2. Prepare training data for Word2Vec
3. Train a Word2Vec model. In this lab we will build a skip-gram model
4. Learn how to visualize embeddings and analyze them using the Embedding Projector
## Introduction
Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and
[Distributed
Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
* **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
* **Continuous Skip-gram Model** which predicts words within a certain range before and after the current word in the same sentence.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).
## Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`.
Consider the following sentence of 8 words.
> The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered `context word`. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this lab, a window size of *n* implies n words on each side with a total window span of 2*n+1 words across a word.

The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>*, the objective can be written as the average log probability

where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.

where *v* and *v<sup>'</sup>* are the target and context vector representations of words and *W* is the vocabulary size. These vector representations are the model parameters, which are updated by gradient descent.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which is often large (10<sup>5</sup>-10<sup>7</sup> terms).
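To make that cost concrete, here is a toy numeric sketch (tiny made-up vectors and a hypothetical 4-word vocabulary, not part of the lab code) of the softmax formulation; note that the denominator sums over every word in the vocabulary:

```python
import math

# Toy target vector and a tiny hypothetical "vocabulary" of context vectors.
v_target = [0.2, -0.1, 0.4]
context_vectors = {
    "wide": [0.3, 0.1, 0.2],
    "road": [-0.2, 0.4, 0.1],
    "hot":  [0.1, 0.0, -0.3],
    "sun":  [0.5, -0.2, 0.2],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax_prob(context_word):
    # The denominator iterates over the ENTIRE vocabulary. For real
    # vocabularies (1e5 to 1e7 words) this is the expensive part that
    # negative sampling avoids.
    denom = sum(math.exp(dot(v, v_target)) for v in context_vectors.values())
    return math.exp(dot(context_vectors[context_word], v_target)) / denom

probs = {w: softmax_prob(w) for w in context_vectors}
print(probs)
```

With only four words the denominator is cheap, but its cost grows linearly with vocabulary size, which motivates the approximations discussed next.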
The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from noise distribution *P<sub>n</sub>(w)* of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples.
A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the `window_size` neighborhood of the target_word. For the example sentence, these are a few potential negative samples (when `window_size` is 2).
```
(hot, shimmered)
(wide, hot)
(wide, sun)
```
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
## Setup
```
import io
import itertools
import os
import re
import string
import numpy as np
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (
Activation,
Dense,
Dot,
Embedding,
Flatten,
GlobalAveragePooling1D,
Reshape,
)
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
```
Please check your tensorflow version using the cell below.
```
# Show the currently installed version of TensorFlow you should be using TF 2.6
print("TensorFlow version: ", tf.version.VERSION)
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
OUTDIR = f"gs://{BUCKET}/text_models"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%env OUTDIR=$OUTDIR
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
```
### Vectorize an example sentence
Consider the following sentence:
`The wide road shimmered in the hot sun.`
Tokenize the sentence:
```
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
```
Create a vocabulary to save mappings from tokens to integer indices.
```
vocab, index = {}, 1 # start indexing from 1
vocab["<pad>"] = 0 # add a padding token
for token in tokens:
if token not in vocab:
vocab[token] = index
index += 1
vocab_size = len(vocab)
print(vocab)
```
Create an inverse vocabulary to save mappings from integer indices to tokens.
```
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
```
Vectorize your sentence.
```
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
```
### Generate skip-grams from one sentence
The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.
Note: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
```
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0,
)
print(len(positive_skip_grams))
```
Take a look at a few positive skip-grams.
```
for target, context in positive_skip_grams[:5]:
print(
f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})"
)
```
### Negative sampling for one skip-gram
The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets.
```
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
    range_max=vocab_size,  # pick index of the samples from [0, vocab_size)
seed=SEED, # seed for reproducibility
name="negative_sampling", # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
```
### Construct one training example
For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word.
```
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0] * num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
```
Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
```
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
```
A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`.
```
print("target :", target)
print("context :", context)
print("label :", label)
```
### Summary
This picture summarizes the procedure for generating a training example from a sentence.

## Lab Task 1
### Skip-gram Sampling table
A larger dataset means a larger vocabulary, with a higher number of frequent words such as stopwords. Training examples obtained by sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency-rank-based probabilistic sampling table and pass it to the `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10.
```
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
```
`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling.
Key point: The `tf.random.log_uniform_candidate_sampler` already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using this distribution-weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
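For intuition, the subsampling heuristic from Mikolov et al. (which `make_sampling_table` approximates under a Zipf assumption) keeps a word with a probability that shrinks as its corpus frequency grows. Below is a small sketch using the keep-probability formula from the original word2vec implementation, with made-up corpus fractions:

```python
import math

def keep_probability(word_fraction, t=1e-3):
    # word2vec subsampling: P(keep) = (sqrt(z/t) + 1) * (t/z),
    # where z is the word's fraction of the corpus and t a threshold.
    z = word_fraction
    return (math.sqrt(z / t) + 1) * (t / z)

# Hypothetical corpus fractions: "the" is very common, "shimmered" is rare.
for word, z in [("the", 0.05), ("road", 0.002), ("shimmered", 0.0001)]:
    # Frequent words get a keep-probability well below 1;
    # rare words are effectively always kept (capped at 1).
    print(word, min(1.0, keep_probability(z)))
```

Frequent words like `the` are sampled far less often, while rare words are retained, which is exactly the effect the sampling table encodes by frequency rank.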
### Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
```
"""
Generates skip-gram pairs with negative sampling for a list of sequences
(int-encoded sentences) based on window size, number of negative samples
and vocabulary size.
"""
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
# Elements of each training example are appended to these lists.
targets, contexts, labels = [], [], []
# Build the sampling table for vocab_size tokens.
# TODO 1a
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(
vocab_size
)
# Iterate over all sequences (sentences) in dataset.
for sequence in tqdm.tqdm(sequences):
# Generate positive skip-gram pairs for a sequence (sentence).
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
sequence,
vocabulary_size=vocab_size,
sampling_table=sampling_table,
window_size=window_size,
negative_samples=0,
)
# Iterate over each positive skip-gram pair to produce training examples
# with positive context word and negative samples.
# TODO 1b
for target_word, context_word in positive_skip_grams:
context_class = tf.expand_dims(
tf.constant([context_word], dtype="int64"), 1
)
(
negative_sampling_candidates,
_,
_,
) = tf.random.log_uniform_candidate_sampler(
true_classes=context_class,
num_true=1,
num_sampled=num_ns,
unique=True,
range_max=vocab_size,
seed=SEED,
name="negative_sampling",
)
# Build context and label vectors (for one target word)
negative_sampling_candidates = tf.expand_dims(
negative_sampling_candidates, 1
)
context = tf.concat(
[context_class, negative_sampling_candidates], 0
)
label = tf.constant([1] + [0] * num_ns, dtype="int64")
# Append each element from the training example to global lists.
targets.append(target_word)
contexts.append(context)
labels.append(label)
return targets, contexts, labels
```
## Lab Task 2: Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
### Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file(
"shakespeare.txt",
"https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt",
)
```
Read text from the file and take a look at the first few lines.
```
with open(path_to_file) as f:
lines = f.read().splitlines()
for line in lines[:20]:
print(line)
```
Use the non-empty lines to construct a `tf.data.TextLineDataset` object for the next steps.
```
# TODO 2a
text_ds = tf.data.TextLineDataset(path_to_file).filter(
lambda x: tf.cast(tf.strings.length(x), bool)
)
```
### Vectorize sentences from the corpus
You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a `custom_standardization` function that can be used in the `TextVectorization` layer.
```
"""
We create a custom standardization function to lowercase the text and
remove punctuation.
"""
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
return tf.strings.regex_replace(
lowercase, "[%s]" % re.escape(string.punctuation), ""
)
"""
Define the vocabulary size and number of words in a sequence.
"""
vocab_size = 4096
sequence_length = 10
"""
Use the text vectorization layer to normalize, split, and map strings to
integers. Set output_sequence_length to pad all samples to the same length.
"""
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode="int",
output_sequence_length=sequence_length,
)
```
Call `adapt` on the text dataset to create vocabulary.
```
vectorize_layer.adapt(text_ds.batch(1024))
```
Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
```
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
```
The `vectorize_layer` can now be used to generate vectors for each element in the `text_ds`.
```
def vectorize_text(text):
text = tf.expand_dims(text, -1)
return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = (
text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
)
```
### Obtain sequences from the dataset
You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`.
```
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
```
Take a look at few examples from `sequences`.
```
for seq in sequences[:5]:
print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
```
### Generate training examples from sequences
`sequences` is now a list of int-encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of `targets`, `contexts`, and `labels` should be the same, representing the total number of training examples.
```
targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED,
)
print(len(targets), len(contexts), len(labels))
```
### Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model!
```
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
```
Add `cache()` and `prefetch()` to improve performance.
```
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
```
## Lab Task 3: Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.
### Subclassed Word2Vec Model
Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:
* `target_embedding`: A `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are `(vocab_size * embedding_dim)`.
* `context_embedding`: Another `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.
* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.
* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of `dots` layer into logits.
With the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs which can then be passed into their corresponding embedding layer. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result.
Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
```
class Word2Vec(Model):
def __init__(self, vocab_size, embedding_dim):
super().__init__()
self.target_embedding = Embedding(
vocab_size,
embedding_dim,
input_length=1,
name="w2v_embedding",
)
self.context_embedding = Embedding(
vocab_size, embedding_dim, input_length=num_ns + 1
)
self.dots = Dot(axes=(3, 2))
self.flatten = Flatten()
def call(self, pair):
target, context = pair
we = self.target_embedding(target)
ce = self.context_embedding(context)
dots = self.dots([ce, we])
return self.flatten(dots)
```
### Define loss function and compile model
For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
``` python
def custom_loss(x_logit, y_true):
return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
```
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer.
```
# TODO 3a
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(
optimizer="adam",
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
```
Also define a callback to log training statistics for tensorboard.
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
```
Train the model with `dataset` prepared above for some number of epochs.
```
dataset
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
```
### Visualize training on Tensorboard
To visualize how the model trained, we can use TensorBoard to show the Word2Vec model's accuracy and loss. To do that, we first have to copy the logs from the local directory to a GCS (Cloud Storage) folder.
```
def copy_tensorboard_logs(local_path: str, gcs_path: str):
"""Copies Tensorboard logs from a local dir to a GCS location.
After training, batch copy Tensorboard logs locally to a GCS location.
Args:
local_path: local filesystem directory uri.
gcs_path: cloud filesystem directory uri.
Returns:
None.
"""
pattern = f"{local_path}/*/events.out.tfevents.*"
local_files = tf.io.gfile.glob(pattern)
gcs_log_files = [
local_file.replace(local_path, gcs_path) for local_file in local_files
]
for local_file, gcs_file in zip(local_files, gcs_log_files):
tf.io.gfile.copy(local_file, gcs_file)
copy_tensorboard_logs("./logs", OUTDIR + "/word2vec_logs")
```
To visualize the training logs, open Cloud Shell and run the following command, replacing `OUTDIR` with the `gs://` output path defined earlier:
`tensorboard --port=8081 --logdir OUTDIR/word2vec_logs`
In Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open the TensorBoard.

## Lab Task 4: Embedding lookup and analysis
Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
```
# TODO 4a
weights = word2vec.get_layer("w2v_embedding").get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
```
Create and save the vectors and metadata file.
```
out_v = open("text_models/vectors.tsv", "w", encoding="utf-8")
out_m = open("text_models/metadata.tsv", "w", encoding="utf-8")
for index, word in enumerate(vocab):
if index == 0:
continue # skip 0, it's padding.
vec = weights[index]
out_v.write("\t".join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
```
Download the `vectors.tsv` and `metadata.tsv` to your local machine and then open [Embedding Projector](https://projector.tensorflow.org/). Here you will have the option to upload the two files you have downloaded and visualize the embeddings.
# JAX Quickstart
[](https://colab.research.google.com/github/google/jax/blob/main/docs/notebooks/quickstart.ipynb)
**JAX is NumPy on the CPU, GPU, and TPU, with great automatic differentiation for high-performance machine learning research.**
With its updated version of [Autograd](https://github.com/hips/autograd), JAX
can automatically differentiate native Python and NumPy code. It can
differentiate through a large subset of Python’s features, including loops, ifs,
recursion, and closures, and it can even take derivatives of derivatives of
derivatives. It supports reverse-mode as well as forward-mode differentiation, and the two can be composed arbitrarily
to any order.
What’s new is that JAX uses
[XLA](https://www.tensorflow.org/xla)
to compile and run your NumPy code on accelerators, like GPUs and TPUs.
Compilation happens under the hood by default, with library calls getting
just-in-time compiled and executed. But JAX even lets you just-in-time compile
your own Python functions into XLA-optimized kernels using a one-function API.
Compilation and automatic differentiation can be composed arbitrarily, so you
can express sophisticated algorithms and get maximal performance without having
to leave Python.
```
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
# Prevent GPU/TPU warning.
import jax; jax.config.update('jax_platform_name', 'cpu')
```
## Multiplying Matrices
We'll be generating random data in the following examples. One big difference between NumPy and JAX is how you generate random numbers. For more details, see [Common Gotchas in JAX].
[Common Gotchas in JAX]: https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#%F0%9F%94%AA-Random-Numbers
```
key = random.PRNGKey(0)
x = random.normal(key, (10,))
print(x)
```
Let's dive right in and multiply two big matrices.
```
size = 3000
x = random.normal(key, (size, size), dtype=jnp.float32)
%timeit jnp.dot(x, x.T).block_until_ready() # runs on the GPU
```
We added that `block_until_ready` because JAX uses asynchronous execution by default (see {ref}`async-dispatch`).
JAX NumPy functions work on regular NumPy arrays.
```
import numpy as np
x = np.random.normal(size=(size, size)).astype(np.float32)
%timeit jnp.dot(x, x.T).block_until_ready()
```
That's slower because it has to transfer data to the GPU every time. You can ensure that an NDArray is backed by device memory using {func}`~jax.device_put`.
```
from jax import device_put
x = np.random.normal(size=(size, size)).astype(np.float32)
x = device_put(x)
%timeit jnp.dot(x, x.T).block_until_ready()
```
The output of {func}`~jax.device_put` still acts like an NDArray, but it only copies values back to the CPU when they're needed for printing, plotting, saving to disk, branching, etc. The behavior of {func}`~jax.device_put` is equivalent to the function `jit(lambda x: x)`, but it's faster.
If you have a GPU (or TPU!) these calls run on the accelerator and have the potential to be much faster than on CPU.
```
x = np.random.normal(size=(size, size)).astype(np.float32)
%timeit np.dot(x, x.T)
```
JAX is much more than just a GPU-backed NumPy. It also comes with a few program transformations that are useful when writing numerical code. For now, there are three main ones:
- {func}`~jax.jit`, for speeding up your code
- {func}`~jax.grad`, for taking derivatives
- {func}`~jax.vmap`, for automatic vectorization or batching.
Let's go over these, one-by-one. We'll also end up composing these in interesting ways.
## Using {func}`~jax.jit` to speed up functions
JAX runs transparently on the GPU (or CPU, if you don't have one, and TPU coming soon!). However, in the above example, JAX is dispatching kernels to the GPU one operation at a time. If we have a sequence of operations, we can use the `@jit` decorator to compile multiple operations together using [XLA](https://www.tensorflow.org/xla). Let's try that.
```
def selu(x, alpha=1.67, lmbda=1.05):
return lmbda * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)
x = random.normal(key, (1000000,))
%timeit selu(x).block_until_ready()
```
We can speed it up with `@jit`, which will jit-compile the first time `selu` is called and will be cached thereafter.
```
selu_jit = jit(selu)
%timeit selu_jit(x).block_until_ready()
```
## Taking derivatives with {func}`~jax.grad`
In addition to evaluating numerical functions, we also want to transform them. One transformation is [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation). In JAX, just like in [Autograd](https://github.com/HIPS/autograd), you can compute gradients with the {func}`~jax.grad` function.
```
def sum_logistic(x):
return jnp.sum(1.0 / (1.0 + jnp.exp(-x)))
x_small = jnp.arange(3.)
derivative_fn = grad(sum_logistic)
print(derivative_fn(x_small))
```
Let's verify with finite differences that our result is correct.
```
def first_finite_differences(f, x):
eps = 1e-3
return jnp.array([(f(x + eps * v) - f(x - eps * v)) / (2 * eps)
for v in jnp.eye(len(x))])
print(first_finite_differences(sum_logistic, x_small))
```
Taking derivatives is as easy as calling {func}`~jax.grad`. {func}`~jax.grad` and {func}`~jax.jit` compose and can be mixed arbitrarily. In the above example we jitted `sum_logistic` and then took its derivative. We can go further:
```
print(grad(jit(grad(jit(grad(sum_logistic)))))(1.0))
```
For more advanced autodiff, you can use {func}`jax.vjp` for reverse-mode vector-Jacobian products and {func}`jax.jvp` for forward-mode Jacobian-vector products. The two can be composed arbitrarily with one another, and with other JAX transformations. Here's one way to compose them to make a function that efficiently computes full Hessian matrices:
```
from jax import jacfwd, jacrev
def hessian(fun):
return jit(jacfwd(jacrev(fun)))
```
## Auto-vectorization with {func}`~jax.vmap`
JAX has one more transformation in its API that you might find useful: {func}`~jax.vmap`, the vectorizing map. It has the familiar semantics of mapping a function along array axes, but instead of keeping the loop on the outside, it pushes the loop down into a function’s primitive operations for better performance. When composed with {func}`~jax.jit`, it can be just as fast as adding the batch dimensions by hand.
We're going to work with a simple example, and promote matrix-vector products into matrix-matrix products using {func}`~jax.vmap`. Although this is easy to do by hand in this specific case, the same technique can apply to more complicated functions.
```
mat = random.normal(key, (150, 100))
batched_x = random.normal(key, (10, 100))
def apply_matrix(v):
return jnp.dot(mat, v)
```
Given a function such as `apply_matrix`, we can loop over a batch dimension in Python, but usually the performance of doing so is poor.
```
def naively_batched_apply_matrix(v_batched):
return jnp.stack([apply_matrix(v) for v in v_batched])
print('Naively batched')
%timeit naively_batched_apply_matrix(batched_x).block_until_ready()
```
We know how to batch this operation manually. In this case, `jnp.dot` handles extra batch dimensions transparently.
```
@jit
def batched_apply_matrix(v_batched):
return jnp.dot(v_batched, mat.T)
print('Manually batched')
%timeit batched_apply_matrix(batched_x).block_until_ready()
```
However, suppose we had a more complicated function without batching support. We can use {func}`~jax.vmap` to add batching support automatically.
```
@jit
def vmap_batched_apply_matrix(v_batched):
return vmap(apply_matrix)(v_batched)
print('Auto-vectorized with vmap')
%timeit vmap_batched_apply_matrix(batched_x).block_until_ready()
```
Of course, {func}`~jax.vmap` can be arbitrarily composed with {func}`~jax.jit`, {func}`~jax.grad`, and any other JAX transformation.
This is just a taste of what JAX can do. We're really excited to see what you do with it!
# Westeros Tutorial - Introducing soft constraints
In the baseline tutorial, we added dynamic constraints on activity via the parameter `growth_activity_up` for the electricity generation technologies. As a result, when we added an emission tax, `wind_ppl` was scaled up at the maximum rate of 10% annually in the last period.
In this tutorial, we are going to explore how to provide additional flexibility to the dynamic growth constraints through so-called [`soft constraints`](https://docs.messageix.org/en/stable/model/MESSAGE/parameter_def.html?highlight=soft%20constraint#dynamic-constraints-on-new-capacity-and-activity). Soft constraints can be configured to relax both activity- and capacity-related dynamic constraints: for a certain cost, additional annual growth can be realized. The cost can be absolute or defined as a share of the levelized cost (see details [here](https://docs.messageix.org/en/stable/model/MESSAGE/parameter_def.html?highlight=soft%20constraint#cost-parameters-for-soft-relaxations-of-dynamic-constraints)).
Providing this additional flexibility to dynamic constraints can be useful for assessing mitigation pathways. Without it, certain emission targets may not be achievable; with it, investments, i.e. technology diffusion, can be realized faster at higher cost. This can provide interesting insights into where additional subsidies or policies could help pursue more ambitious targets.
Further information can be found in https://doi.org/10.1016/j.energy.2010.01.019 (*Keppo and Strubegger, 2010*).
**Pre-requisites**
- You have the *MESSAGEix* framework installed and working
- You have run Westeros tutorial on emission taxes (``westeros_emissions_taxes.ipynb``) and solved it successfully
```
import pandas as pd
import ixmp
import message_ix
from message_ix.utils import make_df
%matplotlib inline
mp = ixmp.Platform()
```
## Load existing and clone to new scenario
We load the existing scenario '*carbon_tax*' and clone it to a new scenario '*carbon_tax_soft_constraints*' to which we will add soft constraints for the upper dynamic growth constraint.
```
model = 'Westeros Electrified'
base = message_ix.Scenario(mp, model=model, scenario='carbon_tax')
scen = base.clone(model, 'carbon_tax_soft_constraints','adding_soft_constraints',
keep_solution=False)
scen.check_out()
```
## Retrieve parameters
We will retrieve the sets and values needed for the subsequent parameter additions.
```
model_horizon = base.set('year')
country = 'Westeros'
```
## Add soft activity up for `wind_ppl`
Recall that when setting up the Westeros baseline scenario, we added the parameter `growth_activity_up` for `wind_ppl` with a value of 10%. The growth rate is an annual value and the periods in this example last 10 years, so within a single period activity can grow to a maximum of about 259% of its previous-period level ((1 + 0.10)^10 ≈ 2.59).
We will now add a soft constraint for the year 720, allowing additional growth of up to 1% per year. Activity can then grow to a maximum of about 259% + 10% of the previous-period level, since (1 + 0.01)^10 − 1 ≈ 0.10. The costs will be defined in relative terms, i.e. relative to the [levelized cost](https://docs.messageix.org/en/stable/model/MESSAGE/scaling_investment_costs.html?highlight=levelized%20cost#levelized-capital-costs). By specifying `level_cost_activity_soft_up` as 1%, we define that, per unit of activity from the previous period used to determine the allowable additional activity, 1% of the `levelized_cost` is applied (see [link](https://docs.messageix.org/en/stable/model/MESSAGE/model_core.html#objective-function) for more information).
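The arithmetic behind these numbers can be checked directly in plain Python (values taken from the text above):

```python
# Hard dynamic growth constraint: 10% per year over a 10-year period.
hard_growth = (1 + 0.10) ** 10
print(round(hard_growth, 3))  # 2.594, i.e. activity can reach ~259% of the previous period

# Soft constraint: an extra 1% per year over the same period.
soft_extra = (1 + 0.01) ** 10 - 1
print(round(soft_extra, 3))   # 0.105, i.e. ~10 additional percentage points
```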
### Define additional annual growth.
We will allow `wind_ppl` to grow an additional 1% annually in the year 720.
```
df = pd.DataFrame({
'node_loc': country,
'technology': 'wind_ppl',
'year_act': model_horizon,
'time': 'year',
'value': [.00, .0, .0, .01],
'unit': '-'})
scen.add_par('soft_activity_up', df)
```
### Define costs for additional growth.
As previously explained, the relative cost of increasing the activity of `wind_ppl` will be set to 1% of the `levelized_cost` of `wind_ppl`.
```
df = pd.DataFrame({
'node_loc': country,
'technology': 'wind_ppl',
'year_act': model_horizon,
'time': 'year',
    'value': .01,
'unit': '-'})
scen.add_par('level_cost_activity_soft_up', df)
```
## Commit and solve
```
scen.commit('soft constraints added for wind_ppl')
scen.set_as_default()
scen.solve()
```
## Plotting Results
```
from message_ix.reporting import Reporter
from message_ix.util.tutorial import prepare_plots
rep_base = Reporter.from_scenario(base)
prepare_plots(rep_base)
rep_scen = Reporter.from_scenario(scen)
prepare_plots(rep_scen)
```
### Activity
***
When comparing with the results of the `carbon_tax` scenario, `coal_ppl` still contributed to the electricity generation mix in 710.
```
rep_base.set_filters(t=["coal_ppl", "wind_ppl"])
rep_base.get("plot activity")
```
With the additional growth permitted in 720, `coal_ppl` is now completely phased out.
```
rep_scen.set_filters(t=["coal_ppl", "wind_ppl"])
rep_scen.get("plot activity")
```
In the figure below, the dark-blue bars represent the maximum activity for each period. This is calculated from the activity of the preceding period, accounting for the annual growth of 10% (`growth_activity_up`). The orange bar shows the additional activity allowed by the soft constraint added for `wind_ppl` in 720.
The lines compare the results (`var('ACT')`) for `wind_ppl` of the carbon-tax scenario without (gold line) and with (grey line) soft constraints. In the scenario with soft constraints, `wind_ppl` already has increased activity in 710, so that full use can be made of the relaxation provided by the soft constraints in 720.
<img src='_static/soft-constraint.PNG' width='600'>
<a href="https://colab.research.google.com/github/vitutorial/exercises/blob/master/LatentFactorModel/LatentFactorModel-Solutions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%matplotlib inline
import os
import re
import urllib.request
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import itertools
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
```
In this notebook you will work with a deep generative language model that maps words from a discrete (bit-vector-valued) latent space. We will use Spanish text data (working at the character level) and PyTorch.
The first section concerns data manipulation and data loading classes necessary for our implementation. You do not need to modify anything in this part of the code.
Let's first download the SIGMORPHON dataset that we will be using for this notebook: these are inflected Spanish words together with some morphosyntactic descriptors. For this notebook we will ignore the morphosyntactic descriptors.
```
url = "https://raw.githubusercontent.com/ryancotterell/sigmorphon2016/master/data/"
train_file = "spanish-task1-train"
val_file = "spanish-task1-dev"
test_file = "spanish-task1-test"
print("Downloading data files...")
if not os.path.isfile(train_file):
urllib.request.urlretrieve(url + train_file, filename=train_file)
if not os.path.isfile(val_file):
urllib.request.urlretrieve(url + val_file, filename=val_file)
if not os.path.isfile(test_file):
urllib.request.urlretrieve(url + test_file, filename=test_file)
print("Download complete.")
```
# Data
In order to work with text data, we need to transform the text into something that our algorithms can work with. The first step of this process is converting words into word ids. We do this by constructing a vocabulary from the data, assigning a new word id to each new word it encounters.
```
UNK_TOKEN = "?"
PAD_TOKEN = "_"
SOW_TOKEN = ">"
EOW_TOKEN = "."
def extract_inflected_word(s):
"""
Extracts the inflected words in the SIGMORPHON dataset.
"""
return s.split()[-1]
class Vocabulary:
def __init__(self):
self.idx_to_char = {0: UNK_TOKEN, 1: PAD_TOKEN, 2: SOW_TOKEN, 3: EOW_TOKEN}
self.char_to_idx = {UNK_TOKEN: 0, PAD_TOKEN: 1, SOW_TOKEN: 2, EOW_TOKEN: 3}
self.word_freqs = {}
def __getitem__(self, key):
return self.char_to_idx[key] if key in self.char_to_idx else self.char_to_idx[UNK_TOKEN]
def word(self, idx):
return self.idx_to_char[idx]
def size(self):
return len(self.char_to_idx)
@staticmethod
def from_data(filenames):
"""
Creates a vocabulary from a list of data files. It assumes that the data files have been
tokenized and pre-processed beforehand.
"""
vocab = Vocabulary()
for filename in filenames:
with open(filename) as f:
for line in f:
# Strip whitespace and the newline symbol.
word = extract_inflected_word(line.strip())
# Split the words into characters and assign ids to each
# new character it encounters.
for char in list(word):
if char not in vocab.char_to_idx:
idx = len(vocab.char_to_idx)
vocab.char_to_idx[char] = idx
vocab.idx_to_char[idx] = char
return vocab
# Construct a vocabulary from the training and validation data.
print("Constructing vocabulary...")
vocab = Vocabulary.from_data([train_file, val_file])
print("Constructed a vocabulary of %d types" % vocab.size())
# some examples
print('e', vocab['e'])
print('é', vocab['é'])
print('ș', vocab['ș']) # something UNKNOWN
```
We also need to load the data files into memory. We create a simple class `TextDataset` that stores the data as a list of words:
```
class TextDataset(Dataset):
"""
A simple class that loads a list of words into memory from a text file,
split by newlines. This does not do any memory optimisation,
so if your dataset is very large, you might want to use an alternative
class.
"""
def __init__(self, text_file, max_len=30):
self.data = []
with open(text_file) as f:
for line in f:
word = extract_inflected_word(line.strip())
if len(list(word)) <= max_len:
self.data.append(word)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data[idx]
# Load the training, validation, and test datasets into memory.
train_dataset = TextDataset(train_file)
val_dataset = TextDataset(val_file)
test_dataset = TextDataset(test_file)
# Print some samples from the data:
print("Sample from training data: \"%s\"" % train_dataset[np.random.choice(len(train_dataset))])
print("Sample from validation data: \"%s\"" % val_dataset[np.random.choice(len(val_dataset))])
print("Sample from test data: \"%s\"" % test_dataset[np.random.choice(len(test_dataset))])
```
Now it's time to write a function that converts a word into a list of character ids using the vocabulary we created before. This function is `create_batch` in the code cell below. This function creates a batch from a list of words, and makes sure that each word starts with a start-of-word symbol and ends with an end-of-word symbol. Because not all words are of equal length in a certain batch, words are padded with padding symbols so that they match the length of the largest word in the batch. The function returns an input batch, an output batch, a mask of 1s for words and 0s for padding symbols, and the sequence lengths of each word in the batch. The output batch is shifted by one character, to reflect the predictions that the model is expected to make. For example, for a word
\begin{align}
\text{e s p e s e m o s}
\end{align}
the input sequence is
\begin{align}
\text{SOW e s p e s e m o s}
\end{align}
and the output sequence is
\begin{align}
\text{e s p e s e m o s EOW}
\end{align}
You can see that the output is shifted with respect to the input: we will compute a distribution over the next character given its prefix, which is why the sequence needs to be shifted this way.
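As a toy illustration of the shift (plain Python, independent of the `create_batch` helper below, using the start-of-word symbol `>` and end-of-word symbol `.` from the vocabulary):

```python
# Build input/output sequences for the word "espesemos".
tokens = [">"] + list("espesemos") + ["."]   # SOW + characters + EOW
inputs, outputs = tokens[:-1], tokens[1:]    # the output is shifted by one position
print(inputs)   # ['>', 'e', 's', 'p', 'e', 's', 'e', 'm', 'o', 's']
print(outputs)  # ['e', 's', 'p', 'e', 's', 'e', 'm', 'o', 's', '.']
```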
Lastly, we create an inverse function `batch_to_words` that recovers the list of words from a padded batch of character ids to use during test time.
```
def create_batch(words, vocab, device, word_dropout=0.):
"""
Converts a list of words to a padded batch of word ids. Returns
an input batch, an output batch shifted by one, a sequence mask over
the input batch, and a tensor containing the sequence length of each
batch element.
:param words: a list of words, each a list of token ids
:param vocab: a Vocabulary object for this dataset
:param device:
:param word_dropout: rate at which we omit words from the context (input)
:returns: a batch of padded inputs, a batch of padded outputs, mask, lengths
"""
    tok = [[SOW_TOKEN] + list(w) + [EOW_TOKEN] for w in words]
seq_lengths = [len(w)-1 for w in tok]
max_len = max(seq_lengths)
pad_id = vocab[PAD_TOKEN]
pad_id_input = [
[vocab[w[t]] if t < seq_lengths[idx] else pad_id for t in range(max_len)]
for idx, w in enumerate(tok)]
# Replace words of the input with <unk> with p = word_dropout.
if word_dropout > 0.:
unk_id = vocab[UNK_TOKEN]
        word_drop = [
            [unk_id if (np.random.random() < word_dropout and t < seq_lengths[idx]) else word_ids[t] for t in range(max_len)]
            for idx, word_ids in enumerate(pad_id_input)]
        # Use the dropped-out version as the model input.
        pad_id_input = word_drop
# The output batch is shifted by 1.
pad_id_output = [
[vocab[w[t+1]] if t < seq_lengths[idx] else pad_id for t in range(max_len)]
for idx, w in enumerate(tok)]
# Convert everything to PyTorch tensors.
batch_input = torch.tensor(pad_id_input)
batch_output = torch.tensor(pad_id_output)
seq_mask = (batch_input != vocab[PAD_TOKEN])
seq_length = torch.tensor(seq_lengths)
# Move all tensors to the given device.
batch_input = batch_input.to(device)
batch_output = batch_output.to(device)
seq_mask = seq_mask.to(device)
seq_length = seq_length.to(device)
return batch_input, batch_output, seq_mask, seq_length
def batch_to_words(tensors, vocab: Vocabulary):
"""
Converts a batch of word ids back to words.
:param tensors: [B, T] word ids
:param vocab: a Vocabulary object for this dataset
:returns: an array of strings (each a word).
"""
words = []
batch_size = tensors.size(0)
for idx in range(batch_size):
word = [vocab.word(t.item()) for t in tensors[idx,:]]
# Filter out the start-of-word and padding tokens.
word = list(filter(lambda t: t != PAD_TOKEN and t != SOW_TOKEN, word))
# Remove the end-of-word token and all tokens following it.
if EOW_TOKEN in word:
word = word[:word.index(EOW_TOKEN)]
words.append("".join(word))
return np.array(words)
```
In PyTorch the RNN functions expect inputs to be sorted from long words to shorter ones. Therefore we create a simple wrapper class for the DataLoader class that sorts words from long to short:
```
class SortingTextDataLoader:
"""
A wrapper for the DataLoader class that sorts a list of words by their
lengths in descending order.
"""
def __init__(self, dataloader):
self.dataloader = dataloader
self.it = iter(dataloader)
def __iter__(self):
return self
def __next__(self):
words = None
for s in self.it:
words = s
break
if words is None:
self.it = iter(self.dataloader)
raise StopIteration
words = np.array(words)
sort_keys = sorted(range(len(words)),
key=lambda idx: len(list(words[idx])),
reverse=True)
sorted_words = words[sort_keys]
return sorted_words
```
# Model
## Deterministic language model
In language modelling, we model a word $x = \langle x_1, \ldots, x_n \rangle$ of length $n = |x|$ as a sequence of categorical draws:
\begin{align}
X_i|x_{<i} & \sim \text{Cat}(f(x_{<i}; \theta))
& i = 1, \ldots, n \\
\end{align}
where we use $x_{<i}$ to denote a (possibly empty) prefix string, and thus the model makes no Markov assumption. We map from the conditioning context, the prefix $x_{<i}$, to the categorical parameters (a $v$-dimensional probability vector, where $v$ denotes the size of the vocabulary, in this case, the size of the character set) using a fixed neural network architecture whose parameters we collectively denote by $\theta$.
This assigns the following likelihood to the word
\begin{align}
P(x|\theta) &= \prod_{i=1}^n P(x_i|x_{<i}, \theta) \\
&= \prod_{i=1}^n \text{Cat}(x_i|f(x_{<i}; \theta))
\end{align}
where the categorical pmf is $\text{Cat}(k|\pi) = \prod_{j=1}^v \pi_j^{[k=j]} = \pi_k$.
Suppose we have a dataset $\mathcal D = \{x^{(1)}, \ldots, x^{(N)}\}$ containing $N$ i.i.d. observations. Then we can use the log-likelihood function
\begin{align}
\mathcal L(\theta|\mathcal D) &= \sum_{k=1}^{N} \log P(x^{(k)}| \theta) \\
&= \sum_{k=1}^{N} \sum_{i=1}^{|x^{(k)}|} \log \text{Cat}(x^{(k)}_i|f(x^{(k)}_{<i}; \theta))
\end{align}
to estimate $\theta$ by maximisation:
\begin{align}
\theta^\star = \arg\max_{\theta \in \Theta} \mathcal L(\theta|\mathcal D) ~ .
\end{align}
We can use stochastic gradient-ascent to find a local optimum of $\mathcal L(\theta|\mathcal D)$, which only requires a gradient estimate:
\begin{align}
\nabla_\theta \mathcal L(\theta|\mathcal D) &= \sum_{k=1}^{|\mathcal D|} \nabla_\theta \log P(x^{(k)}|\theta) \\
&= \sum_{k=1}^{|\mathcal D|} \frac{1}{N} N \nabla_\theta \log P(x^{(k)}| \theta) \\
&= \mathbb E_{\mathcal U(1/N)} \left[ N \nabla_\theta \log P(x^{(K)}| \theta) \right] \\
&\overset{\text{MC}}{\approx} \frac{N}{M} \sum_{m=1}^M \nabla_\theta \log P(x^{(k_m)}|\theta) \\
&\text{where }K_m \sim \mathcal U(1/N)
\end{align}
This is a Monte Carlo (MC) estimate of the gradient computed on $M$ data points selected uniformly at random from $\mathcal D$.
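The unbiasedness of this minibatch estimator can be illustrated numerically. In this sketch (not part of the original notebook), scalar values stand in for per-datapoint gradient terms:

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(size=1000)   # stand-ins for per-datapoint gradient terms
full_sum = values.sum()          # the exact "full-batch" quantity

# Minibatch estimate: N/M times the sum over M uniformly sampled points.
N, M = len(values), 64
estimates = [N / M * rng.choice(values, size=M).sum() for _ in range(2000)]
print(full_sum, np.mean(estimates))  # the average estimate is close to the full sum
```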
For as long as $f$ remains differentiable wrt its inputs and parameters, we can rely on automatic differentiation to obtain gradient estimates.
An example design for $f$ is:
\begin{align}
\mathbf x_i &= \text{emb}(x_i; \theta_{\text{emb}}) \\
\mathbf h_0 &= \mathbf 0 \\
\mathbf h_i &= \text{rnn}(\mathbf h_{i-1}, \mathbf x_{i-1}; \theta_{\text{rnn}}) \\
f(x_{<i}; \theta) &= \text{softmax}(\text{dense}_v(\mathbf h_{i}; \theta_{\text{out}}))
\end{align}
where
* $\text{emb}$ is a fixed embedding layer with parameters $\theta_{\text{emb}}$;
* $\text{rnn}$ is a recurrent architecture with parameters $\theta_{\text{rnn}}$, e.g. an LSTM or GRU, and $\mathbf h_0$ is part of the architecture's parameters;
* $\text{dense}_v$ is a dense layer with $v$ outputs (vocabulary size) and parameters $\theta_{\text{out}}$.
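A minimal PyTorch sketch of this design (the sizes here are illustrative assumptions, not values from the notebook):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharLM(nn.Module):
    def __init__(self, vocab_size=30, emb_size=16, hidden_size=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size)               # emb(.; θ_emb)
        self.rnn = nn.GRU(emb_size, hidden_size, batch_first=True)  # rnn(.; θ_rnn), h0 = 0
        self.out = nn.Linear(hidden_size, vocab_size)               # dense_v(.; θ_out)

    def forward(self, x):
        # x: [B, T] token ids; returns [B, T, v] next-character distributions
        h, _ = self.rnn(self.emb(x))
        return F.softmax(self.out(h), dim=-1)

lm = CharLM()
probs = lm(torch.randint(0, 30, (4, 7)))  # batch of 4 prefixes of length 7
print(probs.shape)                        # torch.Size([4, 7, 30])
```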
In what follows we show how to extend this model with a continuous latent word embedding.
## Deep generative language model
We want to model a word $x$ as a draw from the marginal of a deep generative model $P(z, x|\theta, \alpha) = P(z|\alpha)P(x|z, \theta)$.
### Generative model
The generative story is:
\begin{align}
Z_k & \sim \text{Bernoulli}(\alpha_k) & k=1,\ldots, K \\
X_i | z, x_{<i} &\sim \text{Cat}(f(z, x_{<i}; \theta)) & i=1, \ldots, n
\end{align}
where $z \in \mathbb R^K$ and we impose a product of independent Bernoulli distributions prior. Other choices of prior can induce interesting properties in latent space, for example, the Bernoullis could be correlated, however, in this notebook, we use independent distributions.
**About the prior parameter** The parameter of the $k$th Bernoulli distribution is the probability that the $k$th bit in $z$ is set to $1$, and therefore, if we have reasons to believe some bits are more frequent than others (for example, because we expect some bits to capture verb attributes and others to capture noun attributes, and we know nouns are more frequent than verbs) we may be able to have a good guess at $\alpha_k$ for different $k$, otherwise, we may simply say that bits are about as likely to be on or off a priori, thus setting $\alpha_k = 0.5$ for every $k$. In this lab, we will treat the prior parameter ($\alpha$) as *fixed*.
**Architecture** It is easy to design $f$ by a simple modification of the deterministic design shown before:
\begin{align}
\mathbf x_i &= \text{emb}(x_i; \theta_{\text{emb}}) \\
\mathbf h_0 &= \tanh(\text{dense}(z; \theta_{\text{init}})) \\
\mathbf h_i &= \text{rnn}(\mathbf h_{i-1}, \mathbf x_{i-1}; \theta_{\text{rnn}}) \\
f(z, x_{<i}; \theta) &= \text{softmax}(\text{dense}_v(\mathbf h_{i}; \theta_{\text{out}}))
\end{align}
where we just initialise the recurrent cell using $z$. Note we could also use $z$ in other places, for example, as additional input to every update of the recurrent cell $\mathbf h_i = \text{rnn}(\mathbf h_{i-1}, [\mathbf x_{i-1}, z])$. This is an architecture choice which like many others can only be judged empirically or on the basis of practical convenience.
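The modification amounts to computing the initial recurrent state from $z$. A minimal sketch (sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

latent_size, hidden_size = 8, 32
init_layer = nn.Linear(latent_size, hidden_size)        # dense(.; θ_init)

z = torch.bernoulli(torch.full((4, latent_size), 0.5))  # one sampled bit vector per batch element
h0 = torch.tanh(init_layer(z)).unsqueeze(0)             # [1, B, H], the shape nn.GRU expects
print(h0.shape)  # torch.Size([1, 4, 32])
```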
### Parameter estimation
The marginal likelihood, necessary for parameter estimation, is now no longer tractable:
\begin{align}
P(x|\theta, \alpha) &= \sum_{z \in \{0,1\}^K} P(z|\alpha)P(x|z, \theta) \\
&= \sum_{z \in \{0,1\}^K} \prod_{k=1}^K \text{Bernoulli}(z_k|\alpha_k)\prod_{i=1}^n \text{Cat}(x_i|f(z,x_{<i}; \theta) )
\end{align}
the intractability is clear as there is an exponential number of assignments to $z$, namely, $2^K$.
We turn to variational inference and derive a lowerbound $\mathcal E(\theta, \lambda|\mathcal D)$ on the log-likelihood function
\begin{align}
\mathcal E(\theta, \lambda|\mathcal D) &= \sum_{s=1}^{|\mathcal D|} \mathcal E_s(\theta, \lambda|x^{(s)})
\end{align}
which for a single datapoint $x$ is
\begin{align}
\mathcal E(\theta, \lambda|x) &= \mathbb{E}_{Q(z|x, \lambda)}\left[\log P(x|z, \theta)\right] - \text{KL}\left(Q(z|x, \lambda)||P(z|\alpha)\right)\\
\end{align}
where we have introduced an independently parameterised auxiliary distribution $Q(z|x, \lambda)$. The distribution $Q$ which maximises this *evidence lowerbound* (ELBO) is also the distribution that minimises
\begin{align}
\text{KL}(Q(z|x, \lambda)||P(z|x, \theta, \alpha)) = \mathbb E_{Q(z|x, \lambda)}\left[\log \frac{Q(z|x, \lambda)}{P(z|x, \theta, \alpha)}\right]
\end{align}
where $P(z|x, \theta, \alpha) = \frac{P(x, z|\theta, \alpha)}{P(x|\theta, \alpha)}$ is our intractable true posterior. For that reason, we think of $Q(z|x, \lambda)$ as an *approximate posterior*.
The approximate posterior is an independent model of the latent variable given the data; for that reason, we also call it an *inference model*.
In this notebook, our inference model will be a product of independent Bernoulli distributions, to make sure that we cover the sample space of our latent variable. Modelling correlations (thus achieving *structured* inference, rather than mean field inference) is left as an optional exercise at the end of the notebook. This mean field (MF) approximation takes $K$ Bernoulli variational factors whose parameters we predict with a neural network:
\begin{align}
Q(z|x, \lambda) &= \prod_{k=1}^K \text{Bernoulli}(z_k|\beta_k(x; \lambda))
\end{align}
Note we compute a *fixed* number, namely, $K$, of Bernoulli parameters. This can be done with a neural network that outputs $K$ values and employs a sigmoid activation for the outputs.
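Computing $K$ independent Bernoulli parameters with a sigmoid output layer looks like this (a sketch with illustrative sizes):

```python
import torch
import torch.nn as nn

K, hidden_size = 8, 32
out_layer = nn.Linear(hidden_size, K)  # dense_K(.; λ_out)

h = torch.randn(4, hidden_size)        # encoder summary for a batch of 4 words
beta = torch.sigmoid(out_layer(h))     # [B, K] independent probabilities in (0, 1)
print(beta.shape)                      # torch.Size([4, 8])
```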
For this choice, the KL term in the ELBO is tractable:
\begin{align}
\text{KL}\left(Q(z|x, \lambda)||P(z|\alpha)\right) &= \sum_{k=1}^K \text{KL}\left(Q(z_k|x, \lambda)||P(z_k|\alpha_k)\right) \\
&= \sum_{k=1}^K \text{KL}\left(\text{Bernoulli}(\beta_k(x;\lambda))|| \text{Bernoulli}(\alpha_k)\right) \\
&= \sum_{k=1}^K \beta_k(x;\lambda) \log \frac{\beta_k(x;\lambda)}{\alpha_k} + (1-\beta_k(x;\lambda)) \log \frac{1-\beta_k(x;\lambda)}{1-\alpha_k}
\end{align}
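The closed-form KL between two Bernoullis above can be checked numerically in plain Python (the helper name here is our own, for illustration):

```python
import math

def bernoulli_kl(beta, alpha):
    # KL(Bernoulli(beta) || Bernoulli(alpha)), both parameters strictly in (0, 1)
    return beta * math.log(beta / alpha) + (1 - beta) * math.log((1 - beta) / (1 - alpha))

print(bernoulli_kl(0.5, 0.5))            # 0.0: identical distributions
print(round(bernoulli_kl(0.9, 0.5), 4))  # positive: the factor moved away from the prior
```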
Here's an example design for our inference model:
\begin{align}
\mathbf x_i &= \text{emb}(x_i; \lambda_{\text{emb}}) \\
\mathbf f_i &= \text{rnn}(\mathbf f_{i-1}, \mathbf x_{i}; \lambda_{\text{fwd}}) \\
\mathbf b_i &= \text{rnn}(\mathbf b_{i+1}, \mathbf x_{i}; \lambda_{\text{bwd}}) \\
\mathbf h &= \text{dense}([\mathbf f_{n}, \mathbf b_1]; \lambda_{\text{hid}}) \\
\beta(x; \lambda) &= \text{sigmoid}(\text{dense}_K(\mathbf h; \lambda_{\text{out}}))
\end{align}
where we use the $\text{sigmoid}$ activation to make sure our probabilities are independently set between $0$ and $1$.
Because we have neural networks compute the Bernoulli variational factors for us, we call this *amortised* mean field inference.
### Gradient estimation
We have to obtain gradients of the ELBO with respect to $\theta$ (generative model) and $\lambda$ (inference model). Recall we will leave $\alpha$ fixed.
For the **generative model**
\begin{align}
\nabla_\theta \mathcal E(\theta, \lambda|x) &=\nabla_\theta\sum_{z} Q(z|x, \lambda)\log P(x|z,\theta) - \underbrace{\nabla_\theta \sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))}_{\color{blue}{0}} \\
&=\sum_{z} Q(z|x, \lambda)\nabla_\theta\log P(x|z,\theta) \\
&= \mathbb E_{Q(z|x, \lambda)}\left[\nabla_\theta\log P(x|z,\theta) \right] \\
&\overset{\text{MC}}{\approx} \frac{1}{S} \sum_{s=1}^S \nabla_\theta \log P(x|z^{(s)}, \theta)
\end{align}
where $z^{(s)} \sim Q(z|x,\lambda)$.
Note there is no difficulty in obtaining gradient estimates precisely because the samples come from the inference model and therefore do not interfere with backpropagation for updates to $\theta$.
For the **inference model** the story is less straightforward, and we have to use the *score function estimator* (a.k.a. REINFORCE):
\begin{align}
\nabla_\lambda \mathcal E(\theta, \lambda|x) &=\nabla_\lambda\sum_{z} Q(z|x, \lambda)\log P(x|z,\theta) - \nabla_\lambda \underbrace{\sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))}_{ \color{blue}{\text{tractable} }} \\
&=\sum_{z} \nabla_\lambda Q(z|x, \lambda)\log P(x|z,\theta) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\
&=\sum_{z} \underbrace{Q(z|x, \lambda) \nabla_\lambda \log Q(z|x, \lambda)}_{\nabla_\lambda Q(z|x, \lambda)} \log P(x|z,\theta) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\
&= \mathbb E_{Q(z|x, \lambda)}\left[ \log P(x|z,\theta) \nabla_\lambda \log Q(z|x, \lambda) \right] - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\
&\overset{\text{MC}}{\approx} \left(\frac{1}{S} \sum_{s=1}^S \log P(x|z^{(s)}, \theta) \nabla_\lambda \log Q(z^{(s)}|x, \lambda) \right) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))
\end{align}
where $z^{(s)} \sim Q(z|x,\lambda)$.
## Implementation
Let's implement the model and the loss (negative ELBO). We work with the notion of a *surrogate loss*, that is, a computation node whose gradients wrt the parameters are equivalent to the gradients we need.
For a given sample $z \sim Q(z|x, \lambda)$, the following is a single-sample surrogate loss:
\begin{align}
\mathcal S(\theta, \lambda|x) = \log P(x|z, \theta) + \color{red}{\text{detach}(\log P(x|z, \theta) )}\log Q(z|x, \lambda) - \sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))
\end{align}
Check the documentation of pytorch's `detach` method.
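A tiny experiment shows what `detach` does to gradients (assuming PyTorch):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = (x * x).detach() * x  # the detached factor is treated as a constant
y.backward()
print(x.grad)             # tensor(4.): d/dx [c * x] = c = x**2 = 4, not 3*x**2 = 12
```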
Show that its gradients wrt $\theta$ and $\lambda$ are exactly what we need:
\begin{align}
\nabla_\theta \mathcal S(\theta, \lambda|x) = \color{red}{?}
\end{align}
\begin{align}
\nabla_\lambda \mathcal S(\theta, \lambda|x) = \color{red}{?}
\end{align}
**Solution**
\begin{align}
\nabla_\theta \mathcal S(\theta, \lambda|x) = \nabla_\theta \log P(x|z, \theta) + 0
\end{align}
\begin{align}
\nabla_\lambda \mathcal S(\theta, \lambda|x) &= 0 + \underbrace{\log Q(z|x, \lambda)\underbrace{\nabla_\lambda \text{detach}(\log P(x|z, \theta))}_{0} + \text{detach}(\log P(x|z, \theta)) \nabla_\lambda \log Q(z|x, \lambda)}_{\text{product rule}} - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\
&= \log P(x|z, \theta) \nabla_\lambda \log Q(z|x, \lambda) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))
\end{align}
Let's now turn to the actual implementation in pytorch of the inference model as well as the generative model.
Here and there we will provide helper code for you.
```
def bernoulli_log_probs_from_logits(logits):
"""
Let p be the Bernoulli parameter and q = 1 - p.
This function is a stable computation of log(p) and log(q) from logits = log(p/q).
:param logit: log (p/q)
:return: log_p, log_q
"""
return - F.softplus(-logits), - F.softplus(logits)
```
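A quick check of the helper (restated here so the snippet is self-contained): at logits of zero, both log-probabilities should equal log(0.5).

```python
import math
import torch
import torch.nn.functional as F

def bernoulli_log_probs_from_logits(logits):
    # log p = -softplus(-logits), log q = -softplus(logits), with logits = log(p / q)
    return -F.softplus(-logits), -F.softplus(logits)

log_p, log_q = bernoulli_log_probs_from_logits(torch.tensor(0.0))
print(log_p.item(), log_q.item())  # both ≈ log(0.5) ≈ -0.6931
```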
We start with the implementation of a product of Bernoulli distributions where the parameters are *given* at construction time. That is, for some vector $b_1, \ldots, b_K$ we have
\begin{equation}
Z_k \sim \text{Bernoulli}(b_k)
\end{equation}
and thus the joint probability of $z_1, \ldots, z_K$ is given by $\prod_{k=1}^K \text{Bernoulli}(z_k|b_k)$.
```
class ProductOfBernoullis:
"""
This class models a product of independent Bernoulli distributions.
Each product of Bernoulli is defined by a D-dimensional vector of logits
for each independent Bernoulli variable.
"""
def __init__(self, logits):
"""
:param logits: a tensor of D Bernoulli logits for each batch element. [B, D]
"""
pass
def mean(self):
"""For Bernoulli variables this is the probability of each Bernoulli being 1."""
return None
    def std(self):
        """For Bernoulli variables the variance is p*(1-p), where p is the
        probability of the Bernoulli being 1; the standard deviation is its square root."""
        return torch.sqrt(self.probs * (1.0 - self.probs))
def sample(self):
"""
Returns a sample with the shape of the Bernoulli parameter. # [B, D]
"""
return None
def log_prob(self, x):
"""
Assess the log probability mass of x.
:param x: a tensor of Bernoulli samples (same shape as the Bernoulli parameter) [B, D]
:returns: tensor of log probability masses [B]
"""
return None
def unstable_kl(self, other: 'Bernoulli'):
"""
The straightforward implementation of the KL between two Bernoullis.
This implementation is unstable, a stable implementation is provided in
ProductOfBernoullis.kl(self, q)
:returns: a tensor of KL values with the same shape as the parameters of self.
"""
return None
def kl(self, other: 'Bernoulli'):
"""
A stable implementation of the KL divergence between two Bernoulli variables.
:returns: a tensor of KL values with the same shape as the parameters of self.
"""
return None
# SOLUTION
class ProductOfBernoullis:
"""
This class models a product of independent Bernoulli distributions.
Each product of Bernoulli is defined by a D-dimensional vector of logits
for each independent Bernoulli variable.
"""
def __init__(self, logits):
"""
:param logits: a tensor of D Bernoulli logits for each batch element. [B, D]
"""
self.logits = logits
self.probs = torch.sigmoid(self.logits)
self.log_probs_1, self.log_probs_0 = bernoulli_log_probs_from_logits(logits)
def mean(self):
"""For Bernoulli variables this is the probability of each Bernoulli being 1."""
return self.probs
    def std(self):
        """For Bernoulli variables the variance is p*(1-p), where p is the
        probability of the Bernoulli being 1; the standard deviation is its square root."""
        return torch.sqrt(self.probs * (1.0 - self.probs))
def sample(self):
"""
Returns a sample with the shape of the Bernoulli parameter. # [B, D]
"""
u = torch.rand_like(self.probs) # uniform random draws
sample = (u < self.probs).byte() # interpret as 0s and 1s
return sample.float() # we use float for consistency with how pytorch implementations usually work
def log_prob(self, x):
"""
Assess the log probability mass of x.
:param x: a tensor of Bernoulli samples (same shape as the Bernoulli parameter) [B, D]
:returns: tensor of log probability masses [B]
"""
return torch.where(x == 1, self.log_probs_1, self.log_probs_0).sum(dim=1)
def unstable_kl(self, other: 'ProductOfBernoullis'):
"""
The straightforward implementation of the KL between two Bernoullis.
This implementation is unstable, a stable implementation is provided in
ProductOfBernoullis.kl(self, q)
:returns: a tensor of KL values with the same shape as the parameters of self.
"""
t1 = self.probs * (torch.log(self.probs) - torch.log(other.probs))
t2 = (1 - self.probs) * (torch.log(1.0 - self.probs) - torch.log(1.0 - other.probs))
return (t1 + t2).sum(dim=1)
def kl(self, other: 'ProductOfBernoullis'):
"""
A stable implementation of the KL divergence between two Bernoulli variables.
:returns: a tensor of KL values with the same shape as the parameters of self.
"""
t1 = self.probs * (self.log_probs_1 - other.log_probs_1)
t2 = (1-self.probs) * (self.log_probs_0 - other.log_probs_0)
return (t1 + t2).sum(dim=1)
```
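To see why the logit-based `kl` is preferable to `unstable_kl`, consider a saturated probability: once `p` rounds to `1.0`, `torch.log(1 - p)` becomes `log(0)`, while log-probabilities computed directly from the logits stay finite. A minimal pure-Python sketch of this, where `log_sigmoid` is a hypothetical stand-in for what `bernoulli_log_probs_from_logits` must compute:

```python
import math

def log_sigmoid(logit):
    # log sigmoid(l) = -log(1 + exp(-l)); branch on the sign of the
    # logit so that exp() never overflows
    if logit >= 0:
        return -math.log1p(math.exp(-logit))
    return logit - math.log1p(math.exp(logit))

def bernoulli_kl_from_logits(l_p, l_q):
    # stable KL(Bern(p) || Bern(q)) from logits, mirroring ProductOfBernoullis.kl:
    # p*(log p - log q) + (1-p)*(log(1-p) - log(1-q))
    p = 1.0 / (1.0 + math.exp(-l_p))
    lp1, lp0 = log_sigmoid(l_p), log_sigmoid(-l_p)   # log p, log(1-p)
    lq1, lq0 = log_sigmoid(l_q), log_sigmoid(-l_q)   # log q, log(1-q)
    return p * (lp1 - lq1) + (1.0 - p) * (lp0 - lq0)

def bernoulli_kl_naive(p, q):
    # the "unstable" formulation that works with probabilities directly;
    # fails with log(0) once p or q saturates to 0 or 1
    return p * (math.log(p) - math.log(q)) + (1 - p) * (math.log(1 - p) - math.log(1 - q))
```

For moderate logits the two agree to machine precision; for an extreme logit such as `40.0` (where `sigmoid(40.0)` rounds to exactly `1.0` in float64) only the logit-based version returns a finite KL.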
Then we should implement the inference model $Q(z | x, \lambda)$, that is, a module that uses a neural network to map from a data point $x$ to the parameters of a product of Bernoullis.
You might want to consult the documentation of
* `torch.nn.Embedding`
* `torch.nn.LSTM`
* `torch.nn.Linear`
* and of our own `ProductOfBernoullis` distribution (see above).
```
class InferenceModel(nn.Module):
def __init__(self, vocab_size, embedder, hidden_size,
latent_size, pad_idx, bidirectional=False):
"""
Implement the layers in the inference model.
:param vocab_size: size of the vocabulary of the language
:param embedder: embedding layer
:param hidden_size: size of recurrent cell
:param latent_size: size K of the latent variable
:param pad_idx: id of the -PAD- token
:param bidirectional: whether we condition on x via a bidirectional or
unidirectional encoder
"""
super().__init__() # pytorch modules should always start with this
pass
# Construct your NN blocks here
# and make sure every block is an attribute of self
# or they won't get initialised properly
# for example, self.my_linear_layer = torch.nn.Linear(...)
def forward(self, x, seq_mask, seq_len) -> ProductOfBernoullis:
"""
Return an inference product of Bernoullis per instance in the mini-batch
:param x: words [B, T] as token ids
:param seq_mask: indicates valid positions vs padding positions [B, T]
:param seq_len: the length of the sequences [B]
:return: a collection of B ProductOfBernoullis approximate posterior,
each a distribution over K-dimensional bit vectors
"""
pass
# SOLUTION
class InferenceModel(nn.Module):
def __init__(self, vocab_size, embedder, hidden_size,
latent_size, pad_idx, bidirectional=False):
"""
:param vocab_size: size of the vocabulary of the language
:param embedder: embedding layer
:param hidden_size: size of recurrent cell
:param latent_size: size K of the latent variable
:param pad_idx: id of the -PAD- token
:param bidirectional: whether we condition on x via a bidirectional or
unidirectional encoder
"""
super().__init__()
self.bidirectional = bidirectional
# We borrow the embedder from the generative model, but we don't
# want to backpropagate through it for the inference model. So we
# need to make sure to call detach on the embeddings later.
self.embedder = embedder
emb_size = embedder.embedding_dim
# Create a (bidirectional) LSTM to encode x.
self.lstm = nn.LSTM(emb_size, hidden_size, batch_first=True,
bidirectional=bidirectional)
# The output of the LSTM doubles if we use a bidirectional encoder.
encoding_size = hidden_size * 2 if bidirectional else hidden_size
# We can let features interact once more
self.combination_layer = nn.Linear(encoding_size, 2 * latent_size)
# Create an affine layers to project the encoder final state to
# the logits of the independent Bernoullis that
# we are predicting.
self.logits_layer = nn.Linear(2 * latent_size, latent_size)
def forward(self, x, seq_mask, seq_len) -> ProductOfBernoullis:
# Compute word embeddings and detach them so that no gradients
# from the inference model flow through them. That's done because
# this embedding layer was borrowed from the generative model,
# thus its parameters are part of the set \theta.
x_embed = self.embedder(x).detach()
# Alternatively, we could have constructed an independent embedding layer
# for the inference net; then its parameters would be part of the set
# \lambda and we would allow updates.
# Encode the sentence using the LSTM.
hidden = None
packed_seq = pack_padded_sequence(x_embed, seq_len, batch_first=True)
_, final = self.lstm(packed_seq, hidden)
# Take the final output h_T from the LSTM, concatenate the forward
# and backward directions for the bidirectional case.
h_T = final[0]
if self.bidirectional:
h_T_fwd = h_T[0]
h_T_bwd = h_T[1]
h_T = torch.cat([h_T_fwd, h_T_bwd], dim=-1)
# We make one more transformation
# this allows a few more interactions between features
# and if we have bidirectional features then the two
# directions also interact
h_T = torch.tanh(self.combination_layer(h_T))
# Project the encoding to the logits of the independent Bernoullis.
logits = self.logits_layer(h_T)
# Return the inferred product of Bernoullis q(z|x).
qz = ProductOfBernoullis(logits)
return qz
# tests for inference model
pad_idx = vocab.char_to_idx[PAD_TOKEN]
dummy_inference_model = InferenceModel(
vocab_size=vocab.size(),
embedder=nn.Embedding(vocab.size(), 64, padding_idx=pad_idx),
hidden_size=128, latent_size=16, pad_idx=pad_idx, bidirectional=True
).to(device=device)
dummy_batch_size = 32
dummy_dataloader = SortingTextDataLoader(DataLoader(train_dataset, batch_size=dummy_batch_size))
dummy_words = next(dummy_dataloader)
x_in, _, seq_mask, seq_len = create_batch(dummy_words, vocab, device)
q_z_given_x = dummy_inference_model.forward(x_in, seq_mask, seq_len)
```
Then we should implement the generative latent factor model. The decoder is a sequence of correlated Categorical draws that condition on a latent factor assignment.
We will be parameterising categorical distributions, so you might want to check the documentation of `torch.distributions.categorical.Categorical`.
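For reference, `Categorical(logits=...)` normalises unnormalised scores with a log-softmax, and `log_prob(i)` is the normalised score at index `i`. A small stdlib sketch of that per-position computation (illustrative only, not PyTorch's actual implementation):

```python
import math

def categorical_log_prob(logits, index):
    # log-softmax evaluated at `index`: subtract the log partition function,
    # factoring out the max logit so large scores don't overflow exp()
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return logits[index] - log_z

logits = [2.0, 0.5, -1.0]
log_probs = [categorical_log_prob(logits, i) for i in range(len(logits))]
```

Exponentiating the resulting log-probabilities always sums to one, regardless of how the raw logits are scaled or shifted.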
```
from torch.distributions import Categorical
class LatentFactorModel(nn.Module):
def __init__(self, vocab_size, emb_size, hidden_size, latent_size,
pad_idx, dropout=0.):
"""
:param vocab_size: size of the vocabulary of the language
:param emb_size: dimensionality of embeddings
:param hidden_size: dimensionality of recurrent cell
:param latent_size: this is D the dimensionality of the latent variable z
:param pad_idx: the id reserved to the -PAD- token
:param dropout: a dropout rate (you can ignore this for now)
"""
super().__init__()
# Construct your NN blocks here,
# remember to assign them to attributes of self
pass
def init_hidden(self, z):
"""
Returns the hidden state of the LSTM initialized with a projection of a given z.
:param z: [B, K]
:returns: [num_layers, B, H] hidden state, [num_layers, B, H] cell state
"""
pass
def step(self, prev_x, z, hidden):
"""
Performs a single LSTM step for a given previous word and hidden state.
Returns the unnormalized log probabilities (logits) over the vocabulary
for this time step.
:param prev_x: [B, 1] id of the previous token
:param z: [B, K] latent variable
:param hidden: hidden ([num_layers, B, H] state, [num_layers, B, H] cell)
:returns: [B, V] logits, ([num_layers, B, H] updated state, [num_layers, B, H] updated cell)
"""
pass
def forward(self, x, z) -> Categorical:
"""
Performs an entire forward pass given a sequence of words x and a z.
This returns a collection of [B, T] categorical distributions, each
with support over V events.
:param x: [B, T] token ids
:param z: [B, K] a latent sample
:returns: Categorical object with shape [B,T,V]
"""
hidden = self.init_hidden(z)
outputs = []
for t in range(x.size(1)):
# [B, 1]
prev_x = x[:, t].unsqueeze(-1)
# logits: [B, V]
logits, hidden = self.step(prev_x, z, hidden)
outputs.append(logits)
outputs = torch.cat(outputs, dim=1)
return Categorical(logits=outputs)
def loss(self, output_distributions, observations, pz, qz, z, free_nats=0., evaluation=False):
"""
Computes the terms in the loss (negative ELBO) given the
output Categorical distributions, observations, the sampled z,
the prior distribution p(z), and the approximate posterior distribution q(z|x).
If free_nats is nonzero it will clamp the KL divergence between the posterior
and prior to that value, preventing gradient propagation via the KL if it's
below that value.
If evaluation is set to true, the loss will be summed instead
of averaged over the batch.
Returns the (surrogate) loss, the ELBO, and the KL.
:returns:
surrogate loss (scalar),
ELBO (scalar),
KL (scalar)
"""
pass
# SOLUTION
class LatentFactorModel(nn.Module):
def __init__(self, vocab_size, emb_size, hidden_size, latent_size,
pad_idx, dropout=0.):
"""
:param vocab_size: size of the vocabulary of the language
:param emb_size: dimensionality of embeddings
:param hidden_size: dimensionality of recurrent cell
:param latent_size: this is D the dimensionality of the latent variable z
:param pad_idx: the id reserved to the -PAD- token
:param dropout: a dropout rate
"""
super().__init__()
self.pad_idx = pad_idx
self.embedder = nn.Embedding(vocab_size, emb_size,
padding_idx=pad_idx)
self.lstm = nn.LSTM(emb_size, hidden_size, batch_first=True)
self.bridge = nn.Linear(latent_size, hidden_size)
self.projection = nn.Linear(hidden_size, vocab_size, bias=False)
self.dropout_layer = nn.Dropout(p=dropout)
def init_hidden(self, z):
"""
Returns the hidden state of the LSTM initialized with a projection of a given z.
:param z: [B, K]
:returns: [num_layers, B, H] hidden state, [num_layers, B, H] cell state
"""
h = self.bridge(z).unsqueeze(0)
c = self.bridge(z).unsqueeze(0)
return (h, c)
def step(self, prev_x, z, hidden):
"""
Performs a single LSTM step for a given previous word and hidden state.
Returns the unnormalized log probabilities (logits) over the vocabulary for this time step.
:param prev_x: [B, 1] id of the previous token
:param z: [B, K] latent variable
:param hidden: hidden ([num_layers, B, H] state, [num_layers, B, H] cell)
:returns: [B, V] logits, ([num_layers, B, H] updated state, [num_layers, B, H] updated cell)
"""
# [B, E]
x_embed = self.dropout_layer(self.embedder(prev_x))
# output: [B, H]
output, hidden = self.lstm(x_embed, hidden)
# [B, V]
logits = self.projection(self.dropout_layer(output))
return logits, hidden
def forward(self, x, z) -> Categorical:
"""
Performs an entire forward pass given a sequence of words x and a z.
This returns a collection of [B, T] categorical distributions, each
with support over V events.
:param x: [B, T] token ids
:param z: [B, K] a latent sample
:returns: Categorical object with shape [B,T,V]
"""
hidden = self.init_hidden(z)
outputs = []
for t in range(x.size(1)):
# [B, 1]
prev_x = x[:, t].unsqueeze(-1)
# logits: [B, V]
logits, hidden = self.step(prev_x, z, hidden)
outputs.append(logits)
outputs = torch.cat(outputs, dim=1)
return Categorical(logits=outputs)
def loss(self, output_distributions, observations, pz, qz, z, free_nats=0., evaluation=False):
"""
Computes the terms in the loss (negative ELBO) given the
output Categorical distributions, observations,
the prior distribution p(z), and the approximate posterior distribution q(z|x).
If free_nats is nonzero it will clamp the KL divergence between the posterior
and prior to that value, preventing gradient propagation via the KL if it's
below that value.
If evaluation is set to true, the loss will be summed instead
of averaged over the batch.
Returns the (surrogate) loss, the ELBO, and the KL.
:returns:
surrogate loss (scalar),
ELBO (scalar),
KL (scalar)
"""
# [B, T]
log_prob = output_distributions.log_prob(observations)
mask = (observations != self.pad_idx)
log_prob = torch.where(mask, log_prob, torch.zeros_like(log_prob))
# [B]
log_prob = log_prob.sum(-1)
# [B]
surrogate = log_prob.detach() * qz.log_prob(z)
loss = - (log_prob + surrogate) # TODO baselines
# Compute the KL divergence and clamp to at least the given amount of free nats.
KL = qz.kl(pz)
KL = torch.clamp(KL, min=free_nats)
# Compute an ELBO estimate
ELBO = (log_prob - KL)
# For evaluation return the sum of individual components, for
# training return the mean of those components.
if evaluation:
return (loss.sum(), ELBO.sum(), KL.sum())
else:
return (loss.mean(), ELBO.mean(), KL.mean())
```
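The surrogate term in `loss` above (`log_prob.detach() * qz.log_prob(z)`) is the score-function (REINFORCE) estimator: Bernoulli samples are discrete, so we cannot reparameterise, and instead use the identity $\nabla_\lambda \mathbb{E}_{q}[f(z)] = \mathbb{E}_{q}[f(z)\,\nabla_\lambda \log q(z|\lambda)]$. A self-contained toy check of this identity for a single Bernoulli (not the notebook's code):

```python
import random
random.seed(0)

def score_function_grad(f, theta, n_samples=200000):
    # Monte-Carlo estimate of d/dtheta E_{z ~ Bern(theta)}[f(z)]
    # via E[f(z) * d/dtheta log p(z; theta)], the REINFORCE identity
    total = 0.0
    for _ in range(n_samples):
        z = 1.0 if random.random() < theta else 0.0
        # d/dtheta [z*log(theta) + (1-z)*log(1-theta)]
        score = z / theta - (1.0 - z) / (1.0 - theta)
        total += f(z) * score
    return total / n_samples

f = lambda z: (z - 0.3) ** 2
estimate = score_function_grad(f, theta=0.6)
# exact gradient of E[f] = theta*f(1) + (1-theta)*f(0) w.r.t. theta
exact = f(1.0) - f(0.0)
```

In practice this estimator has high variance, which is what the `TODO baselines` comment hints at: subtracting a baseline from the reward before multiplying by the score reduces variance without changing the expectation.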
The code below is used to assess the model and to investigate what it has learned. We implemented it for you so that you can focus on the VAE part. It is still worth studying: it does interesting things like computing perplexity and sampling novel words!
# Evaluation metrics
During training we'd like to track some evaluation metrics on the validation data, both to see how our model is doing and to perform early stopping. One simple metric is the ELBO on the validation or test data, computed using a single sample from the approximate posterior $Q(z|x, \lambda)$:
```
def eval_elbo(model, inference_model, eval_dataset, vocab, device, batch_size=128):
"""
Computes a single sample estimate of the ELBO on a given dataset.
This returns both the average ELBO and the average KL (for inspection).
"""
dl = DataLoader(eval_dataset, batch_size=batch_size)
sorted_dl = SortingTextDataLoader(dl)
# Make sure the model is in evaluation mode (i.e. disable dropout).
model.eval()
total_ELBO = 0.
total_KL = 0.
num_words = 0
# We don't need to compute gradients for this.
with torch.no_grad():
for words in sorted_dl:
x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device)
# Infer the approximate posterior and construct the prior.
qz = inference_model(x_in, seq_mask, seq_len)
pz = ProductOfBernoullis(torch.zeros_like(qz.probs))  # logits of 0 give p = 0.5
# Compute the unnormalized probabilities using a single sample from the
# approximate posterior.
z = qz.sample()
# Compute distributions X_i|z, x_{<i}
px_z = model(x_in, z)
# Compute the reconstruction loss and KL divergence.
loss, ELBO, KL = model.loss(px_z, x_out, pz, qz, z,
free_nats=0.,
evaluation=True)
total_ELBO += ELBO
total_KL += KL
num_words += x_in.size(0)
# Return the average reconstruction loss and KL.
avg_ELBO = total_ELBO / num_words
avg_KL = total_KL / num_words
return avg_ELBO, avg_KL
dummy_lm = LatentFactorModel(
vocab.size(), emb_size=64, hidden_size=128,
latent_size=16, pad_idx=pad_idx).to(device=device)
!head -n 128 {val_file} > ./dummy_dataset
dummy_data = TextDataset('./dummy_dataset')
dummy_ELBO, dummy_kl = eval_elbo(dummy_lm, dummy_inference_model,
dummy_data, vocab, device)
print(dummy_ELBO, dummy_kl)
assert dummy_kl.item() > 0
```
A common metric to evaluate language models is the perplexity per word. The perplexity per word for a dataset is defined as:
\begin{align}
\text{ppl}(\mathcal{D}|\theta, \lambda) = \exp\left(-\frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal{D}|} \log P(x^{(k)}|\theta, \lambda)\right)
\end{align}
where $n^{(k)} = |x^{(k)}|$ is the number of tokens in a word and $P(x^{(k)}|\theta, \lambda)$ is the probability that our model assigns to the datapoint $x^{(k)}$. In order to compute $\log P(x|\theta, \lambda)$ for our model we need to evaluate the marginal:
\begin{align}
P(x|\theta, \lambda) = \sum_{z \in \{0, 1\}^K} P(x|z,\theta) P(z|\alpha)
\end{align}
As this summation cannot be computed in a reasonable amount of time (due to its exponential complexity), we have two options: we can use the previously derived lower bound on the log-likelihood, which gives an upper bound on the perplexity, or we can make an importance sampling estimate using our approximate posterior distribution. The importance sampling (IS) estimate is:
\begin{align}
\hat P(x|\theta, \lambda) &\overset{\text{IS}}{\approx} \frac{1}{S} \sum_{s=1}^{S} \frac{P(z^{(s)}|\alpha)P(x|z^{(s)}, \theta)}{Q(z^{(s)}|x)} & \text{where }z^{(s)} \sim Q(z|x)
\end{align}
where $S$ is the number of samples.
Then the average log-likelihood inside the perplexity exponent becomes:
\begin{align}
&\frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal D|} \log P(x^{(k)}|\theta) \\
&\approx \frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal D|} \log \frac{1}{S} \sum_{s=1}^{S} \frac{P(z^{(s)}|\alpha)P(x^{(k)}|z^{(s)}, \theta)}{Q(z^{(s)}|x^{(k)})} \\
\end{align}
We define the function `eval_perplexity` below that implements this importance sampling estimate:
```
def eval_perplexity(model, inference_model, eval_dataset, vocab, device,
n_samples, batch_size=128):
"""
Estimates the per-word perplexity using importance sampling with the
given number of samples.
"""
dl = DataLoader(eval_dataset, batch_size=batch_size)
sorted_dl = SortingTextDataLoader(dl)
# Make sure the model is in evaluation mode (i.e. disable dropout).
model.eval()
log_px = 0.
num_predictions = 0
num_words = 0
# We don't need to compute gradients for this.
with torch.no_grad():
for words in sorted_dl:
x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device)
# Infer the approximate posterior and construct the prior.
qz = inference_model(x_in, seq_mask, seq_len)
pz = ProductOfBernoullis(torch.zeros_like(qz.probs))  # logits of 0 give p = 0.5; TODO different prior
# Create an array to hold all samples for this batch.
batch_size = x_in.size(0)
log_px_samples = torch.zeros(n_samples, batch_size, device=x_in.device)
# Sample log P(x) n_samples times.
for s in range(n_samples):
# Sample a z^s from the posterior.
z = qz.sample()
# Compute log P(x^k|z^s)
px_z = model(x_in, z)
# [B, T]
cond_log_prob = px_z.log_prob(x_out)
cond_log_prob = torch.where(seq_mask, cond_log_prob, torch.zeros_like(cond_log_prob))
# [B]
cond_log_prob = cond_log_prob.sum(-1)
# Compute log p(z^s) and log q(z^s|x^k)
prior_log_prob = pz.log_prob(z) # B
posterior_log_prob = qz.log_prob(z) # B
# Store the sample for log P(x^k) importance weighted with p(z^s)/q(z^s|x^k).
log_px_sample = cond_log_prob + prior_log_prob - posterior_log_prob
log_px_samples[s] = log_px_sample
# Average over the number of samples and count the number of predictions made this batch.
log_px_batch = torch.logsumexp(log_px_samples, dim=0) - \
np.log(n_samples)
log_px += log_px_batch.sum()
num_predictions += seq_len.sum()
num_words += seq_len.size(0)
# Compute and return the perplexity per word.
perplexity = torch.exp(-log_px / num_predictions)
NLL = -log_px / num_words
return perplexity, NLL
```
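The numerically delicate step in `eval_perplexity` is averaging importance weights in log space: `torch.logsumexp` factors out the maximum log-weight before exponentiating, so weights on the order of $e^{-1000}$ don't underflow to zero. A stdlib sketch of the same log-mean-exp computation:

```python
import math

def log_mean_exp(log_ws):
    # log((1/S) * sum_s exp(log_ws[s])), computed stably by subtracting
    # the maximum before exponentiating (what torch.logsumexp does)
    m = max(log_ws)
    return m + math.log(sum(math.exp(lw - m) for lw in log_ws)) - math.log(len(log_ws))

# log importance weights: log p(z_s) + log p(x|z_s) - log q(z_s|x)
moderate = [-100.2, -101.5, -99.8]
extreme = [-1000.0, -1001.0]  # exp() of these underflows to 0.0 in float64
log_px_moderate = log_mean_exp(moderate)
log_px_extreme = log_mean_exp(extreme)
```

The per-word perplexity is then `exp(-log_px / num_predictions)`, exactly as computed at the end of `eval_perplexity`.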
Lastly, we want to occasionally inspect the model qualitatively during training by letting it reconstruct a given word from the latent space. This gives us an idea of whether the model is using the latent space to encode some semantics about the data. For this we use a deterministic greedy decoding algorithm that chooses the token with maximum probability at every time step and feeds it into the next time step.
```
def greedy_decode(model, z, vocab, max_len=50):
"""
Greedily decodes a word from a given z by picking the token with
maximum probability at each time step.
"""
# Disable dropout.
model.eval()
# Don't compute gradients.
with torch.no_grad():
batch_size = z.size(0)
# We feed the model the start-of-word symbol at the first time step.
prev_x = torch.ones(batch_size, 1, dtype=torch.long).fill_(vocab[SOW_TOKEN]).to(z.device)
# Initialize the hidden state from z.
hidden = model.init_hidden(z)
predictions = []
for t in range(max_len):
logits, hidden = model.step(prev_x, z, hidden)
# Choose the argmax of the unnormalized log probabilities as the
# prediction for this time step.
prediction = torch.argmax(logits, dim=-1)
predictions.append(prediction)
prev_x = prediction.view(batch_size, 1)
return torch.cat(predictions, dim=1)
```
# Training
Now it's time to train the model. We use early stopping on the validation perplexity for model selection.
```
# Define the model hyperparameters.
emb_size = 256
hidden_size = 256
latent_size = 16
bidirectional_encoder = True
free_nats = 0 # 5.
annealing_steps = 0 # 11400
dropout = 0.6
word_dropout = 0 # 0.75
batch_size = 64
learning_rate = 0.001
num_epochs = 20
n_importance_samples = 3 # 50
# Create the training data loader.
dl = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
sorted_dl = SortingTextDataLoader(dl)
# Create the generative model.
model = LatentFactorModel(vocab_size=vocab.size(),
emb_size=emb_size,
hidden_size=hidden_size,
latent_size=latent_size,
pad_idx=vocab[PAD_TOKEN],
dropout=dropout)
model = model.to(device)
# Create the inference model.
inference_model = InferenceModel(vocab_size=vocab.size(),
embedder=model.embedder,
hidden_size=hidden_size,
latent_size=latent_size,
pad_idx=vocab[PAD_TOKEN],
bidirectional=bidirectional_encoder)
inference_model = inference_model.to(device)
# Create the optimizer.
optimizer = optim.Adam(itertools.chain(model.parameters(),
inference_model.parameters()),
lr=learning_rate)
# Save the best model (early stopping).
best_model = "./best_model.pt"
best_val_ppl = float("inf")
best_epoch = 0
# Keep track of some statistics to plot later.
train_ELBOs = []
train_KLs = []
val_ELBOs = []
val_KLs = []
val_perplexities = []
val_NLLs = []
step = 0
training_ELBO = 0.
training_KL = 0.
num_batches = 0
for epoch_num in range(1, num_epochs+1):
for words in sorted_dl:
# Make sure the model is in training mode (for dropout).
model.train()
# Transform the words to input, output, seq_len, seq_mask batches.
x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device,
word_dropout=word_dropout)
# Compute the multiplier for the KL term if we do annealing.
if annealing_steps > 0:
KL_weight = min(1., (1.0 / annealing_steps) * step)
else:
KL_weight = 1.
# Do a forward pass through the model and compute the training loss. We use
# a reparameterized sample from the approximate posterior during training.
qz = inference_model(x_in, seq_mask, seq_len)
pz = ProductOfBernoullis(torch.zeros_like(qz.probs))  # logits of 0 give p = 0.5
z = qz.sample()
px_z = model(x_in, z)
loss, ELBO, KL = model.loss(px_z, x_out, pz, qz, z, free_nats=free_nats)
# Backpropagate and update the model weights.
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Update some statistics to track for the training loss.
training_ELBO += ELBO.detach()  # detach so the graph can be freed
training_KL += KL.detach()
num_batches += 1
# Every 100 steps we evaluate the model and report progress.
if step % 100 == 0:
val_ELBO, val_KL = eval_elbo(model, inference_model, val_dataset, vocab, device)
print("(%d) step %d: training ELBO (KL) = %.2f (%.2f) --"
" KL weight = %.2f --"
" validation ELBO (KL) = %.2f (%.2f)" %
(epoch_num, step, training_ELBO/num_batches,
training_KL/num_batches, KL_weight, val_ELBO, val_KL))
# Update some statistics for plotting later.
train_ELBOs.append((step, (training_ELBO/num_batches).item()))
train_KLs.append((step, (training_KL/num_batches).item()))
val_ELBOs.append((step, val_ELBO.item()))
val_KLs.append((step, val_KL.item()))
# Reset the training statistics.
training_ELBO = 0.
training_KL = 0.
num_batches = 0
step += 1
# After an epoch we'll compute validation perplexity and save the model
# for early stopping if it's better than previous models.
print("Finished epoch %d" % (epoch_num))
val_perplexity, val_NLL = eval_perplexity(model, inference_model, val_dataset, vocab, device,
n_importance_samples)
val_ELBO, val_KL = eval_elbo(model, inference_model, val_dataset, vocab, device)
# Keep track of the validation perplexities / NLL.
val_perplexities.append((epoch_num, val_perplexity.item()))
val_NLLs.append((epoch_num, val_NLL.item()))
# If validation perplexity is better, store this model for early stopping.
if val_perplexity < best_val_ppl:
best_val_ppl = val_perplexity
best_epoch = epoch_num
torch.save(model.state_dict(), best_model)
# Print epoch statistics.
print("Evaluation epoch %d:\n"
" - validation perplexity: %.2f\n"
" - validation NLL: %.2f\n"
" - validation ELBO (KL) = %.2f (%.2f)"
% (epoch_num, val_perplexity, val_NLL, val_ELBO, val_KL))
# Also show some qualitative results by reconstructing a word from the
# validation data. Use the mean of the approximate posterior and greedy
# decoding.
random_word = val_dataset[np.random.choice(len(val_dataset))]
x_in, _, seq_mask, seq_len = create_batch([random_word], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
z = qz.mean()
reconstruction = greedy_decode(model, z, vocab)
reconstruction = batch_to_words(reconstruction, vocab)[0]
print("-- Original word: \"%s\"" % random_word)
print("-- Model reconstruction: \"%s\"" % reconstruction)
```
# Let's plot the training and validation statistics:
```
steps, training_ELBO = list(zip(*train_ELBOs))
_, training_KL = list(zip(*train_KLs))
_, val_ELBO = list(zip(*val_ELBOs))
_, val_KL = list(zip(*val_KLs))
epochs, val_ppl = list(zip(*val_perplexities))
_, val_NLL = list(zip(*val_NLLs))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5))
# Plot training ELBO and KL
ax1.set_title("Training ELBO")
ax1.plot(steps, training_ELBO, "-o")
ax2.set_title("Training KL")
ax2.plot(steps, training_KL, "-o")
plt.show()
# Plot validation ELBO and KL
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5))
ax1.set_title("Validation ELBO")
ax1.plot(steps, val_ELBO, "-o", color="orange")
ax2.set_title("Validation KL")
ax2.plot(steps, val_KL, "-o", color="orange")
plt.show()
# Plot validation perplexities.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5))
ax1.set_title("Validation perplexity")
ax1.plot(epochs, val_ppl, "-o", color="orange")
ax2.set_title("Validation NLL")
ax2.plot(epochs, val_NLL, "-o", color="orange")
plt.show()
print()
```
Let's load the best model according to validation perplexity and compute its perplexity on the test data:
```
# Load the best model from disk.
model = LatentFactorModel(vocab_size=vocab.size(),
emb_size=emb_size,
hidden_size=hidden_size,
latent_size=latent_size,
pad_idx=vocab[PAD_TOKEN],
dropout=dropout)
model.load_state_dict(torch.load(best_model))
model = model.to(device)
# Compute test perplexity and ELBO.
test_perplexity, test_NLL = eval_perplexity(model, inference_model, test_dataset, vocab,
device, n_importance_samples)
test_ELBO, test_KL = eval_elbo(model, inference_model, test_dataset, vocab, device)
print("test ELBO (KL) = %.2f (%.2f) -- test perplexity = %.2f -- test NLL = %.2f" %
(test_ELBO, test_KL, test_perplexity, test_NLL))
```
# Qualitative analysis
Let's have a look at how our trained model interacts with the learned latent space. First, let's greedily decode some samples from the prior to assess the diversity of the model:
```
# Generate 10 samples from the uniform Bernoulli prior.
num_prior_samples = 10
pz = ProductOfBernoullis(torch.zeros(num_prior_samples, latent_size))  # logits of 0 give p = 0.5
z = pz.sample()
z = z.to(device)
# Use the greedy decoding algorithm to generate words.
predictions = greedy_decode(model, z, vocab)
predictions = batch_to_words(predictions, vocab)
for num, prediction in enumerate(predictions):
print("%d: %s" % (num+1, prediction))
```
Let's now have a look at how good the model is at reconstructing words from the test dataset, using the approximate posterior mean and a couple of samples:
```
# Pick a random test word.
test_word = test_dataset[np.random.choice(len(test_dataset))]
# Infer q(z|x).
x_in, _, seq_mask, seq_len = create_batch([test_word], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
# Decode using the mean.
z_mean = qz.mean()
mean_reconstruction = greedy_decode(model, z_mean, vocab)
mean_reconstruction = batch_to_words(mean_reconstruction, vocab)[0]
print("Original: \"%s\"" % test_word)
print("Posterior mean reconstruction: \"%s\"" % mean_reconstruction)
# Decode a couple of samples from the approximate posterior.
for s in range(3):
z = qz.sample()
sample_reconstruction = greedy_decode(model, z, vocab)
sample_reconstruction = batch_to_words(sample_reconstruction, vocab)[0]
print("Posterior sample reconstruction (%d): \"%s\"" % (s+1, sample_reconstruction))
```
We can also qualitatively assess the smoothness of the learned latent space by interpolating between two words in the test set:
```
# Pick a random test word.
test_word_1 = test_dataset[np.random.choice(len(test_dataset))]
# Infer q(z|x).
x_in, _, seq_mask, seq_len = create_batch([test_word_1], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
qz_1 = qz.mean()
# Pick a random second test word.
test_word_2 = test_dataset[np.random.choice(len(test_dataset))]
# Infer q(z|x) again.
x_in, _, seq_mask, seq_len = create_batch([test_word_2], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
qz_2 = qz.mean()
# Now interpolate between the two means and generate words between those.
num_words = 5
print("Word 1: \"%s\"" % test_word_1)
for alpha in np.linspace(start=0., stop=1., num=num_words):
z = (1-alpha) * qz_1 + alpha * qz_2
reconstruction = greedy_decode(model, z, vocab)
reconstruction = batch_to_words(reconstruction, vocab)[0]
print("(1-%.2f) * qz1.mean + %.2f qz2.mean: \"%s\"" % (alpha, alpha, reconstruction))
print("Word 2: \"%s\"" % test_word_2)
```
# A Demo on Backtesting M3 with Various Models
This notebook aims to
1. provide a simple demo of how to backtest models with the functions orbit provides.
2. add transparency to how our accuracy metrics are derived in https://arxiv.org/abs/2004.08492.
Due to versioning and random seeds, there could be subtle differences in the final numbers. This notebook should also run in Colab.
```
!pip install orbit-ml==1.0.13
!pip install fbprophet==0.7.1
import numpy as np
import tqdm
import pandas as pd
import statsmodels.api as sm
import inspect
import random
from fbprophet import Prophet
from statsmodels.tsa.statespace.sarimax import SARIMAX
import orbit
from orbit.models.dlt import DLTMAP
from orbit.utils.dataset import load_m3monthly
from orbit.diagnostics.backtest import BackTester
from orbit.diagnostics.metrics import smape
seed=2021
n_sample=10
random.seed(seed)
```
We can load the M3 dataset from the orbit repository. For demo purposes, I set `n_sample` to `10`. Feel free to adjust it, or simply run the entire dataset.
```
data = load_m3monthly()
unique_keys = data['key'].unique().tolist()
if n_sample > 0:
sample_keys = random.sample(unique_keys, n_sample)
# randomly sample n_sample series for the demo
data = data[data['key'].isin(sample_keys)].reset_index(drop=True)
else:
sample_keys = unique_keys
print(sample_keys)
data.columns
```
We need to provide some meta data such as date column, response column etc.
```
key_col='key'
response_col='value'
date_col='date'
seasonality=12
```
We also provide some setting mimic M3 (see https://forecasters.org/resources/time-series-data/m3-competition/) criteria.
```
backtest_args = {
'min_train_len': 1, # not useful; a placeholder
'incremental_len': 18, # not useful; a placeholder
'forecast_len': 18,
'n_splits': 1,
'window_type': "expanding",
}
```
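With `n_splits=1` and `forecast_len=18`, this configuration holds out exactly the last 18 observations of each series, mimicking the M3 horizon. A sketch of how an expanding-window backtest could derive its train/test indices (illustrative only; orbit's `BackTester` has its own internals):

```python
def expanding_window_splits(n_obs, forecast_len, n_splits, incremental_len):
    # Walk backwards from the end of the series, carving out one
    # forecast_len-sized test window per split; everything before the
    # test window is training data (so the training window "expands"
    # as splits move forward in time).
    splits = []
    test_end = n_obs
    for _ in range(n_splits):
        test_start = test_end - forecast_len
        train_idx = list(range(0, test_start))
        test_idx = list(range(test_start, test_end))
        splits.append((train_idx, test_idx))
        test_end -= incremental_len
    return splits[::-1]  # oldest split first

splits = expanding_window_splits(n_obs=100, forecast_len=18, n_splits=1, incremental_len=18)
train_idx, test_idx = splits[0]
```

For a 100-point series this yields a single split: the first 82 observations for training and the final 18 for evaluation.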
We are using `DLT` here. To use a multiplicative form, we need a natural log transformation of the response; hence, we need a wrapper for `DLT`. We also build wrappers for `prophet` and `sarima` so that all models expose the same fit/predict signature.
Note that prophet comes with its own multiplicative form.
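The log-transform trick can be sketched independently of orbit: fit any additive model on `log1p(y)` and invert with `expm1` at prediction time, clipping at zero just as the `DLTMAP` wrapper does below. Both classes here are toy stand-ins (hypothetical, not orbit APIs): `MeanModel` simply predicts the training mean.

```python
import math

class LogTransformWrapper:
    """Fit an additive model on log1p(y) so that its predictions act
    multiplicatively on the original scale (the idea behind the DLT wrapper)."""
    def __init__(self, model):
        self.model = model
    def fit(self, xs, ys):
        # transform the response before fitting, as the wrapper's fit() does
        self.model.fit(xs, [math.log1p(y) for y in ys])
    def predict(self, xs):
        # invert the transform and clip negative values to zero
        return [max(math.expm1(p), 0.0) for p in self.model.predict(xs)]

class MeanModel:
    # toy stand-in for a real forecaster: predicts the training mean
    def fit(self, xs, ys):
        self.mu = sum(ys) / len(ys)
    def predict(self, xs):
        return [self.mu for _ in xs]

wrapped = LogTransformWrapper(MeanModel())
wrapped.fit([1, 2], [0.0, math.expm1(2.0)])
preds = wrapped.predict([3, 4])
```

Because `expm1` and `log1p` are exact inverses, averaging in log space corresponds to a geometric-mean-style combination on the original scale, which is what makes the model multiplicative.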
```
class DLTMAPWrapper(object):
    def __init__(self, response_col, date_col, **kwargs):
        kw_params = locals()['kwargs']
        for key, value in kw_params.items():
            setattr(self, key, value)
        self.response_col = response_col
        self.date_col = date_col
        self.model = DLTMAP(
            response_col=response_col,
            date_col=date_col,
            **kwargs)

    def fit(self, df):
        df = df.copy()
        # log1p-transform the response so the additive model acts multiplicatively
        df[[self.response_col]] = df[[self.response_col]].apply(np.log1p)
        self.model.fit(df)

    def predict(self, df):
        df = df.copy()
        pred_df = self.model.predict(df)
        # invert the log transform and clip negative predictions to zero
        pred_df['prediction'] = np.clip(np.expm1(pred_df['prediction']).values, 0, None)
        return pred_df


class SARIMAXWrapper(object):
    def __init__(self, response_col, date_col, **kwargs):
        kw_params = locals()['kwargs']
        for key, value in kw_params.items():
            setattr(self, key, value)
        self.response_col = response_col
        self.date_col = date_col
        self.model = None
        self.df = None

    def fit(self, df):
        df_copy = df.copy()
        infer_freq = pd.infer_freq(df_copy[self.date_col])
        df_copy = df_copy.set_index(self.date_col)
        df_copy = df_copy.asfreq(infer_freq)
        endog = df_copy[self.response_col]
        # forward only the kwargs that SARIMAX actually accepts
        sig = inspect.signature(SARIMAX)
        all_params = dict()
        for key in sig.parameters.keys():
            if hasattr(self, key):
                all_params[key] = getattr(self, key)
        self.df = df_copy
        self.model = SARIMAX(endog=endog, **all_params).fit(disp=False)

    def predict(self, df, **kwargs):
        df_copy = df.copy()
        infer_freq = pd.infer_freq(df_copy[self.date_col])
        df_copy = df_copy.set_index(self.date_col)
        df_copy = df_copy.asfreq(infer_freq)
        pred_array = np.array(self.model.predict(start=df_copy.index[0],
                                                 end=df_copy.index[-1],
                                                 **kwargs))
        out = pd.DataFrame({
            self.date_col: df[self.date_col],
            'prediction': pred_array
        })
        return out


class ProphetWrapper(object):
    def __init__(self, response_col, date_col, **kwargs):
        kw_params = locals()['kwargs']
        for key, value in kw_params.items():
            setattr(self, key, value)
        self.response_col = response_col
        self.date_col = date_col
        self.model = Prophet(**kwargs)

    def fit(self, df):
        # re-instantiate Prophet with only the kwargs it accepts (a fitted Prophet cannot be refit)
        sig = inspect.signature(Prophet)
        all_params = dict()
        for key in sig.parameters.keys():
            if hasattr(self, key):
                all_params[key] = getattr(self, key)
        object_type = type(self.model)
        self.model = object_type(**all_params)
        train_df = df.copy()
        train_df = train_df.rename(columns={self.date_col: "ds", self.response_col: "y"})
        self.model.fit(train_df)

    def predict(self, df):
        df = df.copy()
        df = df.rename(columns={self.date_col: "ds"})
        pred_df = self.model.predict(df)
        pred_df = pred_df.rename(columns={'yhat': 'prediction', 'ds': self.date_col})
        pred_df = pred_df[[self.date_col, 'prediction']]
        return pred_df
```
Declare the model objects and run the backtest; scores are shown at the end.
```
dlt = DLTMAPWrapper(
    response_col=response_col,
    date_col=date_col,
    seasonality=seasonality,
    seed=seed,
)
sarima = SARIMAXWrapper(
    response_col=response_col,
    date_col=date_col,
    seasonality=seasonality,
    seed=seed,
)
prophet = ProphetWrapper(
    response_col=response_col,
    date_col=date_col,
)
all_scores = []
for key in tqdm.tqdm(sample_keys):
    # dlt
    df = data[data[key_col] == key]
    bt = BackTester(
        model=dlt,
        df=df,
        **backtest_args,
    )
    bt.fit_predict()
    scores_df = bt.score(metrics=[smape])
    scores_df[key_col] = key
    scores_df['model'] = 'dlt'
    all_scores.append(scores_df)
    # sarima
    df = data[data[key_col] == key]
    bt = BackTester(
        model=sarima,
        df=df,
        **backtest_args,
    )
    bt.fit_predict()
    scores_df = bt.score(metrics=[smape])
    scores_df[key_col] = key
    scores_df['model'] = 'sarima'
    all_scores.append(scores_df)
    # prophet
    df = data[data[key_col] == key]
    bt = BackTester(
        model=prophet,
        df=df,
        **backtest_args,
    )
    bt.fit_predict()
    scores_df = bt.score(metrics=[smape])
    scores_df[key_col] = key
    scores_df['model'] = 'prophet'
    all_scores.append(scores_df)
all_scores = pd.concat(all_scores, axis=0, ignore_index=True)
all_scores.groupby('model')['metric_values'].apply(np.mean).reset_index()
```
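For reference, the `smape` metric passed to `bt.score` above can be approximated with plain NumPy. This is a sketch of one common definition of sMAPE; orbit's own implementation may differ in scaling:

```python
import numpy as np

def smape(actual, predicted):
    """Symmetric MAPE: mean of 2*|F - A| / (|A| + |F|)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    denom = np.abs(actual) + np.abs(predicted)
    # guard against 0/0 when both the actual and the forecast are zero
    safe_denom = np.where(denom == 0, 1.0, denom)
    ratio = np.where(denom == 0, 0.0, 2.0 * np.abs(predicted - actual) / safe_denom)
    return float(np.mean(ratio))

print(smape([100, 200], [110, 180]))
```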
| github_jupyter |
# Corpus scratch
This notebook is for miscellaneous processing of the swbd.tab database file.
```
import pandas as pd
import numpy as np
# import the database file from the TGrep2 searching
df = pd.read_csv("../results/swbd.tab", sep='\t', engine='python')
d = pd.read_csv("swbd_contexts.csv")
# This makes the display show more info
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', None)
# New run
df.groupby("QuestionType")["QuestionType"].count()
```
# Good MA examples
```
# high rating for 'every' m = 0.8259341
d.loc[d["TGrepID"] == "2512:34"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "149808:32"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "66692:37"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "164808:75"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "34334:57"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "4959:7"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "73191:21"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "101503:16"][["EntireSentence","PreceedingContext"]]
```
## KNOW MS
```
d.loc[d["TGrepID"] == "26185:5"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "22314:16"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "153958:24"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "13191:16"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "38489:4"][["EntireSentence","PreceedingContext"]]
```
### MA=MS
```
d.loc[d["TGrepID"] == "12883:11"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "17191:26"][["EntireSentence","PreceedingContext"]]
```
## SURPRISE
- the: .52
- other: .26
- a: .17
- every: .05
```
d.loc[d["TGrepID"] == "72529:48"][["EntireSentence","PreceedingContext"]]
```
## PREDICT
- every: .7
- a: .18
- other: .09
- the: .04
```
d.loc[d["TGrepID"] == "75191:67"][["EntireSentence","PreceedingContext"]]
```
# Cases where there's a large change from context to nocontext
```
d.loc[d["TGrepID"] == "132917:4"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "89239:4"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "11053:4"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "126136:4"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "49447:4"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "13656:19"][["EntireSentence","PreceedingContext"]]
d.loc[d["TGrepID"] == "58758:18"][["EntireSentence","PreceedingContext"]]
```
## Cases with little difference
```
# every --> the
d.loc[d["TGrepID"] == "58058:6"][["EntireSentence","PreceedingContext"]]
# no real change
d.loc[d["TGrepID"] == "148861:4"][["EntireSentence","PreceedingContext"]]
# no real change
d.loc[d["TGrepID"] == "5042:13"][["EntireSentence","PreceedingContext"]]
```
| github_jupyter |
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
# Handling Missing Data
```
import numpy as np
import pandas as pd
```
### NaN and None in Pandas
``NaN`` and ``None`` both have their place, and Pandas is built to handle the two of them nearly interchangeably, converting between them where appropriate:
```
pd.Series([1, np.nan, 2, None])
1 + np.nan   # NaN propagates through arithmetic
1 + None     # raises TypeError: None does not support arithmetic
```
## Operating on Null Values
- ``isnull()``: Generate a boolean mask indicating missing values
- ``notnull()``: Opposite of ``isnull()``
- ``dropna()``: Return a filtered version of the data
- ``fillna()``: Return a copy of the data with missing values filled or imputed
### Detecting null values
Pandas data structures have two useful methods for detecting null data: ``isnull()`` and ``notnull()``.
Either one will return a Boolean mask over the data. For example:
```
data = pd.Series([1, np.nan, 'hello', None])
data
data.isnull()
data[data.notnull()]
```
### Dropping null values
In addition to the masking used before, there are the convenience methods, ``dropna()``
(which removes NA values) and ``fillna()`` (which fills in NA values). For a ``Series``,
the result is straightforward:
```
data.dropna()
df = pd.DataFrame([[1, np.nan, 2],
[2, 3, 5],
[np.nan, 4, 6]])
df
```
We cannot drop single values from a ``DataFrame``; we can only drop full rows or full columns.
Depending on the application, you might want one or the other, so ``dropna()`` gives a number of options for a ``DataFrame``.
By default, ``dropna()`` will drop all rows in which *any* null value is present:
```
df.dropna()
```
Alternatively, you can drop NA values along a different axis; ``axis=1`` drops all columns containing a null value:
```
df.dropna(axis='columns')
```
But this drops some good data as well; you might rather be interested in dropping rows or columns with *all* NA values, or a majority of NA values.
This can be specified through the ``how`` or ``thresh`` parameters, which allow fine control of the number of nulls to allow through.
The default is ``how='any'``, such that any row or column (depending on the ``axis`` keyword) containing a null value will be dropped.
You can also specify ``how='all'``, which will only drop rows/columns that are *all* null values:
```
df[3] = np.nan
df
df.dropna(axis='columns', how='any')
df.dropna(axis='columns', how='all')
```
For finer-grained control, the ``thresh`` parameter lets you specify a minimum number of non-null values for the row/column to be kept:
```
df
df.dropna(axis='rows', thresh=3)
```
### Filling null values
Sometimes rather than dropping NA values, you'd rather replace them with a valid value.
This value might be a single number like zero, or it might be some sort of imputation or interpolation from the good values.
You could do this in-place using the ``isnull()`` method as a mask, but because it is such a common operation Pandas provides the ``fillna()`` method, which returns a copy of the array with the null values replaced.
Consider the following ``Series``:
```
data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
data
data.fillna(0)
data.fillna(data.mean())
data
# forward-fill
data.fillna(method='ffill')
# back-fill
data.fillna(method='bfill')
df
df.fillna(0)
df
df.fillna(method='ffill', axis=1)
df.fillna(method='ffill', axis=0)
df.replace(np.nan, 7)
df.replace(5, 7)
```
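The interpolation mentioned above is also available directly via the standard pandas `interpolate()` method; a minimal sketch on the same kind of `Series`:

```python
import numpy as np
import pandas as pd

data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))

# linear interpolation (the default) estimates each missing value
# from its non-null neighbors
filled = data.interpolate()
print(filled)
```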
| github_jupyter |
```
import pandas as pd
import numpy as np

df = pd.read_csv("./word2vec_wrangling.csv")
exercise_to_loop = df["exercise_name"].to_list()

# -*- coding: utf-8 -*-
import re
from konlpy.tag import Mecab, Okt
from collections import Counter

def preprocessing_hangul(text):
    # keep only Hangul characters and spaces
    hangul = re.compile('[^ ㄱ-ㅣ가-힣]+')
    result = hangul.sub('', text)
    return result

def yield_combined_df(exercise_name):
    file_name = "#" + exercise_name + "_sum.txt"
    file_1 = "/Users/noopy/FitCuration/" + file_name
    text = open(file_1, 'r', -1, "UTF-8", errors="ignore").read()
    clean_text = preprocessing_hangul(text)
    # extract nouns with MeCab
    mecab = Mecab()
    noun_list_mecab = mecab.nouns(clean_text)
    # filter out stopwords
    stopwords_mecab = ['수','퀄리티','도시','분','전문','스타','년','원',
                       '월','화','수','목','금','시','앤','일','그램','문']
    clean_noun_list_mecab = []
    for n in noun_list_mecab:
        if n not in stopwords_mecab:
            clean_noun_list_mecab.append(n)
    # get the 100 most common nouns
    nouns_mecab = Counter(clean_noun_list_mecab)
    tags_mecab = nouns_mecab.most_common(100)
    # extract predicates with Okt (the updated Twitter tagger)
    twitter = Okt()
    # POS-tag the cleaned text
    morphed_list_okt = twitter.pos(clean_text)
    # stopwords for the Okt pass
    stopwords_Twitter = ["입니다","있는","있습니다","같은","안녕하세요","고마워요","있어요","있게",
                         "있도록","부탁드립니다","하는","합니다","할","하세요","하기","해","됩니다","하여",'잘','된','되고','되어','되었습니다',"없는","드립니다",
                         "되기","하시는","하고","않을","같다","싶다","이런","저런","그런",'바랍니다',
                         "했습니다","했다","해드립니다","하신","하실","않고","해요","가능합니다","하고싶으신",
                         "않으며","주세요","오세요"]
    # keep only adjectives and verbs
    adj_list_okt = []
    for word, tag in morphed_list_okt:
        if tag in ['Adjective','Verb'] and word not in stopwords_Twitter:
            adj_list_okt.append(word)
    # count the selected tokens and keep the 100 most common
    adj_counts_okt = Counter(adj_list_okt)
    common_adj_okt = adj_counts_okt.most_common(100)
    noun_df = pd.DataFrame(np.sort(np.array(tags_mecab), axis=1), columns=[exercise_name+"freq", exercise_name+'관련명사'])
    adj_df = pd.DataFrame(np.sort(np.array(common_adj_okt), axis=1), columns=[exercise_name+"freq", exercise_name+'관련서술어'])
    combined_df = pd.concat([noun_df, adj_df], axis=1)
    return combined_df

exercise_to_loop.sort()
exercise_to_loop[:10]
df0 = yield_combined_df('PT')
for item in exercise_to_loop:
    df0 = pd.concat([df0, yield_combined_df(item)], axis=1)
df0.to_csv("combined_wrangling_db.csv", index=False, header=True)
```
| github_jupyter |
# Part 9 - Intro to Encrypted Programs
Believe it or not, it is possible to compute with encrypted data. In other words, it is possible to run a program where **ALL of the variables** are **encrypted**!
In this tutorial, we will walk through the basic tools of encrypted computation. In particular, we will focus on one popular approach called Secure Multi-Party Computation. In this lesson, we will build an encrypted calculator that can perform calculations on encrypted numbers.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Théo Ryffel - GitHub: [@LaRiffle](https://github.com/LaRiffle)
References:
- Morten Dahl - [Blog](https://mortendahl.github.io) - Twitter: [@mortendahlcs](https://twitter.com/mortendahlcs)
Translated by:
- Temitọpẹ Ọladokun - Twitter: [@techie991](https://twitter.com/techie991)
# Step 1: Encryption Using Secure Multi-Party Computation
SMPC is, at first glance, a strange form of "encryption". Instead of using a public/private key to encrypt a variable, each value is split into multiple `shares`, each of which operates like a private key. Typically, these `shares` are distributed amongst 2 or more _owners_. Thus, in order to decrypt the variable, all owners must agree to the decryption. In essence, everyone has a private key.
### Encrypt()
So, let's say we wanted to "encrypt" a variable `x`; we could do so in the following way.
> Encryption doesn't use floats or real numbers, but happens in a mathematical space called an [integer quotient ring](http://mathworld.wolfram.com/QuotientRing.html), which is the integers between `0` and `Q-1`, where `Q` is prime and "big enough" so that the space can contain all the numbers that we use in our experiments. In practice, given an integer value `x`, we do `x % Q` to fit it into the ring. (That's why we avoid using numbers `x' > Q`.)
```
Q = 1234567891011
x = 25
import random
def encrypt(x):
    share_a = random.randint(-Q, Q)
    share_b = random.randint(-Q, Q)
    share_c = (x - share_a - share_b) % Q
    return (share_a, share_b, share_c)
encrypt(x)
```
As you can see, we have split our variable `x` into 3 different shares, which could be sent to 3 different owners.
### Decrypt()
To decrypt these 3 shares, we simply sum them together and take the modulus of the result (mod Q).
```
def decrypt(*shares):
    return sum(shares) % Q
a,b,c = encrypt(25)
decrypt(a, b, c)
```
Importantly, notice that if we try to decrypt with only two of the shares, the decryption does not work!
```
decrypt(a, b)
```
We need all of the owners to participate in order to decrypt a value. It is in this way that the `shares` act like private keys: all of them must be present before a value can be decrypted.
# Step 2: Basic Arithmetic Using SMPC
The extraordinary property of Secure Multi-Party Computation is its ability to perform computation **while the variables are still encrypted**. Let's demonstrate simple addition below.
```
x = encrypt(25)
y = encrypt(5)
def add(x, y):
    z = list()
    # the first worker adds their shares together
    z.append((x[0] + y[0]) % Q)
    # the second worker adds their shares together
    z.append((x[1] + y[1]) % Q)
    # the third worker adds their shares together
    z.append((x[2] + y[2]) % Q)
    return z
decrypt(*add(x,y))
```
### Success!!!
And there you have it! If each worker (separately) adds their shares together, the resulting shares decrypt to the correct value (25 + 5 == 30).
As it turns out, SMPC protocols exist that allow this kind of encrypted computation for the following operations:
- addition (which we've just seen)
- multiplication
- comparison
Using these basic underlying primitives, we can perform arbitrary computation!!!
In the next section, we will learn how to use the PySyft library to perform these operations!
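Before moving to PySyft, one more operation is easy to verify by hand with the pure-Python shares above: multiplying an encrypted value by a *public* (unencrypted) constant, since each owner can simply scale their own share. A minimal sketch (`encrypt`, `decrypt`, and `Q` are the definitions from earlier in this notebook; `public_mul` is a new helper added for illustration):

```python
import random

Q = 1234567891011

def encrypt(x):
    share_a = random.randint(-Q, Q)
    share_b = random.randint(-Q, Q)
    share_c = (x - share_a - share_b) % Q
    return (share_a, share_b, share_c)

def decrypt(*shares):
    return sum(shares) % Q

def public_mul(x_shares, c):
    # each owner multiplies their own share by the public constant c
    return [(s * c) % Q for s in x_shares]

x = encrypt(25)
print(decrypt(*public_mul(x, 3)))  # 25 * 3 == 75
```

No communication between the owners is needed for this operation, which is why multiplication by a *private* value (covered below) is the genuinely hard case.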
# Step 3: SMPC Using PySyft
In the previous sections, we discussed some of the intuition behind SMPC and how it works. However, we don't want to hand-write all of the primitive operations ourselves every time we write an encrypted program. So, in this section we will walk through the basics of doing encrypted computation using PySyft. In particular, we will focus on how to perform the 3 primitives mentioned earlier: addition, multiplication, and comparison.
First, we need to create a few Virtual Workers (which should be familiar from our previous tutorials).
```
import torch
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
bill = sy.VirtualWorker(hook, id="bill")
```
### Basic Encryption/Decryption
Encryption is as simple as taking any PySyft tensor and calling `.share()`. Decryption is as simple as calling `.get()` on the shared variable.
```
x = torch.tensor([25])
x
encrypted_x = x.share(bob, alice, bill)
encrypted_x.get()
```
### Introspecting the Encrypted Values
If we look closely at Bob's, Alice's, and Bill's workers, we can see the shares that were created!
```
bob._objects
x = torch.tensor([25]).share(bob, alice, bill)
# Bob's share
bobs_share = list(bob._objects.values())[0]
bobs_share
# Alice's share
alices_share = list(alice._objects.values())[0]
alices_share
# Bill's share
bills_share = list(bill._objects.values())[0]
bills_share
```
If we wanted to, we could decrypt these values using the SAME approach we discussed earlier!!!
```
Q = x.child.field
(bobs_share + alices_share + bills_share) % Q
```
Notice how calling `.share()` simply split the value into 3 shares and sent one share to each of the parties!
# Encrypted Arithmetic
We can perform arithmetic on the underlying values! The API is constructed so that we can perform arithmetic just like we would with normal PyTorch tensors.
```
x = torch.tensor([25]).share(bob,alice)
y = torch.tensor([5]).share(bob,alice)
z = x + y
z.get()
z = x - y
z.get()
```
# Encrypted Multiplication
For multiplication we need an additional party who is responsible for consistently generating random numbers (and who doesn't collude with any of the other parties). We call this person the "crypto provider". For all intents and purposes, the crypto provider is just an additional VirtualWorker, but it's important to acknowledge that the crypto provider is not an "owner" in that he/she doesn't hold shares, but is someone who must be trusted not to collude with any of the existing shareholders.
```
crypto_provider = sy.VirtualWorker(hook, id="crypto_provider")
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
# multiplication
z = x * y
z.get()
```
You can also perform matrix multiplication:
```
x = torch.tensor([[1, 2],[3,4]]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([[2, 0],[0,2]]).share(bob,alice, crypto_provider=crypto_provider)
# matrix multiplication
z = x.mm(y)
z.get()
```
# Encrypted Comparison
It is also possible to perform private comparisons between private values. This relies on the SecureNN protocol, the details of which can be found [here](https://eprint.iacr.org/2018/442.pdf). The result of the comparison is also a private shared tensor.
```
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
z = x > y
z.get()
z = x <= y
z.get()
z = x == y
z.get()
z = x == y + 20
z.get()
```
You can also perform max operations:
```
x = torch.tensor([2, 3, 4, 1]).share(bob,alice, crypto_provider=crypto_provider)
x.max().get()
x = torch.tensor([[2, 3], [4, 1]]).share(bob,alice, crypto_provider=crypto_provider)
max_values, max_ids = x.max(dim=0)
max_values.get()
```
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy-preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! You can go to the PySyft GitHub Issues page and filter for "Projects". This will show you all the top-level tickets giving an overview of what projects you can join! If you don't want to join a project but would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
| github_jupyter |
# Document Classification Tutorial 1
(C) 2019 by [Damir Cavar](http://damir.cavar.me/)
## Amazon Reviews
See for more details the source of this tutorial: [https://www.analyticsvidhya.com/blog/2018/04/a-comprehensive-guide-to-understand-and-implement-text-classification-in-python/](https://www.analyticsvidhya.com/blog/2018/04/a-comprehensive-guide-to-understand-and-implement-text-classification-in-python/)
We will use the data provided at [this site](https://gist.github.com/kunalj101/ad1d9c58d338e20d09ff26bcc06c4235). This is a collection of 3.6 million Amazon text reviews and labels. The data is formatted using the [FastText](https://fasttext.cc/docs/en/supervised-tutorial.html) corpus format; that is, each file contains lines with a label followed by the text.
`__label__2 Stuning even for the non-gamer: This sound track was beautiful! It paints the senery in your mind so well I would recomend it even to people who hate vid. game music! I have played the game Chrono Cross but out of all of the games I have ever played it has the best music! It backs away from crude keyboarding and takes a fresher step with grate guitars and soulful orchestras. It would impress anyone who cares to listen! ^_^`
We load the data set
```
data = open('data/corpus').read()
labels, texts = [], []
for line in data.split("\n"):
    if not line:
        continue  # skip blank lines (e.g., a trailing newline in the file)
    content = line.split(' ', 1)
    labels.append(content[0])
    texts.append(content[1])
print(texts[:3])
```
We will use Pandas to store the labels and texts in a DataFrame. We import Pandas:
```
import pandas
```
Packing the data into a Pandas DataFrame:
```
corpus = pandas.DataFrame()
corpus['text'] = texts
corpus['label'] = labels
```
From *scikit_learn* we will import *model_selection*. This module contains a function *train_test_split* that splits arrays or matrices into random train and test subsets. See for more details the [documentation page](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
```
from sklearn import model_selection
```
We will select a third of the data set for testing. The *random_state* in the default will use *np.random* in this function call.
```
train_text, test_text, train_label, test_label = model_selection.train_test_split(corpus['text'],
corpus['label'],
test_size=0.33)
print(train_text[:2])
print(test_text[:2])
```
We use the *scikit_learn* module for *preprocessing*. We will use the *LabelEncoder* in the *preprocessing* module to normalize the labels such that they contain only values between 0 and n_classes-1. See for more details the [documentation page](https://scikit-learn.org/stable/modules/preprocessing_targets.html#preprocessing-targets).
```
from sklearn import preprocessing
encoder = preprocessing.LabelEncoder()
```
We encode the labels for the training and test set:
```
print(test_label[:10])
train_label = encoder.fit_transform(train_label)
test_label = encoder.fit_transform(test_label)
print(test_label[:10])
```
## Feature Engineering
To engineer a classifier, we will select different types of features. We will start using the count vectors as features. In count vectors, each row represents a document from the corpus and each column represents a word from the corpus. The scalar in each vector contains the frequency of a particular token (column) in the document (row). We will import the *CountVectorizer* from the *scikit-learn* module and its *feature_extraction.text* collection:
```
from sklearn.feature_extraction.text import CountVectorizer
```
The *CountVectorizer* should make features of word n-grams, as specified in *analyzer='word'*. The *token_pattern* parameter is a regular expression denoting what constitutes a token and it is only used if *analyzer == 'word'*. The regular expression here selects words to be tokens of one or more characters. See for more details the [documentation page](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html).
```
vectorizer = CountVectorizer(analyzer='word', token_pattern=r'\w{1,}')
```
The *fit* method applied to the *vectorizer* object learns a vocabulary dictionary of all tokens in the raw texts.
```
vectorizer.fit(corpus['text'])
```
We will now transform the training and test data using the *vectorizer* object:
```
train_text_count = vectorizer.transform(train_text)
test_text_count = vectorizer.transform(test_text)
```
We will use the *scikit_learn* module for *linear models*:
```
from sklearn import linear_model
```
We can now apply logistic regression on the transformed data and print the resulting accuracy. We create an instance of a Logistic Regression classifier using the *liblinear* algorithm as a solver for optimization. We train the model and generate the predictions for the test data. See for more details the [documentation page](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html).
```
classifier = linear_model.LogisticRegression(solver='liblinear')
classifier.fit(train_text_count, train_label)
predictions = classifier.predict(test_text_count)
```
We will use the *metrics* module in *scikit_learn* to compute the accuracy score:
```
from sklearn import metrics
```
To compute the accuracy score, we provide the *accuracy_score* function in the *metrics* module with the predicted labels for the test data set and the real labels.
```
accuracy = metrics.accuracy_score(predictions, test_label)
print("LR, Count Vectors: ", accuracy)
```
In this case logistic regression as a classifier on the word count vectors results in more than 84% accuracy.
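Count vectors are only the first feature type one might try; a natural next step is TF-IDF weighting with the same pipeline. The sketch below uses a small made-up corpus in place of the Amazon reviews (which are not bundled here), so the accuracy it prints is illustrative only; `TfidfVectorizer` is standard scikit-learn API:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import linear_model, metrics, model_selection

# toy corpus standing in for the Amazon reviews (1 = positive, 0 = negative)
texts = ["great sound track beautiful music", "terrible boring waste of money",
         "loved it wonderful story", "awful product broke quickly",
         "best purchase ever highly recommend", "worst book do not buy"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

train_x, test_x, train_y, test_y = model_selection.train_test_split(
    texts, labels, test_size=0.33, random_state=0)

# same recipe as above, but with TF-IDF weighting instead of raw counts
tfidf = TfidfVectorizer(analyzer='word', token_pattern=r'\w{1,}')
tfidf.fit(texts)

clf = linear_model.LogisticRegression(solver='liblinear')
clf.fit(tfidf.transform(train_x), train_y)
preds = clf.predict(tfidf.transform(test_x))
acc = metrics.accuracy_score(preds, test_y)
print("LR, TF-IDF Vectors: ", acc)
```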
| github_jupyter |
**<p style="font-size: 35px; text-align: center">Hypothesis Testing</p>**
***<center>Miguel Ángel Vélez Guerra</center>***
<hr/>

<hr />
<hr />
**<p id="tocheading">Table of Contents</p>**
<br/>
<div id="toc"></div>
```
%%javascript
// Script to generate table of contents
$.getScript('../resources/table_of_contents.js')
```
<hr/>
<hr/>
## Imports
```
#-------Importing from other folder------#
import sys
sys.path.insert(0, "../resources/")
import mstats as ms
#-----------Miguel's statistics----------#
import scipy.stats as ss
import numpy as np
```
<hr/>
<hr/>
## 1. Two-Tailed Hypothesis Tests for the Population Mean with Large Samples
Suppose the bottler wants to test the hypothesis that the population mean is 16 ounces and selects a 5% significance level, since the hypothesis is that μ = 16.
The bottler selects a sample of n = 50 bottles with a mean of 16.357 ounces and a standard deviation of 0.866 ounces.
```
mu_embotellador = 16 # null-hypothesis value of the population mean
x__embotellador = 16.357 # sample mean
s_embotellador = 0.866 # sample standard deviation
n_embotellador = 50 # sample size
alpha_embotellador = 0.05 # significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μ = 16
**Ha:** μ ≠ 16
<u> **Step 2**</u>: Significance level
```
alpha_embotellador
```
<u> **Step 3**</u>: Critical values
```
crit_embotellador = ms.hypothesis.crit_val_norm(alpha_embotellador, 'two') # critical values
crit_embotellador
```
<u> **Step 4**</u>: Test statistic (Z)
```
z_embotellador = ms.generals.get_z(x__embotellador, mu_embotellador, s_embotellador, n=n_embotellador)
z_embotellador
```
<u> **Step 5**</u>: Decision
```
ms.graph.hypothesis(ss.norm, z_embotellador, alpha_embotellador, "two")
ms.hypothesis.reject_h0(crit_embotellador, z_embotellador, 'two')
```
**The null hypothesis IS rejected**, since the test statistic *2.91497830119627* falls outside the critical values *-1.959963984540054, 1.959963984540054*.
<u>**Step 6**</u>: Conclusion
At a significance level of *5%*, we can state that the average weight of the bottles is **different** from 16 ounces.
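The same two-tailed test can be reproduced without the notebook's custom `ms` helpers, using only `scipy.stats` (a sketch; `ms.hypothesis` internals are assumed to match these standard formulas):

```python
import numpy as np
from scipy import stats

mu0, xbar, s, n, alpha = 16, 16.357, 0.866, 50, 0.05

# test statistic: z = (x̄ - μ0) / (s / √n)
z = (xbar - mu0) / (s / np.sqrt(n))

# two-tailed critical value and p-value
crit = stats.norm.ppf(1 - alpha / 2)
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

print(round(z, 4), round(crit, 4), abs(z) > crit)
```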
<hr/>
<hr/>
## 2. One-Tailed Hypothesis Tests for the Population Mean with Large Samples
At a briefing for the corporate office, the manager of the Embassy Suites hotel in Atlanta reported that the average number of rooms rented per night is at least 212, i.e. μ > 212. One of the operating officers believes this figure may be somewhat overstated. A sample of 150 nights yields a mean of 201.3 rooms and a standard deviation of 45.5 rooms. If these results suggest that the manager has "inflated" his report, he will be severely reprimanded. At a 1% significance level, what is the manager's fate?
```
mu_habitaciones = 212 # null-hypothesis value of the population mean
x__habitaciones = 201.3 # sample mean
s_habitaciones = 45.5 # sample standard deviation
n_habitaciones = 150 # sample size
alpha_habitaciones = 0.01 # significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μ = 212
**Ha:** μ < 212
<u> **Step 2**</u>: Significance level
```
alpha_habitaciones
```
<u> **Step 3**</u>: Critical values
```
crit_habitaciones = ms.hypothesis.crit_val_norm(alpha_habitaciones, 'left')
crit_habitaciones
```
<u> **Step 4**</u>: Test statistic (Z)
```
z_habitaciones = ms.generals.get_z(x__habitaciones, mu_habitaciones, s_habitaciones, n=n_habitaciones)
z_habitaciones
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_habitaciones, z_habitaciones, 'left')
```
**The null hypothesis IS rejected**, since the test statistic *-2.8801692579977995* is less than the critical value *-2.3263478740408408*.
<u>**Step 6**</u>: Conclusion
At a significance level of *1%*, we can state that the average number of rooms rented per night is **less** than 212.
We can therefore conclude that the manager will be severely reprimanded for "inflating" his report.
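This left-tailed test can likewise be checked against `scipy.stats` directly (a sketch; the `ms` helpers are assumed to implement these standard formulas):

```python
import numpy as np
from scipy import stats

mu0, xbar, s, n, alpha = 212, 201.3, 45.5, 150, 0.01

z = (xbar - mu0) / (s / np.sqrt(n))   # test statistic
crit = stats.norm.ppf(alpha)          # left-tail critical value

# reject H0 when the statistic falls below the critical value
print(round(z, 4), round(crit, 4), z < crit)
```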
<hr/>
<hr/>
## 3. p-Value for a One-Tailed Test
Chuck Cash is the head of personnel at a company. From a brief analysis of employee records, Chuck believes that employees hold an average of more than 31,000 USD in their pension accounts. Taking a sample of 100 employees, Chuck finds a mean of 31,366 with s = 1,894. Suppose Chuck wants to compute the p-value associated with this right-tailed test.
```
mu_empleados = 31000 # null-hypothesis value of the population mean
n_empleados = 100 # sample size
x__empleados = 31366 # sample mean
s_empleados = 1894 # sample standard deviation
z_empleados = ms.generals.get_z(x__empleados, mu_empleados, s_empleados, n=n_empleados)
z_empleados
p_empleados = ms.hypothesis.get_p(z_empleados, 'right')
p_empleados
```
**A:** The smallest significance level at which Chuck can claim that employees hold an average of **more than** 31,000 USD in their pension accounts is **2.66%**.
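The p-value above is simply the right-tail area of the standard normal beyond the observed z; a `scipy` sketch (assuming `ms.hypothesis.get_p` implements the standard tail area):

```python
import numpy as np
from scipy import stats

mu0, xbar, s, n = 31000, 31366, 1894, 100

z = (xbar - mu0) / (s / np.sqrt(n))   # observed test statistic, roughly 1.93
p_right = 1 - stats.norm.cdf(z)       # right-tail p-value

print(round(p_right, 4))
```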
<hr/>
<hr/>
## 4. p-Value for a Two-Tailed Test
Chuck Cash also suspects that employees invest an average of 100 USD per month in the company's stock-purchase plan. Taking a sample of 100 employees, Chuck finds a mean of 106.81 USD with a standard deviation of 36.60 USD. He now wishes to determine the p-value associated with this hypothesis test.
```
mu_acciones = 100 # Hipótesis nula de la media poblacional
n_acciones = 100 # Tamaño de la muestra
x__acciones = 106.81 # Promedio muestral
s_acciones = 36.6 # Desviación estándar muestral
z_acciones = ms.generals.get_z(x__acciones, mu_acciones, s_acciones, n=n_acciones)
z_acciones
p_acciones = ms.hypothesis.get_p(z_acciones, 'two')
p_acciones
```
**R/** El mínimo nivel de significancia que puede tomar Chuck para determinar que los empleados invierten un promedio **diferente** de 100 USD mensuales en el plan de opción de compra de acciones de la compañía es de **6.27%**
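For a two-tailed test the p-value doubles the area beyond |z|. A minimal scipy sketch of the computation above (again assuming the custom `ms` module follows the standard formulas):

```python
from math import sqrt

from scipy.stats import norm

mu, n, x_bar, s = 100, 100, 106.81, 36.6

z = (x_bar - mu) / (s / sqrt(n))   # standardized test statistic
p_value = 2 * norm.sf(abs(z))      # two-tailed p-value: both tails beyond |z|
```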
<hr/>
<hr/>
## 5. Two-tailed hypothesis tests for the population mean with small samples
Students in a statistics class at State University question the claim that McDonald's puts 0.25 pounds of beef in its "Quarter Pounders." Some students argue that more is actually used, while others insist it is less. To test the advertised claim that the average weight is 0.25 pounds, each student buys a Quarter Pounder and brings it to class, where it is weighed on a scale supplied by the instructor. The sample results are a mean of 0.22 pounds and a standard deviation of 0.09. If there are 25 students in the class, what conclusion would they reach at a 5% significance level?
```
mu_mcd = 0.25 # Population mean under the null hypothesis
x__mcd = 0.22 # Sample mean
s_mcd = 0.09 # Sample standard deviation
n_mcd = 25 # Sample size
alpha_mcd = 0.05 # Significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μ = 0.25
**Ha:** μ ≠ 0.25
<u> **Step 2**</u>: Significance level
```
alpha_mcd
```
<u> **Step 3**</u>: Critical values
```
df_mcd = n_mcd - 1
crit_mcd = ms.hypothesis.crit_val_t(df_mcd, alpha_mcd, "two")
crit_mcd
```
<u> **Step 4**</u>: Test statistic (T)
```
t_mcd = ms.generals.get_t(x__mcd, mu_mcd, s_mcd, n_mcd)
t_mcd
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_mcd, t_mcd, 'two')
```
**The null hypothesis is NOT rejected**, since the test statistic *-1.6666666666666667* falls between the critical values *-2.0638985616280205, 2.0638985616280205*.
<u>**Step 6**</u>: Conclusion
At a significance level of *5%*, we conclude that there is not enough evidence to deny that McDonald's puts **exactly** 0.25 pounds of beef in its "Quarter Pounders."
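Since n = 25 is small, the test uses Student's t with n − 1 degrees of freedom. A scipy sketch of the statistic and two-tailed critical value computed by the `ms` helpers above:

```python
from math import sqrt

from scipy.stats import t

mu, x_bar, s, n, alpha = 0.25, 0.22, 0.09, 25, 0.05

t_stat = (x_bar - mu) / (s / sqrt(n))   # one-sample t statistic
crit = t.ppf(1 - alpha / 2, df=n - 1)   # two-tailed critical value, df = 24

reject = abs(t_stat) > crit             # H0 is kept when |t| <= crit
```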
<hr />
<hr />
## 6. One-tailed hypothesis tests for the population mean with small samples
The American Kennel Club (AKC) reported in its publication American Dog Owners (April 1997) that one-year-old cocker spaniels should weigh "a little over 40 pounds if they have received proper nutrition." To test the hypothesis, Hill's, a maker of diet dog food, weighs 15 one-year-old cockers and finds a mean of 41.17 pounds, with s = 4.71 pounds. Use α = 1%.
```
mu_perros = 40 # Population mean under the null hypothesis
n_perros = 15 # Sample size
x__perros = 41.17 # Sample mean
s_perros = 4.71 # Sample standard deviation
alpha_perros = 0.01 # Significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μ = 40
**Ha:** μ > 40
<u> **Step 2**</u>: Significance level
```
alpha_perros
```
<u> **Step 3**</u>: Critical values
```
df_perros = n_perros - 1
crit_perros = ms.hypothesis.crit_val_t(df_perros, alpha_perros, 'right')
crit_perros
```
<u> **Step 4**</u>: Test statistic (T)
```
t_perros = ms.generals.get_t(x__perros, mu_perros, s_perros, n_perros)
t_perros
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_perros, t_perros, 'right')
```
**The null hypothesis is NOT rejected**, since the test statistic *0.9620786656184043* is not greater than the critical value *2.624494067560231*.
<u>**Step 6**</u>: Conclusion
At a significance level of *1%*, we cannot reject that the average weight of one-year-old cocker spaniels is **less than or equal to** 40 pounds.
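For this right-tailed small-sample test, all of α sits in the upper tail. The same statistic and critical value with scipy:

```python
from math import sqrt

from scipy.stats import t

mu, n, x_bar, s, alpha = 40, 15, 41.17, 4.71, 0.01

t_stat = (x_bar - mu) / (s / sqrt(n))   # one-sample t statistic, df = 14
crit = t.ppf(1 - alpha, df=n - 1)       # right-tail critical value
```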
<hr />
<hr />
## 7. Two-tailed hypothesis tests for the population proportion
As director of marketing operations for a large retail chain, you believe that 60% of the firm's customers are college graduates. You intend to set an important pricing policy based on this proportion. A sample of 800 customers reveals that 492 have college degrees. At the 5% level, what can you conclude about the proportion of all customers who are college graduates?
```
pi_graduados = 0.6 # Population proportion under the null hypothesis
n_graduados = 800 # Sample size
p_graduados = 492/800 # Sample proportion
alpha_graduados = 0.05 # Significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** π = 0.6
**Ha:** π ≠ 0.6
<u> **Step 2**</u>: Significance level
```
alpha_graduados
```
<u> **Step 3**</u>: Critical values
```
crit_graduados = ms.hypothesis.crit_val_norm(alpha_graduados, 'two')
crit_graduados
```
<u> **Step 4**</u>: Test statistic (Z)
```
z_graduados = ms.generals.get_z_prop(p_graduados, pi_graduados, n_graduados)
z_graduados
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_graduados, z_graduados, 'two')
```
**The null hypothesis is NOT rejected**, since the test statistic *0.8660254037844394* falls between the critical values *-1.959963984540054, 1.959963984540054*.
<u>**Step 6**</u>: Conclusion
At a significance level of *5%*, we conclude that the proportion of all customers who are college graduates is **not different** from 0.6.
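A proportion test standardizes the sample proportion using the standard error implied by H0, √(π(1−π)/n). A scipy sketch matching the values above:

```python
from math import sqrt

from scipy.stats import norm

pi0, n, p_hat, alpha = 0.6, 800, 492 / 800, 0.05

se = sqrt(pi0 * (1 - pi0) / n)    # standard error under H0
z = (p_hat - pi0) / se            # z statistic for the proportion
crit = norm.ppf(1 - alpha / 2)    # two-tailed critical value
```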
<hr />
<hr />
## 8. One-tailed hypothesis tests for the population proportion
The CEO of a large manufacturing firm must ensure that at least 75% of employees have completed an advanced training course. Of 1200 randomly selected employees, 875 have done so. The CEO records this attendance to test the hypothesis and compute the p-value. At a 5% significance level, what conclusions would you include in your report?
```
pi_curso = 0.75 # Population proportion under the null hypothesis
n_curso = 1200 # Sample size
p_curso = 875/1200 # Sample proportion
alpha_curso = 0.05 # Significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** π = 0.75
**Ha:** π < 0.75
<u> **Step 2**</u>: Significance level
```
alpha_curso
```
<u> **Step 3**</u>: Critical values
```
crit_curso = ms.hypothesis.crit_val_norm(alpha_curso, 'left')
crit_curso
```
<u> **Step 4**</u>: Test statistic (Z)
```
z_curso = ms.generals.get_z_prop(p_curso, pi_curso, n_curso)
z_curso
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_curso, z_curso, 'left')
```
**The null hypothesis IS rejected**, since the test statistic *-1.6666666666666696* is less than the critical value *-1.6448536269514722*.
<u>**Step 6**</u>: Conclusion
At a significance level of *5%*, we can state that **fewer** than *75%* of the company's employees have completed the advanced training course.
The CEO should therefore take steps to increase the proportion of employees who complete the advanced training course.
<u>**p-value**</u>
```
pvalue_curso = ms.hypothesis.get_p(z_curso, 'left')
pvalue_curso
```
**R/** The smallest significance level at which the CEO would fail to reject that **at least** *75%* of the employees have taken the training course is *4.77%*.
Since the chosen significance level of *5%* exceeds this p-value, the claim that at least 75% of the employees have taken the training course can be **rejected**.
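For this left-tailed proportion test the p-value is the lower-tail area below z. A scipy sketch of the statistic and p-value reported above:

```python
from math import sqrt

from scipy.stats import norm

pi0, n, p_hat = 0.75, 1200, 875 / 1200

# z statistic using the standard error implied by H0
z = (p_hat - pi0) / sqrt(pi0 * (1 - pi0) / n)

p_value = norm.cdf(z)   # left-tail area P(Z < z)
```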
<hr/>
<hr/>
## 9. Hypothesis tests for the difference between 2 population means with large samples
Weaver Ridge Golf Course wants to see whether the average time men take to play 18 holes differs from that of women. Times are measured for 50 rounds played by men and 45 by women, with the following results:
Men:
X_ = 3.5 hours
S = 0.9 hours
Women:
X_ = 4.9 hours
S = 1.5 hours
Use a 5% significance level.
```
n_h = 50 # Size of sample 1
x_h = 3.5 # Mean of sample 1
s_h = 0.9 # Standard deviation of sample 1
n_m = 45 # Size of sample 2
x_m = 4.9 # Mean of sample 2
s_m = 1.5 # Standard deviation of sample 2
alpha_golf = 0.05 # Significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μh = μm
**Ha:** μh ≠ μm
<u> **Step 2**</u>: Significance level
```
alpha_golf
```
<u> **Step 3**</u>: Critical values
```
crit_golf = ms.hypothesis.crit_val_norm(alpha_golf, 'two')
crit_golf
```
<u> **Step 4**</u>: Test statistic (Z)
```
z_golf = ms.generals.get_z_2p(x_h, x_m, s_h, s_m, n_h, n_m)
z_golf
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_golf, z_golf, 'two')
```
**The null hypothesis IS rejected**, since the test statistic *-5.4412545203553035* falls outside the critical values *-1.959963984540054, 1.959963984540054*.
<u>**Step 6**</u>: Conclusion
At a significance level of *5%*, we can state that the average time men take to play 18 holes is **different** from that of women.
Moreover, since the rejection fell in the left tail, we can conclude that the average time women take to play 18 holes is **greater** than that of men ***(μ_m > μ_h)***.
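With both samples large, the difference of means is standardized by the combined standard error √(s₁²/n₁ + s₂²/n₂). The statistic above, computed directly:

```python
from math import sqrt

n_h, x_h, s_h = 50, 3.5, 0.9   # men
n_m, x_m, s_m = 45, 4.9, 1.5   # women

# Large-sample z for the difference of two independent means
z = (x_h - x_m) / sqrt(s_h**2 / n_h + s_m**2 / n_m)
```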
<hr/>
<hr/>
## 10. Hypothesis tests for the difference between 2 population means with small samples and equal variances
**Exercise 9.2** Wage negotiations between your company and its workers' union are about to break down. There is considerable disagreement over the average wage level of workers at the Atlanta plant and the Virginia plant. Wages were set by the old labor agreement three years ago and are based strictly on seniority. Because wages are tightly controlled by the labor contract, the variation in wages is assumed to be the same at both plants, and wages are assumed to be normally distributed. However, it is felt that there is a difference between the average wage levels due to the different seniority patterns at the two plants.
The labor negotiator representing management wants you to develop a 98% confidence interval to estimate the difference between the average wage levels. If there is a difference in the means, adjustments must be made to bring the lower wages up to the level of the higher ones. Given the following data, what adjustments, if any, are required?
-----
Returning to exercise 9.2, a 98% interval estimate of the difference in average wages was *-5.09 < μ1 - μ2 < 9.15*. Suppose that, instead of a confidence-interval estimate, we had wanted to run a hypothesis test that the population means were equal.
```
n_a = 23 # Size of sample 1
x_a = 17.53 # Mean of sample 1
s_a = 92.10**0.5 # Standard deviation of sample 1
n_v = 19 # Size of sample 2
x_v = 15.5 # Mean of sample 2
s_v = 87.19**0.5 # Standard deviation of sample 2
alpha_plantas = 0.02 # Significance level
var_plantas = True # Equal population variances?
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μa = μv
**Ha:** μa ≠ μv
<u> **Step 2**</u>: Significance level
```
alpha_plantas
```
<u> **Step 3**</u>: Critical values
```
df_plantas = n_a + n_v - 2 # Degrees of freedom when the population variances are equal
crit_plantas = ms.hypothesis.crit_val_t(df_plantas, alpha_plantas, 'two')
crit_plantas
```
<u> **Step 4**</u>: Test statistic (T)
```
t_plantas = ms.generals.get_t_2p(x_a, x_v, s_a, s_v, n_a, n_v, var_plantas)
t_plantas
```
<u> **Step 5**</u>: Decision
```
ms.graph.hypothesis(ss.t(df_plantas), t_plantas, alpha_plantas, 'two')
ms.hypothesis.reject_h0(crit_plantas, t_plantas, 'two')
```
**The null hypothesis is NOT rejected**, since the test statistic *0.6906455424446802* falls between the critical values *-2.4232567793348565, 2.4232567793348565*.
<u>**Step 6**</u>: Conclusion
At a significance level of *2%*, the average wages of workers at the Atlanta plant and the Virginia plant are **not significantly different**.
We can therefore conclude that no adjustments are necessary.
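When the population variances are assumed equal, the two sample variances are pooled with weights n − 1. A direct sketch of the pooled-variance t statistic computed by `get_t_2p` above:

```python
from math import sqrt

n_a, x_a, var_a = 23, 17.53, 92.10   # Atlanta plant
n_v, x_v, var_v = 19, 15.5, 87.19    # Virginia plant

# Pooled variance: (n1-1)*s1^2 + (n2-1)*s2^2, over n1 + n2 - 2 df
sp2 = ((n_a - 1) * var_a + (n_v - 1) * var_v) / (n_a + n_v - 2)

t_stat = (x_a - x_v) / sqrt(sp2 * (1 / n_a + 1 / n_v))
```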
<hr/>
<hr/>
## 11. Hypothesis tests for the difference between 2 population means with small samples and unequal variances
**9.3** Acme Ltd. sells two types of rubber bumpers for baby strollers. Wear tests to measure durability revealed that 13 type-1 bumpers lasted an average of 11.3 weeks, with a standard deviation of 3.5 weeks, while 10 type-2 bumpers lasted an average of 7.5 weeks, with a standard deviation of 2.7 weeks. Type 1 is more expensive to manufacture, and Acme's CEO does not want to use it unless it lasts, on average, at least eight weeks longer than type 2. The CEO will tolerate an error probability of only 2%. There is no evidence to suggest that the variances of the two products' lifetimes are equal.
-----
In exercise 9.3, a 98% interval for the difference in average durability of the 2 types of rubber stroller bumpers was estimated to be *0.5 < μ1 - μ2 < 7.1*. Suppose that, instead of a confidence-interval estimate, we had wanted to run a hypothesis test that the population means were equal.
```
n_am1 = 13 # Size of sample 1
x_am1 = 11.3 # Mean of sample 1
s_am1 = 3.5 # Standard deviation of sample 1
n_am2 = 10 # Size of sample 2
x_am2 = 7.5 # Mean of sample 2
s_am2 = 2.7 # Standard deviation of sample 2
alpha_am = 0.02 # Significance level
var_am = False # Equal population variances?
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μam1 = μam2
**Ha:** μam1 ≠ μam2
<u> **Step 2**</u>: Significance level
```
alpha_am
```
<u> **Step 3**</u>: Critical values
```
df_am = ms.generals.get_df_var(n_am1, n_am2, s_am1, s_am2) # Degrees of freedom when the population variances are unequal
crit_am = ms.hypothesis.crit_val_t(df_am, alpha_am, 'two')
crit_am
```
<u> **Step 4**</u>: Test statistic (T)
```
t_am = ms.generals.get_t_2p(x_am1, x_am2, s_am1, s_am2, n_am1, n_am2, var_am)
t_am
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_am, t_am, 'two')
```
**The null hypothesis IS rejected**, since the test statistic *2.9393776700394625* falls outside the critical values *-2.517696736682547, 2.517696736682547*.
<u>**Step 6**</u>: Conclusion
At a significance level of *2%*, we can state that the average lifetimes of the 2 types of bumpers are **not equal**.
Moreover, since the rejection fell in the right tail, we can state that the average lifetime of the type-1 bumper is **greater** than that of the type-2 bumper **(μam1 > μam2)**.
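With unequal variances, `get_df_var` presumably computes the Welch-Satterthwaite degrees of freedom. A direct sketch of the unpooled t statistic and that df formula:

```python
n1, x1, s1 = 13, 11.3, 3.5   # type-1 bumpers
n2, x2, s2 = 10, 7.5, 2.7    # type-2 bumpers

v1, v2 = s1**2 / n1, s2**2 / n2

# Unpooled (Welch) t statistic
t_stat = (x1 - x2) / (v1 + v2) ** 0.5

# Welch-Satterthwaite degrees of freedom
df_w = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
```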
<hr/>
<hr/>
## 12. Hypothesis tests for the difference between 2 population means with paired samples
**Exercise 9.4.** Vicki Peplow, regional director of medical-assistance payments for Aetna Insurance, found that two different hospitals seemed to charge widely different amounts for the same medical procedure. She collected observations of billing costs for 15 identical procedures at each hospital and constructed a 95% confidence interval for the difference between the average costs billed by each hospital. Paired samples were used because Vicki controlled for all relevant factors other than cost. If there is a difference, Ms. Peplow plans to report the matter to the Medicare health-care authorities. Should she file the report?

---
In exercise 9.4, Vicki Peplow prepared a 95% interval estimate for the difference in costs for identical procedures at the 2 hospitals. The result was *-146.33 < μ1 - μ2 < 28.47*. Suppose that, instead of a confidence-interval estimate, we had wanted to run a hypothesis test that the population means were equal.
```
n_hospital = 15 # Size of the samples
cond1_hospital = [465, 532, 426, 543, 587, 537, 598, 698, 378, 376, 524, 387, 429, 398, 412] # First condition of the sample
cond2_hospital = [512, 654, 453, 521, 632, 418, 587, 376, 529, 517, 476, 519, 587, 639, 754] # Second condition of the sample
alpha_hospital = 0.05 # Significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** μh1 = μh2
**Ha:** μh1 ≠ μh2
<u> **Step 2**</u>: Significance level
```
alpha_hospital
```
<u> **Step 3**</u>: Critical values
```
df_hospital = n_hospital - 1
crit_hospital = ms.hypothesis.crit_val_t(df_hospital, alpha_hospital, 'two')
crit_hospital
```
<u> **Step 4**</u>: Test statistic (T)
```
t_hospital = ms.generals.get_d_pair(n_hospital, cond1_hospital, cond2_hospital)
t_hospital
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_hospital, t_hospital, 'two')
```
**The null hypothesis is NOT rejected**, since the test statistic *-1.44642249848984* falls between the critical values *-2.1447866879169273, 2.1447866879169273*.
<u>**Step 6**</u>: Conclusion
At a significance level of *5%*, the average costs of the medical procedures at hospital 1 and hospital 2 are **not significantly different**.
Therefore, Vicki should not file the report with the Medicare health-care authorities, since there is **no significant difference** between the two hospitals' average procedure costs that would justify it.
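The paired test is a one-sample t test on the per-procedure differences. `scipy.stats.ttest_rel` does exactly this and reproduces the statistic above:

```python
from scipy.stats import ttest_rel

hosp1 = [465, 532, 426, 543, 587, 537, 598, 698, 378, 376, 524, 387, 429, 398, 412]
hosp2 = [512, 654, 453, 521, 632, 418, 587, 376, 529, 517, 476, 519, 587, 639, 754]

# Paired (dependent-samples) t test on the 15 matched procedures
res = ttest_rel(hosp1, hosp2)
```

`res.pvalue` being above 0.05 matches the decision not to reject H0.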
<hr/>
<hr/>
## 13. Hypothesis tests for the difference between 2 population proportions
A retailer wants to test the null hypothesis that the proportion of his male customers who buy on credit equals the proportion of women who use credit. He selects 100 male customers and finds that 57 bought on credit, while 52 of 110 women did. Use α = 1%.
```
n_hombres = 100 # Size of sample 1
p_hombres = 57/100 # Proportion of sample 1
n_mujeres = 110 # Size of sample 2
p_mujeres = 52/110 # Proportion of sample 2
alpha_credito = 0.01 # Significance level
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** πh = πm
**Ha:** πh ≠ πm
<u> **Step 2**</u>: Significance level
```
alpha_credito
```
<u> **Step 3**</u>: Critical values
```
crit_credito = ms.hypothesis.crit_val_norm(alpha_credito, 'two')
crit_credito
```
<u> **Step 4**</u>: Test statistic (Z)
```
z_credito = ms.generals.get_z_2prop(p_hombres, p_mujeres, n_hombres, n_mujeres)
z_credito
```
<u> **Step 5**</u>: Decision
```
ms.hypothesis.reject_h0(crit_credito, z_credito, 'two')
```
**The null hypothesis is NOT rejected**, since the test statistic *1.4163146434662097* falls between the critical values *-2.5758293035489004, 2.5758293035489004*.
<u>**Step 6**</u>: Conclusion
At a significance level of *1%*, the proportion of male customers who buy on credit is **not significantly different** from the proportion of female customers who buy on credit.
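Judging by the statistic it reports, `get_z_2prop` appears to use the unpooled standard error for the difference of proportions (many textbooks instead pool the two proportions under H0, which gives a slightly different z). A sketch of the unpooled version:

```python
from math import sqrt

n1, p1 = 100, 57 / 100   # men
n2, p2 = 110, 52 / 110   # women

# Unpooled standard error of the difference of two sample proportions
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

z = (p1 - p2) / se
```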
<hr/>
<hr/>
## 14. Hypothesis tests for the ratio of variances between 2 populations
A management consultant wants to test a hypothesis about two population means. Before doing so, however, he must decide whether there is any evidence suggesting that the population variances are equal. Collecting his data, the consultant finds:

The consultant uses α = 5%.
```
n1_gerencia = 10
var1_gerencia = 15.4**2
n2_gerencia = 10
var2_gerencia = 12.2**2
alpha_gerencia = 0.05
```
<u> **Step 1**</u>: State the hypotheses
**Ho:** σ1 = σ2
**Ha:** σ1 ≠ σ2
<u> **Step 2**</u>: Significance level
```
alpha_gerencia = 0.05/2
alpha_gerencia
```
<u> **Step 3**</u>: Critical values
```
df1_gerencia = n1_gerencia - 1
df2_gerencia = n2_gerencia - 1
crit_gerencia = ms.hypothesis.crit_val_f(df1_gerencia, df2_gerencia, alpha_gerencia)
crit_gerencia
```
<u> **Step 4**</u>: Test statistic (F)
```
f_gerencia = ms.generals.get_f_2p(var1_gerencia, var2_gerencia)
f_gerencia
```
<u> **Step 5**</u>: Decision
```
ms.graph.hypothesis(ss.f(df1_gerencia, df2_gerencia), f_gerencia, alpha_gerencia, "right")
ms.hypothesis.reject_h0(crit_gerencia, f_gerencia, "right")
```
Since F = 1.59 < 4.03, the null hypothesis is not rejected.
<u>**Step 6**</u>: Conclusion
The consultant can proceed with the hypothesis test about the population means under the assumption that the variances are equal.
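The variance-ratio test puts the larger sample variance in the numerator and, for this two-tailed hypothesis, places α/2 in the upper tail of the F distribution (which is why α was halved in Step 2). A scipy sketch:

```python
from scipy.stats import f

n1, s1 = 10, 15.4
n2, s2 = 10, 12.2
alpha = 0.05

# Ratio of sample variances, larger variance in the numerator
F = s1**2 / s2**2

# Upper-tail critical value with alpha/2 in that tail, df = (9, 9)
crit = f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)
```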
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
<div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 1. Overview</h2></div>
<br><br>
# Welcome to HoloViz!
HoloViz is a set of compatible tools to make it easier to see and understand your data at *every* stage needed by users, research groups, and projects:
- importing and cleaning
- initial exploration
- testing hypotheses
- generating accurate and convincing figures
- sharing and deploying live apps
- improving and adapting each stage over time
Why "Holo"? "*holo-*", from the Greek root "*hólos*", means ["whole, entire, complete"](https://www.dictionary.com/browse/holo-).
# Doesn't Python *already* cover all these stages?
Sure! That's how it ended up with:
<div style="clear:left;">
<img src="../assets/landscape_hv_nx.png" width=75% align="left" style="margin: 0px 30px">
</div>
## Why so many tools?
Because each tool is typically limited to one or two of the stages in the data life cycle, supporting some well but not the others:
- Simple, quick plots, but limited in capabilities, customization, and compositionality.
- Deep capabilities for print-based plots, but weak or no support for custom interactivity in web apps.
- Good interactivity in Jupyter, but weak or no support for batch processing or deployed servers.
- Good support for deployed servers, but weak or no support for Jupyter or interactive exploration in general.
- Good support for small datasets, but can't handle large datasets until *after* they have been explored enough to determine how to aggregate or model them
## How does HoloViz help?
To avoid having to abandon all your work on one stage to reach the next, HoloViz tools reduce friction and gaps between the stages:
- Focuses from the start on tools that support web browsers fully, because browser-based tools can also produce static output, but not vice versa.
- Focuses on writing Python, not web tech -- web tech like JS/CSS would be fine for deployments, but very impractical for exploratory analysis.
- Eliminates browser-based data-size restrictions, to avoid having to switch tools for each dataset.
- Makes sure that all interactivity works the same in Jupyter and in a standalone deployment
- Provides high-level (quick and convenient) interfaces that are shortcuts, not dead ends:
## Shortcuts, not dead ends
<img src="../assets/shortcuts.png" width=70% align="left">
## HoloViz principles
Throughout the tutorial, you'll see these principles at work:
- We'll usually start out with an easy one-line command to build a dashboard, app, or plot.
- Then we'll show what you can do with it at this level: customize it, compose it with other things, etc.
- Then we'll show you how to drop down a level when you need to make it do what you want
- Repeat as necessary, all the way down to the HTML, CSS, and JS!
<div style="clear:left;">
<img src="../assets/landscape_hv_nx_panel.png" width=65%
align="left" style="margin: 0px 30px">
<br><br>
<br><br>
HoloViz currently covers this subset of viz tools
<br><br>
</div>
<div style="clear:left;"></div>
<div style="clear:left;">
<img src="../assets/landscape_hv_nx_pyviz.png" width=65%
align="left" style="margin: 0px 30px">
<br><br>
<br><br>
These tools are the most fully supported and are often entirely sufficient on their own.
<br><br>
</div>
<div style="clear:left;"></div>
## HoloViz libraries
<img width="800" src="../assets/pn_hp_hv_gv_ds_pa_cs.png"/>
To address the above issues, we have developed a set of open-source
Python packages to streamline the process of working with small and
large datasets (from a few datapoints to billions or more) in a web
browser, whether doing exploratory analysis, making simple widget-based
tools, or building full-featured dashboards. The main libraries in this
ecosystem include:
- [Panel](http://panel.pyviz.org): Assembling objects from many different libraries into a layout or app, whether in a Jupyter notebook or in a standalone servable dashboard
- [hvPlot](http://hvplot.pyviz.org): Quickly return interactive Bokeh-based HoloViews or GeoViews objects from Pandas, Xarray, or other data structures
- [HoloViews](http://holoviews.org): Declarative objects for instantly visualizable data, building Bokeh plots from convenient high-level specifications
- [GeoViews](http://geo.holoviews.org): Visualizable geographic data that can be mixed and matched with HoloViews objects
- [Datashader](http://datashader.org): Rasterizing huge datasets quickly as fixed-size images
- [Param](http://param.pyviz.org): Declaring user-relevant parameters, making it simple to work with widgets inside and outside of a notebook context
- [Colorcet](http://colorcet.pyviz.org): Perceptually accurate continuous and categorical colormaps for any viz tool
## Built on the Python scientific ecosystem
Beyond the specific HoloViz tools, all these approaches work with and often rely upon a wide range of other open-source libraries for their implementation, including:
- [Bokeh](https://bokeh.org): HTML/JS plots in a web browser for Python data structures (used by Panel, hvPlot, HoloViews, GeoViews)
- [Matplotlib](https://matplotlib.org): Flexible, publication-quality plots (used by HoloViews, GeoViews; used with Panel)
- [Pandas](http://pandas.pydata.org): Convenient computation on columnar datasets (used by HoloViews and Datashader)
- [Xarray](http://xarray.pydata.org): Convenient computations on multidimensional array datasets (used by hvPlot, HoloViews, and Datashader)
- [Dask](http://dask.pydata.org): Efficient out-of-core/distributed computation on massive datasets (used by hvPlot, Datashader)
- [Numba](http://numba.pydata.org): Accelerated machine code for inner loops (used by Datashader)
- [Fastparquet](https://fastparquet.readthedocs.io): Efficient storage for columnar data (used with Datashader)
- [Cartopy](http://scitools.org.uk/cartopy): Support for geographical data (used by GeoViews; uses a wide range of other lower-level libraries)
## The HoloViz tutorial
In this tutorial, we'll focus on an example set of data about [earthquake events](https://earthquake.usgs.gov), using it to illustrate how to:
* create simple but powerful apps and dashboards out of anything in a Jupyter notebook
* make simple but powerful plots out of Pandas dataframes and Xarray multidimensional arrays
* handle columnar data, big data, geo data, array data
* provide custom interactive links between views of datasets
* handle the whole process from getting the data in, cleaning it, exploring it visually, creating plots for communication, building dashboards for sharing your analyses, and deploying dashboards.
The tutorial is organized from the most general to the most specific in terms of tool support. We first look at the [Panel](https://panel.pyviz.org) package, which works with nearly any plotting library, then [hvPlot](https://hvplot.pyviz.org), which works with nearly any data library and shares an API with many other plotting libraries, and then dive deeper into HoloViz-specific approaches that let you work with large data, provide deep interactivity, and use other advanced features.
## To summarize
- HoloViz provides a set of very high-level tools for interacting with data
- These tools make it simple to work with multidimensional data, flexibly selecting, visualizing, combining, and comparing it.
- The tools focus on information visualization in 2D in web browsers, not 3D scientific visualization.
- Together the tools support a flexible workflow with very little friction between initial exploratory analysis, making interactive apps, building fully deployable dashboards, and revisiting the initial analyses as needed, with changes immediately propagating to the deployed dashboard.
- The tools are designed around "shortcuts", not "dead ends", and so there is always another level deeper that you can go if you need more power or more customization.
## Getting started
Before going further, it's worth exploring some examples of what you can get with HoloViz, to make sure that it covers your needs:
- https://panel.pyviz.org/gallery
- https://examples.pyviz.org
And then you can browse through the already-run versions of the HoloViz [tutorials](tutorial/index.html) to see what they cover and how it all fits together. But everything on this website is a Jupyter Notebook that you can run yourself, once you follow the [installation](../installation) instructions, so the next step is then to try it all out and have fun exploring it!
```
# imports
import json
import multiprocessing
import os
import re
import string
import sys
sys.path.append("../")
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import gensim
import matplotlib.pyplot as plt
import nltk
import numpy as np
import pandas as pd
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
# from gensim.corpora import Dictionary
from datahandler import DataHandler
# fcns
stopwords = nltk.corpus.stopwords.words()
def filter_ngram(ngram, n: int):
    # Keep only ngrams whose first token is an adjective/noun or whose second token is a noun
    tag = nltk.pos_tag(ngram)
    if tag[0][1] not in ['JJ', 'NN'] and tag[1][1] not in ['NN']:
        return False
    if n == 2:
        if ngram[0] in stopwords or ngram[1] in stopwords:
            return False
    if n == 3:
        if ngram[0] in stopwords or ngram[-1] in stopwords or ngram[1] in stopwords:
            return False
    # Drop leftover single-letter tokens (contraction fragments) and pronoun placeholders
    if 'n' in ngram or 't' in ngram:
        return False
    if 'PRON' in ngram:
        return False
    return True
def merge_ngram(x, bigrams, trigrams):
    for gram in trigrams:
        x = x.replace(gram, '_'.join(gram.split()))
    for gram in bigrams:
        x = x.replace(gram, '_'.join(gram.split()))
    return x
def filter_stopwords(x):
    return [word for word in x.split() if word not in stopwords and len(word) > 2]
def filter_pos(x):
    pos = nltk.pos_tag(x)
    filtered = [word[0] for word in pos if word[1] in ['NN']]
    return filtered
seed = 123
data_dir = os.path.join(os.pardir, os.pardir, "web_data", "preproc")
print("Loading corpus")
corpus = DataHandler(data_dir, seed)
# print some various information from the corpus
print("Total Word Count: {}".format(corpus.total_words))
print("Number of Docs in the Corpus: {}".format(corpus.total_docs))
docs_fpath = corpus.data.keys()
# create dictionary for filename and text
fpath_txt = {}
for fpath in docs_fpath:
    with open(fpath, "r") as f:
        fpath_txt[fpath] = f.read()
# make dataframe
df = (pd.DataFrame.from_dict(fpath_txt, orient='index')
      .reset_index().rename(index=str, columns={'index': 'file_name', 0: 'text'}))
corpus = df['text']
print("Finished loading corpus")
min_bigram_frequency = 50
bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = nltk.collocations.BigramCollocationFinder.from_documents([doc.split() for doc in corpus])
finder.apply_freq_filter(min_bigram_frequency)
bigram_scores = finder.score_ngrams(bigram_measures.pmi)
bigram_pmi = pd.DataFrame(bigram_scores)
bigram_pmi.columns = ['bigram', 'pmi']
bigram_pmi.sort_values(by='pmi', axis = 0, ascending = False, inplace = True)
min_trigram_frequency = 50
trigram_measures = nltk.collocations.TrigramAssocMeasures()
finder = nltk.collocations.TrigramCollocationFinder.from_documents([doc.split() for doc in corpus])
finder.apply_freq_filter(min_trigram_frequency)
trigram_scores = finder.score_ngrams(trigram_measures.pmi)
trigram_pmi = pd.DataFrame(trigram_scores)
trigram_pmi.columns = ['trigram', 'pmi']
trigram_pmi.sort_values(by='pmi', axis = 0, ascending = False, inplace = True)
print("cell done")
min_pmi = 5
max_ngrams = 500
filtered_bigram = bigram_pmi[bigram_pmi.apply(lambda bigram:\
    filter_ngram(bigram['bigram'], 2)\
    and bigram['pmi'] > min_pmi, axis = 1)][:max_ngrams]
filtered_trigram = trigram_pmi[trigram_pmi.apply(lambda trigram: \
    filter_ngram(trigram['trigram'], 3)\
    and trigram['pmi'] > min_pmi, axis = 1)][:max_ngrams]
bigrams = [' '.join(x) for x in filtered_bigram.bigram.values if len(x[0]) > 2 or len(x[1]) > 2]
trigrams = [' '.join(x) for x in filtered_trigram.trigram.values if len(x[0]) > 2 or len(x[1]) > 2 and len(x[2]) > 2]
print("cell done")
corpus_w_ngrams = corpus.copy()
corpus_w_ngrams = corpus_w_ngrams.map(lambda x: merge_ngram(x, bigrams, trigrams))
print("cell done")
p = multiprocessing.Pool()
corpus_w_ngrams = p.map(filter_stopwords, [doc for doc in corpus_w_ngrams])
p.close()
print("cell done")
p = multiprocessing.Pool()
final_corpus = p.map(filter_pos, [doc for doc in corpus_w_ngrams])
p.close()
print("cell done")
dictionary = gensim.corpora.Dictionary(final_corpus)
dictionary.filter_extremes(no_below=10, no_above=0.20)
corpus_bow = [dictionary.doc2bow(doc) for doc in final_corpus]
print("cell done")
Lda = gensim.models.ldamodel.LdaModel
ldamodel = Lda(corpus_bow, num_topics=5, id2word = dictionary, passes=40,\
iterations=200, chunksize = 100, eval_every = None)
print("cell done")
p = pyLDAvis.gensim.prepare(ldamodel, corpus_bow, dictionary, mds='tsne')
pyLDAvis.save_html(p, 'web_lda_mp_debug.html')
coherence = []
for ii in range(3,5):
print('lda with {} topics'.format(ii))
Lda = gensim.models.ldamodel.LdaModel
ldamodel = Lda(corpus_bow, num_topics=ii, id2word = dictionary, passes=40,\
iterations=200, chunksize = 100, eval_every = None)
print("fit model, computing coherence")
cm = gensim.models.coherencemodel.CoherenceModel(model=ldamodel, texts=final_corpus,\
dictionary=dictionary, coherence='c_v')
coherence.append((ii,cm.get_coherence()))
print("generating tsne viz")
p = pyLDAvis.gensim.prepare(ldamodel, corpus_bow, dictionary, mds='tsne')
title = 'web_lda_mp_debug_cm_{}.html'.format(ii)
pyLDAvis.save_html(p, title)
print("done")
n_topics = [x[0] for x in coherence]
cm = [x[1] for x in coherence]
plt.plot(n_topics,cm)
plt.scatter(n_topics,cm)
plt.title('Number of Topics vs. Coherence')
plt.xlabel('Number of Topics')
plt.ylabel('Coherence')
plt.xticks(n_topics)
plt.savefig("topic_coherence.png")
plt.close()
```
<a href="https://pythonista.io"> <img src="img/pythonista.png"></a>
# *Selenium WebDriver*.
*Selenium WebDriver* is a tool that emulates the operations a user performs in a browser, making it possible to automate tests of a web interface.
The *Selenium WebDriver* documentation is available at the following link:
https://www.selenium.dev/documentation/webdriver/
## Preliminaries.
```
!sudo rm -rf /var/www/html/*
!sudo unzip data/html.zip -d /var/www/html/
ls /var/www/html
```
## *Selenium* in *Python*.
Although *Selenium WebDriver* was originally written in *Java*, implementations exist for several programming languages.
The documentation for *Selenium WebDriver* in *Python* is available at the following link:
https://selenium-python.readthedocs.io/
* The following cell will install the required software.
```
!pip install selenium
```
## *Selenium* drivers.
*Selenium WebDriver* takes advantage of the capabilities of the most popular browsers through browser-specific *drivers*. The supported browsers are:
* *Google Chrome*.
* *Mozilla Firefox*.
* *Microsoft Edge*.
* *Safari*.
https://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/
### Managing *Selenium* *drivers*.
To simplify the installation of the *Selenium* drivers, the *webdriver-manager* package will be used.
https://github.com/SergeyPirogov/webdriver_manager
```
!pip install webdriver-manager
```
## Enabling a virtual display on *Linux*.
```
!pip install PyVirtualDisplay
from pyvirtualdisplay import Display
display = Display(visible=0, size=(800, 600))
display.start()
```
## Running *Selenium*.
The ```selenium.webdriver``` object is the main component used to establish a connection with the selected browser through a specific *driver*.
The result of this connection is an object of the ```Webdriver``` class.
**Note:** For simplicity, from now on ```driver``` will be used to refer to an instance of ```Webdriver```.
**Example:**
* The following cell will create an instance of ```Webdriver``` using the *Firefox* *driver* and assign it the name ```driver```.
```
from selenium import webdriver
from webdriver_manager.firefox import GeckoDriverManager
# driver = webdriver.Chrome('/home/user/drivers/chromedriver')
driver = webdriver.Firefox(executable_path=GeckoDriverManager().install())
```
### Accessing a *URL*.
The ```driver.get()``` method lets the instantiated object access the *URL* passed as an argument.
* The following cell will connect to the content located at [```http://localhost```](http://localhost).
```
driver.get("http://localhost")
driver.save_full_page_screenshot("pantalla.png")
```
### Selecting elements.
The ```driver.find_element()``` method performs searches for elements within the site.
```
driver.find_element(by=<method>, value=<value>)
```
Where:
* ```method``` can be one of the following strings:
* ```'xpath'```
* ```'css selector'```
* ```'class name'```
* ```'id'```
* ```'tag name'```
* ```'name'```
* ```'link text'```
* ```'partial link text'```
* ```value``` is a specific search criterion.
If one or more elements are found, they are represented by objects of the ```WebElement``` class.
**NOTE:** For simplicity, from now on ```element``` will be used to refer to an instance of ```WebElement```.
```
driver.find_element(by="tag name", value="body")
sitio = driver.find_element(by="tag name", value="body")
```
### The ```element.screenshot()``` method.
```
sitio.screenshot("foto.png")
dir(sitio)
```
### Using the attributes and methods of ```element```.
```
sitio.text
sitio.find_elements(by="tag name", value="p")
for item in sitio.find_elements(by="tag name", value="p"):
print(item.text + '\n----\n')
anchors = sitio.find_elements(by="css selector", value="a")
anchors
for item in anchors:
print(item.get_attribute("href"))
sitio.find_elements(by="xpath", value="//*[contains(text(),'Menú')]")
menu = sitio.find_elements(by="xpath",
value="//*[contains(text(),'Menú')]")
```
### The ```element.click()``` method.
The ```element.click()``` method emulates the event of clicking the referenced element with a pointing device, triggering the corresponding action if one is enabled.
```
nuevo_sitio = menu[0].click()
driver.save_full_page_screenshot("menu.png")
```
### Window management.
```
driver.get('http://localhost')
reservaciones = driver.find_elements(by="xpath",
value="//*[contains(text(),'Reservaciones')]")
reservaciones[0].get_attribute("href")
reservaciones[0].click()
```
### The ```driver.switch_to``` attribute.
```
alerta = driver.switch_to.alert
alerta.text
alerta.send_keys("Pythonista")
alerta.accept()
saludo = driver.switch_to.alert
saludo.text
saludo.accept()
driver.save_full_page_screenshot('reservaciones.png')
driver.close()
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2022.</p>
```
#export
from fastai.basics import *
#hide
from nbdev.showdoc import *
#default_exp callback.schedule
```
# Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
```
from fastai.test_utils import *
```
## Annealing
```
#export
class _Annealer:
def __init__(self, f, start, end): store_attr(self, 'f,start,end')
def __call__(self, pos): return self.f(self.start, self.end, pos)
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return _Annealer(f, start, end)
return _inner
```
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into something taking `(start, end)` and returning a function of `pos`.
```
#export
#TODO Jeremy, make this pickle
#@annealer
#def SchedLin(start, end, pos): return start + pos*(end-start)
#@annealer
#def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
#@annealer
#def SchedNo (start, end, pos): return start
#@annealer
#def SchedExp(start, end, pos): return start * (end/start) ** pos
#
#SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
#SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
#SchedNo .__doc__ = "Constant schedule function with `start` value"
#SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def sched_lin(start, end, pos): return start + pos*(end-start)
def sched_cos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
def sched_no (start, end, pos): return start
def sched_exp(start, end, pos): return start * (end/start) ** pos
def SchedLin(start, end): return _Annealer(sched_lin, start, end)
def SchedCos(start, end): return _Annealer(sched_cos, start, end)
def SchedNo (start, end): return _Annealer(sched_no, start, end)
def SchedExp(start, end): return _Annealer(sched_exp, start, end)
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#hide
tst = pickle.dumps(SchedCos(0, 5))
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
for fn, t in zip(fns, annealings):
plt.plot(p, [fn(2, 1e-2)(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos.item())
return _inner
```
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
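The same piecewise lookup can be sketched in plain Python (a standalone illustration, not the fastai implementation): the cumulative percentages determine which schedule owns a given `pos`, and the position is rescaled within that segment.

```python
import bisect
from itertools import accumulate

def combine_scheds_py(pcts, scheds):
    """Pure-Python sketch of piecewise schedule combination.
    `pcts` must sum to 1; `scheds` are callables on [0, 1]."""
    bounds = list(accumulate(pcts))             # e.g. [0.3, 1.0]
    def _inner(pos):
        if pos >= 1.0:
            return scheds[-1](1.0)
        idx = bisect.bisect_right(bounds, pos)  # which segment owns `pos`
        lo = 0.0 if idx == 0 else bounds[idx - 1]
        return scheds[idx]((pos - lo) / (bounds[idx] - lo))
    return _inner

# Two linear segments: 0 -> 1 over the first 30%, then 1 -> 0 over the rest.
f = combine_scheds_py([0.3, 0.7], [lambda p: p, lambda p: 1 - p])
print(f(0.0), f(0.3), f(1.0))  # 0.0 1.0 0.0
```

Note that `pos == pcts[0]` maps to the start of the second schedule, matching the `pos >= pcts` convention above.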
```
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.7], [SchedCos(0.3,0.6), SchedCos(0.6,0.2)])
plt.plot(p, [f(o) for o in p]);
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a scheduler with cosine annealing from `start`→`middle` & `middle`→`end`"
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
```
This is a useful helper function for the [1cycle policy](https://sgugger.github.io/the-1cycle-policy.html). `pct` is used for the `start` to `middle` part, and `1-pct` for the `middle` to `end` part. It handles floats or collections of floats. For example:
```
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
```
## ParamScheduler -
```
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after,run_valid = TrainEvalCallback,False
def __init__(self, scheds): self.scheds = scheds
def before_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def before_batch(self): self._update_val(self.pct_train)
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def after_batch(self):
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder') and hasattr(self, 'hps'): self.recorder.hps = self.hps
_docs = {"before_fit": "Initialize container for hyper-parameters",
"before_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
```
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
```
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dls.train)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.before_fit)
show_doc(ParamScheduler.before_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
@log_args(but_as=Learner.fit)
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=None,
moms=None, cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
```
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
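To make the shape of that schedule concrete, here is a pure-Python sketch of the resulting learning-rate curve, assuming the default values above (`lr_max=1e-2`, `div=25.`, `div_final=1e5`, `pct_start=0.25`); it is an illustration of the formula, not the fastai code path.

```python
import math

def cos_anneal(start, end, pos):
    "Cosine interpolation from `start` to `end` as `pos` goes 0 -> 1."
    return start + (1 + math.cos(math.pi * (1 - pos))) * (end - start) / 2

def one_cycle_lr(pos, lr_max=1e-2, div=25., div_final=1e5, pct_start=0.25):
    "LR at training fraction `pos` under the 1cycle policy (sketch)."
    if pos < pct_start:   # warm-up phase: lr_max/div -> lr_max
        return cos_anneal(lr_max / div, lr_max, pos / pct_start)
    # annealing phase: lr_max -> lr_max/div_final
    return cos_anneal(lr_max, lr_max / div_final, (pos - pct_start) / (1 - pct_start))

print(one_cycle_lr(0.0))    # lr_max/div
print(one_cycle_lr(0.25))   # peak: lr_max
print(one_cycle_lr(1.0))    # lr_max/div_final
```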
```
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dls.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
xb,yb = learn.dls.one_batch()
final_loss = learn.loss_func(learn.model(xb), yb)
assert final_loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, keys=None, figsize=None):
keys = self.hps.keys() if keys is None else L(keys)
rows,cols = (len(keys)+1)//2, min(2, len(keys))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(keys) > 1 else L(axs)
for p,ax in zip(keys, axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dls.train)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
@log_args(but_as=Learner.fit)
def fit_flat_cos(self:Learner, n_epoch, lr=None, div_final=1e5, pct_start=0.75, wd=None,
cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` at flat `lr` before a cosine annealing."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr is None else lr)
lr = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr, lr, lr/div_final)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
learn = synth_learner()
learn.fit_flat_cos(2)
learn.recorder.plot_sched()
#export
@patch
@log_args(but_as=Learner.fit)
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=None):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
```
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
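The epoch budget in `fit_sgdr` is just the closed form of this geometric series; a quick pure-Python sanity check:

```python
def sgdr_epochs(n_cycles, cycle_len, cycle_mult=2):
    "Cycle lengths and total epochs for SGDR (sketch of the arithmetic above)."
    lengths = [cycle_len * cycle_mult ** i for i in range(n_cycles)]
    # closed form of the geometric series, matching `fit_sgdr` (cycle_mult > 1)
    total = cycle_len * (cycle_mult ** n_cycles - 1) // (cycle_mult - 1)
    assert total == sum(lengths)
    return lengths, total

print(sgdr_epochs(3, 1))  # ([1, 2, 4], 7)
```

This matches the integration test below, where 3 cycles of base length 1 produce 7 epochs.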
```
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dls.train) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
#export
@patch
@log_args(but_as=Learner.fit)
@delegates(Learner.fit_one_cycle)
def fine_tune(self:Learner, epochs, base_lr=2e-3, freeze_epochs=1, lr_mult=100,
pct_start=0.3, div=5.0, **kwargs):
"Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
self.freeze()
self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
base_lr /= 2
self.unfreeze()
self.fit_one_cycle(epochs, slice(base_lr/lr_mult, base_lr), pct_start=pct_start, div=div, **kwargs)
learn.fine_tune(1)
```
## LRFind -
```
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def before_fit(self):
super().before_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def before_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def before_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.opt.zero_grad() #Need to zero the gradients of the model before detaching the optimizer for future fits
tmp_f = self.path/self.model_dir/'_tmp.pth'
if tmp_f.exists():
self.learn.load('_tmp')
os.remove(tmp_f)
_docs = {"before_fit": "Initialize container for hyper-parameters and save the model",
"before_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"before_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.before_fit)
show_doc(LRFinder.before_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.before_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=5):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
#export
SuggestedLRs = collections.namedtuple('SuggestedLRs', ['lr_min', 'lr_steep'])
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True, show_plot=True, suggestions=True):
"Launch a mock training to find a good learning rate, return lr_min, lr_steep if `suggestions` is True"
n_epoch = num_it//len(self.dls.train) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
if show_plot: self.recorder.plot_lr_find()
if suggestions:
lrs,losses = tensor(self.recorder.lrs[num_it//10:-5]),tensor(self.recorder.losses[num_it//10:-5])
if len(losses) == 0: return
lr_min = lrs[losses.argmin()].item()
grads = (losses[1:]-losses[:-1]) / (lrs[1:].log()-lrs[:-1].log())
lr_steep = lrs[grads.argmin()].item()
return SuggestedLRs(lr_min/10.,lr_steep)
```
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses vs. the learning rates on a log scale.
A good value for the learning rates is then either:
- one tenth of the minimum before the divergence
- when the slope is the steepest
Those two values are returned by default by the Learning Rate Finder.
```
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
weights_pre_lr_find = L(learn.model.parameters())
lr_min,lr_steep = learn.lr_find()
weights_post_lr_find = L(learn.model.parameters())
test_eq(weights_pre_lr_find, weights_post_lr_find)
print(f"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}")
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
<a href="https://colab.research.google.com/github/mancinimassimiliano/DeepLearningLab/blob/master/Lab4/solution/char_rnn_classification_solution.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial on Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are models that are useful whenever we want to model sequences of data (e.g. video, text). In this tutorial (adapted from [here](https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html)), we will see how we can predict the language of a name using an RNN that takes a word's characters one at a time as input.
Specifically, we will train the network on a list of surnames from 18 languages of origin, and predict which language a name is from based on the spelling:
```
$ python predict.py Hinton
(0.63) Scottish
(0.22) English
(0.02) Irish
$ python predict.py Schmidhuber
(0.83) German
(0.08) Czech
(0.07) Dutch
```
# Preparing the Data
The [link](https://download.pytorch.org/tutorial/data.zip) to download the needed data is provided within the official pytorch tutorial. The data must be downloaded and extracted in your virtual machine. We can do this through:
```
!wget https://download.pytorch.org/tutorial/data.zip
!unzip data.zip
```
Under the downloaded directory there are 18 text files named as "[Language].txt". Each file contains a bunch of names, one name per line. In the following, we will take care of the data preprocessing by:
* Extracting all the names and numbers of categories from the files.
* Converting from Unicode to ASCII each name.
* Instantiating a dictionary containing all names (values) of a given language (key)
```
import glob
import unicodedata
import string
all_filenames = glob.glob('data/names/*.txt')
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicode_to_ascii('Ślusàrski'))
# Build the category_lines dictionary, a list of names per language
category_lines = {}
all_categories = []
# Read a file and split into lines
def readLines(filename):
lines = open(filename).read().strip().split('\n')
return [unicode_to_ascii(line) for line in lines]
for filename in all_filenames:
category = filename.split('/')[-1].split('.')[0]
all_categories.append(category)
lines = readLines(filename)
category_lines[category] = lines
n_categories = len(all_categories)
print('n_categories =', n_categories)
```
# Turning Names into Tensors
A crucial point in this problem is how to define the input to the network. Since the network operates on numbers rather than plain text, we must convert the text to a numerical representation. To this end we represent each letter as a one-hot vector of size `<1 x n_letters>`. A one-hot vector is filled with 0s except for a 1 at the index of the current letter, e.g. `"b" = <0 1 0 0 0 ...>`.
To make a word we join a bunch of those into a 2D matrix `<line_length x 1 x n_letters>`.
That extra 1 dimension is because PyTorch assumes everything is in batches - we're just using a batch size of 1 here.
```
import torch
# Just for demonstration, turn a letter into a <1 x n_letters> Tensor
def letter_to_tensor(letter):
tensor = torch.zeros(1, n_letters)
letter_index = all_letters.find(letter)
tensor[0][letter_index] = 1
return tensor
# Turn a line into a <line_length x n_letters>,
# (or <line_length x 1 x n_letters> if the batch dimension is added)
# of one-hot letter vectors
def line_to_tensor(line,add_batch_dimension=True):
tensor = torch.zeros(len(line), n_letters)
for li, letter in enumerate(line):
letter_index = all_letters.find(letter)
tensor[li][letter_index] = 1
if add_batch_dimension:
return tensor.unsqueeze(1)
else:
return tensor
# Create a batch of samples given a list of lines
def create_batch(lines):
tensors = []
for l in lines:
tensors.append(line_to_tensor(l,add_batch_dimension=False))
padded_tensor = torch.nn.utils.rnn.pad_sequence(tensors, batch_first = False, padding_value=0)
return padded_tensor
```
# Creating the Network
Instantiate a simple recurrent neural network. The network should have a recurrent layer followed by a fully connected layer mapping the features of the recurrent unit to the output space (i.e. the number of categories).
To run a step of this network we need to pass an input (in our case, the Tensor for the current sequence(s)) and a previous hidden state (which we initialize as zeros at first). We'll get back the logits (i.e. the network activations before the softmax) for each language.
```
import torch.nn as nn

# Create a simple recurrent network
class SimpleRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(SimpleRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.i2h = nn.RNN(input_size, hidden_size)
self.i2o = nn.Linear(hidden_size, output_size)
# Forward the whole sequence at once
def forward(self, input, hidden=None):
if hidden==None:
hidden = self.init_hidden(input.shape[1])
output, _ = self.i2h(input, hidden)
output = self.i2o(output[-1])
return output
# Instantiate the hidden state of the first element of the sequence dim: 1 x batch_size x hidden_size)
def init_hidden(self,shape=1):
return torch.zeros(1, shape, self.hidden_size)
class SimpleLSTM(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(SimpleLSTM, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.i2h = nn.LSTM(input_size, hidden_size)
self.i2o = nn.Linear(hidden_size, output_size)
def forward(self, input, hidden=None, cell=None):
if hidden==None:
hidden = self.init_hidden(input.shape[1])
if cell==None:
cell = self.init_hidden(input.shape[1])
output, (_,_)= self.i2h(input, (hidden,cell))
output = self.i2o(output[-1])
return output
def init_hidden(self,shape=1):
return torch.zeros(1, shape, self.hidden_size)
def init_cell(self,shape=1):
return torch.zeros(1, shape, self.hidden_size)
class SimpleRNNwithCell(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(SimpleRNNwithCell, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.i2h = nn.RNNCell(input_size, hidden_size)
self.i2o = nn.Linear(hidden_size, output_size)
def forward(self, input, hidden=None):
if hidden==None:
hidden = self.init_hidden(input.shape[1])
for i in range(input.shape[0]):
hidden = self.i2h(input[i],hidden)
output = self.i2o(hidden)
return output
def init_hidden(self,shape=1):
return torch.zeros(shape, self.hidden_size)
```
# Preparing for Training
Before going into training we should make a few helper functions. The first is to interpret the output of the network, which we know to be the logits for each category. We can use `Tensor.topk` to get the index of the greatest value:
```
def category_from_output(output):
    top_n, top_i = output.data.topk(1)
    category_i = top_i[0][0]
    return all_categories[category_i], category_i
```
We will also want a quick way to get a training example (a name and its language):
```
import random
def random_training_pair(bs=1):
    lines = []
    categories = []
    for b in range(bs):
        category = random.choice(all_categories)
        line = random.choice(category_lines[category])
        lines.append(line)
        categories.append(category)
    categories_tensor = torch.LongTensor([all_categories.index(c) for c in categories])
    lines_tensor = create_batch(lines)
    return categories_tensor, lines_tensor
```
# Training the Network
Now all it takes to train this network is to show it a bunch of examples, have it make guesses, and tell it when it's wrong.
Since the outputs of the network are logits and the task is classification, we can use a standard cross-entropy loss.
```
criterion = nn.CrossEntropyLoss()
```
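Under the hood, `nn.CrossEntropyLoss` combines a log-softmax with a negative log-likelihood over the raw logits. A minimal pure-Python sketch of that computation (illustrative only — the real loss operates on batched tensors):

```python
import math

def cross_entropy(logits, target):
    # log-softmax of the target logit, negated:
    # loss = -(logits[target] - log(sum(exp(logits))))
    log_z = math.log(sum(math.exp(l) for l in logits))
    return -(logits[target] - log_z)

# A confident, correct prediction gives a small loss
loss = cross_entropy([2.0, 0.5, -1.0], target=0)
```

The loss shrinks toward zero as the target logit dominates the others.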
Now we instantiate a standard training loop where we will:
* Forward the input to the network
* Compute the loss
* Backpropagate it
* Do a step of the optimizer
* Reset the optimizer/network's grad
```
def train(rnn, optimizer, categories_tensor, lines_tensor):
    optimizer.zero_grad()
    output = rnn(lines_tensor)
    loss = criterion(output, categories_tensor)
    loss.backward()
    optimizer.step()
    return output, loss.item()
```
Now we just have to:
* Instantiate the network
* Instantiate the optimizer
* Run the training steps for a given number of iterations
```
# Initialize the network:
n_hidden = 128
rnn = SimpleRNN(n_letters, n_hidden, n_categories)
# Initialize the optimizer
learning_rate = 0.005 # Example: different LR could work better
optimizer = torch.optim.SGD(rnn.parameters(), lr=learning_rate)
# Initialize the training loop
batch_size = 2
n_iterations = 100000
print_every = 5000
# Keep track of losses
current_loss = 0
for iter in range(1, n_iterations + 1):
    # Get a random training input and target
    category_tensor, line_tensor = random_training_pair(bs=batch_size)
    # Process it through the train function
    output, loss = train(rnn, optimizer, category_tensor, line_tensor)
    # Accumulate loss for printing
    current_loss += loss
    # Print iteration number and loss
    if iter % print_every == 0:
        print('%d %d%% %.4f' % (iter, iter / n_iterations * 100, current_loss / print_every))
        current_loss = 0
```
# Running on User Input
Finally, following the original tutorial [in the Practical PyTorch repo](https://github.com/spro/practical-pytorch/tree/master/char-rnn-classification), we instantiate a prediction function and test it on some user-defined inputs.
```
normalizer = torch.nn.Softmax(dim=-1)
def predict(input_line, n_predictions=3):
    print('\n> %s' % input_line)
    output = rnn(line_to_tensor(input_line))
    output = normalizer(output)
    # Get top N categories
    topv, topi = output.data.topk(n_predictions, 1, True)
    predictions = []
    for i in range(n_predictions):
        value = topv[0][i]
        category_index = topi[0][i]
        print('(%.2f) %s' % (value, all_categories[category_index]))
        predictions.append([value, all_categories[category_index]])
    return predictions
predict('Dovesky')
predict('Jackson')
predict('Satoshi')
```
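The `Softmax` normalizer above is what turns raw logits into probabilities that sum to one. A standalone sketch of the computation (the max-subtraction is a standard trick for numerical stability; the actual layer operates on tensors):

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating so exp() never overflows
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
```

Larger logits get larger probabilities, and the order of categories is preserved.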
This notebook is part of the $\omega radlib$ documentation: https://docs.wradlib.org.
Copyright (c) $\omega radlib$ developers.
Distributed under the MIT License. See LICENSE.txt for more info.
# Simple fuzzy echo classification from dual-pol moments
```
import wradlib
from wradlib.util import get_wradlib_data_file
import os
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
try:
    get_ipython().magic("matplotlib inline")
except:
    plt.ion()
```
## Setting the file paths
```
rhofile = get_wradlib_data_file('netcdf/TAG-20120801-140046-02-R.nc')
phifile = get_wradlib_data_file('netcdf/TAG-20120801-140046-02-P.nc')
reffile = get_wradlib_data_file('netcdf/TAG-20120801-140046-02-Z.nc')
dopfile = get_wradlib_data_file('netcdf/TAG-20120801-140046-02-V.nc')
zdrfile = get_wradlib_data_file('netcdf/TAG-20120801-140046-02-D.nc')
mapfile = get_wradlib_data_file('hdf5/TAG_cmap_sweeps_0204050607.hdf5')
```
## Read the data (radar moments and static clutter map)
```
# We need to organize our data as a dictionary
dat = {}
dat["rho"], attrs_rho = wradlib.io.read_edge_netcdf(rhofile)
dat["phi"], attrs_phi = wradlib.io.read_edge_netcdf(phifile)
dat["ref"], attrs_ref = wradlib.io.read_edge_netcdf(reffile)
dat["dop"], attrs_dop = wradlib.io.read_edge_netcdf(dopfile)
dat["zdr"], attrs_zdr = wradlib.io.read_edge_netcdf(zdrfile)
dat["map"] = wradlib.io.from_hdf5(mapfile)[0][0]
```
## Identify non-meteorological echoes using fuzzy echo classification
See [Crisologo et al. (2015)](https://link.springer.com/article/10.1007/s13143-014-0049-y) and [Vulpiani et al. (2012)](https://journals.ametsoc.org/doi/abs/10.1175/JAMC-D-10-05024.1) for details.
```
weights = {"zdr": 0.4,
"rho": 0.4,
"rho2": 0.4,
"phi": 0.1,
"dop": 0.1,
"map": 0.5}
cmap, nanmask = wradlib.clutter.classify_echo_fuzzy(dat,
weights=weights,
thresh=0.5)
```
## View classification results
```
fig = plt.figure(figsize=(18,16))
# Horizontal reflectivity
ax = plt.subplot(121, aspect="equal")
ax, pm = wradlib.vis.plot_ppi(np.ma.masked_invalid(dat["ref"]), ax=ax)
ax = wradlib.vis.plot_ppi_crosshair(site=(0,0,0),
ranges=[80,160,240])
plt.xlim(-240,240)
plt.ylim(-240,240)
plt.xlabel("# bins from radar")
plt.ylabel("# bins from radar")
cbar = plt.colorbar(pm, shrink=0.3)
cbar.set_label("dBZ", fontsize = "large")
# Echo classification
ax = plt.subplot(122, aspect="equal")
ax, pm = wradlib.vis.plot_ppi(np.ma.masked_array(cmap.astype(np.uint8),
np.isnan(dat["ref"])),
ax=ax, cmap="bwr")
ax = wradlib.vis.plot_ppi_crosshair(site=(0,0,0),
ranges=[80,160,240])
plt.xlim(-240,240)
plt.ylim(-240,240)
plt.xlabel("# bins from radar")
plt.ylabel("# bins from radar")
cbar = plt.colorbar(pm, shrink=0.3)
cbar.set_label("meteorol. echo=0 - non-meteorol. echo=1",
               fontsize="large")
```
#### make an empty dictionary named practice_dict
```
practice_dict={}
```
#### add name of student with their marks in above dictionary
```
a=['mayuur','sankket','akshay','vishnu','ashish']
b=[100,90,80,70,60]
practice_dict=dict(zip(a,b))
practice_dict
```
#### change the key name for one key in dictionary [example - {'mayurr' : 55} -----> {'mayur' : 55} ]
```
dict1={'mayuur': 100, 'sankket': 90, 'akshay': 80, 'vishnu': 70, 'ashish': 60}
dict1['mayur']=dict1['mayuur']
del dict1['mayuur']
dict1['sanket']=dict1['sankket']
del dict1['sankket']
dict1
```
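The assign-then-delete pattern above can also be written in one step with `dict.pop`, which deletes the old key and returns its value:

```python
d = {'mayuur': 100, 'sankket': 90}
# pop removes the old key and hands back its value in one expression
d['mayur'] = d.pop('mayuur')
d['sanket'] = d.pop('sankket')
```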
#### change key as a value and value as key in above dictionary. [example - {'mayur': 55} -----> {55: 'mayur'}]
```
key=['mayuur','sankket','akshay','vishnu','ashish']
value=[100,90,80,70,60]
practice_dict1=dict(zip(key,value))
value1=list(practice_dict1.values())
key1=list(practice_dict1.keys())
practice_dict=dict(zip(value1,key1))
practice_dict
```
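A more compact way to invert a dictionary is a comprehension — note this assumes the values are unique and hashable, otherwise duplicate values collapse into one key:

```python
practice = {'mayuur': 100, 'sankket': 90, 'akshay': 80}
# swap each (key, value) pair into (value, key)
inverted = {v: k for k, v in practice.items()}
```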
#### ----------------------------------------------------------------------------------------------------------------------------------------------------------
# make a dictionary using two lists
```
city_names = ['pune', 'mumbai', 'nashik', 'ahmednagar']
covid_counts = [12000, 22000, 9000, 3000]
dict1=dict(zip(city_names,covid_counts))
dict1
```
#### remove pune from above dictionary
```
dict1.pop('pune')
dict1
```
#### add delhi = 20000 in dictionary
```
dict1['delhi']=20000
dict1
```
#### print keys of dictionary
```
dict1.keys()
```
#### print values of dictionary
```
dict1.values()
```
#### print items of dictionary
```
dict1.items()
```
#### print 3rd item of dictionary
```
# Items are zero-indexed, so the 3rd item is at index 2
for item_, item in enumerate(dict1):
    if item_ == 2:
        print(list(dict1.items())[item_])
```
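Since Python 3.7 dictionaries preserve insertion order, so indexing into `list(d.items())` is a reliable way to get the n-th item directly — remembering that the index is zero-based (the values below are the sample data from this section):

```python
d = {'mumbai': 22000, 'nashik': 9000, 'ahmednagar': 3000, 'delhi': 20000}
# zero-based, so index 2 is the 3rd item
third = list(d.items())[2]
```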
#### ----------------------------------------------------------------------------------------------------------------------------------------------
### perform operations on dictionary using all dictionary functions
#### Access the value of key ‘history’
```
sample_dict = {
"class":{
"student":{
"name":"nikita",
"marks":{
"physics":70,
"history":80
}
}
}
}
sample_dict['class']['student']['marks']['history']
```
#### Initialize dictionary with default values
```
employees = ['mayur', 'aniket', 'John']
defaults = {"designation": 'Application Developer', "salary": 80000}
employees=dict.fromkeys(employees,defaults)
employees
```
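One caveat with `dict.fromkeys` as used above: every key is bound to the *same* defaults dictionary, so mutating one employee's record silently mutates all of them. A sketch of the pitfall and a per-key-copy alternative:

```python
employees = ['mayur', 'aniket']
defaults = {"designation": "Application Developer", "salary": 80000}

shared = dict.fromkeys(employees, defaults)           # every key -> the SAME dict object
independent = {n: dict(defaults) for n in employees}  # a fresh copy per key

shared['mayur']['salary'] = 90000       # changes BOTH entries in `shared`
independent['mayur']['salary'] = 90000  # changes only mayur's copy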
#### Create a new dictionary by extracting the following keys from a given dictionary
```
# Expected output - {'name': 'akshay', 'salary': 8000}
sampleDict = {
"name": "akshay",
"age":22,
"salary": 80000,
"city": "Ahmednagar"
}
keys = ["name", "salary"]
sample_Dict={}
for k, v in sampleDict.items():
    if k == 'name':
        sample_Dict[k] = v
    if k == 'salary':
        sample_Dict[k] = v
sample_Dict
```
#### Check if a value 200 exists in a dictionary
expected output - True
```
sampleDict = {'a': 100, 'b': 200, 'c': 300}
for k, v in sampleDict.items():
    if v == 200:
        print(True)
```
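A more direct idiom is the `in` operator on `dict.values()`, which gives the boolean itself instead of printing inside a loop:

```python
sampleDict = {'a': 100, 'b': 200, 'c': 300}
# membership test over the dictionary's values
exists = 200 in sampleDict.values()
```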
#### Rename key city to location in the following dictionary
```
sampleDict = {
"name": "Vishnu",
"age":22,
"salary": 80000,
"city": "Mumbai"
}
sampleDict['location']=sampleDict['city']
del sampleDict['city']
sampleDict
```
#### Get the key corresponding to the minimum value from the following dictionary
```
sampleDict = {
'Physics': 82,
'Math': 65,
'history': 75
}
min(sampleDict, key=sampleDict.get)
```
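Note that `min(sampleDict.keys())` would compare the key *strings* alphabetically rather than looking at the marks. The usual idiom for the key of the minimum value is `min` with `key=d.get`:

```python
marks = {'Physics': 82, 'Math': 65, 'history': 75}
# compare entries by their value, not by the key string
min_key = min(marks, key=marks.get)
```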
#### Given a Python dictionary, Change sanket salary to 85000
```
sample_dict = {
'emp1': {'name': 'mayur', 'salary': 75000},
'emp2': {'name': 'nikhil', 'salary': 80000},
'emp3': {'name': 'sanket', 'salary': 65000}
}
sample_dict['emp3']['salary']=85000
sample_dict
```
### REGRESSION - KERAS
### The Auto MPG dataset
> The dataset is available from [UCI Machine Learning Repository.](https://archive.ics.uci.edu/ml/index.php)
### Imports
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names, na_values='?'
, comment='\t', sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.head()
```
### Data Cleaning
> Removing `NaN` values
```
dataset.isna().sum()
dataset = dataset.dropna()
```
> The ``"Origin"`` column is really categorical, not numeric. So convert that to a one-hot with ``pd.get_dummies``:
```
dataset
dataset["Origin"] = dataset["Origin"].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
dataset = pd.get_dummies(dataset, columns=['Origin'], prefix='', prefix_sep='')
dataset
```
### Splitting datasets
```
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
sns.pairplot(train_dataset[['MPG', 'Cylinders', 'Displacement', 'Weight']], diag_kind='kde')
train_dataset.describe()
```
### Split features from labels
```
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')
```
### Normalization
* It is good practice to normalize features that use different scales and ranges.
* One reason this is important is because the features are multiplied by the model weights. So the scale of the outputs and the scale of the gradients are affected by the scale of the inputs.
* Although a model might converge without feature normalization, normalization makes training much more stable.
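The effect of normalization can be sketched in plain Python as a per-feature z-score — a simplified stand-in for the statistics the `Normalization` layer learns in `adapt` (the layer itself works on tensors and whole feature columns):

```python
import math

def normalize(values):
    # z-score: subtract the mean, divide by the standard deviation
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    return [(v - mean) / std for v in values]

scaled = normalize([50.0, 100.0, 150.0, 200.0])
```

After scaling, every feature has mean 0 and unit variance, so no single feature dominates the gradients.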
### The Normalization layer
The ``preprocessing.Normalization`` layer is a clean and simple way to build that preprocessing into your model.
The first step is to create the layer:
```
normalizer = preprocessing.Normalization()
```
Then ``.adapt()`` it to the data:
```
normalizer.adapt(np.array(train_features))
```
This calculates the mean and variance, and stores them in the layer.
> [Docs](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization)
### Linear regression
### 1. One variable
* Start with a single-variable linear regression, to predict MPG from Horsepower.
* In this case there are two steps:
* Normalize the input horsepower.
* Apply a linear transformation ``(y = mx + b)`` to produce 1 output using layers.Dense.
```
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow import keras
train_features
horsepower =train_features['Horsepower'].values
horsepower.ndim
horsepower_normalizer = Normalization(input_shape=[1, ])
horsepower_normalizer.adapt(horsepower)
```
### Creating a model
```
horsepower_model = keras.Sequential([
horsepower_normalizer,
keras.layers.Dense(1)
])
horsepower_model.summary()
horsepower_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.1),
    loss='mean_absolute_error'  # or 'mean_squared_error'
)
# No need to track the accuracy since it is a Regression task
%%time
history = horsepower_model.fit(
train_features['Horsepower'], train_labels,
epochs=100,
verbose=1,
validation_split = 0.2
)
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 10])
    plt.xlabel('Epoch')
    plt.ylabel('Error [MPG]')
    plt.legend()
    plt.grid(True)
plot_loss(history)
```
### Making predictions
```
train_labels[:3].values.astype('float32'), horsepower_model.predict(train_features['Horsepower'][:3])
```
### 2. Multiple inputs
> You can use an almost identical setup to make predictions based on multiple inputs. This model still computes the same ``y = mx + b``, except that ``m`` is a matrix and ``b`` is a vector.
> This time use the Normalization layer that was adapted to the whole dataset.
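A sketch of what the multi-input linear layer computes — the weights and inputs below are made-up numbers, purely for illustration:

```python
def linear_model(x, m, b):
    # y = m . x + b, where m is now a weight vector (one weight per feature)
    return sum(mi * xi for mi, xi in zip(m, x)) + b

y = linear_model([4.0, 2.0], m=[0.5, -1.0], b=3.0)
```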
```
normalizer = Normalization()
normalizer.adapt(train_features.values.astype('float32'))
linear_model = keras.Sequential([
normalizer,
keras.layers.Dense(units=1)
])
linear_model.compile(
optimizer=keras.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error'
)
history = linear_model.fit(
train_features, train_labels,
epochs=100,
verbose=1,
validation_split = 0.2
)
plot_loss(history)
```
## A Deep Neural Net (DNN) regression
> The DNN will be the same as the previous models except that this one has some hidden layers
```
horsepower_model = keras.Sequential([
horsepower_normalizer,
keras.layers.Dense(64, activation='relu'),
keras.layers.Dense(64, activation='relu'),
keras.layers.Dense(16, activation='relu'),
keras.layers.Dense(1)
])
horsepower_model.summary()
horsepower_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss='mean_absolute_error'  # or 'mean_squared_error'
)
horsepower_model.fit(
train_features['Horsepower'], train_labels,
epochs=100,
verbose=1,
validation_split = 0.2
)
train_labels[:3].values.astype('float32'), horsepower_model.predict(train_features['Horsepower'][:3])
```
> The same applies to the full model.
# Sonic The Hedgehog 1 with DQN
## Step 1: Import the libraries
```
import time
import retro
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
from IPython.display import clear_output
import math
%matplotlib inline
import sys
sys.path.append('../../')
from algos.agents.dqn_agent import DQNAgent
from algos.models.dqn_cnn import DQNCnn
from algos.preprocessing.stack_frame import preprocess_frame, stack_frame
```
## Step 2: Create our environment
Initialize the environment in the code cell below.
```
env = retro.make(game='SonicTheHedgehog-Genesis', state='GreenHillZone.Act1', scenario='contest')
env.seed(0)
# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device: ", device)
```
## Step 3: Viewing our Environment
```
print("The size of frame is: ", env.observation_space.shape)
print("No. of Actions: ", env.action_space.n)
env.reset()
plt.figure()
plt.imshow(env.reset())
plt.title('Original Frame')
plt.show()
possible_actions = {
# No Operation
0: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# Left
1: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
# Right
2: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
# Left, Down
3: [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
# Right, Down
4: [0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0],
# Down
5: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
# Down, B
6: [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
# B
7: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Execute the code cell below to play Sonic with a random policy.
```
def random_play():
    score = 0
    env.reset()
    for i in range(200):
        env.render()
        action = possible_actions[np.random.randint(len(possible_actions))]
        state, reward, done, _ = env.step(action)
        score += reward
        if done:
            print("Your Score at end of game is: ", score)
            break
    env.reset()
    env.render(close=True)
random_play()
```
## Step 4: Preprocessing Frame
```
plt.figure()
plt.imshow(preprocess_frame(env.reset(), (1, -1, -1, 1), 84), cmap="gray")
plt.title('Pre Processed image')
plt.show()
```
## Step 5: Stacking Frame
```
def stack_frames(frames, state, is_new=False):
    frame = preprocess_frame(state, (1, -1, -1, 1), 84)
    frames = stack_frame(frames, frame, is_new)
    return frames
```
## Step 6: Creating our Agent
```
INPUT_SHAPE = (4, 84, 84)
ACTION_SIZE = len(possible_actions)
SEED = 0
GAMMA = 0.99 # discount factor
BUFFER_SIZE = 100000 # replay buffer size
BATCH_SIZE = 32 # Update batch size
LR = 0.0001 # learning rate
TAU = 1e-3 # for soft update of target parameters
UPDATE_EVERY = 100 # how often to update the network
UPDATE_TARGET = 10000   # After which threshold the replay is to be started
EPS_START = 0.99 # starting value of epsilon
EPS_END = 0.01 # Ending value of epsilon
EPS_DECAY = 100 # Rate by which epsilon to be decayed
agent = DQNAgent(INPUT_SHAPE, ACTION_SIZE, SEED, device, BUFFER_SIZE, BATCH_SIZE, GAMMA, LR, TAU, UPDATE_EVERY, UPDATE_TARGET, DQNCnn)
```
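The epsilon hyperparameters above drive an exponential exploration schedule (the same formula appears in the training step): epsilon starts near `EPS_START` and decays toward `EPS_END` as episodes go by. A standalone sketch:

```python
import math

EPS_START, EPS_END, EPS_DECAY = 0.99, 0.01, 100

def epsilon_by_episode(episode):
    # exponential decay from EPS_START toward the EPS_END floor
    return EPS_END + (EPS_START - EPS_END) * math.exp(-episode / EPS_DECAY)

first, later = epsilon_by_episode(0), epsilon_by_episode(1000)
```

Early episodes act almost randomly (high epsilon, lots of exploration); late episodes mostly follow the learned Q-values.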
## Step 7: Watching untrained agent play
```
env.viewer = None
# watch an untrained agent
state = stack_frames(None, env.reset(), True)
for j in range(200):
    env.render(close=False)
    action = agent.act(state, eps=0.01)
    next_state, reward, done, _ = env.step(possible_actions[action])
    state = stack_frames(state, next_state, False)
    if done:
        env.reset()
        break
env.render(close=True)
```
## Step 8: Loading Agent
Uncomment line to load a pretrained agent
```
start_epoch = 0
scores = []
scores_window = deque(maxlen=20)
```
## Step 9: Train the Agent with DQN
```
epsilon_by_epsiode = lambda frame_idx: EPS_END + (EPS_START - EPS_END) * math.exp(-1. * frame_idx /EPS_DECAY)
plt.plot([epsilon_by_epsiode(i) for i in range(1000)])
def train(n_episodes=1000):
    """
    Params
    ======
        n_episodes (int): maximum number of training episodes
    """
    for i_episode in range(start_epoch + 1, n_episodes + 1):
        state = stack_frames(None, env.reset(), True)
        score = 0
        eps = epsilon_by_epsiode(i_episode)

        # Punish the agent for not moving forward
        prev_state = {}
        steps_stuck = 0
        timestamp = 0

        while timestamp < 10000:
            action = agent.act(state, eps)
            next_state, reward, done, info = env.step(possible_actions[action])
            score += reward
            timestamp += 1

            # Punish the agent for standing still for too long.
            if prev_state == info:
                steps_stuck += 1
            else:
                steps_stuck = 0
            prev_state = info

            if steps_stuck > 20:
                reward -= 1

            next_state = stack_frames(state, next_state, False)
            agent.step(state, action, reward, next_state, done)
            state = next_state
            if done:
                break
        scores_window.append(score)  # save most recent score
        scores.append(score)         # save most recent score

        clear_output(True)
        fig = plt.figure()
        ax = fig.add_subplot(111)
        plt.plot(np.arange(len(scores)), scores)
        plt.ylabel('Score')
        plt.xlabel('Episode #')
        plt.show()
        print('\rEpisode {}\tAverage Score: {:.2f}\tEpsilon: {:.2f}'.format(i_episode, np.mean(scores_window), eps), end="")
    return scores
scores = train(1000)
```
## Step 10: Watch a Smart Agent!
```
env.viewer = None
# watch a trained agent
state = stack_frames(None, env.reset(), True)
for j in range(10000):
    env.render(close=False)
    action = agent.act(state, eps=0.01)
    next_state, reward, done, _ = env.step(possible_actions[action])
    state = stack_frames(state, next_state, False)
    if done:
        env.reset()
        break
env.render(close=True)
```
```
test_index = 0
from load_data import *
# load_data()
from load_data import *
X_train,X_test,y_train,y_test = load_data()
len(X_train),len(y_train)
len(X_test),len(y_test)
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
class Test_Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.c1 = nn.Conv2d(1, 64, 5)
        self.c2 = nn.Conv2d(64, 128, 5)
        self.c3 = nn.Conv2d(128, 256, 5)
        self.fc4 = nn.Linear(256 * 10 * 10, 256)
        self.fc6 = nn.Linear(256, 128)
        self.fc5 = nn.Linear(128, 4)

    def forward(self, X):
        preds = F.max_pool2d(F.relu(self.c1(X)), (2, 2))
        preds = F.max_pool2d(F.relu(self.c2(preds)), (2, 2))
        preds = F.max_pool2d(F.relu(self.c3(preds)), (2, 2))
        # print(preds.shape)
        preds = preds.view(-1, 256 * 10 * 10)
        preds = F.relu(self.fc4(preds))
        preds = F.relu(self.fc6(preds))
        preds = self.fc5(preds)
        return preds
device = torch.device('cuda')
BATCH_SIZE = 32
IMG_SIZE = 112
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
EPOCHS = 12
from tqdm import tqdm
PROJECT_NAME = 'Weather-Clf'
import wandb
# test_index += 1
# wandb.init(project=PROJECT_NAME,name=f'test-{test_index}')
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# preds.to(device)
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item()})
# wandb.finish()
# for index in range(10):
# print(torch.argmax(preds[index]))
# print(y_batch[index])
# print('\n')
class Test_Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, 5)
        self.conv2 = nn.Conv2d(64, 128, 5)
        self.conv3 = nn.Conv2d(128, 256, 5)
        self.fc1 = nn.Linear(256 * 10 * 10, 64)
        self.fc2 = nn.Linear(64, 128)
        self.fc3 = nn.Linear(128, 256)
        self.fc4 = nn.Linear(256, 64)
        self.fc5 = nn.Linear(64, 6)

    def forward(self, X):
        preds = F.max_pool2d(F.relu(self.conv1(X)), (2, 2))
        preds = F.max_pool2d(F.relu(self.conv2(preds)), (2, 2))
        preds = F.max_pool2d(F.relu(self.conv3(preds)), (2, 2))
        # print(preds.shape)
        preds = preds.view(-1, 256 * 10 * 10)
        preds = F.relu(self.fc1(preds))
        preds = F.relu(self.fc2(preds))
        preds = F.relu(self.fc3(preds))
        preds = F.relu(self.fc4(preds))
        # No ReLU on the final layer: CrossEntropyLoss expects raw logits
        preds = self.fc5(preds)
        return preds
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
test_index += 1
wandb.init(project=PROJECT_NAME,name=f'test-{test_index}')
for _ in tqdm(range(EPOCHS)):
    for i in range(0, len(X_train), BATCH_SIZE):
        X_batch = X_train[i:i+BATCH_SIZE].view(-1, 1, 112, 112).to(device)
        y_batch = y_train[i:i+BATCH_SIZE].to(device)
        model.to(device)
        preds = model(X_batch.float())
        loss = criterion(preds, y_batch.long())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        wandb.log({'loss': loss.item()})
wandb.finish()
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('Data.csv')
df=df.fillna(df.mean())
df.head()
df.shape
df.iloc[0,:] #First row
df['tl_rank'].fillna(df['tl_rank'].mean())
df['ta_stars'].fillna(df['ta_stars'].mean())
df.head(45)
df.info()
```
This is messy and convoluted. Let's rename the columns.
```
df.columns
df['ride_type_all'].value_counts()
```
Word Cloud
```
df.describe()
```
# Logistic Regression
Classification Model
```
sns.set_style('whitegrid')
sns.countplot(x=df['ride_type_thrill'].replace('Ues', 'Yes'), data=df)
plt.figure(figsize=(10,7))
sns.boxplot(x=df['ride_type_thrill'].replace('Ues', 'Yes'), y=df['tl_rank'])
classic = []
for i in df['classic']:
    if i == 'No':
        classic.append(0)
    else:
        classic.append(1)
df['classic'] = classic
df['classic']
```
# Dealing with categorical Features
First we convert strings into numbers and take out any features that are non-numerical.
Second, we create dummy variables.
How to make dummy variables:
When setting X and y, X holds all of the features with the dependent variable dropped; y is set to the column that you dropped.
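The idea behind dummy variables can be sketched in plain Python — a simplified stand-in for what `pd.get_dummies(..., drop_first=True)` produces. Dropping the first level avoids the redundant column that would make the features perfectly collinear:

```python
def one_hot(values, drop_first=True):
    # one indicator column per category level
    categories = sorted(set(values))
    if drop_first:
        categories = categories[1:]  # drop one level to avoid collinearity
    return [{c: int(v == c) for c in categories} for v in values]

rows = one_hot(['No', 'Yes', 'No'])
```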
```
thrill = pd.get_dummies(df['ride_type_thrill'].replace('Ues', 'Yes'),drop_first=True)
thrill.head()
spinning = pd.get_dummies(df['ride_type_spinning'],drop_first=True)
slow = pd.get_dummies(df['ride_type_slow'],drop_first=True)
small_drops = pd.get_dummies(df['ride_type_small_drops'],drop_first=True)
big_drops = pd.get_dummies(df['ride_type_big_drops'],drop_first=True)
dark = pd.get_dummies(df['ride_type_dark'],drop_first=True)
scary = pd.get_dummies(df['ride_type_scary'],drop_first=True)
water = pd.get_dummies(df['ride_type_water'],drop_first=True)
fast_pass = pd.get_dummies(df['fast_pass'],drop_first=True)
classic = pd.get_dummies(df['classic'],drop_first=True)
preschoolers = pd.get_dummies(df['age_interest_preschoolers'],drop_first=True)
kids = pd.get_dummies(df['age_interest_kids'],drop_first=True)
tweens = pd.get_dummies(df['age_interest_tweens'],drop_first=True)
teens = pd.get_dummies(df['age_interest_teens'],drop_first=True)
adults = pd.get_dummies(df['age_interest_adults'],drop_first=True)
df.drop(['ride_name', 'park_location', 'park_area', 'ride_type_all','ride_type_thrill', 'ride_type_spinning', 'ride_type_slow',
'ride_type_small_drops', 'ride_type_big_drops', 'ride_type_dark',
'ride_type_scary', 'ride_type_water', 'fast_pass',
'age_interest_all', 'age_interest_preschoolers', 'age_interest_kids',
'age_interest_tweens', 'age_interest_teens', 'age_interest_adults','open_date','age_of_ride_total'], axis=1, inplace=True)
Final = pd.concat([df,spinning,slow,small_drops,big_drops,dark,scary,water,fast_pass,classic,preschoolers,kids,tweens,teens,adults],axis=1)
Final['classic']
Final
X = Final.drop('classic', axis=1)
y = Final['classic']
y = y.astype('int') #kinda worked
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression(solver='lbfgs')
X_train.shape
y_train.shape
logmodel.fit(X_train,y_train)
predictions = logmodel.predict(X_test)
from sklearn.metrics import classification_report
#
print(classification_report(y_test,predictions))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,predictions)
```
```
import pandas as pd
import numpy as np
import re, math
from string import punctuation
df = pd.read_excel("./data/Eni_Shell_data.xlsx")
df.shape
df.columns
```
### I. Normalize column names
```
column_names = [
"oil_spill_id",
"company",
"jiv_number",
"date_reported",
"year",
"date_jiv_shell",
"facility_equipment",
"terrain",
"cause",
"barrels",
"cleanup_status_text",
"comments_shell",
"lga_eni",
"jiv_asset_id",
"in_decoders",
"jiv_url",
"jiv_url_hyperlinked",
"cause_jiv_verified",
"date_incident",
"date_investigation_start",
"leak_point_text",
"cause_incident_caused_by_dirty",
"cause_incident_caused_by",
"cause_amnesty_comment",
"cause_amnesty",
"location_jiv_verified",
"location_type",
"location_unit",
"lat_northing",
"long_eastling",
"location_transformation_notes",
"lat",
"long",
"area_decoders",
"area_unit",
"comment_type_jiv",
"comment_jiv",
"photo_lookup_id",
"photo_asset_id",
"in_decoders2",
"photo_url",
"photo_url_hyperlinked",
"damage_photo",
"damage_photo_followup",
"comment_jiv_duplicate",
"comment_jiv_text"
]
pd.DataFrame({"Original Column Names": df.columns, "Alias": column_names}).to_csv("columns.csv")
df.columns.shape, len(column_names)
df.columns = column_names
```
### II. Explore data
#### 1. Do "Included" and "Included.1" hold the same data?
```
df[["in_decoders", "in_decoders2"]].head()
df.in_decoders2.value_counts()
```
#### 2. Explore Facility Equipment
- '' inches
- pipeline name
- "at" location
```
df.facility_equipment.head(5)
# regexs
inch_single_quote_regex = re.compile(r"(\d+)''")
inch_double_quote_regex = re.compile(r'(\d+)"')
location_regex = re.compile(r"at\s(.*)")
inches = [np.nan] * df.shape[0]
facility_type_name = [np.nan] * df.shape[0]
facility_location = [np.nan] * df.shape[0]
no_inch_cnt = 0
missing_loc_count = 0
for i in range(df.shape[0]):
    facility_info = df.facility_equipment.iloc[i].lower()

    # a. extract inches
    try:
        inches[i] = int(re.search(inch_single_quote_regex, facility_info).group(1))
    except:
        try:
            inches[i] = int(re.search(inch_double_quote_regex, facility_info).group(1))
        except:
            no_inch_cnt += 1

    # b. extract facility type
    type_found = False
    otherline_seen = "line" in facility_info
    flowlines = set(["flowline", "fl"])
    flowline_seen = flowlines.intersection(set(facility_info.split(" "))) or "flow line" in facility_info
    pipeline_seen = "pipeline" in facility_info
    well_seen = "well" in facility_info
    wellhead_seen = "wellhead" in facility_info or "well head" in facility_info
    manifold_seen = "manifold" in facility_info
    trunklines = set(["trunkline", "tl"])
    trunkline_seen = trunklines.intersection(set(facility_info.split(" "))) or "trunk line" in facility_info
    deliverylines = set(["deliveryline", "dl"])
    deliveryline_seen = deliverylines.intersection(set(facility_info.split(" "))) or "delivery line" in facility_info
    bulklines = set(["bulkline", "bl"])
    bulkline_seen = bulklines.intersection(set(facility_info.split(" "))) or "bulk line" in facility_info
    flowstation_seen = "flowstation" in facility_info or "flow station" in facility_info

    if otherline_seen:
        facility_type_name[i] = "other line"
        type_found = True
    if pipeline_seen and not well_seen:
        facility_type_name[i] = "pipeline"
        type_found = True
    if flowline_seen and well_seen:
        facility_type_name[i] = "flowline, well"
        type_found = True
    if flowline_seen and not well_seen:
        facility_type_name[i] = "flowline"
        type_found = True
    if well_seen and not wellhead_seen and not flowline_seen and not pipeline_seen:
        facility_type_name[i] = "well"
        type_found = True
    if wellhead_seen:
        facility_type_name[i] = "wellhead"
        type_found = True
    if manifold_seen:
        facility_type_name[i] = "manifold"
        type_found = True
    if trunkline_seen:
        facility_type_name[i] = "trunkline"
        type_found = True
    if deliveryline_seen:
        facility_type_name[i] = "deliveryline"
        type_found = True
    if bulkline_seen:
        facility_type_name[i] = "bulkline"
        type_found = True
    if flowstation_seen:
        facility_type_name[i] = "flowstation"
        type_found = True
    if not type_found:
        facility_type_name[i] = "other"

    # c. extract location
    try:
        facility_location[i] = str(re.search(location_regex, facility_info).group(1)).strip(punctuation)
    except:
        missing_loc_count += 1
df["inches"] = pd.Series(inches)
df["facility_type"] = pd.Series(facility_type_name)
df["facility_location"] = pd.Series(facility_location)
df.facility_type.value_counts()
df.facility_location.isnull().sum()
df[["facility_location", "lga_eni"]].head()
```
#### * Check whether, when the Shell location is given, the Eni location is missing, and vice versa
```
# shell location -> facility_location
# eni location -> lga_eni
shell_loc = [0] * df.shape[0]
eni_loc = [0] * df.shape[0]
for i in range(df.shape[0]):
    shell_cur_loc = df.facility_location.iloc[i]
    eni_curr_loc = df.lga_eni.iloc[i]
    if type(shell_cur_loc) != float:
        shell_loc[i] = 1
    if type(eni_curr_loc) != float:
        eni_loc[i] = 1
shell_loc_series = pd.Series(shell_loc)
eni_loc_series = pd.Series(eni_loc)
loc_series = shell_loc_series.add(eni_loc_series)
loc_series.value_counts()
df["location_joint"] = pd.Series(loc_series)
df[["oil_spill_id", "company","facility_location", "lga_eni"]][df.location_joint == 2]
df[["oil_spill_id", "company","facility_location", "lga_eni"]][df.location_joint == 1].head()
```
#### * clean the facility_locations for ENI
```
df.facility_location[df.location_joint == 2] = np.nan
df[["oil_spill_id", "company","facility_location", "lga_eni"]][df.location_joint == 2]
del df["location_joint"]
```
#### * aggregate Shell and Eni locations in the same column
```
locations = [np.nan] * df.shape[0]
for i in range(df.shape[0]):
shell_loc = df.facility_location.iloc[i]
eni_loc = df.lga_eni.iloc[i]
if type(eni_loc) == float:
locations[i] = shell_loc
else:
locations[i] = eni_loc
df['location'] = pd.Series(locations).values
del df["lga_eni"]
del df["facility_location"]
```
#### * extract the pipeline name
```
# handles from and to loc in cases such as: "nun-river kolo creek" or "biedu - nun-river"
def adjust_splits(splitted_list):
    from_loc = ""
    to_loc = ""
    if splitted_list[1] == "river" or splitted_list[1] == "creek":
        from_loc = splitted_list[0] + " " + splitted_list[1]
        to_loc = " ".join(splitted_list[2:])
    else:
        from_loc = splitted_list[0]
        to_loc = " ".join(splitted_list[1:])
    return from_loc, to_loc
def discover_splits(current_facility_name):
    dash_split = current_facility_name.split("-")
    to_split = current_facility_name.split(" to ")
    slash_split = current_facility_name.split("/")
    from_loc = ""
    to_loc = ""
    if len(dash_split) > 1:
        from_loc, to_loc = adjust_splits(list(map(str.strip, dash_split)))
    if len(slash_split) > 1:
        from_loc, to_loc = adjust_splits(list(map(str.strip, slash_split)))
    if len(to_split) > 1:
        from_loc, to_loc = adjust_splits(list(map(str.strip, to_split)))
    facility_start[i] = from_loc
    facility_end[i] = to_loc
    return from_loc, to_loc
df["facility_equipment_lower"] = df.facility_equipment.str.lower()
facility_names = [np.nan] * df.shape[0]
facility_start = [np.nan] * df.shape[0]
facility_end = [np.nan] * df.shape[0]
missing_fac_name_cnt = 0
for i in range(df.shape[0]):
facility = df.facility_equipment_lower.iloc[i]
current_facility_type = df.facility_type.iloc[i]
# string replacements: "kolocreek" for "kolo creek", and "imo river ii" for "imo river 2"
facility = facility.replace("kolocreek", "kolo creek")
facility = facility.replace("bomu trans niger", "bomu")
facility = facility.replace("imo river ii", "imo river 2")
facility = facility.replace("imo river i", "imo river 1")
facility = facility.replace("imo river 1", "imo river")
facility = facility.replace("imo river 2", "imo river")
facility = facility.replace("trans niger", "")
facility = facility.replace("–", "-")
if current_facility_type != "other":
current_inches = df.inches.iloc[i]
curr_type__abbr = current_facility_type[:5]
# find the start index of the substring
if math.isnan(current_inches):
# start extracting from index 0
start_index = 0
else:
# start extracting from the index of the word after the first space
start_index = facility.find(" ") + 1
# find the end index of the substring
end_index = facility.find(curr_type__abbr)
if end_index != -1:
current_facility_name = facility[start_index:end_index-1]
current_facility_name = current_facility_name.replace("  ", " ") # replace double space with single
if current_facility_type not in ["well", "wellhead", "flowline", "flowline, well"]:
from_loc, to_loc = discover_splits(current_facility_name)
if from_loc != "" and to_loc != "":
facility_names[i] = from_loc + " - " + to_loc
else:
facility_names[i] = current_facility_name
else:
facility_names[i] = current_facility_name
else:
from_loc, to_loc = discover_splits(current_facility_name)
if from_loc != "" and to_loc != "":
facility_names[i] = from_loc + " - " + to_loc
else:
missing_fac_name_cnt += 1
facility_names[i] = current_facility_name
print("There are", missing_fac_name_cnt, "missing facility names.")
df["facility_name"] = pd.Series(facility_names).values
df["facility_start"] = pd.Series(facility_start).values
df["facility_end"] = pd.Series(facility_end).values
del df["facility_equipment_lower"]
df[["facility_equipment", "inches", "facility_name", "facility_start", "facility_end", "facility_type", "location"]].to_csv("./data/EniShell_transformed.csv")
```
### 3. Verify company
```
df.company.value_counts()
```
### 4. Inspect cause
#### * map Shell/Eni causes to 3 large categories
```
df.groupby( [ "company", "cause"] ).count()
#### * fit Shell/Eni cause to 3 categories
sabotage_theft = ["Sabotage", "Sabotage/ Theft", "Hacksaw cut", "Vandalization", "Use of explosive", "Hacksaw cut & explosive", "Hacksaw cut & fire", "Oil theft", "Drilled hole"]
company_fault = ["Operational", "Equipment failure", "Corrosion", "Induced corrosion", "Operational error/Oil theft", "Structure failure", "Operational error" ]
other = ["Other", "Mystery Spill", "Road Traffic Accident", "Unknown"]
mapped_causes = [""] * df.shape[0]
for i in range(df.shape[0]):
    reported_cause = df.cause.iloc[i]
    if reported_cause in sabotage_theft:
        mapped_causes[i] = "sabotage/theft"
    if reported_cause in company_fault:
        mapped_causes[i] = "company's fault"
    if reported_cause in other:
        mapped_causes[i] = "other"
df["cause_mapped"] = pd.Series(mapped_causes).values
df[:].groupby(["cause_mapped", "company"]).count()
df[:].groupby(["company", "cause_mapped", "cause_amnesty"]).count()
```
#### * compare Decoder's cause to 3 large categories
```
df.cause_amnesty.value_counts()
df[df.cause_amnesty == "Operational"].groupby(["company", "cause_mapped", "cause_amnesty"]).count()
df[df.cause_amnesty == "Undetermined"].groupby(["company", "cause_mapped", "cause_amnesty"]).count()
df[df.cause_amnesty == "Third party (undetermined)"].groupby(["company", "cause_mapped", "cause_amnesty"]).count()
df[df.cause_amnesty == "Third party (theft)"].groupby(["company", "cause_mapped", "cause_amnesty"]).count()
```
#### * compare "Equipment failure" to cause reported by Shell/Eni
```
df.cause_incident_caused_by.value_counts().head(10)
df[df.cause_incident_caused_by == "Equipment failure"].groupby(["company", "cause_mapped", "cause_amnesty"]).count()
```
#### * process "leak Point"'s text
```
leak_points = set()
for i in range(df.shape[0]):
lp = df.leak_point_text.iloc[i]
if type(lp) != float:
for point in lp.split(";"):
leak_points.add(point.strip())
leak_points
```
#### * write df to file
* clean comments from new lines
```
df.to_csv("./data/EniShell_transformed.csv", sep="¬")#, line_terminator=";")
df[df.jiv_asset_id == 1783]
```
# Star Unpacking
Any iterable object, whether a built-in type (string, list, tuple, etc.) or a custom class, will work for unpacking.
<div class="alert alert-block alert-success">
<b>Try This!</b><br>
```python
s = "Hello World!"
s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11, s12 = s
print(s7)
```
</div>
<div class="alert alert-block alert-warning">
<b>Food For Thought:</b><br>
What would happen if the number of variables on the left didn't match the number of items to be unpacked on the right?
</div>
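One way to see the answer for yourself (a small sketch; the exact error message may vary by Python version) is to catch the failure:

```python
values = (1, 2, 3)
try:
    a, b = values  # two names on the left, three values on the right
except ValueError as exc:
    print("unpacking failed:", exc)
```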
## What If I Don't Need All Of The Unpacked Data?
Sometimes when unpacking, you may not require certain values. There is no special syntax for this, so you can use a throw-away variable.
<div class="alert alert-block alert-danger">
<b>WARNING!</b><br>If you have data you do not need from unpacking, remember to use your <b>del</b> option to clear up your memory!
</div>
**Example:**
```python
data = ['Susie Q', 22, (1986, 7, 15)]
_, age, birthdate = data
del _
```
<div class="alert alert-block alert-info">
<b>Remember:</b><br>Whatever variable(s) you choose, be mindful that they are not used elsewhere. Otherwise, you will overwrite the data!
</div>
## What If I Don't Know The Number Of Items To Unpack?
This is what is referred to as "iterables of arbitrary length", which, if not handled properly, can cause a lot of headaches.
To address this, you would use "star expressions".
### Example 1
<div class="alert alert-block alert-success">
<b>Try This!</b><br>
Let's say you had a dataset where you wanted to drop the lowest and highest items and return the average.
```python
def drop_first_last(data):
    """
    Drop the first and last items (the lowest and highest, assuming
    the data is sorted) and return the average of what's left.
    """
    first, *middle, last = data
    return sum(middle) / len(middle)
```
</div>
When you use this particular technique, it is worth noting that the starred variable is **always** a `list`, regardless of how many items it captures (even zero).
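A quick sketch confirming this, including the case where the starred variable captures nothing:

```python
first, *middle, last = [1, 2]
print(middle)   # an empty list, not None
a, *rest = "abc"
print(rest)     # ['b', 'c'] -- a list, even though the source was a string
```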
<div class="alert alert-block alert-warning">
<b>Food For Thought:</b><br>
What does the data now look like for each variable that information was unpacked into?
</div>
### Example 2
Let's say you have a "Record" of data consisting of a customer name, phone, email, and contract or order numbers.
```python
record = ('Sam', '972-867-5309', 'samIam@someemail.com', 42, 201, 874)
```
<div class="alert alert-block alert-info">
How would you unpack these? What would each variable's data look like?
</div>
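One possible answer (a sketch; the variable names are our own choice):

```python
record = ('Sam', '972-867-5309', 'samIam@someemail.com', 42, 201, 874)
name, phone, email, *order_numbers = record
print(name)           # 'Sam'
print(order_numbers)  # [42, 201, 874] -- always a list
```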
As you've probably been able to determine, it doesn't matter where in your unpacking you have the starred variable. It can be the first, in the middle, or even the last unpacked variable.
Star unpacking allows a developer to leverage known patterns instead of doing a ton of extra coding and checking.
### Example 3 - Strings
Let's say you have a string - let's take a [MongoURL connection string](https://docs.mongodb.com/manual/reference/connection-string/) for example.
Example replica set:
`mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/admin?replicaSet=myRepl`
Example with access control enforced:
`mongodb://myDBReader:D1fficultP%40ssw0rd@mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/admin?replicaSet=myRepl`
NOTE: If the username or password includes the at sign @, colon :, slash /, or the percent sign % character, use [percent encoding](https://tools.ietf.org/html/rfc3986#section-2.1).
You can leverage star unpacking to split the data pieces into what you need. Using the [components information](https://docs.mongodb.com/manual/reference/connection-string/#components), how could we get the information we needed?
<div class="alert alert-block alert-success">
<b>Try this!</b>
```python
replica_set = 'mongodb://myDBReader:D1fficultP%40ssw0rd@mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/admin?replicaSet=myRepl'
_, uri_str = replica_set.split(r'//')
user_pw, uri_str = uri_str.split('@')
user, pw = user_pw.split(':')
del user_pw
host_ports, uri_str_2 = uri_str.split('/')
del uri_str
db, *options = uri_str_2.split('?')
del uri_str_2
*host_ports, = host_ports.split(',')  # trailing comma: a starred name must be part of a tuple/list target
```
</div>
<div class="alert alert-block alert-danger">
<b>WARNING!</b><br>If you try to use more than one star in a single unpacking, Python cannot determine where each starred variable should begin and end. Be sure only one target variable has the star.
</div>
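You can verify the warning above without crashing your session by compiling the offending statement from a string (a small sketch):

```python
# Two starred names in one assignment target is rejected at compile time.
try:
    compile("a, *b, *c = range(5)", "<demo>", "exec")
except SyntaxError as exc:
    print("rejected:", exc.msg)
```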
<hr>
# Keeping Last N Items
It often happens that you only need the last N items of some dataset, such as (but not limited to):
- logs
- grades
- last N quarters of sales
One of the lesser-known features of Python lives in its **collections** module: `collections.deque`
A `deque(maxlen=N)` keeps a rolling "queue" of fixed size **N**. As new items are added, older items are automatically removed.
Obviously you could do this manually, but why cause yourself the grief of extra code and possible troubleshooting? Not only that, the `deque` solution is more elegant, more Pythonic, and _**runs a lot faster**_.
<div class="alert alert-block alert-success">
<b>Try This!</b><br>
```python
from collections import deque
q = deque(maxlen=3)
for item in range(1,6):
q.append(item)
print(q)
```
</div>
Best practice is to use a generator function to decouple the search logic from the code that consumes the results.
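As a hedged sketch of that pattern (the names `search`, `history`, and the sample log lines are our own; this mirrors a well-known recipe rather than any required API):

```python
from collections import deque

def search(lines, pattern, history=5):
    """Yield each matching line along with up to `history` preceding lines."""
    previous = deque(maxlen=history)  # rolling window of recent lines
    for line in lines:
        if pattern in line:
            yield line, list(previous)
        previous.append(line)

# Usage: find lines containing "error" with up to 2 lines of context.
log = ["boot", "ok", "error: disk", "ok", "error: net"]
for match, context in search(log, "error", history=2):
    print(match, "| context:", context)
```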
## What If We Don't Use maxlen?
Omitting `maxlen` simply creates an unbounded queue that you can append or pop items from at either end.
<div class="alert alert-block alert-success">
<b>Try This!</b><br>
```python
from collections import deque # this is only needed once in a Jupyter notebook or python script
q = deque()
for item in range(1,5):
if item == 4: # append left
q.appendleft(item)
else:
q.append(item)
if item >= 3:
print(q)
print(q.pop())
print(q)
print(q.popleft())
print(q)
```
</div>
# Creating a System
## Conventional methods
Systems are defined by a recycle stream (i.e., a tear stream, if any) and a path of unit operations and nested systems. A System object takes care of solving recycle streams by iteratively running its path of units and subsystems until the recycle converges to steady state. Systems can be manually created or automatically generated via the flowsheet or by context management.
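To build intuition for that convergence loop, here is a framework-free sketch of successive substitution (toy numbers chosen to mirror the trivial example in this section: a feed of 100 mixed with the recycle, then split 50/50). This is only an illustration, not BioSTEAM's actual solver:

```python
def run_path(recycle_flow):
    # Mixer: feed (100) + recycle; Splitter: half leaves, half is recycled.
    mixed = 100.0 + recycle_flow
    return 0.5 * mixed  # updated recycle guess

recycle = 0.0
for _ in range(200):  # iterate the path until the recycle stops changing
    new = run_path(recycle)
    if abs(new - recycle) < 1e-9:
        break
    recycle = new
print(round(recycle, 3))  # steady-state recycle flow: 100.0
```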
### Manually generated
Manually creating a system is **not recommended**, as laying out an accurate path by hand takes considerable time and effort. Here we create a trivial system manually as a simple exercise:
```
import biosteam as bst
bst.settings.set_thermo(['Water'])
feed = bst.Stream('feed', Water=100)
recycle = bst.Stream('recycle')
effluent = bst.Stream('effluent')
T1 = bst.MixTank('T1', ins=[feed, recycle])
P1 = bst.Pump('P1', T1-0)
S1 = bst.Splitter('S1', P1-0, [effluent, recycle], split=0.5)
manual_sys = bst.System('manual_sys', path=[T1, P1, S1], recycle=recycle)
manual_sys.simulate()
manual_sys.diagram(
kind='cluster', # Cluster diagrams highlight recycle streams and nested systems.
number=True, # This numbers each unit according to their path order
)
manual_sys.show()
```
Note that the inlets and outlets of a system are inherently connected to the unit operations within it, but we can still connect systems just like unit operations, as depicted in future examples.
### Autogenerated from the flowsheet
The **recommended** way of creating systems is to use the flowsheet. Here we expand on the existing process and create a new system using the flowsheet:
```
water = bst.Stream('water', Water=10)
P2 = bst.Pump('P2', manual_sys-0) # -pipe- notation equivalent to manual_sys.outs[0]
M2 = bst.Mixer('M2', [P2-0, water])
flowsheet_sys = bst.main_flowsheet.create_system('flowsheet_sys')
flowsheet_sys.simulate()
flowsheet_sys.diagram(kind='cluster', number=True)
flowsheet_sys.show()
```
### Autogenerated by context management
System objects' context management feature allows for creating systems of only the units created within the given context:
```
downstream_recycle = bst.Stream('downstream_recycle')
product = bst.Stream('product')
with bst.System('context_sys') as context_sys:
T2 = bst.MixTank('T2', ins=['', downstream_recycle])
P3 = bst.Pump('P3', T2-0)
S2 = bst.Splitter('S2', P3-0, [product, downstream_recycle], split=0.5)
# The feed is empty, no need to run system (yet)
context_sys.diagram('cluster')
context_sys.show()
```
Let's connect two systems together and create a new system from the flowsheet:
```
# -pipe- notation equivalent to context_sys.ins[:] = [flowsheet_sys.outs[0]]
flowsheet_sys-0-context_sys
complete_sys = bst.main_flowsheet.create_system('complete_sys')
complete_sys.simulate()
complete_sys.diagram('cluster')
complete_sys.show()
```
## Drop-in systems
### A simple example
When a system is created by a function, it's called a drop-in system. Here, we create a sugarcane to ethanol production system without facilities (e.g., cooling tower, boiler) by using drop-in systems:
```
from biorefineries.sugarcane import chemicals
from biosteam import Stream, System, settings, main_flowsheet
from biorefineries.sugarcane import (
create_juicing_system_with_fiber_screener as create_juicing_system,
create_sucrose_to_ethanol_system
)
main_flowsheet.clear() # Remove previous unit operations to prevent ID-conflict warnings
settings.set_thermo(chemicals)
denaturant = Stream('denaturant',
Octane=230.69,
units='kg/hr',
price=0.756)
sucrose_solution = Stream('sucrose_solution')
juicing_sys = create_juicing_system(
ID='juicing_sys', # ID of system
outs=[sucrose_solution], # Place sucrose_solution at the 0th outlet (all other streams are defaulted)
)
sucrose_to_ethanol_sys = create_sucrose_to_ethanol_system(ins=[sucrose_solution, denaturant])
# Here are a couple of other ways to connect systems:
# Manually:
# >>> sucrose_to_ethanol_sys.ins[0] = juicing_sys.outs[0]
# With -pipe- notation:
# >>> juicing_sys-0-0-sucrose_to_ethanol_sys
# Manually create a new system and simulate
sugarcane_to_ethanol_sys = System('sugarcane_to_ethanol_sys',
path=[juicing_sys, sucrose_to_ethanol_sys])
sugarcane_to_ethanol_sys.simulate()
sugarcane_to_ethanol_sys.diagram(kind='surface')
sugarcane_to_ethanol_sys.show(data=False)
```
The number of inlets and outlets is rather large. It may be helpful to specify which inlets and outlets we want to expose:
```
s = main_flowsheet.stream
sugarcane_to_ethanol_sys.load_inlet_ports([s.sugarcane])
sugarcane_to_ethanol_sys.load_outlet_ports([s.ethanol, s.bagasse])
sugarcane_to_ethanol_sys.show(data=False)
```
The ethanol product is now the 0th stream
```
sucrose_to_ethanol_sys.outs[0].show()
```
### System factories
Both `create_juicing_system` and `create_sucrose_to_ethanol_system` are [SystemFactory](../process_tools/SystemFactory.txt) objects, which accept the system `ID`, `ins`, and `outs` (similar to unit operations) and return a new system. Let's first have a look at some of the system factories in the [biorefineries.sugarcane](https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/tree/master/BioSTEAM%202.x.x/biorefineries/sugarcane) library:
```
create_juicing_system.show()
print()
create_sucrose_to_ethanol_system.show()
```
[SystemFactory](../process_tools/SystemFactory.txt) objects are composed of a function `f` which creates the unit operations, a predefined system `ID`, and `ins` and `outs` dictionaries that serve as keyword arguments to initialize the system's default inlets and outlets.
The signature of a SystemFactory is `f(ID=None, ins=None, outs=None, mockup=False, area=None, udct=None, ...)`. The additional parameters (i.e. mockup, area, and udct) will be discussed in the next section.
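The factory idea itself can be sketched in plain Python. This toy decorator only illustrates the pattern (every name in it is hypothetical); it is not BioSTEAM's actual `SystemFactory`:

```python
def system_factory(ID, ins, outs):
    """Toy decorator: calling the wrapped builder returns a 'system' dict
    populated with the predefined ID/ins/outs defaults."""
    def decorate(build_units):
        def create(new_ID=None):
            return {"ID": new_ID or ID,
                    "ins": list(ins), "outs": list(outs),
                    "units": build_units()}
        return create
    return decorate

@system_factory(ID='juicing_sys', ins=['sugarcane'], outs=['juice', 'bagasse'])
def create_juicing_system():
    return ['U201', 'U202']  # hypothetical unit IDs

sys_ = create_juicing_system('my_juicing_sys')  # override the default ID
print(sys_["ID"], sys_["units"])  # my_juicing_sys ['U201', 'U202']
```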
### Saving time with mock systems
When creating a biorefinery, we may not be interested in all the subsystems we created with SystemFactory objects. We can save a few milliseconds in computational time (per system) by using mock systems:
```
main_flowsheet.clear() # Remove previous unit operations to prevent ID-conflict warnings
juicing_sys = create_juicing_system(
outs=[sucrose_solution],
mockup=True
)
sucrose_to_ethanol_sys = create_sucrose_to_ethanol_system(
ins=[sucrose_solution, denaturant],
mockup=True
)
# Note that mock systems don't have anything other than `ins`, `outs`, and `units`
juicing_sys.show()
sucrose_to_ethanol_sys.show()
# We can create the system using the flowsheet
sugarcane_to_ethanol_sys = main_flowsheet.create_system('sugarcane_to_ethanol_sys')
sugarcane_to_ethanol_sys.simulate()
sugarcane_to_ethanol_sys.diagram()
sucrose_to_ethanol_sys.outs[0].show()
```
### Using the area naming convention
The area naming convention follows {letter}{area + number} where the letter depends on
the unit operation as follows:
* C: Centrifuge
* D: Distillation column
* E: Evaporator
* F: Flash tank
* H: Heat exchange
* M: Mixer
* P: Pump (including conveying belt)
* R: Reactor
* S: Splitter (including solid/liquid separator)
* T: Tank or bin for storage
* U: Other units
* J: Junction, not a physical unit (serves to adjust streams)
* PS: Process specification, not a physical unit (serves to adjust streams)
For example, the first mixer in area 100 would be named M101. When calling a SystemFactory object, we can pass the `area` to name unit operations according to the area convention. In the following example, we name all unit operations in the juicing system under area 300:
```
main_flowsheet.clear() # Remove previous unit operations
juicing_sys = create_juicing_system(area=300, mockup=True)
juicing_sys.show()
```
To access unit operations by their default ID (as originally defined in SystemFactory code), you can request a unit dictionary by passing `udct=True`:
```
main_flowsheet.clear() # Remove previous unit operations
# When udct is True, both the system and the unit dictionary are returned
juicing_sys, udct = create_juicing_system(mockup=True, area=300, udct=True)
unit = udct['U201']
print(repr(unit)) # Originally, this unit was named U201
```
### Creating system factories
Create a SystemFactory object for creating sugarcane to ethanol systems:
```
from biosteam import System, SystemFactory
@SystemFactory(
ID='sugarcane_to_ethanol_sys',
ins=[create_juicing_system.ins[0], # Reuse default from juicing system factory
dict(ID='denaturant',
price=0.756)],
outs=[dict(ID='ethanol',
price=0.789),
dict(ID='bagasse')]
)
def create_sugarcane_to_ethanol_system(ins, outs):
# ins and outs will be stream objects
sugarcane, denaturant = ins
ethanol, bagasse = outs
juicing_sys = create_juicing_system(
ins=sugarcane,
outs=[None, bagasse], # None will default to a stream
mockup=True
)
sucrose_to_ethanol_sys = create_sucrose_to_ethanol_system(
ins=(juicing_sys-0, denaturant),
outs=ethanol,
mockup=True,
)
# The system factory builds a system from units created by the function
create_sugarcane_to_ethanol_system.show()
```
Create the sugarcane to ethanol system and simulate:
```
main_flowsheet.clear() # Remove previous unit operations
sugarcane_to_ethanol_sys = create_sugarcane_to_ethanol_system()
sugarcane_to_ethanol_sys.simulate()
sugarcane_to_ethanol_sys.show()
```
Biorefinery systems can be created by connecting smaller systems, allowing us to create alternative configurations with ease. The [biorefineries](https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park) library has yet to fully implement SystemFactory objects across all functions that create systems, but that is the goal.
# Ray RLlib - Explore RLlib - Sample Application: CartPole
© 2019-2020, Anyscale. All Rights Reserved

We were briefly introduced to the `CartPole` example and the OpenAI gym `CartPole-v1` environment ([gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)) in the [reinforcement learning introduction](../01-Introduction-to-Reinforcement-Learning.ipynb). This lesson uses [RLlib](https://ray.readthedocs.io/en/latest/rllib.html) to train a policy for `CartPole`.
Recall that the `gym` Python module provides MDP interfaces to a variety of simulators, like the simple simulator for the physics of balancing a pole on a cart that is used by the CartPole environment. The `CartPole` problem is described at https://gym.openai.com/envs/CartPole-v1.

([source](https://gym.openai.com/envs/CartPole-v1/))
Even though this is a relatively simple and quick example to run, its results can be understood quite visually. `CartPole` is one of OpenAI Gym's ["classic control"](https://gym.openai.com/envs/#classic_control) examples.
For more background about this problem, see:
* ["Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems"](https://ieeexplore.ieee.org/document/6313077), AG Barto, RS Sutton, and CW Anderson, *IEEE Transactions on Systems, Man, and Cybernetics* (1983). The same Sutton and Barto who wrote [*Reinforcement Learning: An Introduction*](https://mitpress.mit.edu/books/reinforcement-learning-second-edition).
* ["Cartpole - Introduction to Reinforcement Learning (DQN - Deep Q-Learning)"](https://towardsdatascience.com/cartpole-introduction-to-reinforcement-learning-ed0eb5b58288), [Greg Surma](https://twitter.com/GSurma).
First, import Ray and the PPO module in RLlib, then start Ray.
```
import ray
import ray.rllib.agents.ppo as ppo
import pandas as pd
import json, os, shutil, sys
sys.path.append('../..') # so we can import from "util"
from util.line_plots import plot_line, plot_line_with_min_max, plot_line_with_stddev
```
Model *checkpoints* will get saved after each iteration into directories under `tmp/ppo/cart`, i.e., relative to this directory.
The default directories for checkpoints are `$HOME/ray_results/<algo_env>/.../checkpoint_N`.
> **Note:** If you prefer to use a different directory root, change it in the next cell _and_ in the `rllib rollout` command below.
```
checkpoint_root = 'tmp/ppo/cart'
```
Clean up output of previous lessons (optional):
```
# Where checkpoints are written:
shutil.rmtree(checkpoint_root, ignore_errors=True, onerror=None)
# Where some data will be written and used by Tensorboard below:
ray_results = f'{os.getenv("HOME")}/ray_results/'
shutil.rmtree(ray_results, ignore_errors=True, onerror=None)
```
Start Ray:
```
ray.init(ignore_reinit_error=True)
```
The Ray Dashboard is useful for monitoring Ray:
```
print(f'Dashboard URL: http://{ray.get_webui_url()}')
```
Next we'll train an RLlib policy with the [`CartPole-v1` environment](https://gym.openai.com/envs/CartPole-v1/).
If you've gone through the _Multi-Armed Bandits_ lessons, you may recall that we used [Ray Tune](http://tune.io), the Ray Hyperparameter Tuning system, to drive training. Here we'll do it ourselves.
By default, training runs for `10` iterations. Increase the `N_ITER` setting if you want to train longer and see the resulting rewards improve. However, if the max score of `500` (for `CartPole-v1`) is achieved early, you can use a smaller number of iterations.
- `num_workers` is the number of actors that the agent will create. This determines the degree of parallelism that will be used. In a cluster, these actors will be spread over the available nodes.
- `num_sgd_iter` is the number of epochs of SGD (stochastic gradient descent, i.e., passes through the data) that will be used to optimize the PPO surrogate objective at each iteration of PPO, for each _minibatch_ ("chunk") of training data. Using minibatches is more efficient than training with one record at a time.
- `sgd_minibatch_size` is the SGD minibatch size (batches of data) that will be used to optimize the PPO surrogate objective.
- `model` contains a dictionary of parameters describing the neural net used to parameterize the policy. The `fcnet_hiddens` parameter is a list of the sizes of the hidden layers. Here, we use two hidden layers, of sizes 100 and 50.
- `num_cpus_per_worker` when set to 0 prevents Ray from pinning a CPU core to each worker, which means we could run out of workers in a constrained environment like a laptop or a cloud VM.
> **Note:** If you change the values shown for `config['model']['fcnet_hiddens']`, make the same change in the `rllib rollout` command below!
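To make the interplay of these settings concrete, here is a framework-free sketch (a toy counting loop, not RLlib's implementation) of how `num_sgd_iter` epochs over `sgd_minibatch_size` chunks multiply into gradient steps:

```python
train_batch = list(range(1000))   # stand-in for one training iteration's data
sgd_minibatch_size = 250
num_sgd_iter = 10

updates = 0
for _ in range(num_sgd_iter):     # passes (epochs) over the same batch
    for start in range(0, len(train_batch), sgd_minibatch_size):
        minibatch = train_batch[start:start + sgd_minibatch_size]
        updates += 1              # one gradient step per minibatch
print(updates)  # 40 = 10 epochs x 4 minibatches
```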
```
SELECT_ENV = "CartPole-v1" # Specifies the OpenAI Gym environment for Cart Pole
N_ITER = 10 # Number of training runs.
config = ppo.DEFAULT_CONFIG.copy() # PPO's default configuration. See the next code cell.
config["log_level"] = "WARN" # Suppress too many messages, but try "INFO" to see what can be printed.
# Other settings we might adjust:
config['num_workers'] = 1 # Use > 1 for using more CPU cores, including over a cluster
config['num_sgd_iter'] = 10 # Number of SGD (stochastic gradient descent) iterations per training minibatch.
# I.e., for each minibatch of data, do this many passes over it to train.
config['sgd_minibatch_size'] = 250 # The amount of data records per minibatch
config['model']['fcnet_hiddens'] = [100, 50] #
config['num_cpus_per_worker'] = 0 # This avoids running out of resources in the notebook environment when this cell is re-executed
```
Out of curiosity, let's see what configuration settings are defined for PPO. Note in particular the parameters for the deep learning `model`:
```
ppo.DEFAULT_CONFIG
agent = ppo.PPOTrainer(config, env=SELECT_ENV)
results = []
episode_data = []
episode_json = []
for n in range(N_ITER):
result = agent.train()
results.append(result)
episode = {'n': n,
'episode_reward_min': result['episode_reward_min'],
'episode_reward_mean': result['episode_reward_mean'],
'episode_reward_max': result['episode_reward_max'],
'episode_len_mean': result['episode_len_mean']}
episode_data.append(episode)
episode_json.append(json.dumps(episode))
file_name = agent.save(checkpoint_root)
print(f'{n:3d}: Min/Mean/Max reward: {result["episode_reward_min"]:8.4f}/{result["episode_reward_mean"]:8.4f}/{result["episode_reward_max"]:8.4f}. Checkpoint saved to {file_name}')
```
The episode rewards should increase after multiple iterations. Try tweaking the config parameters. Smaller values for the `num_sgd_iter`, `sgd_minibatch_size`, or the `model`'s `fcnet_hiddens` will train faster, but take longer to improve the policy.
```
df = pd.DataFrame(data=episode_data)
df
import bokeh.io
# The next two lines prevent Bokeh from opening the graph in a new window.
bokeh.io.reset_output()
bokeh.io.output_notebook()
plot_line_with_min_max(df, x_col='n', y_col='episode_reward_mean', min_col='episode_reward_min', max_col='episode_reward_max',
title='Cart Pole Episode Rewards', x_axis_label = 'n', y_axis_label='reward')
```
([image](../../images/rllib/Cart-Pole-Episode-Rewards3.png))
Also, print out the policy and model to see the results of training in detail…
```
import pprint
policy = agent.get_policy()
model = policy.model
pprint.pprint(model.variables())
pprint.pprint(model.value_function())
print(model.base_model.summary())
```
## Rollout
Next we'll use the [RLlib rollout CLI](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies), to evaluate the trained policy.
This visualizes the `CartPole` agent operating within the simulation: moving the cart left or right to avoid having the pole fall over.
We'll use the last saved checkpoint, `checkpoint_10` (or whatever you set for `N_ITER` above) for the rollout, evaluated through `2000` steps.
> **Notes:**
>
> 1. If you changed `checkpoint_root` above to be different than `tmp/ppo/cart`, then change it here, too. Note that, due to bugs in variable substitution in Jupyter notebooks, we can't use variables in the next cell, unfortunately.
> 2. If you changed the model parameters, specifically the `fcnet_hiddens` array in the `config` object above, make the same change here.
You may need to make one more modification, depending on how you are running this tutorial:
1. Running on your laptop? - Remove the line `--no-render`.
2. Running on the Anyscale Service? The popup windows that would normally be created by the rollout can't be viewed in this case. Hence, the `--no-render` flag suppresses them. The code cell afterward provides a sample video. You can try adding `--video-dir tmp/ppo/cart`, which will generate MP4 videos, then download them to view them. Or copy the `Video` cell below and use it to view the movies.
```
!rllib rollout tmp/ppo/cart/checkpoint_10/checkpoint-10 \
--config "{\"env\": \"CartPole-v1\", \"model\": {\"fcnet_hiddens\": [100, 50]}}" \
--run PPO \
--no-render \
--steps 2000
```
Here is a sample episode.
> **Note:** This video was created by running the previous `rllib rollout` command with the argument `--video-dir some_directory`. It creates one video per episode.
```
from IPython.display import Video
cart_pole_sample_video='../../images/rllib/Cart-Pole-Example-Video.mp4'
Video(cart_pole_sample_video)
```
Finally, launch [TensorBoard](https://ray.readthedocs.io/en/latest/rllib-training.html#getting-started) as discussed in [02 Introduction to RLlib](../02-Introduction-to-RLlib.ipynb). Select the Cart Pole runs and visualize the key metrics from training with RLlib.
```shell
tensorboard --logdir=$HOME/ray_results
```
For more examples of working with Gym environments, go through the next lesson, [Bipedal Walker](02-Bipedal-Walker.ipynb), then any of the "extra" lessons:
* [Extras: Application - Mountain Car](extras/Extra-Application-Mountain-Car.ipynb) -- Based on the `MountainCar-v0` environment from OpenAI Gym.
* [Extras: Application - Taxi](extras/Extra-Application-Taxi.ipynb) -- Based on the `Taxi-v3` environment from OpenAI Gym.
* [Extras: Application - Frozen Lake](extras/Extra-Application-Frozen-Lake.ipynb) -- Based on the `FrozenLake-v0` environment from OpenAI Gym.
Use TensorBoard to visualize their training runs, metrics, etc., as well. (These notebooks won't mention this suggestion.)
## Exercise ("Homework")
In addition to _Cart Pole_, _Bipedal Walker_, and _Mountain Car_, there are other so-called ["classic control"](https://gym.openai.com/envs/#classic_control) examples you can try. Make a copy of this notebook and edit as required.
```
ray.shutdown() # "Undo ray.init()".
```
---
## Imports
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from minepy import MINE
from scipy.stats import pearsonr,spearmanr,describe
from scipy.spatial.distance import pdist, squareform
import copy
import dcor
sns.set()
```
## Pearson’s Correlation Coefficient


#### Generate Data
```
np.random.seed(1077939816)
sample_size = 100
noise_mean = 0
noise_std = 1
theta = np.random.randn(2)
x_1 = np.random.randn(sample_size)*10
y_1 = theta[0]*x_1+theta[1]
y_1_noise = y_1 + np.random.normal(noise_mean,noise_std,size = sample_size).T
ro_1, p_1 = pearsonr(x_1,y_1)
ro_2, p_2 = pearsonr(x_1,y_1_noise)
fig, axs = plt.subplots(nrows=1, ncols=2,figsize = (10,5))
axs.flat[0].scatter(x_1,y_1)
axs.flat[0].set_title("pearsonr:{0},p-value:{1}".format(np.round(ro_1,4),np.round(p_1,4)))
axs.flat[1].scatter(x_1,y_1_noise)
axs.flat[1].set_title("pearsonr:{0},p-value:{1}".format(np.round(ro_2,4),np.round(p_2,4)));
np.random.seed(15)
x_2 = np.random.rand(sample_size)*5
y_2_cuadr = x_2**2 + np.random.normal(noise_mean,noise_std,size = sample_size)
y_2_sin = np.sin(x_2) + np.random.normal(noise_mean,noise_std,size = sample_size)
y_2_log = np.log(x_2) + np.random.normal(noise_mean,noise_std-0.5,size = sample_size)
```
#### Calculate pearsonr and plot
```
ro_0, p_0 = pearsonr(x_2,y_2_cuadr)
ro_1, p_1 = pearsonr(x_2,y_2_log)
ro_2, p_2 = pearsonr(x_2,y_2_sin)
fig, axs = plt.subplots(nrows=1, ncols=3,figsize = (12,4))
axs.flat[0].scatter(x_2,y_2_cuadr)
axs.flat[0].set_title("quadr-ro:{0},p-value:{1}".format(np.round(ro_0,4),np.round(p_0,4)))
axs.flat[1].scatter(x_2,y_2_log)
axs.flat[1].set_title("log-ro:{0},p-value:{1}".format(np.round(ro_1,4),np.round(p_1,4)))
axs.flat[2].scatter(x_2,y_2_sin);
axs.flat[2].set_title("sin-ro:{0},p-value:{1}".format(np.round(ro_2,4),np.round(p_2,4)));
```
### Spearman correlation

```
ro_0, p_0 = spearmanr(x_2,y_2_cuadr)
ro_1, p_1 = spearmanr(x_2,y_2_log)
ro_2, p_2 = spearmanr(x_2,y_2_sin)
fig, axs = plt.subplots(nrows=1, ncols=3,figsize = (12,4))
axs.flat[0].scatter(x_2,y_2_cuadr)
axs.flat[0].set_title("quadr-ro:{0},p-value:{1}".format(np.round(ro_0,4),np.round(p_0,4)))
axs.flat[1].scatter(x_2,y_2_log)
axs.flat[1].set_title("log-ro:{0},p-value:{1}".format(np.round(ro_1,4),np.round(p_1,4)))
axs.flat[2].scatter(x_2,y_2_sin);
axs.flat[2].set_title("sin-ro:{0},p-value:{1}".format(np.round(ro_2,4),np.round(p_2,4)));
x = np.linspace(0,10)
y = np.sin(x) + np.random.normal(0,0.1,50)
r, p = spearmanr(x,y)
plt.scatter(x,y)
plt.title("spearmanr:{0},p-value:{1}".format(np.round(r,3),np.round(p,3)));
plt.show()
x = np.linspace(-10,10)
y = x**2 + np.random.normal(0,5,50)
r, p = spearmanr(x,y)
plt.scatter(x,y)
plt.title("spearmanr:{0},p-value:{1}".format(np.round(r,3),np.round(p,3)));
```
### Distance Correlation

```
def distcorr(Xval, Yval, pval=True, nruns=2000):
""" Compute the distance correlation function, returning the p-value.
Based on Satra/distcorr.py (gist aa3d19a12b74e9ab7941)
>>> a = [1,2,3,4,5]
>>> b = np.array([1,2,9,4,4])
>>> distcorr(a, b)
(0.76267624241686671, 0.404)
"""
X = np.atleast_1d(Xval)
Y = np.atleast_1d(Yval)
if np.prod(X.shape) == len(X):
X = X[:, None]
if np.prod(Y.shape) == len(Y):
Y = Y[:, None]
X = np.atleast_2d(X)
Y = np.atleast_2d(Y)
n = X.shape[0]
if Y.shape[0] != X.shape[0]:
raise ValueError('Number of samples must match')
a = squareform(pdist(X))
b = squareform(pdist(Y))
A = a - a.mean(axis=0)[None, :] - a.mean(axis=1)[:, None] + a.mean()
B = b - b.mean(axis=0)[None, :] - b.mean(axis=1)[:, None] + b.mean()
dcov2_xy = (A * B).sum() / float(n * n)
dcov2_xx = (A * A).sum() / float(n * n)
dcov2_yy = (B * B).sum() / float(n * n)
dcor = np.sqrt(dcov2_xy) / np.sqrt(np.sqrt(dcov2_xx) * np.sqrt(dcov2_yy))
if pval:
greater = 0
for i in range(nruns):
Y_r = copy.copy(Yval)
np.random.shuffle(Y_r)
if distcorr(Xval, Y_r, pval=False) >= dcor:
greater += 1
return (dcor, greater / float(nruns))
else:
return dcor
def dist_corr(X, Y, pval=True, nruns=2000):
    """ Distance correlation with a p-value from a permutation test """
    dc = dcor.distance_correlation(X, Y)
    if pval:
        pv = dcor.independence.distance_covariance_test(
            X, Y, exponent=1.0, num_resamples=nruns)[0]
        return (dc, pv)
    return dc
ro_0, p_0 = dist_corr(x_2,y_2_cuadr)
ro_1, p_1 = dist_corr(x_2,y_2_log)
ro_2, p_2 = dist_corr(x_2,y_2_sin)
fig, axs = plt.subplots(nrows=1, ncols=3,figsize = (12,4))
axs.flat[0].scatter(x_2,y_2_cuadr)
axs.flat[0].set_title("quadr-ro:{0},p-value:{1}".format(np.round(ro_0,4),np.round(p_0,4)))
axs.flat[1].scatter(x_2,y_2_log)
axs.flat[1].set_title("log-ro:{0},p-value:{1}".format(np.round(ro_1,4),np.round(p_1,4)))
axs.flat[2].scatter(x_2,y_2_sin);
axs.flat[2].set_title("sin-ro:{0},p-value:{1}".format(np.round(ro_2,4),np.round(p_2,4)))
plt.show()
x = np.linspace(0,10)
y = np.sin(x) + np.random.normal(0,0.1,50)
r, p = dist_corr(x,y)
plt.scatter(x,y)
plt.title("distcorr:{0},p-value:{1}".format(np.round(r,3),np.round(p,3)));
plt.show()
x = np.linspace(-10,10)
y = x**2 + np.random.normal(0,5,50)
r, p = dist_corr(x,y)
plt.scatter(x,y)
plt.title("distcorr:{0},p-value:{1}".format(np.round(r,3),np.round(p,3)));
from sklearn.datasets import make_classification
X1, Y1 = make_classification(n_features=2, n_redundant=0, n_informative=1,
n_clusters_per_class=1)
d,p = dist_corr(X1[:,0],X1[:,1])
plt.scatter(X1[:,0],X1[:,1])
plt.title("distcorr:{0},p-value:{1}".format(np.round(d,3),np.round(p,3)));
```
### Maximum Information Coefficient


```
def mic(X,Y,pval=True,nruns=100):
mine = MINE(alpha=0.6, c=15, est="mic_approx")
mine.compute_score(X,Y)
mic = mine.mic()
if pval:
greater = 0
for i in range(nruns):
Y_r = copy.copy(Y)
np.random.shuffle(Y_r)
mine.compute_score(X,Y_r)
cur_mine = mine.mic()
if cur_mine >= mic:
greater += 1
return (mic, greater / float(nruns))
else:
return mic
ro_0, p_0 = mic(x_2,y_2_cuadr)
ro_1, p_1 = mic(x_2,y_2_log)
ro_2, p_2 = mic(x_2,y_2_sin)
fig, axs = plt.subplots(nrows=1, ncols=3,figsize = (12,4))
axs.flat[0].scatter(x_2,y_2_cuadr)
axs.flat[0].set_title("MIC-ro:{0},p-value:{1}".format(np.round(ro_0,4),np.round(p_0,4)))
axs.flat[1].scatter(x_2,y_2_log)
axs.flat[1].set_title("MIC-ro:{0},p-value:{1}".format(np.round(ro_1,4),np.round(p_1,4)))
axs.flat[2].scatter(x_2,y_2_sin);
axs.flat[2].set_title("MIC-ro:{0},p-value:{1}".format(np.round(ro_2,4),np.round(p_2,4)))
plt.show()
x = np.linspace(0,10)
y = np.sin(x) + np.random.normal(0,0.1,50)
r, p = mic(x,y)
plt.scatter(x,y)
plt.title("MIC:{0},p-value:{1}".format(np.round(r,3),np.round(p,3)));
plt.show()
x = np.linspace(-10,10)
y = x**2 + np.random.normal(0,5,50)
r, p = mic(x,y)
plt.scatter(x,y)
plt.title("MIC:{0},p-value:{1}".format(np.round(r,3),np.round(p,3)));
X1, Y1 = make_classification(n_features=2, n_redundant=0, n_informative=1,
n_clusters_per_class=1)
d,p = mic(X1[:,0],X1[:,1])
plt.scatter(X1[:,0],X1[:,1])
plt.title("mic:{0},p-value:{1}".format(np.round(d,3),np.round(p,3)));
```
---
## Additional training functions
[`train`](/train.html#train) provides a number of extension methods that are added to [`Learner`](/basic_train.html#Learner) (see below for a list and details), along with four simple callbacks:
- [`ShowGraph`](/train.html#ShowGraph)
- [`GradientClipping`](/train.html#GradientClipping)
- [`BnFreeze`](/train.html#BnFreeze)
- [`AccumulateScheduler`](/train.html#AccumulateScheduler)
```
from fastai.gen_doc.nbdoc import *
from fastai.train import *
from fastai.vision import *
```
## [`Learner`](/basic_train.html#Learner) extension methods
These methods are automatically added to all [`Learner`](/basic_train.html#Learner) objects created after importing this module. They provide convenient access to a number of callbacks, without requiring them to be manually created.
```
show_doc(fit_one_cycle)
show_doc(one_cycle_scheduler)
```
See [`OneCycleScheduler`](/callbacks.one_cycle.html#OneCycleScheduler) for details.
```
show_doc(lr_find)
```
See [`LRFinder`](/callbacks.lr_finder.html#LRFinder) for details.
```
show_doc(to_fp16)
```
See [`MixedPrecision`](/callbacks.fp16.html#MixedPrecision) for details.
```
show_doc(to_fp32)
show_doc(mixup)
```
See [`MixUpCallback`](/callbacks.mixup.html#MixUpCallback) for more details.
```
show_doc(Interpretation)
show_doc(Interpretation.from_learner)
show_doc(Interpretation.top_losses)
```
For example, [`ClassificationInterpretation`](/train.html#ClassificationInterpretation) is implemented using an argmax on `preds` to set `self.pred_class`, whereas an optional sigmoid is used for `MultilabelClassificationInterpretation`.
```
show_doc(ClassificationInterpretation)
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = cnn_learner(data, models.resnet18)
learn.fit(1)
preds,y,losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
show_doc(ClassificationInterpretation.top_losses)
```
Returns tuple of *(losses,indices)*.
```
interp.top_losses(9)
show_doc(ClassificationInterpretation.plot_confusion_matrix)
```
If [`normalize`](/vision.data.html#normalize), plots the percentages with `norm_dec` digits. `slice_size` can be used to avoid out of memory error if your set is too big. `kwargs` are passed to `plt.figure`.
```
interp.plot_confusion_matrix()
show_doc(ClassificationInterpretation.confusion_matrix)
interp.confusion_matrix()
show_doc(ClassificationInterpretation.most_confused)
show_doc(MultiLabelClassificationInterpretation)
jekyll_warn("MultiLabelClassificationInterpretation is not implemented yet. Feel free to implement it :)")
```
#### Working with large datasets
When working with large datasets, memory problems can arise when computing the confusion matrix. For example, an error can look like this:
    RuntimeError: $ Torch: not enough memory: you tried to allocate 64GB. Buy new RAM!
In this case it is possible to force [`ClassificationInterpretation`](/train.html#ClassificationInterpretation) to compute the confusion matrix in data slices and then aggregate the results by specifying the `slice_size` parameter.
```
interp.confusion_matrix(slice_size=10)
interp.plot_confusion_matrix(slice_size=10)
interp.most_confused(slice_size=10)
```
## Additional callbacks
We'll show examples below using our MNIST sample. As usual, the `on_something` methods are called directly by the fastai library; you don't need to call them yourself.
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
show_doc(ShowGraph, title_level=3)
```
```python
learn = cnn_learner(data, models.resnet18, metrics=accuracy, callback_fns=ShowGraph)
learn.fit(3)
```

```
show_doc(ShowGraph.on_epoch_end)
show_doc(GradientClipping)
learn = cnn_learner(data, models.resnet18, metrics=accuracy,
callback_fns=partial(GradientClipping, clip=0.1))
learn.fit(1)
show_doc(GradientClipping.on_backward_end)
show_doc(BnFreeze)
```
For batchnorm layers where `requires_grad==False`, you generally don't want to update their moving average statistics, in order to avoid the model's statistics getting out of sync with its pre-trained weights. You can add this callback to automate this freezing of statistics (internally, it calls `eval` on these layers).
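Internally, this freezing amounts to putting frozen batchnorm layers in `eval` mode at the start of each epoch. A rough plain-PyTorch sketch of the idea (an approximation of what `BnFreeze` does, not fastai's exact implementation):

```python
import torch.nn as nn

# A toy "frozen backbone": the batchnorm layer's parameters don't require grads
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
for p in model[1].parameters():
    p.requires_grad = False

model.train()
# Roughly what BnFreeze does at on_epoch_begin: put frozen batchnorm layers in
# eval mode so their running mean/var statistics stop updating during training
for m in model.modules():
    if isinstance(m, nn.modules.batchnorm._BatchNorm) and not any(
            p.requires_grad for p in m.parameters()):
        m.eval()
```

After this loop, the batchnorm layer is in eval mode while the rest of the model remains in training mode.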
```
learn = cnn_learner(data, models.resnet18, metrics=accuracy, callback_fns=BnFreeze)
learn.fit(1)
show_doc(BnFreeze.on_epoch_begin)
show_doc(AccumulateScheduler)
```
Let's force `batch_size=2` to mimic a scenario where we can't fit enough batch samples to our memory. We can then set `n_step` as desired to have an effective batch_size of `effective_batch_size=batch_size*n_step`.
It is also important to use a loss function with `reduction='sum'` in order to calculate the exact average of the accumulated gradients.
Another important note for users is that `batchnorm` is not yet adapted to accumulated gradients. So you should use this callback at your own risk until a hero fixes it :)
Here we demonstrate this callback with a model without `batchnorm` layers; alternatively, you can use `nn.InstanceNorm` or [`nn.GroupNorm`](https://pytorch.org/docs/stable/nn.html#torch.nn.GroupNorm).
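The accumulation trick itself is framework-agnostic: call `backward()` on several sum-reduced mini-batch losses so gradients add up in `.grad`, divide once by the total sample count, then step. A plain-PyTorch sketch of the idea (not fastai's `AccumulateScheduler` implementation):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# sum reduction: accumulated gradients can be averaged exactly at step time
loss_fn = nn.CrossEntropyLoss(reduction='sum')

xs, ys = torch.randn(32, 4), torch.randint(0, 2, (32,))
bs, n_step = 2, 16           # effective batch size = bs * n_step = 32

opt.zero_grad()
seen = 0
for i in range(0, len(xs), bs):
    loss = loss_fn(model(xs[i:i + bs]), ys[i:i + bs])
    loss.backward()          # gradients from each mini-batch add up in .grad
    seen += bs
    if (i // bs + 1) % n_step == 0:
        for p in model.parameters():
            p.grad /= seen   # divide once by the true number of samples seen
        opt.step()
        opt.zero_grad()
        seen = 0
```

With a mean-reduced loss, each mini-batch's gradient would already be divided by `bs`, so summing across `n_step` mini-batches would overweight nothing but require scaling by `1/n_step` instead; the sum reduction makes the single division by `seen` exact.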
```
from torchvision.models import vgg11
data = ImageDataBunch.from_folder(path, bs=2)
learn = cnn_learner(data, vgg11, metrics=accuracy, loss_func=CrossEntropyFlat(reduction='sum'),
callback_fns=partial(AccumulateScheduler, n_step=16))
learn.fit(1)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
## New Methods - Please document or move to the undocumented section
```
show_doc(ClassificationInterpretation.plot_top_losses)
show_doc(ClassificationInterpretation.from_learner)
show_doc(ClassificationInterpretation.top_losses)
show_doc(ClassificationInterpretation.confusion_matrix)
show_doc(ClassificationInterpretation.most_confused)
show_doc(ClassificationInterpretation.plot_confusion_matrix)
show_doc(ClassificationInterpretation.plot_multi_top_losses)
```
---
## Testing Gamepad
```
from readPad import *
foo=rPad() #Supported ports are 1,2,3,4
df=foo.record(duration=5, rate=float(1 / 120), file="",type="df",) #These are the default values
df
df.columns
```
We see that we have to normalize `Lx`, `Ly`, `Rx`, and `Ry`.
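The long cell below does this normalization inline; as a standalone sketch (helper names are hypothetical, mirroring the `normalize`/`normalizet` functions defined later in this notebook):

```python
# Stick axes arrive as signed 16-bit ints (-32768..32767) and triggers as
# bytes (0..255); dividing by the range maps them to roughly [-1, 1] / [0, 1].
def normalize_axis(v):
    return round(v / 32768, 1)

def normalize_trigger(v):
    return round(v / 255, 1)

readings = [32767, -32768, 0]
print([normalize_axis(v) for v in readings])  # [1.0, -1.0, 0.0]
```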
```
#Round to specific decimals places under an entire DataFrame
df=df.round(decimals = 1)
df.tail(300)
# plotting Lx
df.plot( y="Lx", use_index=True)
df.plot( y="Ly", use_index=True)
df.plot( y="Rx", use_index=True)
df.plot( y="Ry", use_index=True)
df=foo.record() #These are the default values
df=foo.capture(stopper='START')
from writePad import *
foo2=wPad()
foo2.playback(df)
###
foo.capture_array(stopper='START')
from readPad import *
def gamepad_check():
con = rPad(1)
#print(f'State:{con.read}')
dictionary=con.read
lista=list(dictionary.values())
    # Convert boolean values to ints
lista = list(map(int, lista))
    # Apply normalize() to the stick-axis elements of lista
listab=list(map(normalize, lista[2:6]))
lista[2:6] = listab[:]
listac=list(map(normalizet, lista[0:2]))
lista[0:2] = listac[:]
return lista
foo=rPad() #Supported ports are 1,2,3,4
df=foo.record(duration=5, rate=float(1 / 120), file="",type="df",) #These are the default values
df
#from getgamepad import *
con = rPad()#Supported ports are 1,2,3,4
print(f'State:{con.read}')
dictionary=con.read
lista=list(dictionary.values())
lista
lista = list(map(int, lista))
lista
# Create functions that normalize the readings; stick axes range -32768 to 32767
# ('Lx', 'Ly', 'Rx', 'Ry'), triggers range 0 to 255 ('LT', 'RT')
def normalize(x): return round(x / 32768,1)
#'LT', 'RT'
def normalizet(x): return round(x / 255,1)
# Apply normalize() to the stick-axis elements of lista
listab=list(map(normalize, lista[2:6]))
#listab[:]
lista[2:6] = listab[:]
listac=list(map(normalizet, lista[0:2]))
lista[0:2] = listac[:]
lista
lista[2:6]
# Create a variable where we will put the updated list
listUpdated = []
for x in lista[2:6]:
    listUpdated.append(x / 32768)
# View the updated list
listUpdated
a = [1, 2, 3, 4, 1, 5, 3, 2, 6, 1, 1]
replacements = {1:10, 2:20, 3:'foo'}
replacer = replacements.get # For faster gets.
print([replacer(n, n) for n in a])
lista[3]
#The following line is technically inaccurate as Bryan says "Axis are -32768 to 32767"
output[['Lx', 'Ly', 'Rx',
'Ry']] = output[['Lx', 'Ly', 'Rx', 'Ry']] / 32768
output[['LT', 'RT']] = output[['LT', 'RT']] / 255
gamepad_check()
con = rPad(1)
print(f'State:{con.read}')
d=con.read
#list(d.keys())
#lista=list(d.values())
#lista
[int(elem) for elem in lista]
```
---
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W3D3_ReinforcementLearningForGames/student/W3D3_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 1: Learn to play games with RL
**Week 3, Day 3: Reinforcement Learning for Games**
**By Neuromatch Academy**
__Content creators:__ Mandana Samiei, Raymond Chua, Tim Lillicrap, Blake Richards
__Content reviewers:__ Arush Tagade, Lily Cheng, Melvin Selim Atay
__Content editors:__ Melvin Selim Atay, Spiros Chavlis
__Production editors:__ Namrata Bafna, Spiros Chavlis
---
# Tutorial Objectives
In this tutorial, you will learn how to implement a game loop and improve the performance of a random player.
The specific objectives for this tutorial:
* Understand the format of two-players games
* Learn about value network and policy network
* Learn about Monte Carlo Tree Search (MCTS) and compare its performance to policy-based and value-based players
```
# @title Video 0: Introduction
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1kq4y1H7MQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"v4wafEsgopE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Tutorial slides
# @markdown These are the slides for the videos in the tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/3zn9w/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
```
# @title Download the modules
# @markdown Run this cell!
# @markdown Download from OSF. Original repo: https://github.com/raymondchua/nma_rl_games.git
import os
from io import BytesIO
from urllib.request import urlopen
from zipfile import ZipFile
REPO_PATH = "nma_rl_games"
download_str = "Downloading"
if os.path.exists(REPO_PATH):
download_str = "Redownloading"
!rm -rf $REPO_PATH
# download from github repo directly
#!git clone git://github.com/raymondchua/nma_rl_games.git --quiet
zipurl = 'https://osf.io/kf4p9/download'
print(f"{download_str} and unzipping the file... Please wait.")
with urlopen(zipurl) as zipresp:
with ZipFile(BytesIO(zipresp.read())) as zfile:
zfile.extractall()
print("Download completed!")
import sys
sys.path.append('nma_rl_games/alpha-zero')
# @markdown Import modules designed for use in this notebook
import Arena
from utils import *
from Game import Game
from MCTS import MCTS
from NeuralNet import NeuralNet
from othello.OthelloPlayers import *
from othello.OthelloLogic import Board
from othello.OthelloGame import OthelloGame
from othello.pytorch.NNet import NNetWrapper as NNet
# @title Install dependencies
!pip install coloredlogs --quiet
# Imports
from __future__ import print_function
import os
import math
import time
import torch
import random
import logging
import argparse
import coloredlogs
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from pickle import Pickler, Unpickler
from tqdm.notebook import tqdm
from torchvision import datasets, transforms
from collections import deque
from random import shuffle
log = logging.getLogger(__name__)
coloredlogs.install(level='INFO') # Change this to DEBUG to see more info.
# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` sets the seed
# For DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
# @title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("WARNING: For this notebook to perform best, "
"if possible, in the menu under `Runtime` -> "
"`Change runtime type.` select `GPU` ")
else:
print("GPU is enabled in this notebook.")
return device
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
args = dotdict({
'numIters': 1, # in training setting this was 1000 and num of episodes=100
'numEps': 1, # Number of complete self-play games to simulate during a new iteration.
'tempThreshold': 15, # To control exploration and exploitation
'updateThreshold': 0.6, # During arena playoff, new neural net will be accepted if threshold or more of games are won.
'maxlenOfQueue': 200, # Number of game examples to train the neural networks.
'numMCTSSims': 15, # Number of games moves for MCTS to simulate.
'arenaCompare': 10, # Number of games to play during arena play to determine if new net will be accepted.
'cpuct': 1,
'maxDepth':5, # Maximum number of rollouts
'numMCsims': 5, # Number of monte carlo simulations
'mc_topk': 3, # top k actions for monte carlo rollout
'checkpoint': './temp/',
'load_model': False,
'load_folder_file': ('/dev/models/8x100x50','best.pth.tar'),
'numItersForTrainExamplesHistory': 20,
# define neural network arguments
'lr': 0.001, # lr: learning rate
'dropout': 0.3,
'epochs': 10,
'batch_size': 64,
'device': DEVICE,
'num_channels': 512,
})
```
---
# Section 1: Create a game/agent loop for RL
```
# @title Video 1: A game loop for RL
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iw411979L", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"s4BK_yrknf4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
***Goal***: How to setup a game environment with multiple players for reinforcement learning experiments.
***Exercise***:
* Build an agent that plays random moves
* Connect with connect 4 game
* Generate games including wins and losses
```
class OthelloGame(Game):
square_content = {
-1: "X",
+0: "-",
+1: "O"
}
@staticmethod
def getSquarePiece(piece):
return OthelloGame.square_content[piece]
def __init__(self, n):
self.n = n
def getInitBoard(self):
# return initial board (numpy board)
b = Board(self.n)
return np.array(b.pieces)
def getBoardSize(self):
# (a,b) tuple
return (self.n, self.n)
def getActionSize(self):
# return number of actions, n is the board size and +1 is for no-op action
return self.n*self.n + 1
def getNextState(self, board, player, action):
# if player takes action on board, return next (board,player)
# action must be a valid move
if action == self.n*self.n:
return (board, -player)
b = Board(self.n)
b.pieces = np.copy(board)
move = (int(action/self.n), action%self.n)
b.execute_move(move, player)
return (b.pieces, -player)
def getValidMoves(self, board, player):
# return a fixed size binary vector
valids = [0]*self.getActionSize()
b = Board(self.n)
b.pieces = np.copy(board)
legalMoves = b.get_legal_moves(player)
if len(legalMoves)==0:
valids[-1]=1
return np.array(valids)
for x, y in legalMoves:
valids[self.n*x+y]=1
return np.array(valids)
def getGameEnded(self, board, player):
# return 0 if not ended, 1 if player 1 won, -1 if player 1 lost
# player = 1
b = Board(self.n)
b.pieces = np.copy(board)
if b.has_legal_moves(player):
return 0
if b.has_legal_moves(-player):
return 0
if b.countDiff(player) > 0:
return 1
return -1
def getCanonicalForm(self, board, player):
# return state if player==1, else return -state if player==-1
return player*board
def getSymmetries(self, board, pi):
# mirror, rotational
assert(len(pi) == self.n**2+1) # 1 for pass
pi_board = np.reshape(pi[:-1], (self.n, self.n))
l = []
for i in range(1, 5):
for j in [True, False]:
newB = np.rot90(board, i)
newPi = np.rot90(pi_board, i)
if j:
newB = np.fliplr(newB)
newPi = np.fliplr(newPi)
l += [(newB, list(newPi.ravel()) + [pi[-1]])]
return l
def stringRepresentation(self, board):
return board.tobytes()
def stringRepresentationReadable(self, board):
board_s = "".join(self.square_content[square] for row in board for square in row)
return board_s
def getScore(self, board, player):
b = Board(self.n)
b.pieces = np.copy(board)
return b.countDiff(player)
@staticmethod
def display(board):
n = board.shape[0]
print(" ", end="")
for y in range(n):
print(y, end=" ")
print("")
print("-----------------------")
for y in range(n):
print(y, "|", end="") # print the row #
for x in range(n):
piece = board[y][x] # get the piece to print
print(OthelloGame.square_content[piece], end=" ")
print("|")
print("-----------------------")
```
## Section 1.1: Create a random player
### Coding Exercise 1.1: Implement a random player
```
class RandomPlayer():
def __init__(self, game):
self.game = game
def play(self, board):
#################################################
## TODO for students: ##
## 1. Please compute the valid moves using getValidMoves(). ##
## 2. Compute the probability over actions.##
## 3. Pick a random action based on the probability computed above.##
# Fill out function and remove ##
raise NotImplementedError("Implement the random player")
#################################################
valids = ...
prob = ...
a = ...
return a
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_72142174.py)
## Section 1.2. Initiate the game board
```
# Display the board
set_seed(seed=SEED)
game = OthelloGame(6)
board = game.getInitBoard()
game.display(board)
# observe the game board size
print(f'Board size = {game.getBoardSize()}')
# observe the action size
print(f'Action size = {game.getActionSize()}')
```
## Section 1.3. Create two random agents to play against each other
```
# define the random player
player1 = RandomPlayer(game).play # player 1 is a random player
player2 = RandomPlayer(game).play # player 2 is a random player
# define number of games
num_games = 20
# start the competition
set_seed(seed=SEED)
arena = Arena.Arena(player1, player2 , game, display=None) # to see the steps of the competition set "display=OthelloGame.display"
result = arena.playGames(num_games, verbose=False) # return ( number of games won by player1, num of games won by player2, num of games won by nobody)
print(f"\n\n{result}")
```
```
(11, 9, 0)
```
## Section 1.4. Compute win rate for the random player (player 1)
```
print(f"Number of games won by player1 = {result[0]}, "
f"Number of games won by player2 = {result[1]} out of {num_games} games")
win_rate_player1 = result[0]/num_games
print(f"\nWin rate for player1 over 20 games: {round(win_rate_player1*100, 1)}%")
```
```
Number of games won by player1 = 11, Number of games won by player2 = 9 out of 20 games
Win rate for player1 over 20 games: 55.0%
```
---
# Section 2: Train a value function from expert game data
**Goal:** Learn how to train a value function from a dataset of games played by an expert.
**Exercise:**
* Load a dataset of expert generated games.
* Train a network to minimize MSE for win/loss predictions given board states sampled throughout the game. This will be done on a very small number of games. We will provide a network trained on a larger dataset.
```
# @title Video 2: Train a value function
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1jf4y157xQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"RVo6rVP9iC0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Section 2.1. Load expert data
```
def loadTrainExamples(folder, filename):
trainExamplesHistory = []
modelFile = os.path.join(folder, filename)
examplesFile = modelFile + ".examples"
if not os.path.isfile(examplesFile):
print(f'File "{examplesFile}" with trainExamples not found!')
r = input("Continue? [y|n]")
if r != "y":
sys.exit()
else:
print("File with train examples found. Loading it...")
with open(examplesFile, "rb") as f:
trainExamplesHistory = Unpickler(f).load()
print('Loading done!')
# examples based on the model were already collected (loaded)
return trainExamplesHistory
path = F"/content/nma_rl_games/alpha-zero/pretrained_models/data/"
loaded_games = loadTrainExamples(folder=path, filename='checkpoint_1.pth.tar')
```
## Section 2.2. Define the Neural Network Architecture for Othello
### Coding Exercise 2.2: Implement the NN `OthelloNNet` for Othello
```
class OthelloNNet(nn.Module):
def __init__(self, game, args):
# game params
self.board_x, self.board_y = game.getBoardSize()
self.action_size = game.getActionSize()
self.args = args
super(OthelloNNet, self).__init__()
self.conv1 = nn.Conv2d(1, args.num_channels, 3, stride=1, padding=1)
self.conv2 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1,
padding=1)
self.conv3 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1)
self.conv4 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1)
self.bn1 = nn.BatchNorm2d(args.num_channels)
self.bn2 = nn.BatchNorm2d(args.num_channels)
self.bn3 = nn.BatchNorm2d(args.num_channels)
self.bn4 = nn.BatchNorm2d(args.num_channels)
self.fc1 = nn.Linear(args.num_channels * (self.board_x - 4) * (self.board_y - 4), 1024)
self.fc_bn1 = nn.BatchNorm1d(1024)
self.fc2 = nn.Linear(1024, 512)
self.fc_bn2 = nn.BatchNorm1d(512)
self.fc3 = nn.Linear(512, self.action_size)
self.fc4 = nn.Linear(512, 1)
def forward(self, s):
# s: batch_size x board_x x board_y
s = s.view(-1, 1, self.board_x, self.board_y) # batch_size x 1 x board_x x board_y
s = F.relu(self.bn1(self.conv1(s))) # batch_size x num_channels x board_x x board_y
s = F.relu(self.bn2(self.conv2(s))) # batch_size x num_channels x board_x x board_y
s = F.relu(self.bn3(self.conv3(s))) # batch_size x num_channels x (board_x-2) x (board_y-2)
s = F.relu(self.bn4(self.conv4(s))) # batch_size x num_channels x (board_x-4) x (board_y-4)
s = s.view(-1, self.args.num_channels * (self.board_x - 4) * (self.board_y - 4))
s = F.dropout(F.relu(self.fc_bn1(self.fc1(s))), p=self.args.dropout, training=self.training) # batch_size x 1024
s = F.dropout(F.relu(self.fc_bn2(self.fc2(s))), p=self.args.dropout, training=self.training) # batch_size x 512
pi = self.fc3(s) # batch_size x action_size
v = self.fc4(s) # batch_size x 1
#################################################
## TODO for students: Please compute a probability distribution over 'pi' using log softmax (for numerical stability)
# Fill out function and remove
raise NotImplementedError("Calculate the probability distribution and the value")
#################################################
# return a probability distribution over actions at the current state and the value of the current state.
return ..., ...
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_6639b79e.py)
## Section 2.3. Define the Value network
During training, the ground-truth values are loaded from the **MCTS simulations** stored in `checkpoint_x.pth.tar.examples`.
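The training loops in this tutorial sample minibatches from these `(board, pi, v)` tuples with `np.random.randint` and `zip`; the pattern can be tried on toy stand-ins (shapes below are illustrative):

```
import numpy as np

# Toy stand-ins for the loaded MCTS examples: (board, pi, v) tuples.
# Shapes are illustrative (6x6 board, a 37-dim policy vector).
examples = [(np.full((6, 6), i), np.ones(37) / 37, (-1.0) ** i) for i in range(10)]

batch_size = 4
sample_ids = np.random.randint(len(examples), size=batch_size)
# zip(*...) turns a list of tuples into parallel tuples of boards, pis, vs
boards, pis, vs = list(zip(*[examples[i] for i in sample_ids]))

print(len(boards), boards[0].shape, len(vs))
```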
### Coding Exercise 2.3: Implement the `ValueNetwork`
```
class ValueNetwork(NeuralNet):
def __init__(self, game):
self.nnet = OthelloNNet(game, args)
self.board_x, self.board_y = game.getBoardSize()
self.action_size = game.getActionSize()
self.nnet.to(args.device)
def train(self, games):
"""
examples: list of examples, each example is of form (board, pi, v)
"""
optimizer = optim.Adam(self.nnet.parameters())
for examples in games:
for epoch in range(args.epochs):
print('EPOCH ::: ' + str(epoch + 1))
self.nnet.train()
v_losses = [] # to store the losses per epoch
batch_count = int(len(examples) / args.batch_size) # len(examples)=200, batch-size=64, batch_count=3
t = tqdm(range(batch_count), desc='Training Value Network')
for _ in t:
sample_ids = np.random.randint(len(examples), size=args.batch_size) # read the ground truth information from MCTS simulation using the loaded examples
boards, pis, vs = list(zip(*[examples[i] for i in sample_ids])) # length of boards, pis, vis = 64
boards = torch.FloatTensor(np.array(boards).astype(np.float64))
target_vs = torch.FloatTensor(np.array(vs).astype(np.float64))
# predict
boards, target_vs = boards.contiguous().to(args.device), target_vs.contiguous().to(args.device)
#################################################
## TODO for students:
## 1. Compute the value predicted by OthelloNNet() ##
## 2. First implement the loss_v() function below and then use it to update the value loss. ##
# Fill out function and remove
raise NotImplementedError("Compute the output")
#################################################
# compute output
_, out_v = ...
l_v = ... # total loss
# record loss
v_losses.append(l_v.item())
t.set_postfix(Loss_v=l_v.item())
# compute gradient and do SGD step
optimizer.zero_grad()
l_v.backward()
optimizer.step()
def predict(self, board):
"""
board: np array with board
"""
# timing
start = time.time()
# preparing input
board = torch.FloatTensor(board.astype(np.float64))
board = board.contiguous().to(args.device)
board = board.view(1, self.board_x, self.board_y)
self.nnet.eval()
with torch.no_grad():
_, v = self.nnet(board)
return v.data.cpu().numpy()[0]
def loss_v(self, targets, outputs):
#################################################
## TODO for students: Please compute Mean squared error and return as output. ##
# Fill out function and remove
raise NotImplementedError("Calculate the loss")
#################################################
# Mean squared error (MSE)
return ...
def save_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
filepath = os.path.join(folder, filename)
if not os.path.exists(folder):
print("Checkpoint Directory does not exist! Making directory {}".format(folder))
os.mkdir(folder)
else:
print("Checkpoint Directory exists! ")
torch.save({'state_dict': self.nnet.state_dict(),}, filepath)
print("Model saved! ")
def load_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
# https://github.com/pytorch/examples/blob/master/imagenet/main.py#L98
filepath = os.path.join(folder, filename)
if not os.path.exists(filepath):
raise FileNotFoundError("No model in path {}".format(filepath))
checkpoint = torch.load(filepath, map_location=args.device)
self.nnet.load_state_dict(checkpoint['state_dict'])
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_df8a4fe0.py)
## Section 2.4. Train the value network and observe the MSE loss progress
Only run this cell if you do not have access to the pretrained models in the rl_for_games repository.
```
set_seed(seed=SEED)
game = OthelloGame(6)
vnet = ValueNetwork(game)
vnet.train(loaded_games)
```
---
# Section 3: Use a trained value network to play games
**Goal**: Learn how to use a value function to build a player that performs better than a random player.
**Exercise:**
* Sample random valid moves and use the value function to rank them
* Choose the best move as the action and play it
Show that doing so beats the random player
**Hint:** You might need to change the sign of the value based on the player
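The ranking-with-a-sign-flip step can be sketched on toy numbers (the actions and values below are made up):

```
# Toy values returned by a value network for three candidate next states
# (action -> v). The network scores each next state for the player to move
# there, i.e. the opponent, so we flip the sign before ranking.
values_for_opponent = {0: 0.3, 5: -0.7, 11: 0.1}

# The best move for us is the one whose negated value is largest.
best = max((-v, a) for a, v in values_for_opponent.items())
print(best)
```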
```
# @title Video 3: Play games using a value function
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1u54y1J7E6", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"HreQzd7iusI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 3: Value-based player
```
model_save_name = 'ValueNetwork.pth.tar'
path = F"/content/nma_rl_games/alpha-zero/pretrained_models/models/"
set_seed(seed=SEED)
game = OthelloGame(6)
vnet = ValueNetwork(game)
vnet.load_checkpoint(folder=path, filename=model_save_name)
class ValueBasedPlayer():
def __init__(self, game, vnet):
self.game = game
self.vnet = vnet
def play(self, board):
valids = self.game.getValidMoves(board, 1)
candidates = []
max_num_actions = 3
va = np.where(valids)[0]
va_list = va.tolist()
shuffle(va_list)
#################################################
## TODO for students: In the first part, return the next board state using getNextState(), then predict ##
## the value of the next state using the value network, and finally add the value and action as a tuple to the candidate list. ##
## Note that you need to reverse the sign of the value: since this is a two-player zero-sum game, the players ##
## alternate every turn, so the value the network returns is from the perspective of the player to move in the ##
## next state, i.e. the opponent! ##
# Fill out function and remove
raise NotImplementedError("Implement the value-based player")
#################################################
for a in va_list:
nextBoard, _ = ...
value = ...
candidates += ...
if len(candidates) == max_num_actions:
break
candidates.sort()
return candidates[0][1]
# playing games between a value-based player and a random player
set_seed(seed=SEED)
num_games = 20
player1 = ValueBasedPlayer(game, vnet).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
## Uncomment the code below to check your code!
# result = arena.playGames(num_games, verbose=False)
# print(f"\n\n{result}")
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_5e948309.py)
**Result of pitting a value-based player against a random player**
```
print(f"Number of games won by player1 = {result[0]}, "
f"Number of games won by player2 = {result[1]}, out of {num_games} games")
win_rate_player1 = result[0]/num_games # result[0] is the number of times that player 1 wins
print(f"\nWin rate for player1 over {num_games} games: {round(win_rate_player1*100, 1)}%")
```
```
Number of games won by player1 = 11, Number of games won by player2 = 9, out of 20 games
Win rate for player1 over 20 games: 55.0%
```
---
# Section 4: Train a policy network from expert game data
**Goal**: Learn how to train a policy network via supervised learning / behavioural cloning.
**Exercise**:
* Train a network to predict the next move in an expert dataset by maximizing the log likelihood of the next action.
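Maximizing the log likelihood of the expert's moves amounts to minimizing a negative log likelihood; on toy numbers (illustrative, not from the dataset):

```
import numpy as np

# Toy log-probabilities over 4 actions for a batch of 2 boards, with one-hot
# expert targets. Numbers are illustrative.
log_probs = np.log(np.array([[0.7, 0.1, 0.1, 0.1],
                             [0.25, 0.25, 0.25, 0.25]]))
targets = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])

# Negative log likelihood of the expert moves, averaged over the batch.
nll = -np.sum(targets * log_probs) / targets.shape[0]
print(round(float(nll), 4))
```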
```
# @title Video 4: Train a policy network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1tg411M7Rg", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"DVSJE2d9tNI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 4: Implement `PolicyNetwork`
```
class PolicyNetwork(NeuralNet):
def __init__(self, game):
self.nnet = OthelloNNet(game, args)
self.board_x, self.board_y = game.getBoardSize()
self.action_size = game.getActionSize()
self.nnet.to(args.device)
def train(self, games):
"""
examples: list of examples, each example is of form (board, pi, v)
"""
optimizer = optim.Adam(self.nnet.parameters())
for examples in games:
for epoch in range(args.epochs):
print('EPOCH ::: ' + str(epoch + 1))
self.nnet.train()
pi_losses = []
batch_count = int(len(examples) / args.batch_size)
t = tqdm(range(batch_count), desc='Training Policy Network')
for _ in t:
sample_ids = np.random.randint(len(examples), size=args.batch_size)
boards, pis, _ = list(zip(*[examples[i] for i in sample_ids]))
boards = torch.FloatTensor(np.array(boards).astype(np.float64))
target_pis = torch.FloatTensor(np.array(pis))
# predict
boards, target_pis = boards.contiguous().to(args.device), target_pis.contiguous().to(args.device)
#################################################
## TODO for students: ##
## 1. Compute the policy (pi) predicted by OthelloNNet() ##
## 2. Implement the loss_pi() function below and then use it to update the policy loss. ##
# Fill out function and remove
raise NotImplementedError("Compute the output")
#################################################
# compute output
out_pi, _ = ...
l_pi = ...
# record loss
pi_losses.append(l_pi.item())
t.set_postfix(Loss_pi=l_pi.item())
# compute gradient and do SGD step
optimizer.zero_grad()
l_pi.backward()
optimizer.step()
def predict(self, board):
"""
board: np array with board
"""
# timing
start = time.time()
# preparing input
board = torch.FloatTensor(board.astype(np.float64))
board = board.contiguous().to(args.device)
board = board.view(1, self.board_x, self.board_y)
self.nnet.eval()
with torch.no_grad():
pi,_ = self.nnet(board)
return torch.exp(pi).data.cpu().numpy()[0]
def loss_pi(self, targets, outputs):
#################################################
## TODO for students: To implement the loss function, please compute and return the negative log likelihood of targets. ##
# Fill out function and remove
raise NotImplementedError("Compute the loss")
#################################################
return ...
def save_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
filepath = os.path.join(folder, filename)
if not os.path.exists(folder):
print("Checkpoint Directory does not exist! Making directory {}".format(folder))
os.mkdir(folder)
else:
print("Checkpoint Directory exists! ")
torch.save({'state_dict': self.nnet.state_dict(),}, filepath)
print("Model saved! ")
def load_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
# https://github.com/pytorch/examples/blob/master/imagenet/main.py#L98
filepath = os.path.join(folder, filename)
if not os.path.exists(filepath):
raise FileNotFoundError("No model in path {}".format(filepath))
checkpoint = torch.load(filepath, map_location=args.device)
self.nnet.load_state_dict(checkpoint['state_dict'])
set_seed(seed=SEED)
game = OthelloGame(6)
## we use the same actor-critic network to output a policy
# pnet = PolicyNetwork(game)
# pnet.train(loaded_games)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_d4081c2d.py)
### Train the policy network
Only run this cell if you do not have access to the pretrained models in the rl_for_games repository.
```
set_seed(seed=SEED)
game = OthelloGame(6)
pnet = PolicyNetwork(game)
pnet.train(loaded_games)
```
---
# Section 5: Use a trained policy network to play games
**Goal**: Learn how to use a policy network to play games.
**Exercise:**
* Use the policy network to give probabilities for the next move.
* Build a player that takes the move given the maximum probability by the network.
* Compare this to another player that samples moves according to the probability distribution output by the network.
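The mask-and-renormalize step, together with the greedy and sampling choices, can be sketched on a toy policy (all numbers illustrative):

```
import numpy as np

rng = np.random.default_rng(0)
action_probs = np.array([0.5, 0.2, 0.2, 0.1])  # toy network output
valids = np.array([0, 1, 1, 0])                # toy valid-move mask

vap = action_probs * valids  # zero out invalid moves
vap = vap / np.sum(vap)      # renormalize over valid moves

greedy_action = int(np.argmax(vap))                 # most probable valid move
sampled_action = int(rng.choice(len(vap), p=vap))   # sample from distribution
print(vap, greedy_action)
```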
```
# @title Video 5: Play games using a policy network
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1DU4y1n7gD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"hhhBmSXIZGY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 5: Implement the `PolicyBasedPlayer`
```
model_save_name = 'PolicyNetwork.pth.tar'
path = F"/content/nma_rl_games/alpha-zero/pretrained_models/models/"
set_seed(seed=SEED)
game = OthelloGame(6)
pnet = PolicyNetwork(game)
pnet.load_checkpoint(folder=path, filename=model_save_name)
class PolicyBasedPlayer():
def __init__(self, game, pnet, greedy=True):
self.game = game
self.pnet = pnet
self.greedy = greedy
def play(self, board):
valids = self.game.getValidMoves(board, 1)
#################################################
## TODO for students: ##
## 1. Compute the action probabilities using the policy network pnet().
## 2. Mask invalid moves using the valids variable and the action probabilities computed above.
## 3. Compute the sum over valid actions and store it in sum_vap.
# Fill out function and remove
raise NotImplementedError("Define the play")
#################################################
action_probs = ...
vap = ... # masking invalid moves
sum_vap = ...
if sum_vap > 0:
vap /= sum_vap # renormalize
else:
# if all valid moves were masked we make all valid moves equally probable
print("All valid moves were masked, doing a workaround.")
vap = vap + valids
vap /= np.sum(vap)
if self.greedy:
# greedy policy player
a = np.where(vap == np.max(vap))[0][0]
else:
# sample-based policy player
a = np.random.choice(self.game.getActionSize(), p=vap)
return a
# playing games
set_seed(seed=SEED)
num_games = 20
player1 = PolicyBasedPlayer(game, pnet, greedy=True).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
## Uncomment below to test!
# result = arena.playGames(num_games, verbose=False)
# print(f"\n\n{result}")
# win_rate_player1 = result[0]/num_games
# print(f"\nWin rate for player1 over {num_games} games: {round(win_rate_player1*100, 1)}%")
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_83adff8a.py)
```
Win rate for player1 over 20 games: 80.0%
```
## Section 5.1. Comparing a player that samples from the action probabilities versus the policy player that returns the maximum probability
There is often randomness in the results because we run the players for only a small number of games (20, to limit compute time), so you might not get exactly the result shown here. To better measure the strength of the players, run more games!
```
set_seed(seed=SEED)
num_games = 20
game = OthelloGame(6)
player1 = PolicyBasedPlayer(game, pnet, greedy=False).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
result = arena.playGames(num_games, verbose=False)
print(f"\n\n{result}")
win_rate_player1 = result[0]/num_games
print(f"Win rate for player1 over {num_games} games: {round(win_rate_player1*100, 1)}%")
```
```
Win rate for player1 over 20 games: 95.0%
```
## Section 5.2. Compare greedy policy based player versus value based player
```
set_seed(seed=SEED)
num_games = 20
game = OthelloGame(6)
player1 = PolicyBasedPlayer(game, pnet).play
player2 = ValueBasedPlayer(game, vnet).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
result = arena.playGames(num_games, verbose=False)
print(f"\n\n{result}")
win_rate_player1 = result[0]/num_games
print(f"Win rate for player 1 over {num_games} games: {round(win_rate_player1*100, 1)}%")
```
```
Win rate for player 1 over 20 games: 50.0%
```
## Section 5.3. Compare greedy policy based player versus sample-based policy player
```
set_seed(seed=SEED)
num_games = 20
game = OthelloGame(6)
player1 = PolicyBasedPlayer(game, pnet).play # greedy player
player2 = PolicyBasedPlayer(game, pnet, greedy=False).play # sample-based player
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
result = arena.playGames(num_games, verbose=False)
print(f"\n\n{result}")
win_rate_player1 = result[0]/num_games
print(f"Win rate for player 1 over {num_games} games: {round(win_rate_player1*100, 1)}%")
```
```
Win rate for player 1 over 20 games: 50.0%
```
---
# Section 6: Plan using Monte Carlo rollouts
**Goal**:
Teach students the core idea behind using simulated rollouts to look ahead and estimate the value of actions.
**Exercise**:
* Build a loop to run Monte Carlo simulations using the policy network.
* Use this to obtain better estimates of the value of moves.
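The core idea can be sketched with a toy stand-in for the rollout (the random-walk "game" below is made up purely for illustration):

```
import numpy as np

rng = np.random.default_rng(0)

def rollout(state, max_depth=5):
    """Toy stand-in for a Monte Carlo rollout: take random steps and return a
    terminal outcome squashed into [-1, 1]."""
    for _ in range(max_depth):
        state += rng.choice([-1, 1])
    return np.tanh(state)

# A state's value estimate is the average outcome over many rollouts.
values = [rollout(0) for _ in range(200)]
print(round(float(np.mean(values)), 3))
```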
```
# @title Video 6: Play using Monte-Carlo rollouts
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1MM4y1T77C", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"EpoIjzytpxQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 6: `MonteCarlo`
```
class MonteCarlo():
def __init__(self, game, nnet, args):
self.game = game
self.nnet = nnet
self.args = args
self.Ps = {} # stores initial policy (returned by neural net)
self.Es = {} # stores game.getGameEnded ended for board s
# call this rollout
def simulate(self, canonicalBoard):
"""
This function performs one monte carlo rollout
"""
s = self.game.stringRepresentation(canonicalBoard)
init_start_state = s
temp_v = 0
isfirstAction = None
for i in range(self.args.maxDepth): # maxDepth
if s not in self.Es:
self.Es[s] = self.game.getGameEnded(canonicalBoard, 1)
if self.Es[s] != 0:
# terminal state
temp_v= -self.Es[s]
break
self.Ps[s], v = self.nnet.predict(canonicalBoard)
valids = self.game.getValidMoves(canonicalBoard, 1)
self.Ps[s] = self.Ps[s] * valids # masking invalid moves
sum_Ps_s = np.sum(self.Ps[s])
if sum_Ps_s > 0:
self.Ps[s] /= sum_Ps_s # renormalize
else:
# if all valid moves were masked make all valid moves equally probable
# NB! All valid moves may be masked if your NNet architecture is insufficient, you are overfitting, or something else.
# If you get dozens or hundreds of these messages, you should pay attention to your NNet and/or training process.
log.error("All valid moves were masked, doing a workaround.")
self.Ps[s] = self.Ps[s] + valids
self.Ps[s] /= np.sum(self.Ps[s])
#################################################
## TODO for students:
## 1. Take a random action.
## 2. Find the next state and the next player from the environment.
## 3. Get the canonical form of the next state.
# Fill out function and remove
raise NotImplementedError("Take the action, find the next state")
#################################################
a = ...
next_s, next_player = self.game.getNextState(..., ..., ...)
next_s = self.game.getCanonicalForm(..., ...)
s = self.game.stringRepresentation(next_s)
temp_v = v
return temp_v
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_2af8513a.py)
---
# Section 7: Use Monte Carlo simulations to play games
**Goal:**
Teach students how to use simple Monte Carlo planning to play games.
```
# @title Video 7: Play with planning
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Kg411M78Y", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"-KV8DvNjn5Q", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 7: Monte-Carlo simulations
* Incorporate Monte Carlo simulations into an agent.
* Run the resulting player versus the random, value-based, and policy-based players.
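The `np.argpartition` idiom used below to select the top-K actions can be checked on a toy policy vector:

```
import numpy as np

# Toy masked-and-renormalized policy over 5 actions; pick the K most
# probable actions with np.argpartition (the returned indices are unordered).
Ps = np.array([0.05, 0.4, 0.1, 0.3, 0.15])
K = 3
top_k_actions = np.argpartition(Ps, -K)[-K:]
print(sorted(top_k_actions.tolist()))
```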
```
# Load MC model from the repository
mc_model_save_name = 'MC.pth.tar'
path = F"/content/nma_rl_games/alpha-zero/pretrained_models/models/"
class MonteCarloBasedPlayer():
def __init__(self, game, nnet, args):
self.game = game
self.nnet = nnet
self.args = args
#################################################
## TODO for students: Instantiate the Monte Carlo class.
# Fill out function and remove
raise NotImplementedError("Use Monte Carlo!")
#################################################
self.mc = ...
self.K = self.args.mc_topk
def play(self, canonicalBoard):
self.qsa = []
s = self.game.stringRepresentation(canonicalBoard)
Ps, v = self.nnet.predict(canonicalBoard)
valids = self.game.getValidMoves(canonicalBoard, 1)
Ps = Ps * valids # masking invalid moves
sum_Ps_s = np.sum(Ps)
if sum_Ps_s > 0:
Ps /= sum_Ps_s # renormalize
else:
# if all valid moves were masked make all valid moves equally probable
# NB! All valid moves may be masked if your NNet architecture is insufficient, you are overfitting, or something else.
# If you get dozens or hundreds of these messages, you should pay attention to your NNet and/or training process.
log = logging.getLogger(__name__)
log.error("All valid moves were masked, doing a workaround.")
Ps = Ps + valids
Ps /= np.sum(Ps)
num_valid_actions = np.shape(np.nonzero(Ps))[1]
if num_valid_actions < self.K:
top_k_actions = np.argpartition(Ps,-num_valid_actions)[-num_valid_actions:]
else:
top_k_actions = np.argpartition(Ps,-self.K)[-self.K:] # to get actions that belongs to top k prob
#################################################
## TODO for students: For each action in the top-k actions:
## 1. Get the next state using the getNextState() function (implemented in the OthelloGame() class in Section 1).
## 2. Get the canonical form of that next state.
# Fill out function and remove
raise NotImplementedError("Loop for the top actions")
#################################################
for action in ...:
next_s, next_player = self.game.getNextState(..., ..., ...)
next_s = self.game.getCanonicalForm(..., ...)
values = []
# do some rollouts
for rollout in range(self.args.numMCsims):
value = self.mc.simulate(canonicalBoard)
values.append(value)
# average out values
avg_value = np.mean(values)
self.qsa.append((avg_value, action))
self.qsa.sort(key=lambda a: a[0])
self.qsa.reverse()
best_action = self.qsa[0][1]
return best_action
def getActionProb(self, canonicalBoard, temp=1):
if self.game.getGameEnded(canonicalBoard, 1) != 0:
return np.zeros((self.game.getActionSize()))
else:
action_probs = np.zeros((self.game.getActionSize()))
best_action = self.play(canonicalBoard)
action_probs[best_action] = 1
return action_probs
set_seed(seed=SEED)
game = OthelloGame(6)
rp = RandomPlayer(game).play # all players
num_games = 20 # Feel free to change this number
n1 = NNet(game) # nNet players
n1.load_checkpoint(folder=path, filename=mc_model_save_name)
args1 = dotdict({'numMCsims': 10, 'maxRollouts':5, 'maxDepth':5, 'mc_topk': 3})
## Uncomment below to check Monte Carlo agent!
# mc1 = MonteCarloBasedPlayer(game, n1, args1)
# n1p = lambda x: np.argmax(mc1.getActionProb(x))
# arena = Arena.Arena(n1p, rp, game, display=OthelloGame.display)
# MC_result = arena.playGames(num_games, verbose=False)
# print(f"\n\n{MC_result}")
# print(f"\nNumber of games won by player1 = {MC_result[0]}, "
# f"number of games won by player2 = {MC_result[1]}, out of {num_games} games")
# win_rate_player1 = MC_result[0]/num_games
# print(f"\nWin rate for player1 over {num_games} games: {round(win_rate_player1*100, 1)}%")
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_9d813162.py)
```
Number of games won by player1 = 11, number of games won by player2 = 9, out of 20 games
Win rate for player1 over 20 games: 55.0%
```
---
# Section 8: Plan using Monte Carlo Tree Search
**Goal:**
Teach students to understand the core ideas behind Monte Carlo Tree Search.
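The selection rule at the heart of this planner is an upper confidence bound; the sketch below uses the standard AlphaZero-style PUCT form (compare it against the formula given in Video 8; `c_puct` and the toy statistics are illustrative):

```
import math

# PUCT-style upper confidence bound: exploitation term Q plus an exploration
# bonus that grows with the prior P and shrinks as the edge gets visited.
def ucb(Q, P, Ns, Nsa, c_puct=1.0):
    return Q + c_puct * P * math.sqrt(Ns) / (1 + Nsa)

# A well-visited edge with mediocre Q vs. an unvisited edge with a strong prior:
u_visited = ucb(Q=0.1, P=0.2, Ns=25, Nsa=4)
u_unvisited = ucb(Q=0.0, P=0.5, Ns=25, Nsa=0)
print(u_visited, u_unvisited)
```

Note how the unvisited edge wins the comparison: the `1 + Nsa` denominator pushes the search toward actions it has not tried yet.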
```
# @title Video 8: Plan with MCTS
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV11v411n7gg", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"tKBcMtoEzQA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 8: MCTS planner
* Plug together pre-built Selection, Expansion & Backpropagation code to complete an MCTS planner.
* Deploy the MCTS planner to understand an interesting position, producing value estimates and action counts.
```
class MCTS():
"""
This class handles the MCTS tree.
"""
def __init__(self, game, nnet, args):
self.game = game
self.nnet = nnet
self.args = args
self.Qsa = {} # stores Q values for s,a (as defined in the paper)
self.Nsa = {} # stores #times edge s,a was visited
self.Ns = {} # stores #times board s was visited
self.Ps = {} # stores initial policy (returned by neural net)
self.Es = {} # stores game.getGameEnded ended for board s
self.Vs = {} # stores game.getValidMoves for board s
def search(self, canonicalBoard):
"""
This function performs one iteration of MCTS. It is recursively called
till a leaf node is found. The action chosen at each node is one that
has the maximum upper confidence bound as in the paper.
Once a leaf node is found, the neural network is called to return an
initial policy P and a value v for the state. This value is propagated
up the search path. In case the leaf node is a terminal state, the
outcome is propagated up the search path. The values of Ns, Nsa, Qsa are
updated.
NOTE: the return values are the negative of the value of the current
state. This is done since v is in [-1,1] and if v is the value of a
state for the current player, then its value is -v for the other player.
Returns:
v: the negative of the value of the current canonicalBoard
"""
s = self.game.stringRepresentation(canonicalBoard)
if s not in self.Es:
self.Es[s] = self.game.getGameEnded(canonicalBoard, 1)
if self.Es[s] != 0:
# terminal node
return -self.Es[s]
if s not in self.Ps:
# leaf node
self.Ps[s], v = self.nnet.predict(canonicalBoard)
valids = self.game.getValidMoves(canonicalBoard, 1)
self.Ps[s] = self.Ps[s] * valids # masking invalid moves
sum_Ps_s = np.sum(self.Ps[s])
if sum_Ps_s > 0:
self.Ps[s] /= sum_Ps_s # renormalize
else:
# if all valid moves were masked make all valid moves equally probable
# NB! All valid moves may be masked if your NNet architecture is insufficient, you are overfitting, or something else.
# If you get dozens or hundreds of these messages, you should pay attention to your NNet and/or training process.
log = logging.getLogger(__name__)
log.error("All valid moves were masked, doing a workaround.")
self.Ps[s] = self.Ps[s] + valids
self.Ps[s] /= np.sum(self.Ps[s])
self.Vs[s] = valids
self.Ns[s] = 0
return -v
valids = self.Vs[s]
cur_best = -float('inf')
best_act = -1
#################################################
## TODO for students:
## Implement the highest upper confidence bound depending whether we observed the state-action pair which is stored in self.Qsa[(s, a)]. You can find the formula in the slide 52 in video 8 above.
# Fill out function and remove
raise NotImplementedError("Complete the for loop")
#################################################
# pick the action with the highest upper confidence bound
for a in range(self.game.getActionSize()):
if valids[a]:
if (s, a) in self.Qsa:
u = ... + ... * ... * math.sqrt(...) / (1 + ...)
else:
u = ... * ... * math.sqrt(... + 1e-8)
if u > cur_best:
cur_best = u
best_act = a
a = best_act
next_s, next_player = self.game.getNextState(canonicalBoard, 1, a)
next_s = self.game.getCanonicalForm(next_s, next_player)
v = self.search(next_s)
if (s, a) in self.Qsa:
self.Qsa[(s, a)] = (self.Nsa[(s, a)] * self.Qsa[(s, a)] + v) / (self.Nsa[(s, a)] + 1)
self.Nsa[(s, a)] += 1
else:
self.Qsa[(s, a)] = v
self.Nsa[(s, a)] = 1
self.Ns[s] += 1
return -v
def getNsa(self):
return self.Nsa
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_83be26c4.py)
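For reference after attempting the exercise, the upper confidence bound used in AlphaZero-style MCTS commonly takes the PUCT form sketched below. This is a hedged, standalone illustration of that formula (the argument names mirror `Qsa`, `Ps`, `Ns`, `Nsa`, and `cpuct` above), not necessarily the exact expression the course solution uses.

```python
import math

def puct(Qsa, Psa, Ns, Nsa, cpuct, eps=1e-8):
    """One common AlphaZero-style upper confidence bound (PUCT):
    the exploitation term Qsa plus an exploration bonus that grows with
    the parent visit count Ns and shrinks with the edge visit count Nsa."""
    if Nsa > 0:
        return Qsa + cpuct * Psa * math.sqrt(Ns) / (1 + Nsa)
    # Unvisited edge: fall back on the prior; eps guards against sqrt(0)
    return cpuct * Psa * math.sqrt(Ns + eps)
```

Under this form, heavily visited edges see their bonus shrink, pushing the search toward less-explored actions with high prior probability.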
---
# Section 9: Use MCTS to play games
**Goal:**
Teach the students how to use the results of an MCTS to play games.
**Exercise:**
* Plug the MCTS planner into an agent.
* Play games against other agents.
* Explore the contributions of prior network, value function, number of simulations / time to play, and explore/exploit parameters.
```
# @title Video 9: Play with MCTS
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1ng411M7Gz", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ejG3kN_leRk", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 9: Agent that uses an MCTS planner
* Plug the MCTS planner into an agent.
* Play games against other agents.
* Explore the contributions of prior network, value function, number of simulations / time to play, and explore/exploit parameters.
```
# Load MCTS model from the repository
mcts_model_save_name = 'MCTS.pth.tar'
path = F"/content/nma_rl_games/alpha-zero/pretrained_models/models/"
class MonteCarloTreeSearchBasedPlayer():
def __init__(self, game, nnet, args):
self.game = game
self.nnet = nnet
self.args = args
self.mcts = MCTS(game, nnet, args)
def play(self, canonicalBoard, temp=1):
for i in range(self.args.numMCTSSims):
#################################################
## TODO for students:
# Run MCTS search function.
# Fill out function and remove
raise NotImplementedError("Plug the planner")
#################################################
...
s = self.game.stringRepresentation(canonicalBoard)
#################################################
## TODO for students:
# Call the Nsa function from MCTS class and store it in the self.Nsa
# Fill out function and remove
raise NotImplementedError("Compute Nsa (number of times edge s,a was visited)")
#################################################
self.Nsa = ...
self.counts = [self.Nsa[(s, a)] if (s, a) in self.Nsa else 0 for a in range(self.game.getActionSize())]
if temp == 0:
bestAs = np.array(np.argwhere(self.counts == np.max(self.counts))).flatten()
bestA = np.random.choice(bestAs)
probs = [0] * len(self.counts)
probs[bestA] = 1
return probs
self.counts = [x ** (1. / temp) for x in self.counts]
self.counts_sum = float(sum(self.counts))
probs = [x / self.counts_sum for x in self.counts]
return np.argmax(probs)
def getActionProb(self, canonicalBoard, temp=1):
action_probs = np.zeros((self.game.getActionSize()))
best_action = self.play(canonicalBoard)
action_probs[best_action] = 1
return action_probs
set_seed(seed=SEED)
game = OthelloGame(6)
rp = RandomPlayer(game).play # all players
num_games = 20 # games
n1 = NNet(game) # nnet players
n1.load_checkpoint(folder=path, filename=mcts_model_save_name)
args1 = dotdict({'numMCTSSims': 50, 'cpuct':1.0})
## Uncomment below to check your agent!
# mcts1 = MonteCarloTreeSearchBasedPlayer(game, n1, args1)
# n1p = lambda x: np.argmax(mcts1.getActionProb(x, temp=0))
# arena = Arena.Arena(n1p, rp, game, display=OthelloGame.display)
# MCTS_result = arena.playGames(num_games, verbose=False)
# print(f"\n\n{MCTS_result}")
# print(f"\nNumber of games won by player1 = {MCTS_result[0]}, "
# f"number of games won by player2 = {MCTS_result[1]}, out of {num_games} games")
# win_rate_player1 = MCTS_result[0]/num_games
# print(f"\nWin rate for player1 over {num_games} games: {round(win_rate_player1*100, 1)}%")
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D3_ReinforcementLearningForGames/solutions/W3D3_Tutorial1_Solution_9dd0dfb6.py)
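The `temp` parameter in `play` above turns visit counts into action probabilities. A minimal pure-NumPy sketch of that count-to-probability step, separated from the class for clarity (the function name is illustrative, not part of the course API):

```python
import numpy as np

def counts_to_probs(counts, temp=1.0):
    """Convert MCTS visit counts into a probability distribution.
    temp == 0 deterministically picks a most-visited action;
    larger temp flattens the distribution toward uniform."""
    counts = np.asarray(counts, dtype=np.float64)
    if temp == 0:
        probs = np.zeros_like(counts)
        probs[np.argmax(counts)] = 1.0
        return probs
    scaled = counts ** (1.0 / temp)
    return scaled / scaled.sum()
```

With `temp=0` this reproduces the greedy branch of `play`; with `temp=1` the probabilities are simply the normalized visit counts.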
```
Number of games won by player1 = 19, number of games won by player2 = 1, out of 20 games
Win rate for player1 over 20 games: 95.0%
```
---
# Section 10: Ethical aspects
```
# @title Video 10: Unstoppable opponents
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19M4y1K75Z", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"4LKZwDP_Qac", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
---
# Summary
In this tutorial, you learned how to implement a game loop and improve on the performance of a random player. More specifically, you are now able to understand the format of two-player games. We learned about value-based and policy-based players, and we compared them with the MCTS method.
```
# @title Video 11: Outro
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1w64y167qd", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"8JcHw-2cwtM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Better performance with tf.function
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/customization/performance"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/performance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/performance.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/customization/performance.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In TensorFlow 2.0, eager execution is enabled by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability.
To get peak performance and to make your model deployable anywhere, use `tf.function` to build computation graphs out of your programs.
Thanks to AutoGraph, a surprising amount of Python code just works with tf.function, but there are still pitfalls to be wary of.
The main takeaways and recommendations are:
- Don't rely on Python side effects like object mutation or list appends.
- tf.function works best with TensorFlow ops, rather than NumPy ops or Python primitives.
- When in doubt, use the `for x in y` idiom.
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import contextlib
# A helper function to demonstrate the kinds of errors you might encounter
@contextlib.contextmanager
def assert_raises(error_class):
try:
yield
except error_class as e:
print('Caught expected exception \n {}: {}'.format(error_class, e))
except Exception as e:
print('Got unexpected exception \n {}: {}'.format(type(e), e))
else:
raise Exception('Expected {} to be raised but no error was raised!'.format(
error_class))
```
A `tf.function` you define is just like a core TensorFlow operation: you can execute it eagerly, use it inside a graph, and compute gradients of it.
```
# functions behave like ops
@tf.function
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]
# functions can compute gradients
@tf.function
def add(a, b):
return a + b
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 1.0)
tape.gradient(result, v)
# you can use functions inside functions
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
```
## Tracing and polymorphism
Python's dynamic typing means that you can call functions with a variety of argument types, and Python will do something different in each scenario.
On the other hand, TensorFlow graphs require static dtypes and shape dimensions. `tf.function` bridges this gap by retracing the function when necessary to generate the correct graphs.
Let's call the function with arguments of different types and see what happens.
```
# Functions are polymorphic
@tf.function
def double(a):
print("Tracing with", a)
return a + a
print(double(tf.constant(1)))
print()
print(double(tf.constant(1.1)))
print()
print(double(tf.constant("a")))
print()
```
To control the tracing behavior, you can use the following techniques:
- Create a new `tf.function`. Separate `tf.function` objects are guaranteed not to share traces.
- Use the `get_concrete_function` method to get a specific trace.
- Specify an `input_signature` when calling `tf.function` so that tracing happens only once.
```
print("Obtaining concrete trace")
double_strings = double.get_concrete_function(tf.TensorSpec(shape=None, dtype=tf.string))
print("Executing traced function")
print(double_strings(tf.constant("a")))
print(double_strings(a=tf.constant("b")))
print("Using a concrete trace with incompatible types will throw an error")
with assert_raises(tf.errors.InvalidArgumentError):
double_strings(tf.constant(1))
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def next_collatz(x):
print("Tracing with", x)
return tf.where(tf.equal(x % 2, 0), x // 2, 3 * x + 1)
print(next_collatz(tf.constant([1, 2])))
# This fails because we specified a 1-D tensor in the input signature
with assert_raises(ValueError):
next_collatz(tf.constant([[1, 2], [3, 4]]))
```
## When is a function retraced?
A polymorphic `tf.function` keeps a cache of the concrete functions generated by tracing. The cache keys are effectively tuples of keys generated from the function's args and kwargs. The key generated for a `tf.Tensor` argument is its shape and dtype. The key generated for a Python primitive is its value. For all other Python types, the key is based on the object's `id()`, so that methods are traced independently for each instance of a class. In the future, TensorFlow may add more sophisticated caching for Python objects that can be safely converted to tensors.
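The cache-key scheme described above can be sketched in plain Python. This toy `trace_key` is only an illustration of the keying rules as stated here, not TensorFlow's actual implementation:

```python
def trace_key(arg):
    """Toy model of tf.function's retracing cache key for one argument:
    tensor-like values key on (shape, dtype), Python primitives key on
    their value, and everything else keys on object identity."""
    if hasattr(arg, "shape") and hasattr(arg, "dtype"):  # tensor-like
        return ("tensor", tuple(arg.shape), str(arg.dtype))
    if isinstance(arg, (int, float, bool, str)):  # Python primitive
        return ("primitive", type(arg).__name__, arg)
    return ("object", id(arg))  # any other Python object
```

Under this scheme, two tensors of the same shape and dtype share a trace, while the Python ints `10` and `20` produce two different keys, which is why calls like `train(num_steps=10)` and `train(num_steps=20)` each trigger a trace.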
## Python args or Tensor args?
Often, Python primitive arguments are used to control hyperparameters and graph construction, for example `num_layers=10`, `training=True`, or `nonlinearity='relu'`. So if such a Python argument changes, it makes sense that the graph has to be retraced.
However, it's possible for a Python argument to not be controlling graph construction. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph dynamically unrolls. Despite the multiple traces, the generated graphs are actually identical, so this is a bit inefficient.
```
def train_one_step():
pass
@tf.function
def train(num_steps):
print("Tracing with num_steps = {}".format(num_steps))
for _ in tf.range(num_steps):
train_one_step()
train(num_steps=10)
train(num_steps=20)
```
The simple workaround here is to cast your arguments to tensors if they do not affect the shape of the generated graph.
```
train(num_steps=tf.constant(10))
train(num_steps=tf.constant(20))
```
## Side effects in `tf.function`
In general, Python side effects (like printing or mutating objects) only happen during tracing. So how can you reliably trigger side effects from a `tf.function`?
The general rule of thumb is to use Python side effects only to debug your traces. Otherwise, TensorFlow ops like `tf.Variable.assign`, `tf.print`, and `tf.summary` are the best way to ensure your code is executed by the TensorFlow runtime on every call, both while tracing and during execution. In general, a functional style yields the best results.
```
@tf.function
def f(x):
print("Traced with", x)
tf.print("Executed with", x)
f(1)
f(1)
f(2)
```
If you would like to execute Python code on every invocation of a `tf.function`, `tf.py_function` is the right tool. The drawbacks of `tf.py_function` are that it's not portable, not particularly performant, and does not work well in distributed (multi-GPU, TPU) setups. Also, since `tf.py_function` has to be wired into the graph, it casts all inputs and outputs to tensors.
```
external_list = []
def side_effect(x):
print('Python side effect')
external_list.append(x)
@tf.function
def f(x):
tf.py_function(side_effect, inp=[x], Tout=[])
f(1)
f(1)
f(1)
assert len(external_list) == 3
# .numpy() call required because py_function casts 1 to tf.constant(1)
assert external_list[0].numpy() == 1
```
## Beware of Python state
Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. While these constructs generally work as expected in Eager mode, the tracing behavior means that unexpected things can happen inside a `tf.function`.
As one example, advancing an iterator is a Python side effect and therefore only happens during tracing.
```
external_var = tf.Variable(0)
@tf.function
def buggy_consume_next(iterator):
external_var.assign_add(next(iterator))
tf.print("Value of external_var:", external_var)
iterator = iter([0, 1, 2, 3])
buggy_consume_next(iterator)
# The following calls reuse the first value from the iterator, rather than consuming the next value
buggy_consume_next(iterator)
buggy_consume_next(iterator)
```
If an iterator is generated and consumed entirely within the `tf.function`, it should work correctly. However, the entire iterator is then traced, which can lead to a giant graph. This may be what you want. But if you're training on a large in-memory dataset represented as a Python list, this can generate a very large graph, and `tf.function` is unlikely to yield a speedup.
If you want to iterate over Python data, the safest way is to wrap it in a tf.data.Dataset and use the `for x in y` idiom. AutoGraph has special support for safely converting `for` loops when `y` is a tensor or a tf.data.Dataset.
```
def measure_graph_size(f, *args):
g = f.get_concrete_function(*args).graph
print("{}({}) contains {} nodes in its graph".format(
f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))
@tf.function
def train(dataset):
loss = tf.constant(0)
for x, y in dataset:
loss += tf.abs(y - x) # dummy computation
return loss
small_data = [(1, 1)] * 2
big_data = [(1, 1)] * 10
measure_graph_size(train, small_data)
measure_graph_size(train, big_data)
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: big_data, (tf.int32, tf.int32)))
```
When wrapping Python/Numpy data in a Dataset, be mindful of the difference between `tf.data.Dataset.from_generator` and `tf.data.Dataset.from_tensors`. The former keeps the data in Python and fetches it via `tf.py_function`, which can have performance implications, whereas the latter bundles a copy of the data as one large `tf.constant()` node in the graph, which can have memory implications.
Reading data from files, for example via TFRecordDataset or CsvDataset, is the most efficient way to consume data, as TensorFlow itself can manage the asynchronous loading and prefetching of the data without involving Python.
## Automatic control dependencies
A very appealing property of functions as a programming model, compared with a general dataflow graph, is that functions can give the runtime more information about the intended behavior of the code.
For example, when writing code that reads and writes the same variables multiple times, a dataflow graph does not naturally encode the originally intended order of operations. Inside a `tf.function`, ambiguities in execution order are resolved by referring to the statement order in the original Python code. This way, the order of stateful operations in a `tf.function` replicates the semantics of eager execution.
This means there is no need to add manual control dependencies; `tf.function` is smart enough to add the minimal set of necessary and sufficient control dependencies for your code to run correctly.
```
# Automatic control dependencies
a = tf.Variable(1.0)
b = tf.Variable(2.0)
@tf.function
def f(x, y):
a.assign(y * b)
b.assign_add(x * a)
return a + b
f(1.0, 2.0) # 10.0
```
## Variables
We can use the same idea of leveraging the intended execution order of the code to make variable creation and use very easy in `tf.function`. There is one very important caveat, though: with variables it is possible to write code that behaves differently in eager mode and graph mode.
Specifically, this happens when you create a new variable on each call. Due to tracing semantics, `tf.function` will reuse the same variable on each call, while eager mode creates a new variable on every call. To guard against this mistake, `tf.function` raises an error when it detects dangerous variable-creation behavior.
```
@tf.function
def f(x):
v = tf.Variable(1.0)
v.assign_add(x)
return v
with assert_raises(ValueError):
f(1.0)
# Non-ambiguous code is fine, though
v = tf.Variable(1.0)
@tf.function
def f(x):
return v.assign_add(x)
print(f(1.0)) # 2.0
print(f(2.0)) # 4.0
# You can create variables inside a tf.function as long as you can guarantee
# they are only created the first time the function is executed
class C: pass
obj = C(); obj.v = None
@tf.function
def g(x):
if obj.v is None:
obj.v = tf.Variable(1.0)
return obj.v.assign_add(x)
print(g(1.0)) # 2.0
print(g(2.0)) # 4.0
# Variable initializers can depend on function arguments and on the values of other variables.
# The correct initialization order is found with the same method used to generate control dependencies.
state = []
@tf.function
def fn(x):
if not state:
state.append(tf.Variable(2.0 * x))
state.append(tf.Variable(state[0] * 3.0))
return state[0] * x * state[1]
print(fn(tf.constant(1.0)))
print(fn(tf.constant(3.0)))
```
# Using AutoGraph
The [autograph](https://www.tensorflow.org/guide/function) library is fully integrated with `tf.function`, and it lets you write conditionals and loops that are executed dynamically inside the graph.
`tf.cond` and `tf.while_loop` continue to work with `tf.function`, but code with control flow is often easier to write and understand when written in an imperative style.
```
# A simple loop
@tf.function
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
f(tf.random.uniform([5]))
# If you're curious, you can inspect the code AutoGraph generates.
# It feels a bit like reading assembly language, though.
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
print(tf.autograph.to_code(f))
```
## AutoGraph: conditionals
AutoGraph converts `if` statements into the equivalent `tf.cond` calls.
This substitution is made if the condition is a tensor. Otherwise, the conditional is executed during tracing.
```
def test_tf_cond(f, *args):
g = f.get_concrete_function(*args).graph
if any(node.name == 'cond' for node in g.as_graph_def().node):
print("{}({}) uses tf.cond.".format(
f.__name__, ', '.join(map(str, args))))
else:
print("{}({}) executes normally.".format(
f.__name__, ', '.join(map(str, args))))
@tf.function
def hyperparam_cond(x, training=True):
if training:
x = tf.nn.dropout(x, rate=0.5)
return x
@tf.function
def maybe_tensor_cond(x):
if x < 0:
x = -x
return x
test_tf_cond(hyperparam_cond, tf.ones([1], dtype=tf.float32))
test_tf_cond(maybe_tensor_cond, tf.constant(-1))
test_tf_cond(maybe_tensor_cond, -1)
```
`tf.cond` has a number of subtleties to be aware of.
- It works by tracing both sides of the conditional, and then choosing the appropriate branch at runtime depending on the condition. Tracing both sides can result in unexpected execution of Python code.
- If one branch creates a tensor that is used downstream, the other branch must also create that tensor.
```
@tf.function
def f():
x = tf.constant(0)
if tf.constant(True):
x = x + 1
print("Tracing `then` branch")
else:
x = x - 1
print("Tracing `else` branch")
return x
f()
@tf.function
def f():
if tf.constant(True):
x = tf.ones([3, 3])
return x
# An error is raised because `x` must be defined in both branches
with assert_raises(ValueError):
f()
```
## AutoGraph and loops
AutoGraph has a few simple rules for converting loops.
- `for`: convert if the iterable is a tensor
- `while`: convert if the while condition depends on a tensor
If a loop is converted, it will be dynamically unrolled with `tf.while_loop`, or, in the special case of `for x in tf.data.Dataset`, transformed into `tf.data.Dataset.reduce`.
If a loop is not converted, it is statically unrolled.
```
def test_dynamically_unrolled(f, *args):
g = f.get_concrete_function(*args).graph
if any(node.name == 'while' for node in g.as_graph_def().node):
print("{}({}) uses tf.while_loop.".format(
f.__name__, ', '.join(map(str, args))))
elif any(node.name == 'ReduceDataset' for node in g.as_graph_def().node):
print("{}({}) uses tf.data.Dataset.reduce.".format(
f.__name__, ', '.join(map(str, args))))
else:
print("{}({}) gets unrolled.".format(
f.__name__, ', '.join(map(str, args))))
@tf.function
def for_in_range():
x = 0
for i in range(5):
x += i
return x
test_dynamically_unrolled(for_in_range)
@tf.function
def for_in_tfrange():
x = tf.constant(0, dtype=tf.int32)
for i in tf.range(5):
x += i
return x
test_dynamically_unrolled(for_in_tfrange)
@tf.function
def for_in_tfdataset():
x = tf.constant(0, dtype=tf.int64)
for i in tf.data.Dataset.range(5):
x += i
return x
test_dynamically_unrolled(for_in_tfdataset)
@tf.function
def while_py_cond():
x = 5
while x > 0:
x -= 1
return x
test_dynamically_unrolled(while_py_cond)
@tf.function
def while_tf_cond():
x = tf.constant(5)
while x > 0:
x -= 1
return x
test_dynamically_unrolled(while_tf_cond)
```
If a loop contains a `break` or an early `return` clause that depends on a tensor, the top-level condition or iterable must also be a tensor.
Compare the following examples.
```
@tf.function
def while_py_true_py_break(x):
while True: # py true
if x == 0: # py break
break
x -= 1
return x
test_dynamically_unrolled(while_py_true_py_break, 5)
@tf.function
def buggy_while_py_true_tf_break(x):
while True: # py true
if tf.equal(x, 0): # tf break
break
x -= 1
return x
with assert_raises(TypeError):
test_dynamically_unrolled(buggy_while_py_true_tf_break, 5)
@tf.function
def while_tf_true_tf_break(x):
while tf.constant(True): # tf true
if x == 0: # py break
break
x -= 1
return x
test_dynamically_unrolled(while_tf_true_tf_break, 5)
@tf.function
def buggy_py_for_tf_break():
x = 0
for i in range(5): # py for
if tf.equal(i, 3): # tf break
break
x += i
return x
with assert_raises(TypeError):
test_dynamically_unrolled(buggy_py_for_tf_break)
@tf.function
def tf_for_py_break():
x = 0
for i in tf.range(5): # tf for
if i == 3: # py break
break
x += i
return x
test_dynamically_unrolled(tf_for_py_break)
```
To accumulate results from a dynamically unrolled loop, you may be tempted to use `tf.TensorArray`.
```
batch_size = 2
seq_len = 3
feature_size = 4
def rnn_step(inp, state):
return inp + state
@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
max_seq_len = input_data.shape[0]
states = tf.TensorArray(tf.float32, size=max_seq_len)
state = initial_state
for i in tf.range(max_seq_len):
state = rnn_step(input_data[i], state)
states = states.write(i, state)
return tf.transpose(states.stack(), [1, 0, 2])
dynamic_rnn(rnn_step,
tf.random.uniform([batch_size, seq_len, feature_size]),
tf.zeros([batch_size, feature_size]))
```
As with `tf.cond`, `tf.while_loop` comes with a number of subtleties to be aware of.
- Since a loop can execute zero times, all tensors used downstream of the while_loop must be initialized before the loop.
- The shape and dtype of every loop variable must stay consistent across iterations.
```
@tf.function
def buggy_loop_var_uninitialized():
for i in tf.range(3):
x = i
return x
with assert_raises(ValueError):
buggy_loop_var_uninitialized()
@tf.function
def f():
x = tf.constant(0)
for i in tf.range(3):
x = i
return x
f()
@tf.function
def buggy_loop_type_changes():
x = tf.constant(0, dtype=tf.float32)
for i in tf.range(3): # extracts tf.int32 tensors one at a time…
x = i
return x
with assert_raises(tf.errors.InvalidArgumentError):
buggy_loop_type_changes()
@tf.function
def buggy_concat():
x = tf.ones([0, 10])
for i in tf.range(5):
x = tf.concat([x, tf.ones([1, 10])], axis=0)
return x
with assert_raises(ValueError):
buggy_concat()
@tf.function
def concat_with_padding():
x = tf.zeros([5, 10])
for i in tf.range(5):
x = tf.concat([x[:i], tf.ones([1, 10]), tf.zeros([4-i, 10])], axis=0)
x.set_shape([5, 10])
return x
concat_with_padding()
```
```
from joblib import dump, load
import numpy as np
import cv2
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
import matplotlib.pyplot as plt
#set the directory for custom scripts
import sys
sys.path.append('/Users/macbook/Box/git_hub/flask3/scripts/')
#import custom scripts
import sql_con
from sql_con import df_from_query
import hsv_shift as hsv
#import the clustered ds swatches
hsv_knn_chroma = load('/Users/macbook/Box/git_hub/Insight_Project_clean/models/ds_h_chroma.joblib')
hsv_knn_neutral = load('/Users/macbook/Box/git_hub/Insight_Project_clean/models/ds_h_neutrals.joblib')
img = cv2.imread('/Users/macbook/Box/insight_project_data/test_image/tucan2.jpg')
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_HSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
img_sml = cv2.resize(img_HSV,None,fx=0.25,fy=0.25,interpolation=cv2.INTER_AREA)
plt.imshow(img_sml);
pixels = np.float32(img_sml.reshape(-1, 3))
pixels.shape
#custom function imports image and converts to hsv and 1-D pixel array
pixels = hsv.import_convert_pixelize('/Users/macbook/Box/insight_project_data/test_image/tucan2.jpg')
pixels.shape
```
# convert the image to pixels and separate out the neutrals from the chroma
```
#custom function takes the pixel array and splits it into chroma and neutrals and returns two dataframes
#takes longer with larger images
shifted_colors, shifted_neutrals = hsv.shift_h_split(pixels, .25, .25)
shifted_colors
```
# cluster the colors using k-means and return the values
```
X_pixels = shifted_colors[['h']]
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=8, random_state=42, algorithm = 'full')
kmeans.fit(X_pixels)
image2show = kmeans.cluster_centers_[kmeans.labels_]
kmeans_df = pd.DataFrame(image2show, columns=['h'])
kmeans_df['label'] = kmeans.labels_
```
## Take the clustered colors and match them to the pigment KNN model
```
X = kmeans_df[['h']]
predict_colors = hsv_knn_chroma.predict(X)
colors2 = np.array(np.unique(predict_colors, return_counts=True)).T
colors2df = pd.DataFrame(colors2, columns = ['name', 'count'])
names = colors2df[colors2df['count']>1000].sort_values(by=['count'], ascending = False)
names
type(names)
```
## cluster the neutrals
```
X_pixels_n = shifted_neutrals[['h']]
from sklearn.cluster import KMeans
kmeans_n = KMeans(n_clusters=2, random_state=42, algorithm = 'full')
kmeans_n.fit(X_pixels_n)
image2show_n = kmeans_n.cluster_centers_[kmeans_n.labels_]
kmeans_df_n = pd.DataFrame(image2show_n, columns=['h'])
kmeans_df_n['label'] = kmeans_n.labels_
kmeans_df_n
```
# Match the clustered neutrals to the KNN model
```
X_n = kmeans_df_n[['h']]
predict_neutrals = hsv_knn_neutral.predict(X_n)
neutrals = np.array(np.unique(predict_neutrals, return_counts=True)).T
neutrals_df = pd.DataFrame(neutrals, columns = ['name', 'count'])
names_n = neutrals_df.sort_values(by=['count'], ascending = False)
names_n
```
## get information from SQL
```
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2
dbname = 'colors'
username = 'macbook'
pswd = 'DarwinRulez!1'
engine = create_engine('postgresql://%s:%s@localhost/%s'%(username,pswd,dbname))
print('postgresql://%s:%s@localhost/%s'%(username,pswd,dbname))
print(engine.url)
con = None
con = psycopg2.connect(database = dbname, user = username, host='localhost', password=pswd)
def sql_query_from_list(list):
test = pd.DataFrame()
list_param = []
for i in range(0,len(list)):
color = list[i]
sql_param = """SELECT * FROM web_data
WHERE name = %(color)s"""
param = pd.read_sql_query(sql_param,con, params = {'color':color})
test = pd.concat([test,param], axis = 0, ignore_index=True)
return test
neutral_names = names_n.name.value_counts().index.to_list()
color_names = names.name.value_counts().index.to_list()
colors = sql_query_from_list(color_names)
colors
total = colors.Price_15_ml.sum()
total
```
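`sql_query_from_list` above issues one database round trip per color name. The same lookup can be done in a single query with a parameterized `IN` list. Here is a hedged sketch using the stdlib `sqlite3` module purely for illustration (the notebook itself talks to PostgreSQL via psycopg2, where a `WHERE name = ANY(%(names)s)` form plays the same role):

```python
import sqlite3

def rows_for_names(con, names):
    """Fetch rows for all requested names in one query instead of
    looping; placeholders are generated to match len(names)."""
    placeholders = ", ".join("?" for _ in names)
    sql = "SELECT * FROM web_data WHERE name IN ({})".format(placeholders)
    return con.execute(sql, list(names)).fetchall()

# Demo against an in-memory table standing in for the real web_data table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE web_data (name TEXT, price REAL)")
con.executemany("INSERT INTO web_data VALUES (?, ?)",
                [("red", 1.5), ("blue", 2.0), ("green", 3.0)])
rows = rows_for_names(con, ["red", "green"])
```

One query also returns the result in a single fetch, which can then be loaded into a DataFrame with `pd.DataFrame(rows)` if needed.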
```
# This notebook assumes to be running from your FireCARES VM (eg. python manage.py shell_plus --notebook --no-browser)
import sys
import os
import time
import pandas as pd
import numpy as np
sys.path.insert(0, os.path.realpath('..'))
import folium
import django
django.setup()
from django.db import connections
from pretty import pprint
from firecares.firestation.models import FireDepartment, FireStation, NFIRSStatistic
from django.db.models import Avg, Max, Min, Q
from django.contrib.gis.geos import GEOSGeometry
from IPython.display import display
from firecares.utils import lenient_summation, dictfetchall
pd.set_option("display.max_rows",100)
def display_geom(geom):
_map = folium.Map(location=[geom.centroid.y, geom.centroid.x],
tiles='Stamen Toner')
_map.choropleth(geo_str=geom.geojson, line_weight=0, fill_opacity=0.2, fill_color='green')
ll = geom.extent[1::-1]
ur = geom.extent[3:1:-1]
_map.fit_bounds([ll, ur])
return _map
fd = FireDepartment.objects.get(id=95982)
```
#### Predictions 2015 csv processing
```
df = pd.read_csv('/firecares/predictions.2015.csv')
cols = ['lr_fire', 'mr_fire', 'h.fire', 'lr_inj', 'mr_inj', 'h.inj', 'lr_death', 'mr_death', 'h.death', 'lr_size_2', 'mr_size_2', 'h.size2', 'lr_size_3', 'mr_size_3', 'h.size3']
# Find stats for Richmond, VA
richmond = df[df['fd_id'] == 93345]
# Sum all Richmond rows
df2 = richmond.groupby(['fd_id', 'state'])[cols].sum()
df.groupby(['state'])[cols].mean()
# High standard deviation for high-risk-level fire values
display(richmond.std())
display(richmond.mean())
display(richmond.sum())
display(richmond.max())
# Actuals from NFIRS, average of residential structure fires over years (for high structure risk level)
pprint(list(fd.nfirsstatistic_set.filter(metric='residential_structure_fires', year__gte=2010, level=4).values('count', 'year')))
high_avg = fd.nfirsstatistic_set.filter(metric='residential_structure_fires', year__gte=2010, level=4).aggregate(Avg('count')).get('count__avg')
print('Actual average over high-risk structure types per year: {}'.format(high_avg))
# The current predicted # of fires for high risk structures
print('Predicted # fires for high-risk structures: {}'.format(sum([df2['h.fire'][0], df2['h.size2'][0], df2['h.size3'][0]])))
```
#### Displayed value verification on FireCARES
```
# Verify "Number of fires -> Average since 2010" displayed values
low = fd.nfirsstatistic_set.filter(metric='residential_structure_fires', year__gte=2010, level=1).aggregate(Avg('count')).get('count__avg')
metrics = fd.metrics.residential_fires_3_year_avg
assert low == metrics.low
display(low)
display(metrics)
# Verify predicted deaths and injuries for "Low" structure hazard levels displayed values
low = df2['lr_death'][0] + df2['lr_inj'][0]
assert abs(low - fd.metrics.deaths_and_injuries_sum.low) < 0.0001
display(fd.metrics.deaths_and_injuries_sum)
# Verify sum of death and injuries over all risk levels
v = sum(filter(lambda x: x >= 0, [df2['lr_death'][0], df2['lr_inj'][0], df2['mr_death'][0], df2['mr_inj'][0], df2['h.death'][0], df2['h.inj'][0]]))
assert abs(v - fd.metrics.deaths_and_injuries_sum.all) < 0.0001
```
#### Structure count graph vs heatmap N/As
```
# Building fires counts/locations
fd = FireDepartment.objects.get(id=93345)
df = pd.read_csv('/firecares/93345-building-fires.csv')
display(df.count())
display(df)
display(df.replace(np.nan, 'Unknown').groupby('risk_category').agg('count')[['alarm']].rename(columns={'alarm': 'Count'}))
from django.db import connections
from django.contrib.gis.geos import GEOSGeometry
with connections['nfirs'].cursor() as c:
q = """SELECT ST_Multi(ST_Union(bg.geom))
FROM nist.tract_years ty
INNER JOIN census_block_groups_2010 bg
ON ty.tr10_fid = ('14000US'::text || "substring"((bg.geoid10)::text, 0, 12))
WHERE ty.fc_dept_id = %(id)s
GROUP BY ty.fc_dept_id"""
c.execute(q, {'id': 93345})
geom = GEOSGeometry(c.fetchone()[0])
assert geom == fd.owned_tracts_geom
# Coverage for "owned" census tracts
_map = folium.Map(zoom_start=12,
location=[geom.centroid.y, geom.centroid.x],
tiles='Stamen Toner')
# Green background is the "owned" census tracts for the FD based on response to incidents
_map.choropleth(geo_str=fd.owned_tracts_geom.geojson, line_weight=0, fill_opacity=0.2, fill_color='green')
# Red outline is jurisdiction
_map.choropleth(geo_str=fd.geom.geojson, fill_opacity=0, line_weight=10, line_color='red')
folium.LayerControl().add_to(_map)
colors = {'Low': 'green', 'Medium': 'yellow', 'High': 'red', 'nan': 'gray'}
for r in df[['x', 'y', 'risk_category']].values:
if r[0] and r[1]:
folium.CircleMarker(location=[float(r[1]), float(r[0])],
fill_color=colors[str(r[2])], radius=200, fill_opacity=0.4, popup='{}, {}'.format(r[1], r[0])).add_to(_map)
display(_map)
# Structure/parcel counts over coverage area by hazard level
q = """SELECT sum(case when l.risk_category = 'Low' THEN 1 ELSE 0 END) as low,
sum(CASE WHEN l.risk_category = 'Medium' THEN 1 ELSE 0 END) as medium,
sum(CASE WHEN l.risk_category = 'High' THEN 1 ELSE 0 END) high,
sum(CASE WHEN l.risk_category is null THEN 1 ELSE 0 END) as unknown
FROM parcel_risk_category_local l
JOIN (SELECT ST_SetSRID(%(owned_geom)s::geometry, 4326) as owned_geom) x
ON owned_geom && l.wkb_geometry
WHERE ST_Intersects(owned_geom, l.wkb_geometry)"""
with connections['nfirs'].cursor() as c:
c.execute(q, {'owned_geom': fd.owned_tracts_geom.hex})
res = dictfetchall(c)
display(res)
# Find multiple incidents at each location
df2 = df.groupby(['x', 'y']).agg('count').sort_values('alarm', ascending=False)
dup_df = df2[df2['alarm'] > 1][['alarm']]
display(dup_df)
print('Location count w/ multiple incidents: {}'.format(len(dup_df)))
q = """
select alarm, a.inc_type, alarms,ff_death, oth_death, ST_X(geom) as x, st_y(geom) as y, COALESCE(b.risk_category, 'Unknown') as risk_category, b.parcel_id
from buildingfires a
left join (SELECT * FROM
(SELECT state, fdid, inc_date, inc_no, exp_no, geom, b.parcel_id, b.risk_category, ROW_NUMBER() OVER (PARTITION BY state, fdid, inc_date, inc_no, exp_no, geom ORDER BY st_distance(st_centroid(b.wkb_geometry), a.geom)) AS r
FROM (select * from incidentaddress where state=%(state)s and fdid=%(fdid)s) a
left join parcel_risk_category_local b on a.geom && b.wkb_geometry) x
WHERE x.r = 1) b
using (state, inc_date, exp_no, fdid, inc_no) where state=%(state)s and fdid=%(fdid)s
order by a.alarm"""
odf = pd.read_sql_query(q, connections['nfirs'], params={'fdid': fd.fdid, 'state': fd.state})
# Map parcel misses
q = """
select alarm, bf.inc_type, bf.alarms, ff_death, oth_death, ST_X(ia.geom) as x, ST_Y(ia.geom) as y, COALESCE(rc.risk_category, 'Unknown') as risk_category, rc.parcel_id
from buildingfires bf
left join incidentaddress ia
using (state, inc_date, exp_no, fdid, inc_no)
left join parcel_risk_category_local rc
on rc.parcel_id = ia.parcel_id
where bf.state=%(state)s and bf.fdid=%(fdid)s
order by bf.alarm"""
df = pd.read_sql_query(q, connections['nfirs'], params={'fdid': fd.fdid, 'state': fd.state})
# Display building fires that don't have a parcel (parcel miss)
misses = df[df['parcel_id'].isnull()]
print('Misses: {}'.format(misses.shape[0]))
import folium
_map = folium.Map(zoom_start=12,
location=[fd.geom.centroid.y, fd.geom.centroid.x],
tiles='Stamen Toner')
_map.choropleth(geo_str=fd.geom.geojson, line_weight=0, fill_opacity=0.1, fill_color='green')
folium.LayerControl().add_to(_map)
for r in misses[['x', 'y']].values:
if r[0] and r[1]:
folium.CircleMarker(location=[float(r[1]), float(r[0])],
fill_color='gray', radius=20, fill_opacity=0.4, popup='{}, {}'.format(r[1], r[0])).add_to(_map)
display(_map)
# Parcel coverage for Richmond w/ parcel miss incidents
q = """
select ST_Union(p.wkb_geometry)
from parcel_risk_category_local p
where ST_Intersects(p.wkb_geometry, ST_SetSRID(%(owned_geom)s::geometry, 4326))
"""
parcels = pd.read_sql_query(q, connections['nfirs'], params={'owned_geom': fd.owned_tracts_geom.hex})
g = GEOSGeometry(parcels.values[0][0])
_map = display_geom(g)
pts = 0
for r in misses[['x', 'y']].values:
if r[0] and r[1]:
pts += 1
folium.Marker(location=[float(r[1]), float(r[0])], popup='{}, {}'.format(r[1], r[0])).add_to(_map)
display(_map)
display(pts)
# Count missing geocodes on all priority departments
from firecares.firestation.models import FireDepartment
priorities = list(FireDepartment.priority_departments.all().values('fdid', 'state', 'name'))
q = """
select fdid, state, sum(CASE WHEN geom is null THEN 1 ELSE 0 END) as null_count, count(1), sum(CASE WHEN geom is null THEN 1 ELSE 0 END) / count(1)::decimal * 100.0 as percent_null
from incidentaddress
where fdid = %(fdid)s and state = %(state)s
group by fdid, state
"""
df = pd.DataFrame(columns=['fdid', 'state', 'null_count', 'count', 'percent_null'])
for i in priorities:
    fdid, state, name = i['fdid'], i['state'], i['name']
    df = pd.concat([df, pd.read_sql_query(q, connections['nfirs'], params={'fdid': fdid, 'state': state})])
df = df.sort_values('percent_null', ascending=False)
display(df)
```
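Several cells above call `dictfetchall(c)` without defining it; it presumably follows the standard recipe from the Django documentation for returning cursor rows as dicts. A sketch for completeness:

```python
def dictfetchall(cursor):
    """Return all rows from a DB-API cursor as a list of dicts keyed by column name."""
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]
```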
#### Get parcel counts for departments based on owned census tracts
```
fd = FireDepartment.objects.get(id=95982)
fd
q = """SELECT ST_Multi(ST_Union(bg.geom))
FROM nist.tract_years ty
INNER JOIN census_block_groups_2010 bg
ON ty.tr10_fid = ('14000US'::text || "substring"((bg.geoid10)::text, 0, 12))
WHERE ty.fc_dept_id = %(id)s
GROUP BY ty.fc_dept_id"""
with connections['nfirs'].cursor() as c:
c.execute(q, {'id': fd.id})
geom = c.fetchone()[0]
fd.owned_tracts_geom = geom
fd.save()
display(display_geom(fd.owned_tracts_geom))
display(fd.owned_tracts_geom.area)
q = """SELECT sum(case when l.risk_category = 'Low' THEN 1 ELSE 0 END) as low,
sum(CASE WHEN l.risk_category = 'Medium' THEN 1 ELSE 0 END) as medium,
sum(CASE WHEN l.risk_category = 'High' THEN 1 ELSE 0 END) high,
sum(CASE WHEN l.risk_category is null THEN 1 ELSE 0 END) as unknown
FROM parcel_risk_category_local l
JOIN (SELECT ST_SetSRID(%(owned_geom)s::geometry, 4326) as owned_geom) x
ON owned_geom && l.wkb_geometry
WHERE ST_Intersects(owned_geom, l.wkb_geometry)"""
with connections['nfirs'].cursor() as c:
c.execute(q, {'owned_geom': fd.owned_tracts_geom.hex})
res = dictfetchall(c)
display(res)
q = """
select ST_Union(p.wkb_geometry)
from parcel_risk_category_local p
where ST_Intersects(p.wkb_geometry, ST_SetSRID(%(owned_geom)s::geometry, 4326))
"""
parcels = pd.read_sql_query(q, connections['nfirs'], params={'owned_geom': fd.owned_tracts_geom.hex})
g = GEOSGeometry(parcels.values[0][0])
display_geom(g)
parcel_coverage_area = g.area
owned_tracts = fd.owned_tracts_geom.area
print('% coverage of owned census tracts by parcels: {}'.format(parcel_coverage_area / owned_tracts))
```
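The hazard-level parcel count query appears verbatim in two sections above; a hedged sketch of factoring it into a helper (the function and constant names are illustrative, the SQL and schema are taken from the cells above):

```python
PARCEL_COUNT_SQL = """SELECT sum(CASE WHEN l.risk_category = 'Low' THEN 1 ELSE 0 END) AS low,
       sum(CASE WHEN l.risk_category = 'Medium' THEN 1 ELSE 0 END) AS medium,
       sum(CASE WHEN l.risk_category = 'High' THEN 1 ELSE 0 END) AS high,
       sum(CASE WHEN l.risk_category IS NULL THEN 1 ELSE 0 END) AS unknown
FROM parcel_risk_category_local l
JOIN (SELECT ST_SetSRID(%(owned_geom)s::geometry, 4326) AS owned_geom) x
  ON owned_geom && l.wkb_geometry
WHERE ST_Intersects(owned_geom, l.wkb_geometry)"""

def parcel_counts_by_hazard(connection, owned_geom_hex):
    """Count parcels by hazard level inside an 'owned tracts' geometry (hex EWKB)."""
    with connection.cursor() as c:
        c.execute(PARCEL_COUNT_SQL, {'owned_geom': owned_geom_hex})
        columns = [col[0] for col in c.description]
        return dict(zip(columns, c.fetchone()))

# e.g. parcel_counts_by_hazard(connections['nfirs'], fd.owned_tracts_geom.hex)
```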
```
import numpy as np # linear algebra
import pandas as pd
import torch
import os
from utils import *
from tqdm import tqdm
import matplotlib.pyplot as plt
columns = [
'MKE_sfc',
'Rd_dx_sfc',
'relative_vorticity_sfc',
'grad_SSH_sfc',
]
device = 'cpu'
model_path = '../ml_eke/nn/trained_models/ResNet_4_custom.pkl'
model_name = os.path.basename(model_path).split('.')[0]
model_mse = torch.load(model_path, map_location=torch.device(device))
datapaths_2_3 = ('./data/2_3_SSH/', './data/2_3_SSH/')
first_suffixes_2_3 = ('_17_001.nc', '_17_001.nc')
predictand_2_3 = ['MEKE_sfc']
datapaths_1_4 = ('./data/1_4_SSH/', './data/1_4_SSH/')
first_suffixes_1_4 = ('_1916_001.nc', '_1916_001.nc')
predictand_1_4 = ['MEKE_z']
# set suffix to _1_4 or _2_3
datapaths = datapaths_2_3
first_suffixes = first_suffixes_2_3
predictand = predictand_2_3
last_dir = os.path.normpath(datapaths[0]).split(os.sep)[-1]
model_data = pop_data(datapaths[0], datapaths[0], skip_vars = ['x','y','depth','depth_stdev'], extra_pref=None, first_suffix=first_suffixes[0])
# Uncomment if needed (not needed with 1/4 and 2/3 datasets)
#model_data.extend_inventory(datapaths[1],first_suffix=first_suffixes[1])
scaler = np.load('./data/scaler_cf_all_4.npy')
num_samples = 1
dataset = get_samples(0, num_samples, columns, model_data, predictands=predictand).values[:, :len(columns)]
samples, targets, masks = get_samples_2D(0, num_samples, columns, model_data, predictands=predictand)
feat_avg = scaler[0,:]
feat_sd = scaler[1,:]
# torchscript
model_mse.eval()
class InferenceCell(torch.nn.Module):
def __init__(self):
super(InferenceCell, self).__init__()
self.model = model_mse
self.feat_avg = torch.tensor(feat_avg, dtype=torch.float32).to(device)
self.feat_sd = torch.tensor(feat_sd, dtype=torch.float32).to(device)
def forward(self, x):
# The features for which log of absolute value has to be taken:
x[:,[2]] = (torch.log(torch.abs(x[:,[2]]))+36.0)*torch.sign(x[:,[2]])
# The features for which log has to be taken:
x[:,[0]] = torch.log(x[:,[0]])
x[:,[3]] = torch.log(x[:,[3]])
x = (x - self.feat_avg) / self.feat_sd
x = self.model(x)
return x
inference_cell = InferenceCell()
traced_sample = dataset[0:10,:].copy()
x = torch.tensor(traced_sample).to(device)
traced_cell = torch.jit.trace(inference_cell, (x))
traced_sample = dataset[-10:,:].copy()
x = torch.tensor(traced_sample).to(device)
print("return: ", traced_cell(x).cpu().detach().numpy())
save = False
load = False
xpu = 'gpu' if device.startswith('cuda') else 'cpu'
torchscript_name = f'../ml_eke/nn/trained_models/{model_name}.{xpu}.pt'
last_dir = os.path.normpath(datapaths[0]).split(os.sep)[-1]
if save:
traced_cell.save(torchscript_name)
if load:
traced_cell = torch.jit.load(f'../ml_eke/nn/trained_models/{model_name}.{xpu}.pt')
from time import time
XX = torch.tensor(dataset[0:20000,:]).to(device)
t1 = time()
y = traced_cell(XX)
t2 = time()
y = y.detach().cpu().numpy()
print(f'Elapsed time per sample {(t2 - t1) / XX.shape[0] / 1e-6} microseconds')
plt.figure(figsize=(8,8))
plt.hist(y, bins=100, density=True, alpha=0.2, color='red')
plt.draw()
# Chunk predictions when GPU memory is insufficient; four chunks should be fine for 16 GB
chunked = True
samples_tab = np.reshape(samples.data, [-1,4])
y = np.zeros((samples_tab.shape[0],1))
# Sometimes, esp. in the 2/3 model, the last feature, grad_SSH_sfc, is 0, which is not acceptable for our NN
samples_tab[samples_tab[:,3]==0,3] = 1e-15
if chunked:
XX = torch.tensor(samples_tab[:samples_tab.shape[0]//4,:]).to(device)
y[:samples_tab.shape[0]//4] = traced_cell(XX).cpu().detach().numpy()
XX = torch.tensor(samples_tab[samples_tab.shape[0]//4:samples_tab.shape[0]//2,:]).to(device)
y[samples_tab.shape[0]//4:samples_tab.shape[0]//2] = traced_cell(XX).cpu().detach().numpy()
XX = torch.tensor(samples_tab[samples_tab.shape[0]//2:samples_tab.shape[0]//4*3,:]).to(device)
y[samples_tab.shape[0]//2:samples_tab.shape[0]//4*3] = traced_cell(XX).cpu().detach().numpy()
XX = torch.tensor(samples_tab[samples_tab.shape[0]//4*3:,:]).to(device)
y[samples_tab.shape[0]//4*3:] = traced_cell(XX).cpu().detach().numpy()
else:
XX = torch.tensor(samples_tab).to(device)
y = traced_cell(XX).cpu().detach().numpy()
y = np.reshape(y, masks.shape)
y[~masks] = np.nan
plt.figure(figsize=(20,10))
plt.pcolormesh(np.log10(np.exp(y.squeeze())), vmin=-3, vmax=-0.0)
plt.colorbar()
plt.draw()
```
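The four-way split above is hard-coded; a sketch of the same chunked-inference idea generalized over any callable model and chunk count (numpy-only, so it is framework-agnostic; the names are illustrative):

```python
import numpy as np

def predict_in_chunks(model_fn, X, n_chunks=4):
    """Apply model_fn to X in n_chunks pieces to bound peak memory usage."""
    outputs = [model_fn(chunk) for chunk in np.array_split(X, n_chunks)]
    return np.concatenate(outputs)

# e.g. with the traced model above:
# y = predict_in_chunks(
#     lambda a: traced_cell(torch.tensor(a).to(device)).cpu().detach().numpy(),
#     samples_tab)
```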