# CatBoostRegressor with RobustScaler
This code template is for regression analysis using CatBoostRegressor with the RobustScaler feature-scaling technique. CatBoost is an algorithm for gradient boosting on decision trees.
<img src="https://cdn.blobcity.com/assets/gpu_recommended.png" height="25" style="margin-bottom:-15px" />
### Required Packages
```
!pip install catboost
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.preprocessing import LabelEncoder, RobustScaler
from sklearn.model_selection import train_test_split
from catboost import CatBoostRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ''
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selection
Feature selection is the process of reducing the number of input variables when developing a predictive model, both to lower the computational cost of modelling and, in some cases, to improve the model's performance.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models cannot handle string categories or null values directly, we have to explicitly remove or replace them. The snippet below defines functions that fill any null values (with the mean for numeric columns, the mode otherwise) and one-hot encode string-valued columns.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
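As a usage sketch, the same cleaning steps can be seen on a tiny hypothetical frame (illustrative data only): numeric nulls are filled with the mean, categorical nulls with the mode, and string columns are one-hot encoded.

```python
import pandas as pd

# Toy frame illustrating what NullClearner/EncodeX do (illustrative data)
df = pd.DataFrame({"age": [20.0, None, 40.0], "city": ["NY", None, "NY"]})

df["age"] = df["age"].fillna(df["age"].mean())        # numeric: fill with mean -> 30.0
df["city"] = df["city"].fillna(df["city"].mode()[0])  # categorical: fill with mode -> "NY"
encoded = pd.get_dummies(df)                          # one-hot encode string columns

print(encoded.columns.tolist())  # ['age', 'city_NY']
```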
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Rescaling
It scales features using statistics that are robust to outliers: the median is removed and the data is scaled by the range between the 1st and 3rd quartiles, i.e., the 25th and 75th percentiles. This range is called the interquartile range (IQR).
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html">More about Robust Scaler</a>
```
robust = RobustScaler()
x_train = robust.fit_transform(x_train)
x_test = robust.transform(x_test)
```
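Per feature, the transform amounts to `(x - median) / IQR`; a pure-NumPy illustration on one feature shows how an extreme outlier (999 here) barely influences the scaling of the other values:

```python
import numpy as np

# RobustScaler's per-feature transform: (x - median) / IQR
x = np.array([1.0, 2.0, 3.0, 4.0, 999.0])
median = np.median(x)                 # 3.0
q1, q3 = np.percentile(x, [25, 75])   # 2.0, 4.0
scaled = (x - median) / (q3 - q1)     # IQR = 2.0

print(scaled)  # [-1.  -0.5  0.   0.5  498. ]
```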
### Model
CatBoost is an algorithm for gradient boosting on decision trees. Developed by Yandex researchers and engineers, it is the successor of the MatrixNet algorithm that is widely used within the company for ranking tasks, forecasting and making recommendations.
#### Tuning parameters
1. **learning_rate**: float, default = defined automatically for the Logloss, MultiClass and RMSE loss functions, depending on the number of iterations, if none of these parameters is set
>The learning rate. Used for reducing the gradient step.
2. **l2_leaf_reg**: float, default = 3.0
>Coefficient at the L2 regularization term of the cost function. Any positive value is allowed.
3. **bootstrap_type**: string, default = depends on the selected mode and processing unit
>Bootstrap type. Defines the method for sampling the weights of objects.
* Supported methods:
* Bayesian
* Bernoulli
* MVS
* Poisson (supported for GPU only)
* No
4. **subsample**: float, default = depends on the dataset size and the bootstrap type
>Sample rate for bagging. This parameter can be used if one of the following bootstrap types is selected:
* Poisson
* Bernoulli
* MVS
For more information, refer to the [API](https://catboost.ai/docs/concepts/python-reference_catboostregressor.html).
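As a sketch, these parameters could be passed as keyword arguments. The values below are purely illustrative, not tuned recommendations; note that `subsample` only takes effect with a compatible `bootstrap_type`:

```python
# Illustrative values only -- not tuned recommendations
tuned_params = {
    "learning_rate": 0.05,          # smaller gradient steps
    "l2_leaf_reg": 5.0,             # L2 regularization coefficient (default 3.0)
    "bootstrap_type": "Bernoulli",  # enables the subsample parameter below
    "subsample": 0.8,               # sample rate for bagging
}
# model = CatBoostRegressor(verbose=False, **tuned_params)
```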
```
# Build Model here
model = CatBoostRegressor(verbose=False)
model.fit(x_train, y_train)
```
#### Model Score
For a regressor, the score() method returns the coefficient of determination (R²) of the prediction on the given test data and labels.
```
print("R2 score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of the variance in the target that is explained by the model.
> **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
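These three metrics reduce to simple NumPy expressions; a standalone check of the definitions on toy values (illustrative numbers only):

```python
import numpy as np

# Toy values to verify the metric definitions used above
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

mae = np.mean(np.abs(y_true - y_pred))   # average absolute distance
mse = np.mean((y_true - y_pred) ** 2)    # average squared error
ss_res = np.sum((y_true - y_pred) ** 2)  # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                 # fraction of variance explained

print(mae, mse, round(r2, 4))  # 0.5 0.375 0.9486
```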
#### Prediction Plot
First, we plot the first 20 actual test observations (y_test) in green, with the record number on the x-axis and the target value on the y-axis.
For comparison, we overlay the model's predictions for the same 20 records in red.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
## Creator: Abhishek Garg, Github: [Profile](https://github.com/abhishek-252)
<pre><h1>E1-313 TIPR Assignment 2 Code base & Report</h1>
<h2>Neural Network Implementation in Python3</h2>
<h3><i> - Achint Chaudhary</i></h3>
<h3>15879, M.Tech (CSA)</h3>
<h5>Note:</h5> Please scroll down for the report section, or search for "Part 1"
<img src="Images/dnn_architecture.png"/>
<pre>
<h3>Standard Library Imports</h3>
```
import sys, os, shutil, itertools as it
from copy import deepcopy
from datetime import datetime
import numpy as np
import pandas as pd
from scipy import ndimage
import matplotlib as mpl
import matplotlib.pyplot as plt
from skimage import io
from sklearn.metrics import f1_score, accuracy_score, classification_report
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
import warnings
warnings.filterwarnings("error")
try:
res_stdout
except:
res_stdout = (sys.stdout if sys.stdout else sys.__stdout__)
verbose = bool(int(input('Do you want Verbose?: 0/1 ')))
```
<h3>Changing File I/O & Matplotlib inlining if not verbose</h3>
```
if not verbose:
sys.stdout = sys.__stdout__ = open('stdoutbuffer','a',buffering=1)
mpl.use('Agg')
else:
sys.stdout = sys.__stdout__ = res_stdout
%matplotlib inline
```
<pre>
<h3>Activation Functions</h3>
```
class ActV:
def sigmoid(x):
return 1/(1+np.exp(-x))
def relu(x):
return np.maximum(0,x)
def tanh(x):
return 2*ActV.sigmoid(2*x)-1
def swish(x):
return x*ActV.sigmoid(x)
def softmax(x):
x = x-x.max(axis=1,keepdims=True)
_ = np.exp(x)
return _/np.sum(_,axis=1,keepdims=True)
class ActD:
def sigmoid(x):
_ = ActV.sigmoid( x )
return _ * (1-_)
def relu(x):
'1 for x>=0'
return (np.sign(x)>=0)
def tanh(x):
return 1-(ActV.tanh(x))**2
def swish(x):
"swish derivative: y' = y + sigmoid(x) * (1 - y)"
_1 = ActV.swish(x)
_2 = ActV.sigmoid(x)
return _1 + _2*(1-_1)
def softmax(x): # elementwise approximation; the exact derivative is the full softmax Jacobian matrix
_ = ActV.softmax( x )
return _ * (1-_)
```
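A standalone sanity check of these formulas (minimal re-implementations for illustration): softmax rows should sum to 1, and the sigmoid-based tanh should match NumPy's via the identity tanh(x) = 2*sigmoid(2x) - 1.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def softmax(x):
    x = x - x.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[-2.0, 0.0, 3.0]])
assert np.allclose(softmax(x).sum(axis=1), 1.0)          # each row is a distribution
assert np.allclose(2 * sigmoid(2 * x) - 1, np.tanh(x))   # tanh identity
```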
<h3>Adding "Swish" function to Keras</h3>
```
# Ref: https://stackoverflow.com/questions/43915482/how-do-you-create-a-custom-activation-function-with-keras
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
def swish2(x):
return x*K.sigmoid(x)
get_custom_objects().update({'swish': Activation(swish2)})
def addswish(model):
model.add(Activation(swish2))
```
<h3>Cost Functions & Performance Metrics</h3>
```
class CostV:
def cross_entropy(act, pred):
pred = np.where(act!=1,pred+np.e,pred) # Handling perfect prediction
pred = np.where(np.logical_and(act==1,pred==0),pred+10**-8,pred) # Handling imperfect prediction
return -1*np.mean( act*np.log(pred) ,axis=0,keepdims=True)
def MSE(act, pred):
return np.mean( (pred-act)**2 ,axis=0,keepdims=True)
class CostD:
def cross_entropy(act, pred):
return pred - act
def MSE(act, pred):
return 2*(pred-act)
class Metrices:
def accuracy(act, pred):
return np.mean((act==pred).all(axis=1))
def one_hot(y):
return 1*(y==y.max(axis=1,keepdims=True))
def cattooht(Y):
Y = np.ravel(Y)
_ = sorted(set(Y))
tmp = np.zeros((Y.shape[0],len(_)),dtype='int32')
for i in range(len(Y)):
tmp[i][_.index(Y[i])] = 1
return tmp,_
```
<h3>Xavier-He Initialization</h3>
<img src="Images/XHE.png" height="450" width="600" align="left">
```
def initWB(IP,OP,function='relu',He=True,mode='gaussian'):
if He:
# Xavier & He initialization
_ = 1/(IP+OP)**0.5
if function in ('sigmoid','softmax'):
r, s = 6**0.5, 2**0.5
elif function=='tanh':
r, s = 4*6**0.5, 4*2**0.5
else: # relu or swish function
r, s = 12**0.5, 2
r, s = r*_, s*_
else:
r, s = 1, 1
# Generating matrices
if mode=='uniform':
return 2*r*np.random.random((IP,OP))-r , 2*r*np.random.random((1,OP))-r
elif mode=='gaussian':
return np.random.randn(IP,OP)*s , np.random.randn(1,OP)*s
else:
raise Exception('Code should be unreachable')
```
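For the uniform mode, the sigmoid/softmax bound reduces to the classic Xavier limit sqrt(6 / (fan_in + fan_out)), and the relu/swish bound widens it by sqrt(2); a quick numeric check mirroring the scale factors above (layer sizes are illustrative):

```python
import numpy as np

IP, OP = 256, 64                 # illustrative fan-in / fan-out
base = 1 / (IP + OP) ** 0.5

# Uniform-mode half-widths implied by the formulas in initWB
r_sigmoid = 6 ** 0.5 * base      # Xavier: sqrt(6 / (fan_in + fan_out))
r_relu    = 12 ** 0.5 * base     # He-style widening for ReLU/swish

assert np.isclose(r_sigmoid, np.sqrt(6 / (IP + OP)))
assert np.isclose(r_relu, np.sqrt(2) * r_sigmoid)
```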
<h3>Data split function family</h3>
```
def RSplit(X,Y,K=10):
'Random Split Function'
_ = list(range(X.shape[0]))
index_set = []
indxs = set(_)
batch_size = round(X.shape[0]/K)
np.random.shuffle(_)
for k in range(0,X.shape[0],batch_size):
test = set(_[k:k+batch_size])
train = indxs - test
index_set.append((list(train),list(test)))
return index_set
def SSplit(X,Y,K=10,seed=True):
'Stratified Split Function'
if seed:
np.random.seed(42)
Y = pd.DataFrame([tuple(y) for y in Y])
classes = set(Y)
c2i = {}
for index,label in Y.iterrows():
label = label[0]
if label in c2i:
c2i[label].add(index)
else:
c2i[label] = {index}
# Each class -> list of indices
for i in c2i:
c2i[i] = list(c2i[i])
np.random.shuffle(c2i[i])
# Each class with its set of train, test split indices
c2is = {}
for cls in c2i:
a = int(np.round(len(c2i[cls])/K))
c2is[cls] = []
for fold in range(K):
test_indices = c2i[cls][a*fold:a*(fold+1)]
train_indices = c2i[cls][0:a*fold] + c2i[cls][a*(fold+1):]
c2is[cls].append((train_indices,test_indices))
np.random.shuffle(c2is[cls])
index_set = []
for i in range(K):
train,test = set(),set()
for cls in c2is:
_ = c2is[cls][i]
train.update(set(_[0]))
test.update (set(_[1]))
index_set.append((list(train),list(test)))
return index_set
def BSplit(X,Y,K=10):
'Biased Split Function'
indx = sorted(np.arange(X.shape[0]),key = lambda i:list(Y[i]))
indices = set(indx)
index_set = []
step = int(np.ceil(len(indx)/K))
for i in range(0,len(indx),step):
test = set(indx[i:i+step])
train = indices - test
index_set.append((list(train),list(test)))
return index_set
def Split(X,Y,K=10,mode='R'):
if mode=='S':
return SSplit(X,Y,K)
elif mode=='B':
return BSplit(X,Y,K)
else:
return RSplit(X,Y,K)
```
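Whatever the mode, each fold's test indices should be disjoint from its train indices and together cover the whole dataset; a minimal standalone version of the random split (mirroring RSplit's shuffle-and-carve logic) illustrates the invariant:

```python
import numpy as np

def random_split(n, K=5):
    # Shuffle indices, then carve off one test chunk per fold (as in RSplit)
    idx = list(range(n))
    np.random.shuffle(idx)
    size = round(n / K)
    folds = []
    for k in range(0, n, size):
        test = set(idx[k:k + size])
        folds.append((set(idx) - test, test))
    return folds

for train, test in random_split(20, K=5):
    assert train.isdisjoint(test)        # no leakage between train and test
    assert train | test == set(range(20))  # every index is used
```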
<h3>Max-Pooling Code for Image Compression</h3>
```
# Ref: https://stackoverflow.com/questions/42463172/how-to-perform-max-mean-pooling-on-a-2d-array-using-numpy
def asStride(arr,sub_shape,stride):
'''Get a strided sub-matrices view of an ndarray.
See also skimage.util.shape.view_as_windows()
'''
s0,s1 = arr.strides[:2]
m1,n1 = arr.shape[:2]
m2,n2 = sub_shape
view_shape = (1+(m1-m2)//stride[0],1+(n1-n2)//stride[1],m2,n2)+arr.shape[2:]
strides = (stride[0]*s0,stride[1]*s1,s0,s1)+arr.strides[2:]
subs = np.lib.stride_tricks.as_strided(arr,view_shape,strides=strides)
return subs
def poolingOverlap(mat,ksize,stride=None,method='max',pad=False):
'''Overlapping pooling on 2D or 3D data.
<mat>: ndarray, input array to pool.
<ksize>: tuple of 2, kernel size in (ky, kx).
<stride>: tuple of 2 or None, stride of pooling window.
If None, same as <ksize> (non-overlapping pooling).
<method>: str, 'max for max-pooling,
'mean' for mean-pooling.
<pad>: bool, pad <mat> or not. If no pad, output has size
(n-f)//s+1, n being <mat> size, f being kernel size, s stride.
if pad, output has size ceil(n/s).
Return <result>: pooled matrix.
'''
m, n = mat.shape[:2]
ky,kx = ksize
if stride is None:
stride = (ky,kx)
sy,sx = stride
_ceil = lambda x,y: int(np.ceil(x/float(y)))
if pad:
ny = _ceil(m,sy)
nx = _ceil(n,sx)
size = ((ny-1)*sy+ky, (nx-1)*sx+kx) + mat.shape[2:]
mat_pad = np.full(size,np.nan)
mat_pad[:m,:n,...]=mat
else:
mat_pad=mat[:(m-ky)//sy*sy+ky, :(n-kx)//sx*sx+kx, ...]
view=asStride(mat_pad,ksize,stride)
if method=='max':
result=np.nanmax(view,axis=(2,3))
else:
result=np.nanmean(view,axis=(2,3))
return result
```
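For the non-overlapping case (stride equal to the kernel size), max-pooling reduces to a reshape; a standalone sketch for comparison with the strided version above:

```python
import numpy as np

def maxpool_nonoverlap(mat, k):
    # Non-overlapping k x k max-pooling via reshape (trailing remainder is cropped)
    m, n = mat.shape
    return mat[:m - m % k, :n - n % k].reshape(m // k, k, n // k, k).max(axis=(1, 3))

mat = np.arange(16.0).reshape(4, 4)
print(maxpool_nonoverlap(mat, 2))
# [[ 5.  7.]
#  [13. 15.]]
```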
<h3>Global Dataset store & Dummy set generation</h3>
```
try:
datasets
except:
datasets = {}
name = 'Dummy'
L = 1000
_1,_2 = list(np.random.random((L,2))), list(np.random.random((L,2)))
X1,X2 = [],[]
Y1,Y2 = [],[]
rad = 0.8
for i in range(L):
a,b = _1[i][0],_1[i][1]
if a**2+b**2<rad**2:
Y1.append([1,0])
X1.append(_1[i])
elif a**2+b**2>=rad**2:
Y1.append([0,1])
X1.append(_1[i])
a,b = _2[i][0],_2[i][1]
if a**2+b**2<rad**2:
Y2.append([1,0])
X2.append(_2[i])
elif a**2+b**2>=rad**2:
Y2.append([0,1])
X2.append(_2[i])
X1 = np.array(X1)
X2 = np.array(X2)
Y1 = np.array(Y1)
Y2 = np.array(Y2)
datasets[name] = (X1,Y1,['In','Out'])
m, n = 5,5
X = np.array( list(it.product(np.arange(m),np.arange(n))) )
Y = np.array( cattooht( np.ravel( ((np.array([list(np.arange(n))]*m).T+np.arange(m)).T)%2 ) )[0] )
datasets['XOR'] = (X,Y,['E','O'])
```
<h3>Loading MNIST & Cat-Dog datasets</h3>
```
path = 'data'
res_path = os.getcwd()
os.chdir(path)
for fldr in os.listdir():
if not fldr.startswith('.'):
datasets[fldr] = ([],[])
os.chdir(fldr)
_ = sorted([x for x in os.listdir() if not x.startswith('.')])
name_index = {x:_.index(x) for x in _}
for category in _:
label = [0]*len(_)
label[name_index[category]] = 1
os.chdir(category)
for sample in os.listdir(): #[:2000]:
if not fldr.startswith('.'):
img_mat = io.imread(sample, as_gray=True)
if fldr=='Cat-Dog': img_mat = poolingOverlap(img_mat,(4,4))
img_mat = np.ravel(img_mat)
datasets[fldr][0].append(img_mat)
datasets[fldr][1].append(label)
os.chdir('..')
datasets[fldr] = tuple(map(np.array,datasets[fldr]))+(_,)
os.chdir('..')
os.chdir( res_path )
for i in datasets:
datasets[i] = np.array(datasets[i][0],dtype='float64'), datasets[i][1], datasets[i][2]
```
<h3>Back-propagation Algorithm for Neural Network</h3>
<img src="Images/BP.png" height="450" width="600" align="left">
<h3>Neural Network Class</h3>
```
class NN:
def __init__(self):
self.Num, self.fun = [], []
self.IP, self.OP, self.W, self.B, self.delta = {}, {}, {}, {}, {}
self.beta1, self.beta2, self.eps = 0.9, 0.999, 10**-8
def data_feed( self, M, L, targets):
self.raw, self.labels, self.target_names = M, L, targets
def data_validate( self, M=np.array([]), L=np.array([]) ):
self.vraw, self.vlabels = M, L
def add(self,N,f='relu'):
self.Num.append(N); self.fun.append(f)
def data_preprocess(self,mode='standard'):
sp = np.nan_to_num
try:
mode = self.preprocess_mode
except:
self.preprocess_mode = mode
if mode=='scale':
try:
self.mn, self.mx
except:
self.mn, self.mx = self.raw.min(axis=0), self.raw.max(axis=0)
mx = np.where(self.mx==self.mn,self.mx+1,self.mx)
self.data = sp((self.raw - self.mn)/(mx-self.mn))
try: # If validation data is defined
self.vdata = sp((self.vraw - self.mn)/(self.mx-self.mn))
except:
self.vdata = self.data
elif mode=='standard':
try:
self.mean, self.std
except:
self.mean, self.std = self.raw.mean(axis=0), self.raw.std(axis=0)
std = np.where(self.std==0,1,self.std)
self.data = sp((self.raw-self.mean)/std)
try: # If validation data is defined
self.vdata = sp((self.vraw-self.mean)/std)
except:
self.vdata = self.data
else:
raise Exception('Code should be unreachable')
def initialize_layers(self,He=True,mode='gaussian'):
for i in range(len(self.Num)):
if i==0:
self.W[i],self.B[i], = initWB(self.data.shape[1],self.Num[i],self.fun[i],He,mode)
else:
self.W[i],self.B[i], = initWB(self.Num[i-1],self.Num[i],self.fun[i],He,mode)
def forward_prop(self,predict=False):
self.IP[0] = self.fdata
for i in range(len(self.Num)):
wx_b = np.dot(self.IP[i],self.W[i])+self.B[i]
if not predict:
self.OP[i] = wx_b
_ = eval('ActV.{0}(wx_b)'.format(self.fun[i]))
self.IP[i+1] = _
if predict:
del self.IP[i]
return self.IP[len(self.Num)]
def back_prop(self,debug=False):
for i in range(len(self.Num)-1,-1,-1):
if debug: print('Layer',i)
if i==(len(self.Num)-1):
costD = eval('CostD.{0}(self.flabels,self.IP[len(self.Num)])'.format(self.cost))
actvD = eval('ActD.{0}(self.OP[i])'.format(self.fun[i]))
self.delta[i] = costD * actvD
if debug: print('>>',self.IP[i].shape,costD.shape,actvD.shape,self.delta[i].shape)
else:
costD = np.dot(self.W[i+1],self.delta[i+1].T).T # ((6,2),(100,2).T).T => (100,6)
actvD = eval('ActD.{0}(self.OP[i])'.format(self.fun[i])) #(100,6)
self.delta[i] = costD * actvD
if debug: print('>>',self.IP[i].shape,costD.shape,actvD.shape,self.delta[i].shape)
uW = np.dot( self.IP[i].T , self.delta[i] ) / self.IP[i].shape[0]
uB = np.mean( self.delta[i] ,axis=0, keepdims=True)
if debug: print( self.W[i].shape , self.B[i].shape)
if debug: print( uW.shape , uB.shape)
self.W[i] -= self.learning_rate*uW
self.B[i] -= self.learning_rate*uB
if debug: input()
def back_prop2(self,Iteration_Count=1,debug=False,amsgrad=False):
if Iteration_Count==1:
self.UW, self.UB, self.SW, self.SB = deepcopy(self.W), deepcopy(self.B), deepcopy(self.W), deepcopy(self.B)
for i in self.UW:
self.UW[i], self.UB[i], self.SW[i], self.SB[i] = 0*self.UW[i], 0*self.UB[i], 0*self.SW[i], 0*self.SB[i]
for i in range(len(self.Num)-1,-1,-1):
if i==(len(self.Num)-1):
costD = eval('CostD.{0}(self.flabels,self.IP[len(self.Num)])'.format(self.cost))
actvD = eval('ActD.{0}(self.OP[i])'.format(self.fun[i]))
self.delta[i] = costD * actvD
else:
costD = np.dot(self.W[i+1],self.delta[i+1].T).T
actvD = eval('ActD.{0}(self.OP[i])'.format(self.fun[i]))
self.delta[i] = costD * actvD
uW = np.dot( self.IP[i].T , self.delta[i] ) / self.IP[i].shape[0]
uB = np.mean( self.delta[i] ,axis=0, keepdims=True)
# Eqn 1
self.UW[i] = self.beta1*self.UW[i] + (1-self.beta1)*uW
self.UB[i] = self.beta1*self.UB[i] + (1-self.beta1)*uB
# Eqn 2
self.SW[i] = self.beta2*self.SW[i] + (1-self.beta2)*uW**2
self.SB[i] = self.beta2*self.SB[i] + (1-self.beta2)*uB**2
# Eqn 3
UW = self.UW[i]/(1-self.beta1**Iteration_Count)
UB = self.UB[i]/(1-self.beta1**Iteration_Count)
# Eqn 4
SW = self.SW[i]/(1-self.beta2**Iteration_Count)
SB = self.SB[i]/(1-self.beta2**Iteration_Count)
# Eqn 5
self.W[i] -= self.learning_rate*UW/(SW**0.5+self.eps)
self.B[i] -= self.learning_rate*UB/(SB**0.5+self.eps)
if np.isnan(self.W[i]).any() or np.isnan(self.B[i]).any():
raise Exception('NAN value arises')
def back_prop3(self,Epoch_Count=1,debug=False):
for i in range(len(self.Num)-1,-1,-1):
if i==(len(self.Num)-1):
costD = eval('CostD.{0}(self.flabels,self.IP[len(self.Num)])'.format(self.cost))
actvD = eval('ActD.{0}(self.OP[i])'.format(self.fun[i]))
self.delta[i] = costD * actvD
else:
costD = np.dot(self.W[i+1],self.delta[i+1].T).T
actvD = eval('ActD.{0}(self.OP[i])'.format(self.fun[i]))
self.delta[i] = costD * actvD
uW = np.dot( self.IP[i].T , self.delta[i] ) / self.IP[i].shape[0]
uB = np.mean( self.delta[i] ,axis=0, keepdims=True)
# Eqn 1
_W1 = (1-self.beta1)*uW/(1-self.beta1**Epoch_Count)
_B1 = (1-self.beta1)*uB/(1-self.beta1**Epoch_Count)
# Eqn 2
_W2 = (1-self.beta2)*uW**2/(1-self.beta2**Epoch_Count)
_B2 = (1-self.beta2)*uB**2/(1-self.beta2**Epoch_Count)
# Eqn 3
self.W[i] -= self.learning_rate*_W1/(_W2**0.5+self.eps)
self.B[i] -= self.learning_rate*_B1/(_B2**0.5+self.eps)
if np.isnan(self.W[i]).any() or np.isnan(self.B[i]).any():
raise Exception('NAN value arises')
def feed_adam(self, beta1, beta2, eps):
self.beta1, self.beta2, self.eps = beta1, beta2, eps
def plot_feed(self,feed=True):
self.fdata,self.flabels = self.data, self.labels
y_pred = self.forward_prop(predict=True)
costV = eval('CostV.{0}(self.flabels,y_pred)'.format(self.cost))
y_pred = one_hot(y_pred)
mvalue = eval('Metrices.{0}(self.flabels,y_pred)'.format(self.metric))
act2 = [ list(rw).index(1) for rw in self.flabels ]
pred2 = [ list(rw).index(1) for rw in y_pred ]
if feed:
self.costs.append( np.mean(costV) )
self.mvalues.append( mvalue )
self.f1m.append( f1_score(act2,pred2,average='micro') )
self.f1M.append( f1_score(act2,pred2,average='macro') )
self.fdata,self.flabels = self.vdata, self.vlabels
y_pred = one_hot( self.forward_prop(predict=True) )
vmvalue = eval('Metrices.{0}(self.flabels,y_pred)'.format(self.metric))
self.vmvalues.append( vmvalue )
return act2, pred2
def train(self,epochs=1000,batchsize=30,learning_rate=0.001,\
optimizer='adam',cost='cross_entropy',metric='accuracy',es=(True,0,True),amsgrad=False):
self.cost, self.metric, self.learning_rate = cost, metric, learning_rate
self.costs, self.mvalues, self.f1m, self.f1M, self.vmvalues = [], [], [], [], []
if es[0]: prev_entropy = [np.inf]
# Random value at starting NN
self.plot_feed()
f = open('continue_next_epoch','w')
f.close()
for T in range(epochs):
if 'continue_next_epoch' not in os.listdir(): break
init = datetime.now()
print('Epoch {0:{1}} ['.format(T+1,int(np.log10(epochs+1))+1),end='')
if es[0]: W,B = [deepcopy(self.W)],[deepcopy(self.B)] # Saving Weights for Early Stopping
mb_indx, splits = 0, int(np.ceil(self.data.shape[0]/batchsize))
self.index_set = Split(self.data, self.labels, splits ,'R')
for ln in range(len(self.index_set)):
train_indx, test_indx = self.index_set[ln]
self.fdata,self.flabels = self.data[test_indx],self.labels[test_indx]
self.forward_prop()
if optimizer=='gd':
self.back_prop()
elif optimizer=='adam':
self.back_prop2(T*len(self.index_set)+(ln+1))
else:
self.back_prop3(T+1)
if(mb_indx>=(splits*0.04)):
print('=',end='')
mb_indx = 0
mb_indx+=1
# Early Stopping using Validation Set #CHECKPOINT
if es[0]:
if es[1]==-1:
pass
else:
delta = 0 # Exploring with compromising observed value
self.fdata,self.flabels = self.vdata,self.vlabels
y_pred = self.forward_prop(predict=True)
costV = eval('CostV.{0}(self.flabels,y_pred)'.format(self.cost))
best_entropy, cur_entropy = min(prev_entropy), np.mean(costV)
if ( cur_entropy - best_entropy) > delta :
if len(prev_entropy)==(es[1]+1):
if es[2]: # Restoring Best Weights
bst_indx = len(prev_entropy)-prev_entropy[::-1].index(best_entropy) - 1
self.W,self.B = W[bst_indx], B[bst_indx]
print(']\n',best_entropy,'==>',cur_entropy)
break
else:
prev_entropy.append( cur_entropy )
W.append(deepcopy(self.W)); B.append(deepcopy(self.B))
else:
W,B = [deepcopy(self.W)],[deepcopy(self.B)]
prev_entropy = [ cur_entropy ]
# To plot results for entire datasets
self.plot_feed()
print('] Loss {0:.6e}, Accuracy {1:.2f}%, Accuracy-V {2:.2f}%, Time {3:}'.format(self.costs[-1],self.mvalues[-1]*100,self.vmvalues[-1]*100,datetime.now()-init))
def krs(self,epochs=1000,batchsize=30,learning_rate=0.001,\
optimizer='adam',cost='cross_entropy',metric='accuracy',es=(True,0,True)):
model = Sequential()
addswish(model)
model.add(Dense(self.Num[0], activation=self.fun[0], input_dim=self.data.shape[1]))
for i in range(1,len(self.Num)-1):
model.add(Dense(self.Num[i], activation=self.fun[i]))
model.add(Dense(self.labels.shape[1], activation='softmax'))
cb = [EarlyStopping(monitor='val_loss', patience=es[1], restore_best_weights=es[2])] if (es[0] and es[1]!=-1) else []
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=learning_rate,amsgrad=False), metrics=[metric])
model.fit(self.data, self.labels, epochs=epochs, batch_size=batchsize,\
validation_data=(self.vdata, self.vlabels), callbacks=cb )
y_pred = model.predict(self.vdata)
y_pred = one_hot(y_pred)
self.kmodel = model
return classification_report(self.vlabels, y_pred, target_names=self.target_names, digits = 4 )
def report(self,model=None):
if model:
y_true1, y_pred1 = self.labels, one_hot(model.predict(self.data))
y_true2, y_pred2 = self.vlabels, one_hot(model.predict(self.vdata))
else:
self.fdata, self.flabels = self.data, self.labels
y_true1, y_pred1 = self.labels, one_hot( self.forward_prop(predict=True) )
self.fdata, self.flabels = self.vdata, self.vlabels
y_true2, y_pred2 = self.vlabels, one_hot( self.forward_prop(predict=True) )
r1 = classification_report(y_true1, y_pred1, target_names=self.target_names, digits = 4 )
r2 = classification_report(y_true2, y_pred2, target_names=self.target_names, digits = 4 )
return r1, r2
def plot(self,prms={},learning_plot=False):
mpl.rcParams['figure.dpi'] = 100
plt.close()
ax = plt.subplot(111)
ls = [ self.mvalues[1:], self.f1M[1:], self.f1m[1:], self.vmvalues[1:] ]
c1, c2 = min((min(l) for l in ls)), max((max(l) for l in ls))
_ = (np.array(self.costs[1:])-min(self.costs[1:])) / (max(self.costs[1:])-min(self.costs[1:]))
_ = list( (_*(c2-c1)+c1) )
ls = [_]+ls
for i in range(len(ls)):
ls[i] = np.array(ls[i])
s = np.exp(-5)
if learning_plot:
for i in range(len(ls)):
ls[i] = -np.log((c2+s)-np.array(ls[i]))
# Best Depiction of Learning process
indx = list( np.linspace(-np.log((c2+s)-c1),-np.log((c2+s)-c2),10) )
yticks = np.round(np.linspace(c1,c2,10),3)
_1 = plt.plot(np.arange(1,len(ls[0])+1), ls[0],'-',label=self.cost)
_2 = plt.plot(np.arange(1,len(ls[1])+1), ls[1],'*',label='Accuracy-T')
_3 = plt.plot(np.arange(1,len(ls[2])+1), ls[2],'-.',label='F1-Macro')
_4 = plt.plot(np.arange(1,len(ls[3])+1), ls[3],':',label='F1-Micro')
_5 = plt.plot(np.arange(1,len(ls[4])+1), ls[4],'--',label='Accuracy-V')
if learning_plot:
plt.yticks(indx,yticks)
p1 = '{0} Accuracy {1:.2f}%'.format(self.name,(self.mvalues[-1]*0.9+self.vmvalues[-1]*0.1)*100)
prms = {x:prms[x] for x in prms if x in grid_params}
p2 = ', '.join(str(x) for x in tuple(prms[x] for x in grid_params) ) # Grid Search Hyperparameters
title = '\n'.join((p1,p2))
plt.title(title)
plt.xlabel('Epochs')
plt.legend(loc=0)
plt.savefig(title+'.png',dpi=300,bbox_inches = 'tight')
if verbose:
plt.show()
plt.close()
def missed(self,diff_validation=True):
try:
shutil.rmtree('missed')
except:
pass
finally:
os.mkdir('missed')
os.chdir('missed')
try:
ls = [(self.data,self.labels)]
if diff_validation:
ls.append( (self.vdata,self.vlabels) )
for data,labels in ls:
self.fdata, self.flabels = deepcopy(data), deepcopy(labels)
pred = one_hot( self.forward_prop(predict=True) )
act = self.flabels
count = {}
for i in range(len(self.fdata)):
if not (act[i]==pred[i]).all():
lbl_a = self.target_names[ np.sum( act[i]*np.arange(act[i].shape[0])) ]
lbl_p = self.target_names[ np.sum(pred[i]*np.arange(pred[i].shape[0])) ]
if (lbl_a,lbl_p) in count:
count[(lbl_a,lbl_p)]+=1
else:
count[(lbl_a,lbl_p)]=1
mat = deepcopy( self.fdata[i] )
try:
mat = mat*(self.mx-self.mn)+self.mn
except:
mat = mat*self.std+self.mean
mat = mat.reshape(round(mat.shape[0]**0.5),round(mat.shape[0]**0.5))
mpl.image.imsave('{1},{2},{0}.png'.format(count[(lbl_a,lbl_p)],lbl_a,lbl_p),mat)
except:
pass
finally:
os.chdir('..')
def save_model(self,model_store='models'):
if model_store not in os.listdir():
os.mkdir(model_store)
try:
try:
shutil.rmtree('{}/{}'.format(model_store,self.name))
except:
pass
finally:
os.mkdir('{}/{}'.format(model_store,self.name))
os.chdir('{}/{}'.format(model_store,self.name))
with open('config','w') as f:
print(repr(self.Num) ,file=f)
print(repr(self.fun) ,file=f)
print(self.preprocess_mode,end = '',file=f)
dct = {}
with open('parameters','wb') as f:
if self.preprocess_mode == 'standard':
dct['mean'], dct['std'] = self.mean, self.std
elif self.preprocess_mode == 'scale':
dct['mn'], dct['mx'] = self.mn, self.mx
else:
raise Exception('Code should be unreachable')
for i in self.W:
dct['W{}'.format(i)] = self.W[i]
dct['B{}'.format(i)] = self.B[i]
np.savez(f,**dct)
except Exception as exc:
pass
finally:
os.chdir('../..')
def load_model(self,model_store = 'models'):
if model_store not in os.listdir():
raise Exception("{} directory does not Exist".format(model_store))
try:
os.chdir('{}/{}'.format(model_store,self.name))
with open('config') as f:
self.Num = eval(f.readline().strip())
self.fun = eval(f.readline().strip())
self.preprocess_mode = f.readline().strip()
with open('parameters','rb') as f:
npzfile = np.load(f)
if self.preprocess_mode == 'standard':
self.mean, self.std = npzfile['mean'], npzfile['std']
elif self.preprocess_mode == 'scale':
self.mn, self.mx = npzfile['mn'], npzfile['mx']
else:
raise Exception('Code should be unreachable')
for i in range(len(self.Num)):
self.W[i] = npzfile['W{}'.format(i)]
self.B[i] = npzfile['B{}'.format(i)]
except Exception as exc:
pass
finally:
os.chdir('../..')
```
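The bias-corrected Adam update used in back_prop2 (the numbered equations in the code) can be sketched on a single scalar weight; the values are illustrative, minimizing w**2 from w = 5:

```python
# Scalar Adam step mirroring Eqns 1-5 in back_prop2 (illustrative values)
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.001
m = v = 0.0
w, t = 5.0, 1
grad = 2 * w                               # gradient of w**2

m = beta1 * m + (1 - beta1) * grad         # Eqn 1: first-moment EMA
v = beta2 * v + (1 - beta2) * grad ** 2    # Eqn 2: second-moment EMA
m_hat = m / (1 - beta1 ** t)               # Eqn 3: bias correction
v_hat = v / (1 - beta2 ** t)               # Eqn 4: bias correction
w -= lr * m_hat / (v_hat ** 0.5 + eps)     # Eqn 5: parameter update

print(w)  # ~4.999: the first step moves by ~lr regardless of gradient scale
```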
<h3>Dataset Evaluation Wrapper</h3>
```
# Early Stopping Parameters
# (Enable, ValidationPartition, Patience, Restore)
def evaldata(name,NumFun,prprc='standard',He=True,initmode='gaussian',\
epochs=1000,batchsize=30,lr=0.001,opt='adam',es=(True,False,0,True),krs=True):
params = locals()
if 'grid_params' not in globals():
global grid_params
grid_params = []
print('Dataset under processing: ',name)
X,Y,targets = datasets[name]
net = NN()
net.name = name
if es[1]:
index_set = SSplit(X,Y,10)
np.random.shuffle(index_set)
train_index, test_index = index_set[0]
X1, Y1, X2, Y2 = X[train_index], Y[train_index], X[test_index], Y[test_index]
else:
X1, Y1, X2, Y2 = X, Y, X, Y
net.data_feed(X1,Y1,targets) # Feeding Raw data
net.data_validate(X2,Y2) # Used for Early Stopping
net.data_preprocess(prprc)
#Adding Hidden Layers
for n,f in NumFun:
net.add(n,f)
# Output Layer & Cost function
net.add(Y.shape[1],'softmax')
net.initialize_layers(He,initmode)
# Calling Training module, with optmizer & regularization parameters
print('\n\t\t','#'*16,'NumPy Implementation','#'*16,'\n')
net.train( epochs, batchsize, lr, opt, 'cross_entropy', 'accuracy', es[0:1]+es[2:] )
r1, r2 = net.report()
print('\n\t\t\t','-'*8,'Classification Report on Training data','-'*8,'\n',r1)
if es[1]: print('\n\t\t','-'*8,'Classification Report on Validation data','-'*8,'\n',r2)
if krs:
print('\n\t\t','#'*16,'Keras Implementation','#'*16,'\n')
net.krs( epochs, batchsize, lr, opt, 'cross_entropy', 'accuracy', es[0:1]+es[2:] )
r1, r2 = net.report(net.kmodel)
print('\n\t\t\t','-'*8,'Classification Report on Training data','-'*8,'\n',r1)
if es[1]: print('\n\t\t','-'*8,'Classification Report on Validation data','-'*8,'\n',r2)
net.plot(params)
if krs:
net.missed(es[1])
net.save_model()
net.load_model()
return net.mvalues, net.costs, net.f1M, net.f1m
```
<h3>Grid Search for Hyper-parameter tuning</h3>
<pre>
<b>A worst-case sample grid is given below: 9,000+ executions</b><br>
########################################################
################ DON'T TRY THIS AT HOME ################
########################################################
dct = {
'A datasets' : ['Dummy'],
'B units' : list(zip((392,784,1568),(64,128,128))),
'C functions' : it.product(('sigmoid','tanh','relu','swish'),('sigmoid','tanh','relu','swish')),
'D preproc' : ['scale','standard'],
'E He' : [True,False],
'F initmodes' : ['uniform','gaussian'],
'G epochs' : [10,20,40],
'H batchsize' : [128,256,512,1024],
'I learning_rate' : [0.001,0.0003,0.0001],
'J optimizer' : ['adam'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [True],
}
res = grid_search(dct)
grid_plot(res)
```
def grid_search(dct):
    grid_values = { 'Accuracy':{}, 'Cost':{}, 'F1-Macro':{}, 'F1-Micro':{} }
    for prms in it.product(*(dct[x] for x in dct)):
        name = prms[0]
        prms = list(prms)
        prms[1:3] = [tuple(zip(prms[1],prms[2]))]
        prms = tuple(prms)
        print('STARTED ',prms)
        _ = eval( 'evaldata{0}'.format(tuple(prms)))
        print('COMPLETED',prms,end='\n'*3)
        for i in range(len(_)):
            ls = sorted( grid_values.keys() )
            grid_values[ls[i]][prms] = _[i][-1]
    return grid_values
def grid_plot(res,dct=None):  # dct defaults to None; grid_search's sample call passes only res
    'Under assumption that only one quantity will be varied at a time'
    tmp_grid_params = ['DataSet','Config','Preprocess','He','InitMode','Epochs','Batch_Size','Learning_Rate']
    def plot(metric,inner_dct,color):
        param_vals, y_values = {i:set() for i in range(len(tmp_grid_params))}, []
        for params in sorted(inner_dct):
            y_values.append( inner_dct[params] )
            for indx in range(len(tmp_grid_params)):
                param_vals[ indx ].add(params[indx])
        for indx in range(len(tmp_grid_params)):
            if len(param_vals[indx])>1:
                break
        else: # No graph can be shown with no changing values
            return
        if tmp_grid_params[indx]=='Config':
            sample = list(inner_dct.keys())[0][indx]
            inner_indx_dct = {(i,j):set() for i in range(len(sample)) for j in range(2)}
            for params in sorted(inner_dct):
                for i in range(len(sample)):
                    for j in range(2):
                        inner_indx_dct[(i,j)].add(params[indx][i][j])
            for inner_indx in inner_indx_dct:
                if len(inner_indx_dct[inner_indx])>1:
                    break
            else:
                return
            i,j = inner_indx
            par_name = 'Layer {0}'.format(i+1)+' '+('Activation' if j else 'Units')
            x_values = sorted(inner_indx_dct[inner_indx])
        else:
            par_name = tmp_grid_params[indx]
            x_values = [params[indx] for params in sorted(inner_dct)]
        ind = np.arange(len(y_values))
        plt.xlabel(par_name)
        plt.ylabel(metric)
        plt.xticks( ind,x_values)
        try:
            styl = ('*' if set(map(int,x_values))=={0,1} else '-')
        except:
            styl = '*'
        plt.plot( ind,y_values,styl,color=color)
        title = ', '.join((metric,par_name))
        plt.title(title)
        plt.savefig(title+'.png',dpi=300,bbox_inches = 'tight')
        if verbose:  # global verbosity flag
            plt.show()
        plt.close()
    for metric,color in zip(res,('g','r','b','y')):
        plt.close()
        plot(metric,res[metric],color)
def multi_grid_search(dct,plot=False):
    for i in dct:
        if i=='next':
            for dct2 in dct[i]:
                multi_grid_search({dct2:dct[i][dct2]},plot)
        else:
            _ = os.getcwd()
            try:
                shutil.rmtree(i)
            except:
                pass
            finally:
                os.mkdir(i)
            os.chdir(i)
            print('\n'+'#'*16+' Grid Search Started in {0} '.format(i)+'#'*16)
            res = grid_search(dct[i])
            try:
                if plot: grid_plot(res,dct)
            except Exception as exc:
                print(exc)
            finally:
                os.chdir(_)
```
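The enumeration inside `grid_search` hinges on `it.product` over the dict's values in key order; the alphabetical `'A …'`, `'B …'` prefixes on the keys pin that order. A minimal sketch of the same pattern, with made-up parameter names:

```python
import itertools as it

# toy grid; keys are prefixed to fix the iteration order, as in the dicts above
grid = {
    'A lr':    [0.01, 0.001],
    'B batch': [128, 256],
    'C opt':   ['adam'],
}

# cartesian product over the value lists -> one tuple per configuration
combos = list(it.product(*(grid[k] for k in grid)))
print(len(combos))  # 2 * 2 * 1 = 4 runs
```

Each tuple in `combos` corresponds to one `evaldata` call, which is why the sample grid above warns about 9k+ executions: the run count is the product of all list lengths.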
<pre>
<h1>Part 0 Pre-execution checks & Sample executions</h1>
```
import os, sys
import warnings
warnings.filterwarnings("ignore")
sys.stdout = sys.__stdout__ = open('stdoutbuffer','a',buffering=1)
# os.chdir('..')
os.getcwd()
grid_params = ['NumFun','prprc','He', 'initmode', 'batchsize','lr']
name,config = 'Dummy',[(4,'swish'),(3,'relu')]
_ = evaldata(name,config,'standard',True,'gaussian',100,30,0.01,'adam',(True,True,-1,True),False)
name,config = 'XOR',[(10,'sigmoid'),(10,'sigmoid'),]
_ = evaldata(name,config,'standard',True,'uniform',1000,1,0.001,'gd',(True,False,-1,True),False)
name,config = 'MNIST',[(1568,'swish'),(256,'swish'),]
_1 = evaldata(name,config,'standard',True,'gaussian',10,2000,0.001,'myopt',(True,True,-1,True),False)
name,config = 'Cat-Dog',[(2048,'relu'),(256,'relu'),(64,'tanh')]
_ = evaldata(name,config,'standard',True,'gaussian',100,200,0.0001,'myopt',(True,False,-1,True),False)
```
<pre>
<h1>Part 1 "MNIST" Evaluation and Experiments</h1>
```
grid_params = ['NumFun','prprc','He', 'initmode', 'batchsize','lr']
```
<h3>Task 1 - Varying Number of Layers</h3>
```
try:
    shutil.rmtree('Task1')
except:
    pass
finally:
    os.mkdir('Task1')
os.chdir('Task1')
grid_params = ['NumFun','prprc','He', 'initmode', 'batchsize','lr']
name,config = 'MNIST',[(1568,'relu'),]
_1 = evaldata(name,config,'standard',True,'gaussian',20,1000,0.001,'myopt',(True,True,5,True),False)
name,config = 'MNIST',[(1568,'relu'),(256,'tanh')]
_2 = evaldata(name,config,'standard',True,'gaussian',20,1000,0.001,'myopt',(True,True,5,True),False)
name,config = 'MNIST',[(1568,'relu'),(256,'tanh'),(64,'tanh')]
_3 = evaldata(name,config,'standard',True,'gaussian',20,1000,0.001,'myopt',(True,True,5,True),False)
ls = ['Accuracy','F1-Macro','F1-Micro']
color = ['green','blue','red']
_ = [_1,_2,_3]
for i in range(3):
    _[i] = list(_[i])
    _[i][:2] = _[i][:2][::-1]
for i in range(3): # metric
    plt.close()
    plt.title(ls[i])
    plt.plot(list(range(1,3+1)),[_[j][i+1][-1] for j in range(3)],color=color[i])
    plt.savefig(ls[i]+'1')
plt.close()
os.chdir('..')
```
<img src="output_plots/Part1/Task1/Accuracy1.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task1/F1Macro1.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task1/F1Micro1.png" height="450" width="600" align="left">
<h3>Task 2 - Trying various numbers of neurons in each layer</h3>
<h5>SubTask 1 - Changing Number of Units in 1<sup>st</sup> Hidden Layer of Architecture</h5>
```
dct1 = {
'1 Layer' :
{
'A datasets' : ['MNIST'],
'B units' : it.product((49,98,196,392,784,1176,1568,)),
'C functions' : it.product(('relu',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct1,True)
```
<img src="output_plots/Part1/Task2/1Layer/CostLayer1Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/1Layer/AccuracyLayer1Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/1Layer/F1MacroLayer1Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/1Layer/F1MicroLayer1Units.png" height="450" width="600" align="left">
<h5>SubTask 2 - Changing Number of Units in 2<sup>nd</sup> Hidden Layer of Architecture</h5>
```
dct2 = {
'2 Layer' :
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(16,32,64,128,256)),
'C functions' : it.product(('relu',),('tanh',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct2,True)
```
<img src="output_plots/Part1/Task2/2Layer/CostLayer2Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/2Layer/AccuracyLayer2Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/2Layer/F1MacroLayer2Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/2Layer/F1MicroLayer2Units.png" height="450" width="600" align="left">
<h5>SubTask 3 - Changing Number of Units in 3<sup>rd</sup> Hidden Layer of Architecture</h5>
```
dct3 = {
'3 Layer' :
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(256,),(16,32,64)),
'C functions' : it.product(('relu',),('tanh',),('tanh',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct3,True)
```
<img src="output_plots/Part1/Task2/3Layer/CostLayer3Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/3Layer/AccuracyLayer3Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/3Layer/F1MacroLayer3Units.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task2/3Layer/F1MicroLayer3Units.png" height="450" width="600" align="left">
<h3>Task 3 - Trying Activation Functions on each layer</h3>
<h5>SubTask 1 - Changing Activation Functions in 1<sup>st</sup> Hidden Layer of Architecture</h5>
```
dct1 = {
'1 Layer FUNCTIONS' :
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,)),
'C functions' : it.product(('sigmoid','relu','tanh','swish')),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct1,True)
```
<img src="output_plots/Part1/Task3/1LayerF/CostLayer1Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/1LayerF/AccuracyLayer1Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/1LayerF/F1MacroLayer1Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/1LayerF/F1MicroLayer1Activation.png" height="450" width="600" align="left">
<h5>SubTask 2 - Changing Activation Functions in 2<sup>nd</sup> Hidden Layer of Architecture</h5>
```
dct2 = {
'2 Layer FUNCTIONS' :
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(256,)),
'C functions' : it.product(('relu',),('sigmoid','relu','tanh','swish')),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct2,True)
```
<img src="output_plots/Part1/Task3/2LayerF/CostLayer2Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/2LayerF/AccuracyLayer2Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/2LayerF/F1MacroLayer2Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/2LayerF/F1MicroLayer2Activation.png" height="450" width="600" align="left">
<h5>SubTask 3 - Changing Activation Functions in 3<sup>rd</sup> Hidden Layer of Architecture</h5>
```
dct3 = {
'3 Layer FUNCTIONS' :
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(256,),(64,)),
'C functions' : it.product(('relu',),('tanh',),('relu','tanh','swish'),),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct3,True)
```
<img src="output_plots/Part1/Task3/3LayerF/CostLayer3Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/3LayerF/AccuracyLayer3Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/3LayerF/F1MacroLayer3Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part1/Task3/3LayerF/F1MicroLayer3Activation.png" height="450" width="600" align="left">
<h3>Task 4 Initialization & Preprocessing Techniques</h3>
<h5>SubTask 1 Impact of Xavier-He weight Initialization</h5>
```
dct2 = {
'Xavier-He':
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(256,)),
'C functions' : it.product(('relu',),('tanh',)),
'D preproc' : ['standard'],
'E He' : [False,True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct2,True)
```
<h5>Observed learning curve for Initialization technique as "Default" vs "Xavier-He"</h5>
<br><img src="output_plots/Part1/Task4/XH/NHE.png" height="300" width="450" align="left">
<img src="output_plots/Part1/Task4/XH/YHE.png" height="300" width="450" align="right">
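For reference, a minimal sketch of what the `He` flag presumably switches on (an assumption; the actual `initialize_layers` implementation lives elsewhere in this notebook):

```python
import numpy as np

def init_weights(n_in, n_out, he=True, seed=0):
    # He initialization draws from N(0, 2/n_in), which keeps activation
    # variance roughly constant across ReLU-family layers; he=False falls
    # back to an unscaled standard-normal draw
    rng = np.random.default_rng(seed)
    scale = np.sqrt(2.0 / n_in) if he else 1.0
    return rng.standard_normal((n_in, n_out)) * scale

W = init_weights(784, 256)
```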
<h5>SubTask 2 Finding a Suitable Preprocessing & Initialization Distribution</h5>
```
dct1 = {
'Init Gaussian':
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(256,)),
'C functions' : it.product(('relu',),('tanh',)),
'D preproc' : ['standard','scale'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
},
'Init Uniform':
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(256,)),
'C functions' : it.product(('relu',),('tanh',)),
'D preproc' : ['standard','scale'],
'E He' : [True],
'F initmodes' : ['uniform'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [False],
}
}
multi_grid_search(dct1,True)
```
<h5>Observed learning curve for Preprocessing from "Scaling" vs "Standardization"</h5>
<br><img src="output_plots/Part1/Task4/IG/GraphG2.png" height="300" width="450" align="left">
<img src="output_plots/Part1/Task4/IG/GraphG.png" height="250" width="450" align="right">
<b>Note: </b>Standardization, being far less sensitive to outliers, performs better than min-max scaling<br>
Hence most of our experiments use standardization, and future ones will likely follow suit
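The outlier-sensitivity claim can be checked on a toy vector; this sketch is independent of the notebook's `data_preprocess`:

```python
import numpy as np

x = np.array([1., 2., 3., 4., 100.])          # one extreme outlier
minmax = (x - x.min()) / (x.max() - x.min())  # 'scale': outlier pins the range
standard = (x - x.mean()) / x.std()           # 'standard': inliers keep spread
# after min-max scaling the four inliers span only ~0.03 of the [0,1] range,
# while after standardization they span more than twice that
```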
<h5>Observed learning curve for Initialization from "Uniform" vs "Gaussian"</h5>
<br><img src="output_plots/Part1/Task4/IU/GraphU.png" height="300" width="450" align="left">
<img src="output_plots/Part1/Task4/IG/GraphG.png" height="250" width="450" align="right">
<b>General observation:</b>
Initialization from a Gaussian distribution injects the least prior information into the system<br>
Hence most of our experiments use it; you are free to experiment with the other options
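A fair comparison of the two `initmode` options needs draws of equal variance; a sketch under the assumption that 'uniform' means a symmetric interval:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 0.05, 100_000
gauss = rng.standard_normal(n) * sigma   # 'gaussian' mode
# Uniform(-a, a) has variance a**2 / 3, so a = sigma * sqrt(3) matches it
a = sigma * np.sqrt(3.0)
unif = rng.uniform(-a, a, n)             # 'uniform' mode
# same second moment, but the gaussian draw concentrates mass near zero
```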
<h3>Task 5 Comparing Classification Reports of Numpy & Keras implementations</h3>
```
dct1 = {
'Keras':
{
'A datasets' : ['MNIST'],
'B units' : it.product((1568,),(256,)),
'C functions' : it.product(('relu',),('tanh',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [20],
'H batchsize' : [1000],
'I learning_rate' : [0.001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,5,True)],
'L keras' : [True],
},
}
multi_grid_search(dct1,True)
```
<pre>
################ NumPy Implementation ################
-------- Classification Report on Training data --------
precision recall f1-score support
0 0.9984 0.9997 0.9991 3719
1 1.0000 0.9983 0.9992 4212
2 0.9989 0.9995 0.9992 3746
3 0.9992 0.9977 0.9985 3925
4 0.9986 0.9989 0.9988 3644
5 0.9985 0.9985 0.9985 3438
6 0.9992 0.9987 0.9989 3715
7 0.9977 0.9997 0.9987 3970
8 0.9986 0.9997 0.9992 3665
9 0.9981 0.9968 0.9975 3766
micro avg 0.9988 0.9988 0.9988 37800
macro avg 0.9987 0.9988 0.9988 37800
weighted avg 0.9988 0.9988 0.9988 37800
samples avg 0.9988 0.9988 0.9988 37800
-------- Classification Report on Validation data --------
precision recall f1-score support
0 0.9689 0.9806 0.9747 413
1 0.9871 0.9703 0.9786 472
2 0.9630 0.9652 0.9641 431
3 0.9240 0.9413 0.9326 426
4 0.9620 0.9463 0.9541 428
5 0.9660 0.9552 0.9606 357
6 0.9833 0.9787 0.9810 422
7 0.9578 0.9490 0.9534 431
8 0.9358 0.9523 0.9440 398
9 0.9343 0.9431 0.9387 422
micro avg 0.9583 0.9583 0.9583 4200
macro avg 0.9582 0.9582 0.9582 4200
weighted avg 0.9585 0.9583 0.9584 4200
################ Keras Implementation ################
-------- Classification Report on Training data --------
precision recall f1-score support
0 0.9995 1.0000 0.9997 3719
1 0.9995 0.9991 0.9993 4212
2 0.9997 1.0000 0.9999 3746
3 0.9997 0.9987 0.9992 3925
4 0.9997 0.9997 0.9997 3644
5 0.9994 0.9997 0.9996 3438
6 0.9992 0.9997 0.9995 3715
7 0.9982 0.9995 0.9989 3970
8 0.9995 0.9992 0.9993 3665
9 0.9992 0.9981 0.9987 3766
micro avg 0.9994 0.9994 0.9994 37800
macro avg 0.9994 0.9994 0.9994 37800
weighted avg 0.9994 0.9994 0.9994 37800
samples avg 0.9994 0.9994 0.9994 37800
-------- Classification Report on Validation data --------
precision recall f1-score support
0 0.9806 0.9782 0.9794 413
1 0.9850 0.9746 0.9798 472
2 0.9501 0.9722 0.9610 431
3 0.9307 0.9460 0.9383 426
4 0.9553 0.9486 0.9519 428
5 0.9624 0.9328 0.9474 357
6 0.9833 0.9739 0.9786 422
7 0.9471 0.9559 0.9515 431
8 0.9340 0.9598 0.9467 398
9 0.9444 0.9265 0.9354 422
micro avg 0.9574 0.9574 0.9574 4200
macro avg 0.9573 0.9569 0.9570 4200
weighted avg 0.9576 0.9574 0.9574 4200
samples avg 0.9574 0.9574 0.9574 4200
<pre>
<h1>Part 2 "Cat-Dog" Evaluation and Experiments</h1>
```
grid_params = ['NumFun','prprc','He', 'initmode', 'batchsize','lr']
```
<h3>Task 1 - Varying Number of Layers</h3>
```
try:
    shutil.rmtree('Task1')
except:
    pass
finally:
    os.mkdir('Task1')
os.chdir('Task1')
grid_params = ['NumFun','prprc','He', 'initmode', 'batchsize','lr']
name,config = 'Cat-Dog',[(2048,'relu'),]
_1 = evaldata(name,config,'standard',True,'gaussian',100,200,0.0001,'myopt',(True,True,-1,True),False)
name,config = 'Cat-Dog',[(2048,'relu'),(256,'relu')]
_2 = evaldata(name,config,'standard',True,'gaussian',100,200,0.0001,'myopt',(True,True,-1,True),False)
name,config = 'Cat-Dog',[(2048,'relu'),(256,'relu'),(64,'tanh')]
_3 = evaldata(name,config,'standard',True,'gaussian',100,200,0.0001,'myopt',(True,True,-1,True),False)
ls = ['Accuracy','F1-Macro','F1-Micro']
color = ['green','blue','red']
_ = [_1,_2,_3]
for i in range(3):
    _[i] = list(_[i])
    _[i][:2] = _[i][:2][::-1]
for i in range(3): # metric
    plt.close()
    plt.title(ls[i])
    plt.plot(list(range(1,3+1)),[_[j][i+1][-1] for j in range(3)],color=color[i])
    plt.savefig(ls[i]+'1')
plt.close()
os.chdir('..')
```
<img src="output_plots/Part2/Task1/Accuracy1.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task1/F1Macro1.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task1/F1Micro1.png" height="450" width="600" align="left">
<h3>Task 2 - Trying various numbers of neurons in each layer</h3>
<h5>SubTask 1 - Changing Number of Units in 1<sup>st</sup> Hidden Layer of Architecture</h5>
```
dct1 = {
'1 Layer' :
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((512,1024,2048)),
'C functions' : it.product(('relu',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct1,True)
```
<img src="output_plots/Part2/Task2/1Layer/CostLayer1Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/1Layer/AccuracyLayer1Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/1Layer/F1MacroLayer1Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/1Layer/F1MicroLayer1Units.png" height="450" width="600" align="left">
<h5>SubTask 2 - Changing Number of Units in 2<sup>nd</sup> Hidden Layer of Architecture</h5>
```
dct2 = {
'2 Layer' :
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((1024,),(64,128,256)),
'C functions' : it.product(('relu',),('relu',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct2,True)
```
<img src="output_plots/Part2/Task2/2Layer/CostLayer2Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/2Layer/AccuracyLayer2Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/2Layer/F1MacroLayer2Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/2Layer/F1MicroLayer2Units.png" height="450" width="600" align="left">
<h5>SubTask 3 - Changing Number of Units in 3<sup>rd</sup> Hidden Layer of Architecture</h5>
```
dct3 = {
'3 Layer' :
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((1024,),(256,),(16,32,64)),
'C functions' : it.product(('relu',),('relu',),('tanh',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct3,True)
```
<img src="output_plots/Part2/Task2/3Layer/CostLayer3Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/3Layer/AccuracyLayer3Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/3Layer/F1MacroLayer3Units.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task2/3Layer/F1MicroLayer3Units.png" height="450" width="600" align="left">
<h3>Task 3 - Trying Activation Functions on each layer</h3>
<h5>SubTask 1 - Changing Activation Functions in 1<sup>st</sup> Hidden Layer of Architecture</h5>
```
dct1 = {
'1 Layer FUNCTIONS' :
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((2048,)),
'C functions' : it.product(('sigmoid','relu','tanh','swish')),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct1,True)
```
<img src="output_plots/Part2/Task3/1LayerF/CostLayer1Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/1LayerF/AccuracyLayer1Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/1LayerF/F1MacroLayer1Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/1LayerF/F1MicroLayer1Activation.png" height="450" width="600" align="left">
<h5>SubTask 2 - Changing Activation Functions in 2<sup>nd</sup> Hidden Layer of Architecture</h5>
```
dct2 = {
'2 Layer FUNCTIONS' :
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((1024,),(256,)),
'C functions' : it.product(('relu',),('sigmoid','relu','tanh','swish')),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct2,True)
```
<img src="output_plots/Part2/Task3/2LayerF/CostLayer2Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/2LayerF/AccuracyLayer2Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/2LayerF/F1MacroLayer2Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/2LayerF/F1MicroLayer2Activation.png" height="450" width="600" align="left">
<h5>SubTask 3 - Changing Activation Functions in 3<sup>rd</sup> Hidden Layer of Architecture</h5>
```
dct3 = {
'3 Layer FUNCTIONS' :
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((1024,),(256,),(64,)),
'C functions' : it.product(('relu',),('relu',),('relu','tanh','swish'),),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct3,True)
```
<img src="output_plots/Part2/Task3/3LayerF/CostLayer3Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/3LayerF/AccuracyLayer3Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/3LayerF/F1MacroLayer3Activation.png" height="450" width="600" align="left">
<img src="output_plots/Part2/Task3/3LayerF/F1MicroLayer3Activation.png" height="450" width="600" align="left">
<h3>Task 4 Initialization & Preprocessing Techniques</h3>
<h5>SubTask 1 Impact of Xavier-He weight Initialization</h5>
```
dct2 = {
'Xavier-He':
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((1024,),(256,)),
'C functions' : it.product(('relu',),('relu',)),
'D preproc' : ['standard'],
'E He' : [False,True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct2,True)
```
<h5>Observed learning curve for Initialization technique as "Default" vs "Xavier-He"</h5>
<br><img src="output_plots/Part2/Task4/XH/NHE.png" height="300" width="450" align="left">
<img src="output_plots/Part2/Task4/XH/YHE.png" height="300" width="450" align="left">
<h5>SubTask 2 Finding a Suitable Preprocessing & Initialization Distribution</h5>
```
dct1 = {
'Init Gaussian':
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((1024,),(256,)),
'C functions' : it.product(('relu',),('relu',)),
'D preproc' : ['standard','scale'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
},
'Init Uniform':
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((1024,),(256,)),
'C functions' : it.product(('relu',),('relu',)),
'D preproc' : ['standard','scale'],
'E He' : [True],
'F initmodes' : ['uniform'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [False],
}
}
multi_grid_search(dct1,True)
```
<h5>Observed learning curve for Preprocessing from "Scaling" vs "Standardization"</h5>
<br><img src="output_plots/Part2/Task4/IG/GraphG2.png" height="300" width="450" align="left">
<img src="output_plots/Part2/Task4/IG/GraphG.png" height="250" width="450" align="right">
<b>Note: </b>Standardization, being far less sensitive to outliers, performs better than min-max scaling<br>
Hence most of our experiments use standardization, and future ones will likely follow suit
<h5>Observed learning curve for Initialization from "Uniform" vs "Gaussian"</h5>
<br><img src="output_plots/Part2/Task4/IU/GraphU.png" height="300" width="450" align="left">
<img src="output_plots/Part2/Task4/IG/GraphG.png" height="250" width="450" align="right">
<b>General observation:</b>
Initialization from a Gaussian distribution injects the least prior information into the system<br>
Hence most of our experiments use it; you are free to experiment with the other options
<h3>Task 5 Comparing Classification Reports of Numpy & Keras implementations</h3>
```
dct1 = {
'Keras':
{
'A datasets' : ['Cat-Dog'],
'B units' : it.product((2048,),(256,),(64,)),
'C functions' : it.product(('relu',),('relu',),('tanh',)),
'D preproc' : ['standard'],
'E He' : [True],
'F initmodes' : ['gaussian'],
'G epochs' : [100],
'H batchsize' : [200],
'I learning_rate' : [0.0001],
'J optimizer' : ['myopt'],
'K early_stopping' : [(True,True,-1,True)],
'L keras' : [True],
},
}
multi_grid_search(dct1,True)
```
<pre>
################ NumPy Implementation ################
-------- Classification Report on Training data --------
precision recall f1-score support
cat 1.0000 1.0000 1.0000 11250
dog 1.0000 1.0000 1.0000 11250
micro avg 1.0000 1.0000 1.0000 22500
macro avg 1.0000 1.0000 1.0000 22500
weighted avg 1.0000 1.0000 1.0000 22500
samples avg 1.0000 1.0000 1.0000 22500
-------- Classification Report on Validation data --------
precision recall f1-score support
cat 0.6341 0.6392 0.6367 1250
dog 0.6363 0.6312 0.6337 1250
micro avg 0.6352 0.6352 0.6352 2500
macro avg 0.6352 0.6352 0.6352 2500
weighted avg 0.6352 0.6352 0.6352 2500
samples avg 0.6352 0.6352 0.6352 2500
################ Keras Implementation ################
-------- Classification Report on Training data --------
precision recall f1-score support
cat 0.9932 0.9986 0.9959 11250
dog 0.9986 0.9932 0.9959 11250
micro avg 0.9959 0.9959 0.9959 22500
macro avg 0.9959 0.9959 0.9959 22500
weighted avg 0.9959 0.9959 0.9959 22500
samples avg 0.9959 0.9959 0.9959 22500
-------- Classification Report on Validation data --------
precision recall f1-score support
cat 0.6193 0.7224 0.6669 1250
dog 0.6670 0.5560 0.6065 1250
micro avg 0.6392 0.6392 0.6392 2500
macro avg 0.6432 0.6392 0.6367 2500
weighted avg 0.6432 0.6392 0.6367 2500
samples avg 0.6392 0.6392 0.6392 2500
<pre>
<h1>Part 3 Execution of Neural Nets on Datasets of Assignment 1</h1>
<h3>Loading Datasets & Preprocessing of Twitter data into bag-of-words</h3>
```
# 1. Dolphins
X1 = pd.read_csv('/data2/dolphins/dolphins.csv',sep=' ',header=None)
Y1 = pd.read_csv('/data2/dolphins/dolphins_label.csv',sep=' ',header=None)
X1, Y1 = np.array(X1), np.array(Y1)
Y1 = cattooht(Y1)
# 2. Twitter(bag-of-Word)
X2 = pd.read_csv('/data2/twitter/twitter.csv',header=None)
Y2 = pd.read_csv('/data2/twitter/twitter_label.csv',header=None)
# Converting into bag-of-words
all_words = set()
local_ls = []
for indx,stmt in X2.iterrows():
    local = {}
    for word in stmt[0].strip().split():
        if word in local:
            local[word] += 1
        else:
            local[word] = 1
    local_ls.append(local)
    all_words.update(local)
mat = [[(local[word] if word in local else 0) for word in all_words] for local in local_ls]
X2 = pd.DataFrame(np.array(mat))
X2, Y2 = np.array(X2), np.array(Y2)
Y2 = cattooht(Y2)
# 3. PubMed
X3 = pd.read_csv('/data2/pubmed/pubmed.csv',sep=' ',header=None)
Y3 = pd.read_csv('/data2/pubmed/pubmed_label.csv',sep=' ',header=None)
X3, Y3 = np.array(X3), np.array(Y3)
Y3 = cattooht(Y3)
# Assignment into Global Datastructure
datasets['Dolphins'] = X1,Y1[0],list(map(str,Y1[1]))
datasets['Twitter'] = X2,Y2[0],list(map(str,Y2[1]))
datasets['PubMed'] = X3,Y3[0],list(map(str,Y3[1]))
```
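As an aside, the word-count loop above can be written more compactly with `collections.Counter`; a sketch on stand-in sentences (the real corpus comes from `twitter.csv`):

```python
from collections import Counter

corpus = ["the cat sat", "the dog sat down"]   # stand-in for the tweet column
counts = [Counter(s.strip().split()) for s in corpus]
vocab = sorted(set().union(*counts))           # fixed, reproducible column order
mat = [[c[w] for w in vocab] for c in counts]  # Counter returns 0 for absent words
```

Sorting the vocabulary also makes the column order stable across runs, which the plain `set`-based version above is not.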
<h3>Evaluation on the Dolphins dataset</h3>
```
# 1. Dolphins Execution
name = 'Dolphins'
ipdim, opdim = datasets[name][0].shape[1], datasets[name][1].shape[1]
s1 = ipdim*2
s2 = int(round((s1*opdim)**0.5))
name,config = name,[(s1,'relu'),(s2,'relu')]
ln = len(datasets[name][0])
_ = evaldata(name,config,'standard',True,'gaussian',100,10,0.001,'myopt',(True,True,-1,True),False)
```
<pre>Previous Bayesian Classification Metrics
Accuracy : 0.90833 F1-Micro : 0.90833 F1-Macro : 0.85333
Previous KNN Classification Metrics
Accuracy : 0.98333 F1-Micro : 0.98333 F1-Macro : 0.98222
<img src="output_plots/Part3/Dolphins.png" height="450" width="600" align="left">
<b>General observation:</b>
The neural network makes precise predictions even on this class-imbalanced dataset
<h3>Evaluation on the Twitter dataset</h3>
```
name = 'Twitter'
ipdim, opdim = datasets[name][0].shape[1], datasets[name][1].shape[1]
s1 = ipdim*2
s2 = int(round((s1*opdim)**0.5))
name,config = name,[(s1,'relu'),(s2,'relu')]
ln = len(datasets[name][0])
_ = evaldata(name,config,'standard',True,'gaussian',10,300,0.0003,'myopt',(True,True,-1,True),False)
```
<pre>Previous Bayesian Classification Metrics
Accuracy : 0.56010 F1-Micro : 0.56010 F1-Macro : 0.34854
Previous KNN Classification Metrics
Accuracy : 0.48414 F1-Micro : 0.48414 F1-Macro : 0.41388
<img src="output_plots/Part3/Twittr.png" height="450" width="600" align="left">
<b>General observation:</b>
No improvement on validation data is observed during training<br>
The network merely overfits the training data; the task calls for sequence modelling,<br>
which is not possible with the plain feed-forward architecture of this assignment
<h3>Evaluation on the PubMed dataset</h3>
```
name = 'PubMed'
ipdim, opdim = datasets[name][0].shape[1], datasets[name][1].shape[1]
s1 = ipdim*2
s2 = int(round((s1*opdim)**0.5))
name,config = name,[(s1,'relu'),(s2,'relu')]
ln = len(datasets[name][0])
_ = evaldata(name,config,'standard',True,'gaussian',100,500,0.001,'myopt',(True,True,-1,True),False)
```
<pre>Previous Bayesian Classification Metrics
Accuracy : 0.44144 F1-Micro : 0.44144 F1-Macro : 0.33277
Previous KNN Classification Metrics
Accuracy : 0.35412 F1-Micro : 0.35412 F1-Macro : 0.34554
<img src="output_plots/Part3/PubMed.png" height="450" width="600" align="left">
<b>General observation:</b>
This dataset is again overfitted by the neural network<br>
Multinomial Naive Bayes performed somewhat better than the neural network
### Functions
- Group repeated code into reusable units so you can write more efficient programs
- Basic functions
- Parameters and arguments
- Return
- `*args`, `**kwargs`
- docstring
- scope
- inner function
- lambda function
- Map, filter, Reduce
- Decorator
#### 1. Basic functions
- Declaration and calling
```
point=88
if point >= 90:
print("A")
elif point>= 80:
print("B")
else:
print("C")
# function declaration (use snake_case identifiers)
def grade(point):
if point >= 90:
print("A")
elif point>= 80:
print("B")
else:
print("C")
# function calls
grade(88)
grade(78)
```
#### 2. Parameters and Arguments
- Parameter: a variable in the function declaration that receives the data sent in from the call site
- Argument: the data passed to the function when calling it
```
def plus(num1, num2):  # num1 and num2 are parameters
    print(num1 + num2)

plus(1, 2)  # 1 and 2 are arguments
plus(3, 4)  # arguments
# plus(3)  # TypeError: the argument count must match the required parameters

def plus(num1, num2=10):  # num2 is a default parameter
    print(num1 + num2)

# if num2 is not passed, its default value 10 is used
plus(3)

def plus(num1, num2=10, num3=20):
    print(num1 + num2 - num3)

plus(3, num3=100)  # num3=100 is a keyword argument
# a keyword argument specifies which parameter the value is assigned to
```
#### 3. Return
- Used when you want to keep the result of running a function.
- return
```
def plus(num1, num2):
    print(num1 + num2)

result = plus(1, 2)
print(result)
# plus printed the sum, but it returned nothing, so result is None

def plus(num1, num2):
    print(num1 + num2)
    return num1 + num2
# note: return num1, num2 (with a comma) would return a tuple of two values

result = plus(1, 2)
print(result)

data1 = "python"
result = data1.upper()
print(result)

data2 = [3, 1, 2]
result = data2.sort()
print(result)
# list.sort() sorts in place and returns None

# example of a function that returns its result
def grade(point):
    if point >= 90:
        return "A"
    elif point >= 80:
        return "B"
    else:
        return "C"

point = 78
result = grade(point)
print(result)

# once a return statement runs, the function stops immediately (like break)
# so return can also be used just to exit a function early
def echo(msg):
    if msg == "Quit":
        return
    print(msg)

echo("python")
echo("Quit")
```
#### 4. `*args`, `**kwargs`
- Used when the number of arguments and keyword arguments a function call will pass cannot be fixed in advance
```
def plus(num1, num2):
    return num1, num2

plus(1, 2)

# *args collects any number of positional arguments into a tuple
def plus(*args):
    print(type(args), args)
    return sum(args)

plus(1, 2, 3, 4, 5)

# **kargs collects the keyword arguments into a dict,
# while *args still collects the positional ones
def plus(*args, **kargs):
    print(type(args), args)
    print(type(kargs), kargs)
    return sum(args) + sum(list(kargs.values()))

plus(1, 2, 3, 4, 5, num1=6, num2=7)

def func(num1, num2, num3):
    return num1 + num2 + num3

data = [1, 2, 3]
func(*data)  # same as func(1, 2, 3)
# *data unpacks the list elements into individual arguments

def func(num1, num2, num3):
    return num1 + num2 + num3

data = [1, 2, 3]
# func(data)  # TypeError: the whole list [1, 2, 3] goes into num1 alone

data = {
    "num2": 100,
    "num3": 200,
}
func(1, **data)
# 1 goes to num1; **data matches dict keys to parameter names,
# so num2 gets 100 and num3 gets 200
# same result as func(1, num2=100, num3=200)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed Training in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
> Note: This is an archived TF1 notebook. These are configured
to run in TF2's
[compatibility mode](https://www.tensorflow.org/guide/migrate)
but will run in TF1 as well. To use TF1 in Colab, use the
[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)
magic.
## Overview
`tf.distribute.Strategy` is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, users can distribute their existing models and training code with minimal code changes.
`tf.distribute.Strategy` has been designed with these key goals in mind:
* Easy to use and support multiple user segments, including researchers, ML engineers, etc.
* Provide good performance out of the box.
* Easy switching between strategies.
`tf.distribute.Strategy` can be used with TensorFlow's high level APIs, [tf.keras](https://www.tensorflow.org/r1/guide/keras) and [tf.estimator](https://www.tensorflow.org/r1/guide/estimators), with just a couple of lines of code change. It also provides an API that can be used to distribute custom training loops (and in general any computation using TensorFlow).
In TensorFlow 2.0, users can execute their programs eagerly, or in a graph using [`tf.function`](../tutorials/eager/tf_function.ipynb). `tf.distribute.Strategy` intends to support both these modes of execution. Note that we may talk about training most of the time in this guide, but this API can also be used for distributing evaluation and prediction on different platforms.
As you will see in a bit, very few changes are needed to use `tf.distribute.Strategy` with your code. This is because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, we will talk about various types of strategies and how one can use them in different situations.
Note: For a deeper understanding of the concepts, please watch [this deep-dive presentation](https://youtu.be/jKV53r9-H14). This is especially recommended if you plan to write your own training loop.
```
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```
## Types of strategies
`tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
* Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of the input data in sync, aggregating gradients at each step. In async training, all workers train independently over the input data and update variables asynchronously. Typically, sync training is supported via all-reduce and async training via a parameter server architecture.
* Hardware platform: Users may want to scale their training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, we have 4 strategies available. In the next section we will talk about which of these are supported in which scenarios in TF.
### MirroredStrategy
`tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one model replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called `MirroredVariable`. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. The user can also choose between a few other options we provide, or write their own.
Here is the simplest way of creating `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
```
This will create a `MirroredStrategy` instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
```
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
```
If you wish to override the cross device communication, you can do so using the `cross_device_ops` argument by supplying an instance of `tf.distribute.CrossDeviceOps`. Currently we provide `tf.distribute.HierarchicalCopyAllReduce` and `tf.distribute.ReductionToOneDevice` as 2 other options other than `tf.distribute.NcclAllReduce` which is the default.
```
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
```
### CentralStorageStrategy
`tf.distribute.experimental.CentralStorageStrategy` does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
Create a `CentralStorageStrategy` by:
```
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
```
This will create a `CentralStorageStrategy` instance which will use all visible GPUs and the CPU. Updates to variables on the replicas will be aggregated before being applied to the variables.
Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### MultiWorkerMirroredStrategy
`tf.distribute.experimental.MultiWorkerMirroredStrategy` is very similar to `MirroredStrategy`. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.
It uses [CollectiveOps](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/collective_ops.py) as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.
It also implements additional performance optimizations. For example, it includes a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing it to have a plugin architecture - so that in the future, users will be able to plugin algorithms that are better tuned for their hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather.
Here is the simplest way of creating `MultiWorkerMirroredStrategy`:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```
`MultiWorkerMirroredStrategy` currently allows you to choose between two different implementations of collective ops. `CollectiveCommunication.RING` implements ring-based collectives using gRPC as the communication layer. `CollectiveCommunication.NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `CollectiveCommunication.AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. You can specify them like so:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
```
One of the key differences in getting multi-worker training going, as compared to multi-GPU training, is the multi-worker setup. The "TF_CONFIG" environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. See the section on ["TF_CONFIG" below](#TF_CONFIG) for more details on how this can be done.
Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### TPUStrategy
`tf.distribute.experimental.TPUStrategy` lets users run their TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) and [Google Compute Engine](https://cloud.google.com/tpu).
In terms of distributed training architecture, `TPUStrategy` is the same as `MirroredStrategy` - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in `TPUStrategy`.
Here is how you would instantiate `TPUStrategy`.
Note: To run this code in Colab, you should select TPU as the Colab runtime. See [Using TPUs]( tpu.ipynb) guide for a runnable version.
```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)
```
The `TPUClusterResolver` instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it. If you want to use this for Cloud TPUs, you will need to specify the name of your TPU resource in the `tpu` argument. We also need to initialize the TPU system explicitly at the start of the program. This is required before TPUs can be used for computation, and it should ideally be done at the beginning, because it also wipes out the TPU memory, so all state will be lost.
Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### ParameterServerStrategy
`tf.distribute.experimental.ParameterServerStrategy` supports parameter server training on multiple machines. In this setup, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all GPUs of all the workers.
In terms of code, it looks similar to other strategies:
```
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
```
For multi worker training, "TF_CONFIG" needs to specify the configuration of parameter servers and workers in your cluster, which you can read more about in [TF_CONFIG](#TF_CONFIG) below.
So far we've talked about the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end.
## Using `tf.distribute.Strategy` with Keras
We've integrated `tf.distribute.Strategy` into `tf.keras` which is TensorFlow's implementation of the
[Keras API specification](https://keras.io). `tf.keras` is a high-level API to build and train models. By integrating into the `tf.keras` backend, we've made it seamless for Keras users to distribute training written in the Keras training framework. The only things that need to change in a user's program are: (1) create an instance of the appropriate `tf.distribute.Strategy` and (2) move the creation and compiling of the Keras model inside `strategy.scope`.
Here is a snippet of code to do this for a very simple Keras model with one dense layer:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
```
In this example we used `MirroredStrategy`, so we can run this on a machine with multiple GPUs. `strategy.scope()` indicates which parts of the code to run distributed. Creating a model inside this scope allows us to create mirrored variables instead of regular variables. Compiling under the scope lets us know that the user intends to train this model using this strategy. Once this is set up, you can fit your model as you normally would. `MirroredStrategy` takes care of replicating the model's training on the available GPUs, aggregating gradients, etc.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
```
Here we used a `tf.data.Dataset` to provide the training and eval input. You can also use numpy arrays:
```
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
```
In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using `MirroredStrategy` with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use `strategy.num_replicas_in_sync` to get the number of replicas.
```
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
```
### What's supported now?
In the [TF nightly release](https://pypi.org/project/tf-nightly-gpu/), we now support training with Keras using all strategies.
Note: When using `TPUStrategy` with TPU pods with Keras, currently the user will have to explicitly shard or shuffle the data for different workers, but we will change this in the future to automatically shard the input data intelligently.
### Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
1. [Tutorial](../tutorials/distribute/keras.ipynb) to train MNIST with `MirroredStrategy`.
2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/vision/image_classification/resnet_imagenet_main.py) training with ImageNet data using `MirroredStrategy`.
3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/resnet50_keras/resnet50.py) trained with ImageNet data on Cloud TPUs with `TPUStrategy`.
## Using `tf.distribute.Strategy` with Estimator
`tf.estimator` is a distributed training TensorFlow API that originally supported the async parameter server approach. As with Keras, we've integrated `tf.distribute.Strategy` into `tf.estimator` so that a user who is using Estimator for training can easily make their training distributed with very few changes to their code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs.
The usage of `tf.distribute.Strategy` with Estimator is slightly different than the Keras case. Instead of using `strategy.scope`, now we pass the strategy object into the [`RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig) for the Estimator.
Here is a snippet of code that shows this with a premade estimator `LinearRegressor` and `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
```
We use a premade Estimator here, but the same code works with a custom Estimator as well. `train_distribute` determines how training will be distributed, and `eval_distribute` determines how evaluation will be distributed. This is another difference from Keras where we use the same strategy for both training and eval.
Now we can train and evaluate this Estimator with an input function:
```
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
```
Another difference to highlight here between Estimator and Keras is the input handling. In Keras, we mentioned that each batch of the dataset is split across the multiple replicas. In Estimator, however, the user provides an `input_fn` and has full control over how the data is distributed across workers and devices. We do not split batches automatically, nor do we automatically shard the data across different workers. The provided `input_fn` is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to one replica on that worker, thereby consuming N batches for N replicas on one worker. In other words, the dataset returned by the `input_fn` should provide batches of size `PER_REPLICA_BATCH_SIZE`, and the global batch size for a step can be obtained as `PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync`. When doing multi-worker training, users will also want to either split their data across the workers, or shuffle with a random seed on each. You can see an example of how to do this in the [Multi-worker Training with Estimator](../tutorials/distribute/multi_worker_with_estimator.ipynb) tutorial.
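The per-replica vs. global batch-size arithmetic above can be sketched in plain Python (a hedged sketch: `num_replicas` stands in for `strategy.num_replicas_in_sync`, and the numbers are hypothetical):

```python
# num_replicas stands in for strategy.num_replicas_in_sync.
PER_REPLICA_BATCH_SIZE = 8
num_replicas = 4  # hypothetical: e.g. 4 GPUs across the worker's replicas

# The dataset returned by input_fn yields batches of PER_REPLICA_BATCH_SIZE;
# one global step feeds one such batch to each replica.
global_batch_size = PER_REPLICA_BATCH_SIZE * num_replicas
print(global_batch_size)  # 32
```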
We showed an example of using `MirroredStrategy` with Estimator. You can also use `TPUStrategy` with Estimator as well, in the exact same way:
```
config = tf.estimator.RunConfig(
train_distribute=tpu_strategy, eval_distribute=tpu_strategy)
```
And similarly, you can use multi worker and parameter server strategies as well. The code remains the same, but you need to use `tf.estimator.train_and_evaluate`, and set "TF_CONFIG" environment variables for each binary running in your cluster.
### What's supported now?
In TF nightly release, we support training with Estimator using all strategies.
### Examples and Tutorials
Here are some examples that show end to end usage of various strategies with Estimator:
1. [End to end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API.
2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/r1/resnet/imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`.
3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/distribution_strategy/resnet_estimator.py) example with TPUStrategy.
## Using `tf.distribute.Strategy` with custom training loops
As you've seen, using `tf.distribute.Strategy` with high level APIs is only a couple of lines of code change. With a little more effort, `tf.distribute.Strategy` can also be used by other users who are not using these frameworks.
TensorFlow is used for a wide variety of use cases and some users (such as researchers) require more flexibility and control over their training loops. This makes it hard for them to use the high level frameworks such as Estimator or Keras. For instance, someone using a GAN may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training. So these users will usually write their own training loops.
For these users, we provide a core set of methods through the `tf.distribute.Strategy` classes. Using these may require minor restructuring of the code initially, but once that is done, the user should be able to switch between GPUs / TPUs / multiple machines by just changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
Note: These APIs are still experimental and we are improving them to make them more user friendly.
First, we create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
```
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.train.GradientDescentOptimizer(0.1)
```
Next, we create the input dataset and call `tf.distribute.Strategy.experimental_distribute_dataset` to distribute the dataset based on the strategy.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
```
Then, we define one step of the training. We will use `tf.GradientTape` to compute gradients and optimizer to apply those gradients to update our model's variables. To distribute this training step, we put it in a function `step_fn` and pass it to `tf.distribute.Strategy.run` along with the inputs from the iterator:
```
def train_step(dist_inputs):
def step_fn(inputs):
features, labels = inputs
logits = model(features)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
train_op = optimizer.minimize(loss)
with tf.control_dependencies([train_op]):
return tf.identity(loss)
per_replica_losses = mirrored_strategy.run(
step_fn, args=(dist_inputs,))
mean_loss = mirrored_strategy.reduce(
tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
return mean_loss
```
A few other things to note in the code above:
1. We used `tf.nn.softmax_cross_entropy_with_logits` to compute the loss. And then we scaled the total loss by the global batch size. This is important because all the replicas are training in sync and number of examples in each step of training is the global batch. So the loss needs to be divided by the global batch size and not by the replica (local) batch size.
2. We used the `strategy.reduce` API to aggregate the results returned by `tf.distribute.Strategy.run`. `tf.distribute.Strategy.run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can `reduce` them to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results(results)` to get the list of values contained in the result, one per local replica.
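Point 1 above can be checked with plain arithmetic. The sketch below (hypothetical loss values, no TensorFlow) shows why dividing each replica's summed loss by the *global* batch size and then summing across replicas yields the mean loss over the full global batch:

```python
# Hypothetical per-example losses: 2 replicas, 5 examples each,
# so the global batch covers 10 examples per step.
per_example_losses_replica_0 = [0.5, 1.0, 1.5, 2.0, 2.5]
per_example_losses_replica_1 = [1.0, 1.0, 1.0, 1.0, 1.0]
global_batch_size = 10

# Each replica sums its own losses and divides by the GLOBAL batch size...
loss_0 = sum(per_example_losses_replica_0) / global_batch_size
loss_1 = sum(per_example_losses_replica_1) / global_batch_size

# ...so a SUM reduction across replicas (ReduceOp.SUM) gives the mean
# loss over all 10 examples, not a per-replica mean.
mean_loss = loss_0 + loss_1
print(mean_loss)  # 1.25
```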
Finally, once we have defined the training step, we can initialize the iterator and variables and run the training in a loop:
```
with mirrored_strategy.scope():
input_iterator = dist_dataset.make_initializable_iterator()
iterator_init = input_iterator.initializer
var_init = tf.global_variables_initializer()
loss = train_step(input_iterator.get_next())
with tf.Session() as sess:
sess.run([var_init, iterator_init])
for _ in range(10):
print(sess.run(loss))
```
In the example above, we used `tf.distribute.Strategy.experimental_distribute_dataset` to provide input to your training. We also provide the `tf.distribute.Strategy.make_experimental_numpy_dataset` to support numpy inputs. You can use this API to create a dataset before calling `tf.distribute.Strategy.experimental_distribute_dataset`.
This covers the simplest case of using `tf.distribute.Strategy` API to distribute custom training loops. We are in the process of improving these APIs. Since this use case requires more work on the part of the user, we will be publishing a separate detailed guide in the future.
### What's supported now?
In the TF nightly release, we support training with custom training loops using `MirroredStrategy` and `TPUStrategy` as shown above. Support for other strategies, such as `MultiWorkerMirroredStrategy`, will be coming in the future.
### Examples and Tutorials
Here are some examples for using distribution strategy with custom training loops:
1. [Example](https://github.com/tensorflow/tensorflow/blob/5456cc28f3f8d9c17c645d9a409e495969e584ae/tensorflow/contrib/distribute/python/examples/mnist_tf1_tpu.py) to train MNIST using `TPUStrategy`.
## Other topics
In this section, we will cover some topics that are relevant to multiple use cases.
<a id="TF_CONFIG">
### Setting up TF\_CONFIG environment variable
</a>
For multi-worker training, as mentioned before, you need to set "TF\_CONFIG" environment variable for each
binary running in your cluster. The "TF\_CONFIG" environment variable is a JSON string which specifies what
tasks constitute a cluster, their addresses and each task's role in the cluster. We provide a Kubernetes template in the
[tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo which sets
"TF\_CONFIG" for your training tasks.
One example of "TF\_CONFIG" is:
```
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": ["host1:port", "host2:port", "host3:port"],
"ps": ["host4:port", "host5:port"]
},
"task": {"type": "worker", "index": 1}
})
```
This "TF\_CONFIG" specifies that there are three workers and two ps tasks in the
cluster, along with their hosts and ports. The "task" part specifies the
role of the current task in the cluster: worker 1 (the second worker). Valid roles in a cluster are
"chief", "worker", "ps" and "evaluator". There should be no "ps" job except when using `tf.distribute.experimental.ParameterServerStrategy`.
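To make the structure concrete, here is a small sketch of how a binary in the cluster might read its own role and address out of "TF\_CONFIG". The hosts and ports are placeholders, as in the example above:

```python
import json
import os

# Placeholder cluster spec, mirroring the example above.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:port", "host2:port", "host3:port"],
        "ps": ["host4:port", "host5:port"],
    },
    "task": {"type": "worker", "index": 1},
})

tf_config = json.loads(os.environ["TF_CONFIG"])
task = tf_config["task"]
# Look up this task's own address in the cluster spec.
my_address = tf_config["cluster"][task["type"]][task["index"]]
print(task["type"], task["index"], my_address)  # worker 1 host2:port
```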
## What's next?
`tf.distribute.Strategy` is actively under development. We welcome you to try it out and provide your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
<a href="https://colab.research.google.com/github/andretocci/notebooks/blob/master/Post_blog_Attribution(MAM).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## DP6 Blog Demo
Importing the required packages:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
!pip install marketing_attribution_models
from marketing_attribution_models import MAM
#@title Importing and preparing the datasets
import glob
from google.colab import drive
drive.mount('/content/drive/')
filepath = '/content/drive/Shared drives/Data Science/Ultron/Recipes/DP6tribution/'
filename = '*'
all_files = glob.glob(filepath + filename)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
```
For this demonstration, we chose a dataset well known to marketing professionals: the Google Merchandise Store. This dataset was extracted from the Google Analytics demo account, which any Google user can access.
```
# Reading the Google Merchandise Store data, selecting only the columns that will be used to build the models
df = pd.read_csv(all_files[0],
usecols=['channelGrouping',
'date',
'fullVisitorId',
'totals_transactions',
'totals_transactionRevenue'],
dtype = {'channelGrouping':str,
'fullVisitorId':str,
'totals_transactions' : np.float64,
'totals_transactionRevenue' : np.float64} )
# Inspecting the returned DataFrame
df.head()
```
But before applying the model with MAM, we need to make a few adjustments to the dataset.
Looking at .head() and .info(), we can see that:
- The date column is not in datetime format;
- The totals_transactions and totals_transactionRevenue columns contain missing values;
- fullVisitorId was read as an object; we must be careful not to read it as a number, since the ID 001 may differ from the ID 1;
- The dataset has no journey grouping.
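The third point can be seen directly in Python: once an ID string is cast to a number, leading zeros are lost.

```python
# Why fullVisitorId must stay a string: parsed as a number, distinct IDs
# such as '001' and '1' would collide. (Illustrative values, not real IDs.)
id_a, id_b = '001', '1'
print(id_a == id_b)            # False: kept distinct as strings
print(int(id_a) == int(id_b))  # True: information lost as numbers
```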
```
#Inspecting the selected columns
df.info()
```
So we convert the date column to the correct format, fill in the null values, and create a column indicating whether or not a conversion took place in that session.
```
df['date_tratada'] = pd.to_datetime( df['date'], format='%Y%m%d' )
df['totals_transactions'].fillna(0, inplace=True)
df['totals_transactionRevenue'].fillna(0, inplace=True)
df['has_transaction'] = df.totals_transactions.apply(lambda x: True if x > 0 else False)
df.head()
```
Now that the dataset is prepared, we can create our MAM object. Since the data is at user-session granularity and there is no journey-grouping ID, it is important to pass True for the group_channels and create_journey_id_based_on_conversion parameters.
```
DP_tribution = MAM(df,
channels_colname='channelGrouping',
group_channels=True,
group_channels_by_id_list=['fullVisitorId'],
group_timestamp_colname='date_tratada',
journey_with_conv_colname='has_transaction',
create_journey_id_based_on_conversion = True,
conversion_value='totals_transactionRevenue')
```
Accessing the .DataFrame attribute, we can see that the dataset changed significantly: the granularity of the table went from session to journey, and there is a new journey-id column.
```
DP_tribution.DataFrame.sort_values('conversion_value', ascending=False).head()
```
### Applying the Models
To run the Shapley Value model, just apply the .attribution_shapley() method.
The **size parameter limits the number of unique channels in a journey** and is set to the **last 4 by default**. This is because the number of iterations grows exponentially with the number of channels, on the order of 2^N, where N is the number of channels.
The methodology for computing marginal contributions can be changed through the **order parameter**, which by default computes the contribution of each **combination of channels regardless of the order in which they appear** across the different journeys.
Finally, the quantity on which the Shapley Value is computed can be changed through **values_col**, which by default uses the **conversion rate**, a way of **accounting for non-conversions in the model**. However, we can also use the total number of conversions or the revenue generated by them, as shown below.
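To make the exponential cost concrete, here is a minimal Shapley value computation for two channels in pure Python; the coalition worths are invented for illustration, not MAM output.

```python
# A minimal Shapley value computation for channel attribution.
# The coalition worths (conversions generated by each subset of channels)
# are made-up numbers, not taken from the Merchandise Store data.
from itertools import combinations
from math import factorial

def shapley(channels, worth):
    """worth maps a frozenset of channels to the value that coalition generates."""
    n = len(channels)
    phi = {}
    for ch in channels:
        others = [c for c in channels if c != ch]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (worth[s | {ch}] - worth[s])  # marginal contribution
        phi[ch] = total
    return phi

worth = {frozenset(): 0,
         frozenset({'Direct'}): 10,
         frozenset({'Paid Search'}): 20,
         frozenset({'Direct', 'Paid Search'}): 40}
print(shapley(['Direct', 'Paid Search'], worth))
```

Because every subset of channels enters the sum, the cost grows as 2^N, which is exactly why the size parameter caps the number of unique channels per journey.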
```
shapley_results = DP_tribution.attribution_shapley()
```
The method returns a tuple containing:
- The results grouped by unique journeys;
```
shapley_results[0]
```
- And the results grouped by channel:
```
shapley_results[1].reset_index()
```
To run the **Markov Chains** model, just apply the .attribution_markov() method.
The transition_to_same_state input parameter indicates whether the probability of transitioning to the same state is returned. This setting **does not affect the aggregated results** attributed to each channel, **but it does affect the values observed in the transition matrix**.
```
markov_results = DP_tribution.attribution_markov(transition_to_same_state=False)
```
The method returns a tuple containing:
- The journey-level results, which can also be inspected through .DataFrame;
```
markov_results[0].head()
```
- The results grouped by channel, as returned by the Shapley model above;
```
markov_results[1]
```
- Transition matrix:
```
ax, fig = plt.subplots(figsize=(15,10))
sns.heatmap(markov_results[2].round(3), cmap="YlGnBu", annot=True, linewidths=.5)
```
- **Removal Effect**, the fourth output of attribution_markov which, as detailed earlier, helps us understand the importance of each analyzed channel:
```
ax, fig = plt.subplots(figsize=(2,5))
sns.heatmap(markov_results[3].round(3), cmap="YlGnBu", annot=True, linewidths=.5)
```
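As a sanity check on the idea, the removal effect can be reproduced on a toy first-order Markov chain. The transition probabilities below are invented for illustration; they are not the ones attribution_markov estimates from the data.

```python
# A toy removal-effect computation on a first-order Markov chain,
# in pure Python with made-up transition probabilities.

def conversion_prob(P, start='(start)', conv='(conversion)', null='(null)'):
    """Probability of eventually reaching `conv` from `start` (fixed-point iteration)."""
    prob = {s: 0.0 for s in P}
    prob[conv] = 1.0
    for _ in range(200):  # enough sweeps for convergence on this small chain
        for s, row in P.items():
            if s not in (conv, null):
                prob[s] = sum(p * prob[t] for t, p in row.items())
    return prob[start]

P = {
    '(start)': {'Organic': 0.6, 'Paid': 0.4},
    'Organic': {'Paid': 0.3, '(conversion)': 0.2, '(null)': 0.5},
    'Paid':    {'Organic': 0.2, '(conversion)': 0.3, '(null)': 0.5},
    '(conversion)': {}, '(null)': {},
}

def remove_channel(P, channel):
    # Drop the channel's state and redirect transitions into it to (null).
    Q = {}
    for s, row in P.items():
        if s == channel:
            continue
        new_row = {}
        for t, p in row.items():
            tgt = '(null)' if t == channel else t
            new_row[tgt] = new_row.get(tgt, 0.0) + p
        Q[s] = new_row
    return Q

base = conversion_prob(P)
removal_effect = {ch: 1 - conversion_prob(remove_channel(P, ch)) / base
                  for ch in ('Organic', 'Paid')}
print(base, removal_effect)
```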
#### Comparing the Results
Now that we have run both models, we can compare their results by accessing the group_by_channels_models attribute, which returns all the different models applied in the previous steps.
```
DP_tribution.group_by_channels_models
DP_tribution.group_by_channels_models.sum()
DP_tribution.plot()
```
```
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
from os import listdir, path
import numpy as np
from collections import defaultdict
import datetime
import random
random.seed(42) # Keep the file-shuffling order stable across runs while creating training datasets
```
## Global variables
```
seq_length = 36 # This will be used to keep the fixed input size for the first CNN layer
dim = 6 # Number of datapoints in a single reading accX,accY,accZ,gyrX,gyrY,gyrZ
num_classes = 10 # Number of output classes [0,9]
```
## Sequence Padding
#### When collecting sequence data, individual samples have different lengths. Since the input data for a convolutional neural network must be a single tensor, samples need to be padded. The sequences are padded at the beginning and at the end with neighboring values.
```
def padding(data):
padded_data = []
noise_level = [ 20, 20, 20, 0.2, 0.2, 0.2 ]
tmp_data = (np.random.rand(seq_length, dim) - 0.5) * noise_level + data[0]
tmp_data[(seq_length - min(len(data), seq_length)):] = data[:min(len(data), seq_length)]
padded_data.append(tmp_data)
tmp_data = (np.random.rand(seq_length, dim) - 0.5) * noise_level + data[-1]
tmp_data[:min(len(data), seq_length)] = data[:min(len(data), seq_length)]
padded_data.append(tmp_data)
return padded_data
```
## Convert to TensorFlow dataset, keeps data and labels together
```
def build_dataset(data, label):
# Add 2 padding, initialize data and label
padded_num = 2
length = len(data) * padded_num
features = np.zeros((length, seq_length, dim))
labels = np.zeros(length)
# Get padding for train, valid and test
for idx, (data, label) in enumerate(zip(data, label)):
padded_data = padding(data)
for num in range(padded_num):
features[padded_num * idx + num] = padded_data[num]
labels[padded_num * idx + num] = label
# Turn into tf.data.Dataset
dataset = tf.data.Dataset.from_tensor_slices((features, labels.astype("int32")))
return length, dataset
```
## Time Warping
```
def time_warping(molecule, denominator, data):
tmp_data = [[0 for i in range(len(data[0]))] for j in range((int(len(data) / molecule) - 1) * denominator)]
for i in range(int(len(data) / molecule) - 1):
for j in range(len(data[i])):
for k in range(denominator):
tmp_data[denominator * i + k][j] = (data[molecule * i + k][j] * (denominator - k)
+ data[molecule * i + k + 1][j] * k) / denominator
return tmp_data
```
## Data augmentation
```
def augment_data(original_data, original_label):
new_data = []
new_label = []
for idx, (data, label) in enumerate(zip(original_data, original_label)): # pylint: disable=unused-variable
# Original data
new_data.append(data)
new_label.append(label)
# Shift Sequence
for num in range(5): # pylint: disable=unused-variable
new_data.append((np.array(data, dtype=np.float32) +
(random.random() - 0.5) * 200).tolist())
new_label.append(label)
# Add Random noise
tmp_data = [[0 for i in range(len(data[0]))] for j in range(len(data))]
for num in range(5):
for i in range(len(tmp_data)):
for j in range(len(tmp_data[i])):
tmp_data[i][j] = data[i][j] + 5 * random.random()
new_data.append(tmp_data)
new_label.append(label)
# Time warping
fractions = [(3, 2), (5, 3), (2, 3), (3, 4), (9, 5), (6, 5), (4, 5)]
for molecule, denominator in fractions:
new_data.append(time_warping(molecule, denominator, data))
new_label.append(label)
# Movement amplification
for molecule, denominator in fractions:
new_data.append(
(np.array(data, dtype=np.float32) * molecule / denominator).tolist())
new_label.append(label)
return new_data, new_label
```
## Load data from files
```
def load_data(data_type, files):
data = []
labels = []
random.shuffle(files)
for file in files:
with open(file) as f:
label = path.splitext(file)[0][-1]
labels.append(label)
readings = []
for line in f:
reading = line.strip().split(',')
readings.append([float(i) for i in reading[0:6]])
data.append(readings)
if data_type == 'train':
data, labels = augment_data(data, labels)
return build_dataset(data, labels)
```
## Prepare training, validation, and test datasets
```
files_path = defaultdict(list)
dir = './data'
for filename in listdir(dir):
if filename.endswith('.csv'):
digit = path.splitext(filename)[0][-1]
files_path[digit].append(path.join(dir, filename))
train_files = []
validation_files = []
test_files = []
for digit in files_path:
random.shuffle(files_path[digit])
train_split = int(len(files_path[digit]) * 0.6) # 60%
validation_split = train_split + int(len(files_path[digit]) * 0.2) # 20%
train_files += files_path[digit][:train_split]
validation_files += files_path[digit][train_split:validation_split]
# remaining 20%
test_files += files_path[digit][validation_split:]
train_length, train_data = load_data('train', train_files)
validation_length, validation_data = load_data('validation', validation_files)
test_length, test_data = load_data('test', test_files )
print('train_length={} validation_length={} test_length={}'.format(train_length, validation_length, test_length))
```
## Build a sequential model
```
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(8, (3, 3), padding="same", activation="relu", input_shape=(seq_length, dim, 1)),
tf.keras.layers.Conv2D(8, (3, 3), padding="same", activation="relu"),
tf.keras.layers.MaxPool2D((2, 2)),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(8, (3, 3), padding="same", activation="relu"),
tf.keras.layers.MaxPool2D((2, 2), padding="same"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
tf.keras.layers.MaxPool2D((2, 2), padding="same"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(32, activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(num_classes, activation="softmax")
])
model.summary()
```
## Compile and start training
```
epochs = 100
batch_size = 64
steps_per_epoch=1000
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
def reshape_function(data, label):
reshaped_data = tf.reshape(data, [-1, dim, 1])
return reshaped_data, label
train_data = train_data.map(reshape_function)
validation_data = validation_data.map(reshape_function)
train_data = train_data.batch(batch_size).repeat()
validation_data = validation_data.batch(batch_size)
logdir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
# Uncomment the lines below if you'd like to see how training proceeds
# %load_ext tensorboard
# %tensorboard --logdir {logdir}
model.fit(
train_data,
epochs=epochs,
validation_data=validation_data,
steps_per_epoch=steps_per_epoch,
validation_steps=int((validation_length - 1) / batch_size + 1),
callbacks=[tensorboard_callback])
```
## Evaluate the trained model on test dataset
```
test_data = test_data.map(reshape_function)
test_labels = np.zeros(test_length)
# There is no easy function to get the labels back from the tf.data.Dataset :(
# Need to iterate over dataset
idx = 0
for data, label in test_data:
test_labels[idx] = label.numpy()
idx += 1
test_data = test_data.batch(batch_size)
loss, acc = model.evaluate(test_data)
pred = np.argmax(model.predict(test_data), axis=1)
# Create a confusion matrix to see how model predicts
confusion = tf.math.confusion_matrix(labels=tf.constant(test_labels), predictions=tf.constant(pred), num_classes=num_classes)
print(confusion)
```
## Convert model to TFLite format
### Note: Currently quantized TFLite format does not work with TFLite Micro library
```
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
open("model_quantized.tflite", "wb").write(tflite_model)
```
## Best Practice 1 → Using enumerate() - Fetch elements from a list
```
# List Variable
example = ['use','enumerate','instead','of','iteration']
# Ideal Way
for i in range(len(example)):
print(f"# {i + 1}: {example[i]}")
# Pythonic way - enumerate
for i, value in enumerate(example, 1):
print(f"# {i}: {value}")
```
## Best Practice 2 → Using zip() - Fetch elements from multiple lists
```
# Lists
Employees = ['Employee1','Employee2','Employee3','Employee4']
Age = [30,25,35,40]
# Ideal Way
for i in range(len(Employees)):
employee = Employees[i]
age = Age[i]
print(f"Employee name is {employee} and age is {age}")
# Pythonic way - zip
for employee, age in zip(Employees, Age):
print(f"Employee name is {employee} and age is {age}")
```
## Best Practice 3 → Using reversed() - Fetch elements in reverse
```
# Lists
Employees = ['Employee1','Employee2','Employee3','Employee4']
# Ideal way
for i in range(1,len(Employees) + 1):
print(f"Approach 1 - Employee came to office after covid 19 is {Employees[-i]}")
for employee in Employees[::-1]:
print(f"Approach 2 - Employee came to office after covid 19 is {employee}")
# Pythonic way - reversed()
for employee in reversed(Employees):
print(f"Using revered - Employee came to office after covid 19 is {employee}")
```
## Best Practice 4 → Using filter() - Data Filtering
```
# List
numbers = [1,2,3,4,5,6,7,8,9,10]
#Ideal way
for number in numbers:
if number % 2:
print(f"Odd Number : {number}")
# Pythonic way - filter()
for number in filter(lambda x: x %2, numbers):
print(f"Odd Number : {number}")
```
## Best Practice 5 → Using chain() - Concatenate values from lists
```
from itertools import chain
#Lists
oddValues = [1,3,5,7,9]
evenValues = [2,4,6,8,10]
# Ideal way
values = oddValues + evenValues
for value in values:
print(f"value is : {value}")
# Pythonic way - chain()
for value in chain(oddValues, evenValues):
print(f"value is : {value}")
```
## Best Practice 6 → Using dict.items() - Retrieve keys & values from a dictionary
```
# Dict
Employees = {"Employee1": 30, "Employee2": 35, "Employee3": 40, "Employee4": 45}
#Ideal way
for key in Employees:
print(f"Employee Name is : {key}")
for key in Employees.keys():
print(f"Employee Name is : {key}")
for value in Employees.values():
print(f"Age is : {value}")
for value in Employees:
print(f"Age is : {Employees[value]}")
#Pythonic way
for key, value in Employees.items():
print(f"Employee came to office after covid 19 is {key} and age is {value}")
```
## Best Practice 7 → Using comprehensions - Comprehensions for lists, dictionaries & sets
```
### list
numbers = [1,2,3,4,5,6,7,8,9,10]
#Ideal way
squaredNumbers = list()
for square in numbers:
squaredNumbers.append(square * square)
print(squaredNumbers)
#Using list comprehension
squaredNumbers = [x * x for x in numbers]
print(squaredNumbers)
#Ideal way
squaredNumbers = dict()
for square in numbers:
squaredNumbers[square] = square * square
#Using dict comprehension
squaredNumbers = {x: x*x for x in numbers}
print(squaredNumbers)
#Ideal way
squaredNumbers = set()
for square in numbers:
    squaredNumbers.add(square * square)
print(squaredNumbers)
#Using set comprehension
squaredNumbers = {x*x for x in numbers}
print(squaredNumbers)
```
## Best Practice 8 → Using the else clause - For and While Loops
```
# For Loop
for n in range(2, 10):
for x in range(2, n):
if n % x == 0:
            print(n, 'equals', x, '*', n//x)
break
else:
# loop fell through without finding a factor
print(n, 'is a prime number')
# While Loop
count = 2
while (count < 1):
count = count+1
print(count)
break
else:
print("No Break")
```
## Best Practice 9 → **Using the Ternary Operator** - Ternary Operator
```
#Traditional
value = True
if value:
v = 1
else:
v = 0
print(v)
#Using ternary
value = True
v = 1 if value else 0
print(v)
```
# Müller Brown II
## Previously on Müller Brown I
```
import numpy as np
from taps.paths import Paths
from taps.models import MullerBrown
from taps.coords import Cartesian
from taps.visualize import view
N = 300
x = np.linspace(-0.55822365, 0.6234994, N)
y = np.linspace(1.44172582, 0.02803776, N)
coords = Cartesian(coords=np.array([x, y]))
model = MullerBrown()
paths = Paths(coords=coords, model=model)
view(paths, viewer='MullerBrown')
```
## Müller Brown II
Here, we introduce a less fun but more practical tool in TAPS: the `Projector`. Optimizing a pathway for an atomic system demands a vast amount of computational time for searching the configuration space, which is in many cases unfeasible. The `Projector` class maps the coordinate system into a (usually) smaller dimension so that the user can shrink the configuration space to be searched. For example, the `Mask` projector effectively hides some atoms from the `PathFinder` object, so the positions of those atoms stay fixed while the pathway is optimized; for a surface reaction this is a useful restriction for a slab. Alternatively, the user can optimize only the sine components of the pathway, again effectively reducing the search space, or combine both approaches. The `Projector` class has three keywords: `domain`, `codomain`, and `pipeline`. Unfortunately, `domain` and `codomain` exist only for future use: they are intended to control the class of the pathway while projecting from one space to another, but currently they exist only as a concept. The third keyword, `pipeline`, binds multiple `Projector` instances into one, so that the user can treat several projectors as a single one. For example, a `Sine` projector can be put into a `Mask`'s pipeline so that some atoms are ignored and the rest are Fourier transformed. For simplicity, we demonstrate only the `Sine` projector here.
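The pipeline idea amounts to function composition. The sketch below only mimics the names used in the text (`x`, `x_inv`, `Pipeline`); it is NOT the actual TAPS `Projector` API.

```python
# Conceptual sketch of chaining projectors: apply forward maps in order,
# inverse maps in reverse. Not the real TAPS Projector interface.

class Shift:
    def __init__(self, d): self.d = d
    def x(self, coords):     return [c + self.d for c in coords]
    def x_inv(self, coords): return [c - self.d for c in coords]

class Scale:
    def __init__(self, s): self.s = s
    def x(self, coords):     return [c * self.s for c in coords]
    def x_inv(self, coords): return [c / self.s for c in coords]

class Pipeline:
    """Apply projectors left to right; invert them in reverse order."""
    def __init__(self, *prjs): self.prjs = prjs
    def x(self, coords):
        for p in self.prjs:
            coords = p.x(coords)
        return coords
    def x_inv(self, coords):
        for p in reversed(self.prjs):
            coords = p.x_inv(coords)
        return coords

prj = Pipeline(Shift(1.0), Scale(2.0))
print(prj.x([0.0, 1.0]))             # [2.0, 4.0]
print(prj.x_inv(prj.x([0.0, 1.0])))  # round-trips back to [0.0, 1.0]
```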
Unfortunately, using the `Sine` projector is not an automatic procedure. The user has to supply four additional keywords: the total length of the `coords`, $N$; the number of sine components, $N_k$; the image of the initial step, `init`; and the final state, `fin`. The `Sine` projector uses `scipy.fftpack` to carry out the fast Fourier transform, with the type-1 sine transformation
$$y_k = 2 \sum_{n=0}^{N-1} x_n \sin\left(\frac{\pi(k+1)(n+1)}{N+1}\right)$$
Both ends of the path must be fixed at 0, which gives stability when removing high-frequency components. Since the transform assumes both ends are 0, the coordinates at both ends should be given explicitly; thus, the user must specify them manually. The script below shows the creation of the `Sine` projector, manipulates a sine component of the coordinates, and visualizes the effect.
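As a quick numerical check, the type-1 sine transform above can be written out in plain Python; `scipy.fftpack.dst(x, type=1)` computes exactly this sum.

```python
# Spelling out the type-1 sine transform from the formula above.
import math

def dst1(x):
    N = len(x)
    return [2 * sum(x[n] * math.sin(math.pi * (k + 1) * (n + 1) / (N + 1))
                    for n in range(N))
            for k in range(N)]

x = [0.5, -1.0, 2.0, 0.25]
y = dst1(x)
# DST-I is its own inverse up to a factor of 2(N+1), so components of y
# can be manipulated and then transformed back, as the script below does.
x_back = [v / (2 * (len(x) + 1)) for v in dst1(y)]
print(x_back)  # recovers x up to floating-point error
```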
```
from taps.projectors import Sine
# We are going to use only 10% of the coordinate information (Nk = 30 of N = 300)
Nk = N - 270
prj = Sine(N=N, Nk=Nk,
init=paths.coords.coords[..., 0].copy(),
fin=paths.coords.coords[..., -1].copy())
sine_coords = prj.x(paths.coords())
# Sine component manipulation
sine_coords.coords[:, 1] = -5
paths.coords = prj.x_inv(sine_coords)
view(paths, viewer='MullerBrown')
```
## Pathway optimization on the sine components.
The previous MB example was carried out directly in the 2D Cartesian coordinates. In this example, we use the `Sine` projector during the optimization process. The settings and keywords are the same as in the previous example, with the additional `prj=prj` keyword passed to `DAO`.
```
from taps.pathfinder import DAO
action_kwargs = {
'Onsager Machlup':{
'gam': 1.,
},
'Energy Restraint':{
'muE': 1.,
'Et': -0.45
}
}
search_kwargs = {"method":"L-BFGS-B"}
finder = DAO(action_kwargs=action_kwargs,
search_kwargs=search_kwargs,
prj=prj)
paths.finder = finder
paths.coords.epoch=6
paths.search()
```
The text below is the output of the previous example, where the Cartesian coordinates were used directly as input $(N=300)$. Here we used only 10% $(N_k=30)$ of the coordinate information during the optimization process. This yields a huge gain in computational cost at only a small sacrifice in accuracy, without losing the continuity required by the action calculation.
## Results in the previous example, Müller Brown I
```
=================================
Parameters
=================================
Onsager Machlup
gam : 1.0
Energy Restraint
muE : 1.0
Et : -0.45
Iter nfev njev S dS_max
Converge : 9784 10021 10021 1.8258 0.0070
Converge : 9785 10040 10040 1.8258 0.0070
=================================
Results
=================================
Onsager Machlup : 1.2556032942672488
Energy Restraint : 0.5702371630130022
Total S : 1.825840457280251
```
```
view(paths, viewer="MullerBrown")
```
Looking at the profile of the trajectory, you can see that it lacks high-frequency components.
## Database construction
To make use of the data generated while calculating pathways, we need to construct a proper database. The `ImageData` class can save the calculation results of an atomic or model system as shown below.
```python
from taps.db import ImageData
imgdata = ImageData()
imgdata.add_data(paths, coords=(DxN or 3xDxN)size array)
```
TAPS aims to make string-method pathway calculations more efficient by reusing previously calculated data. To do so, we first construct the database:
```
from taps.db import ImageDatabase
imgdata = ImageDatabase(filename="mullerbrown.db")
imgdata.add_data(paths, coords=paths.coords(index=[0, -1]))
```
## Gaussian Potential
With the data constructed in this example, we are going to build a Gaussian PES. Using only three data points (`init`, `fin`, and the middle point of the trajectory), we construct the Gaussian potential.
The kernel we use here is the standard (squared exponential) kernel,
$$ K(x_1, x_2) = \sigma_f e^{-(x_1-x_2)^2/2l^2}$$
With a zero mean, the expectation value and covariance of the potential are
$$\mu = K_* (K_y+\sigma_n)^{-1} \mathbf{Y} $$
$$\Sigma = K_{**} - K_* (K_y+\sigma_n)^{-1} K_*^T$$
The hyperparameters are found by regression, minimizing the negative log likelihood.
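A minimal sketch of these formulas, assuming a squared exponential kernel with illustrative hyperparameters and two made-up 1D training points (not values TAPS would fit):

```python
# Zero-mean GP regression with a squared exponential kernel, two training
# points, and an explicit 2x2 inverse. All numbers are illustrative.
import math

sigma_f, l2, sigma_n = 0.5, 0.1, 1e-8

def k(x1, x2):
    # K(x1, x2) = sigma_f * exp(-(x1 - x2)^2 / (2 l^2))
    return sigma_f * math.exp(-(x1 - x2) ** 2 / (2 * l2))

X = [0.0, 1.0]    # training inputs
Y = [-0.5, -1.2]  # training targets (made-up energies)

# (K_y + sigma_n I) and its explicit 2x2 inverse
a, b = k(X[0], X[0]) + sigma_n, k(X[0], X[1])
c, d = k(X[1], X[0]), k(X[1], X[1]) + sigma_n
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

def posterior(x):
    ks = [k(x, X[0]), k(x, X[1])]                               # K_*
    alpha = [sum(inv[i][j] * Y[j] for j in range(2)) for i in range(2)]
    mu = sum(ks[i] * alpha[i] for i in range(2))                # K_* (K_y+s)^-1 Y
    kv = [sum(inv[i][j] * ks[j] for j in range(2)) for i in range(2)]
    var = k(x, x) - sum(ks[i] * kv[i] for i in range(2))        # K_** - K_* (K_y+s)^-1 K_*^T
    return mu, var

mu0, var0 = posterior(0.0)
print(mu0, var0)  # at a training point the mean is pulled to Y[0] and the variance collapses
```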
```
from taps.models import Gaussian
paths.imgdata = imgdata
model = Gaussian(real_model=model)
paths.model = model
paths.add_data(index=[0, paths.coords.N // 2, -1])
paths.model.kernel.hyperparameters['sigma_f'] = 0.5
paths.model.kernel.hyperparameters['l^2'] = 0.1
view(paths, viewer="MullerBrown", gaussian=True)
```
## $\mu$ and $\Sigma$ Maps
In the second and third map images you can see the mean expectation value, $\mu$, and the uncertainty map, $\Sigma$, with additional information plotted on them, such as the point of maximum uncertainty and the maximum estimated potential.
They look quite different from the real PES, the first map. Below we show how the approximation of the PES approaches the real one near the minimum energy pathway (MEP). We finish this tutorial by optimizing the pathway on the Gaussian PES.
```
paths.finder.action_kwargs['Energy Restraint'].update({'Et': -1.2})
paths.search()
view(paths, viewer="MullerBrown", gaussian=True)
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span><ul class="toc-item"><li><span><a href="#Example-01:-Extract-text" data-toc-modified-id="Example-01:-Extract-text-1.1"><span class="toc-item-num">1.1 </span>Example 01: Extract text</a></span><ul class="toc-item"><li><span><a href="#Write-the-code-for-the-main-steps-aiming-web-scraping" data-toc-modified-id="Write-the-code-for-the-main-steps-aiming-web-scraping-1.1.1"><span class="toc-item-num">1.1.1 </span>Write the code for the main steps aiming web scraping</a></span><ul class="toc-item"><li><span><a href="#Send-request-and-catch-response" data-toc-modified-id="Send-request-and-catch-response-1.1.1.1"><span class="toc-item-num">1.1.1.1 </span>Send request and catch response</a></span></li><li><span><a href="#get-the-content-of-the-response" data-toc-modified-id="get-the-content-of-the-response-1.1.1.2"><span class="toc-item-num">1.1.1.2 </span>get the content of the response</a></span></li><li><span><a href="#parse-webpage" data-toc-modified-id="parse-webpage-1.1.1.3"><span class="toc-item-num">1.1.1.3 </span>parse webpage</a></span></li><li><span><a href="#Extra:-Use-prettify-to-have-a-'prettier'-view-of-the-page's-code" data-toc-modified-id="Extra:-Use-prettify-to-have-a-'prettier'-view-of-the-page's-code-1.1.1.4"><span class="toc-item-num">1.1.1.4 </span>Extra: Use prettify to have a 'prettier' view of the page's code</a></span></li></ul></li><li><span><a href="#Title?" data-toc-modified-id="Title?-1.1.2"><span class="toc-item-num">1.1.2 </span>Title?</a></span></li><li><span><a href="#Text-per-section-(e.g.-1.-What-is-cryptocurrency?)" data-toc-modified-id="Text-per-section-(e.g.-1.-What-is-cryptocurrency?)-1.1.3"><span class="toc-item-num">1.1.3 </span>Text per section (e.g. 1. 
What is cryptocurrency?)</a></span></li></ul></li><li><span><a href="#Example-02:-Extract-table-info" data-toc-modified-id="Example-02:-Extract-table-info-1.2"><span class="toc-item-num">1.2 </span>Example 02: Extract table info</a></span></li><li><span><a href="#Example-03:-Extract-information-from-hyperlink" data-toc-modified-id="Example-03:-Extract-information-from-hyperlink-1.3"><span class="toc-item-num">1.3 </span>Example 03: Extract information from hyperlink</a></span></li></ul></li></ul></div>
# Introduction
```
# importing packages
import requests
from bs4 import BeautifulSoup
import pandas as pd
```
## Example 01: Extract text
```
url_01 = "https://www.nerdwallet.com/article/investing/cryptocurrency-7-things-to-know#:~:text=A%20cryptocurrency%20(or%20%E2%80%9Ccrypto%E2%80%9D,sell%20or%20trade%20them%20securely."
```
### Write the code for the main steps aiming web scraping
#### Send request and catch response
```
# response =
```
#### get the content of the response
```
# content =
```
#### parse webpage
```
# parser =
```
#### Extra: Use prettify to have a 'prettier' view of the page's code
`parser` is a `BeautifulSoup object`, which represents the document as a nested data structure.
The `prettify()` method will turn a Beautiful Soup parse tree into a nicely formatted Unicode string, making the tree structure much easier to visualize.
```
def parse_website(url):
"""
Parse content of a website
Args:
        url (str): url of the website whose content we want to access
Return:
parser: representation of the document as a nested data structure.
"""
# Send request and catch response
response = requests.get(url)
# get the content of the response
content = response.content
# parse webpage
parser = BeautifulSoup(content, "lxml")
return parser
parser_01 = parse_website(url_01)
```
### Title?
```
# access title of the web page
#obtain text between tags
```
### Text per section (e.g. 1. What is cryptocurrency?)
1. Access subtitles (titles of sections e.g. "Cryptocurrency definition")

```
# subtitles =
# texts =
# text_01 = texts[0:6]
# text_01
```
Apply some cleaning to the piece of text below if you have time...
```
# text_01 = text_01[0:4]
```
## Example 02: Extract table info
```
url_02 = "https://www.worldometers.info/population/countries-in-the-eu-by-population/"
parser_02 = parse_website(url_02)
print(parser_02.prettify())
```

```
# Obtain information from tag <table>
# table =
# table
# tip: prettify table to see better the information you are looking for
# Obtain column names within tag <th> with attribute col
# list_col =
# Clean text
# list_col =
# list_col
# Create a dataframe
# EU_population_data =
```
From the prettified table we see that the rows are located under the `<tr>` tag and the items under the `<td>` tag. Use this info and fill your dataframe.
```
# Create a for loop to fill EU_population_data
# EU_population_data
```
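One possible shape of that row-filling loop, sketched with only the standard library so it runs on a fixed HTML snippet; the notebook itself parses the live page with BeautifulSoup, and the sample rows below are just illustrative values.

```python
# Hypothetical sketch: walk <tr> rows and collect <td>/<th> cells,
# using html.parser instead of BeautifulSoup for self-containment.
from html.parser import HTMLParser

class TableParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False
    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._row = []
        elif tag in ('td', 'th'):
            self._in_cell = True
    def handle_endtag(self, tag):
        if tag == 'tr' and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag in ('td', 'th'):
            self._in_cell = False
    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())

sample = """<table>
<tr><th>Country</th><th>Population</th></tr>
<tr><td>Germany</td><td>83,783,942</td></tr>
<tr><td>France</td><td>65,273,511</td></tr>
</table>"""
table_parser = TableParser()
table_parser.feed(sample)
header, *rows = table_parser.rows
print(header)  # ['Country', 'Population']
print(rows)
```

Each entry of `rows` can then become a row of the dataframe, with `header` as the column names.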
## Example 03: Extract information from hyperlink
Applying web scraping to [`https://jadsmkbdatalab.nl/voorbeeldcases/`](https://jadsmkbdatalab.nl/voorbeeldcases/).
Right click to `inspect` an element in the webpage. Notice that the information we are looking for is between `<h3>` tags...

In this example, you will face something new. Before doing as usual and using the parser function, check the response using requests.
Which status code did you get?
TIP: Check this [stackoverflow answer](https://stackoverflow.com/questions/38489386/python-requests-403-forbidden)
```
url_03 = 'https://jadsmkbdatalab.nl/voorbeeldcases/'
# response =
# response
# Modify the steps we have learn to solve the issue
# get response
# get the content of the response
# parse webpage
# print(parser_03.prettify())
# find hyperlinks
# links =
# links
# Obtain url of the links
```
Updating function to include headers...
```
def parse_website(url):
"""
Parse content of a website
Args:
        url (str): url of the website whose content we want to access
Return:
parser: representation of the document as a nested data structure.
"""
# Send request and catch response
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.get(url, headers=headers)
# get the content of the response
content = response.content
# parse webpage
parser = BeautifulSoup(content, "lxml")
return parser
# parse and prettify one of the obtained urls
# parser_03_0 =
# find all paragraphs within parser_03_0
# paragraphs =
# paragraphs
# Obtain text of paragraphs
# saving the content of this page
```
# **Neural machine translation with attention**
Today we will train a sequence to sequence (seq2seq) model for Spanish to English translation. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"*
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
## **Download and prepare the dataset**
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
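Step 4 is what `tf.keras.preprocessing.sequence.pad_sequences` will do for us below; in plain Python it amounts to:

```python
# A plain-Python sketch of step 4: pad id sequences at the end with zeros,
# as pad_sequences(..., padding='post') does below. Sequences are made up.
def pad_post(sequences, pad_value=0):
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

seqs = [[1, 5, 2], [1, 7, 8, 9, 2], [1, 2]]
print(pad_post(seqs))  # [[1, 5, 2, 0, 0], [1, 7, 8, 9, 2], [1, 2, 0, 0, 0]]
```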
```
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.strip()
# adding a start and an end token to the sentence
# so that the model know when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
```
### **Limit the size of the dataset to experiment faster (optional)**
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the number of sentence pairs used (here `num_examples` is set to 100,000; of course, translation quality degrades with less data):
```
# Try experimenting with the size of that dataset
num_examples = 100000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t!=0:
print ("%d ----> %s" % (t, lang.index_word[t]))
print ("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print ()
print ("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
```
### **Create a tf.data dataset**
```
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024 # better if embedding_dim*4
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
## **Write the encoder and decoder model**
Implement an encoder-decoder model with attention, which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs and implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The picture and formulas below are an example of the attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form:
* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder
And the pseudo-code:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax is applied on the last axis by default, but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input position, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.
* `embedding output` = The input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU
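The pseudo-code above can be sanity-checked with a small NumPy sketch (the shapes here — batch 2, max_length 3, units 4 — are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
EO = rng.random((2, 3, 4))     # encoder output: (batch_size, max_length, units)
score = rng.random((2, 3, 1))  # score: (batch_size, max_length, 1)

# softmax over axis 1 (the max_length axis): the weights over input positions sum to 1
weights = np.exp(score) / np.exp(score).sum(axis=1, keepdims=True)

# weighted sum of encoder outputs over the input positions
context = (weights * EO).sum(axis=1)  # context vector: (batch_size, units)
print(context.shape)  # (2, 4)
```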
The shapes of all the vectors at each step have been specified in the comments in the code:
```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(query_with_time_axis) + self.W2(values)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
## **Define the optimizer and the loss function**
```
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
```
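The mask in `loss_function` zeroes out the loss at padding positions (token id 0); a minimal plain-Python illustration of that masking logic (the per-token loss values are made up):

```python
# real target ids, with 0 used as the padding id
real = [4, 7, 0, 0]
per_token_loss = [1.2, 0.8, 0.5, 0.5]  # hypothetical per-token losses

# mask is 1 where the target is a real token, 0 where it is padding
mask = [float(r != 0) for r in real]
masked = [l * m for l, m in zip(per_token_loss, mask)]
print(mask)    # [1.0, 1.0, 0.0, 0.0]
print(masked)  # [1.2, 0.8, 0.0, 0.0]
```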
## **Checkpoints (Object-based saving)**
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
```
## **Training**
1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state, and the decoder input (initially the *start token*) are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients, apply them with the optimizer, and backpropagate.
```
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```
## **Translate**
* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* And store the *attention weights for every time step*.
Note: The encoder output is calculated only once for one input.
```
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
```
## **Restore the latest checkpoint and test**
```
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# as near translation
translate(u'trata de averiguarlo.')
```
# Predict google map review dataset
## model
- kcbert
- fine-tuned with the Naver shopping review dataset (200,000 reviews)
- trained for 5 epochs
- 0.97 accuracy
## dataset
- google map review of tourist places in Daejeon, Korea
```
import torch
from torch import nn, Tensor
from torch.optim import Optimizer
from torch.utils.data import DataLoader, RandomSampler, DistributedSampler, random_split
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from torch.nn import CrossEntropyLoss
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning import LightningModule, Trainer, seed_everything
from pytorch_lightning.metrics.functional import accuracy, precision, recall
from transformers import AdamW, BertForSequenceClassification, AdamW, BertConfig, AutoTokenizer, BertTokenizer, TrainingArguments
from keras.preprocessing.sequence import pad_sequences
import random
import numpy as np
import time
import datetime
import pandas as pd
import os
from tqdm import tqdm
import pandas as pd
from transformers import AutoTokenizer, AutoModelWithLMHead
from keras.preprocessing.sequence import pad_sequences
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
pj_path = os.getenv('HOME') + '/Projects/JeongCheck'
data_path = pj_path + '/compare'
data_list = os.listdir(data_path)
print(len(data_list))
data_list
file_list = os.listdir(data_path)
file_list
spacing = pd.read_csv(data_path + f'/{file_list[0]}')
spell = pd.read_csv(data_path + f'/{file_list[1]}')
spacing.head()
spell.head()
len(spacing), len(spell)
print(spacing.isna().sum())
print('\n')
print(spell.isna().sum())
print(set(spacing.label))
print(set(spell.label))
print(len(spacing[spacing.label==2]))
print(len(spell[spell.label==2]))
test_spac = spacing.copy()
test_spel = spell.copy()
print(len(test_spac), len(test_spel))
```
Exclude the neutral-label data
```
test_spac = test_spac[test_spac.label != 2]
print(len(test_spac))
test_spel = test_spel[test_spel.label != 2]
print(len(test_spel))
from transformers import BertForSequenceClassification, AdamW, BertConfig
tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
# Load BertForSequenceClassification, the pretrained BERT model with a single
# linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(
pj_path + "/bert_model/checkpoint-2000",
num_labels = 2,
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
params = list(model.named_parameters())
print('The BERT model has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[-4:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
def convert_input_data(sentences):
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
MAX_LEN = 64
    # Convert tokens to integer indices
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
    # Truncate each sentence to MAX_LEN and pad the rest with zeros
input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
    # Initialize the attention masks
attention_masks = []
    # Set the attention mask to 1 for real tokens and 0 for padding
for seq in input_ids:
seq_mask = [float(i>0) for i in seq]
attention_masks.append(seq_mask)
inputs = torch.tensor(input_ids)
masks = torch.tensor(attention_masks)
return inputs, masks
def test_sentences(sentences):
    # Switch the model to evaluation mode
model.eval()
inputs, masks = convert_input_data(sentences)
    # Move the data to the GPU
b_input_ids = inputs.to(device)
b_input_mask = masks.to(device)
    # Disable gradient computation
with torch.no_grad():
        # Run the forward pass
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask)
    # Get the logits
logits = outputs[0]
    # Move the logits to the CPU
logits = logits.detach().cpu().numpy()
return logits
device = "cuda:0"
model = model.to(device)
```
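The attention-mask rule used in `convert_input_data` — 1 for real tokens, 0 for padding — in isolation, with made-up token ids:

```python
# a padded sequence of token ids (0 = padding)
seq = [101, 2345, 99, 0, 0]
seq_mask = [float(i > 0) for i in seq]
print(seq_mask)  # [1.0, 1.0, 1.0, 0.0, 0.0]
```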
## Data conversion
```
def preprocessing(df):
    df['document'] = df.comment.str.replace('[^A-Za-zㄱ-ㅎㅏ-ㅣ가-힣]+', '', regex=True)
return df
# result = preprocessing(gr_data)
# result = result.dropna()
# print(result)
# Extract the comments to run sentiment analysis on
def export_com(preprocessed_df):
sens =[]
for sen in preprocessed_df.comment:
sens.append(sen)
    print('check length :', len(sens), len(preprocessed_df))  # check the counts
print('sample sentence :', sens[1])
return sens
def make_predicted_label(sen):
sen = [sen]
score = test_sentences(sen)
result = np.argmax(score)
if result == 0: # negative
return 0
elif result == 1: # positive
return 1
def predict_label(model, df, place_name):
result = preprocessing(df)
result = result.dropna()
sens = export_com(result)
scores_data=[]
for sen in sens:
scores_data.append(make_predicted_label(sen))
df['pred'] = scores_data
cor = df[df.label == df.pred]
uncor = df[df.label != df.pred]
print('correct prediction num :', len(cor))
    print('incorrect prediction num :', len(uncor))
print('correct label check :' ,set(cor.label))
# df.to_csv(pj_path + f'/sentiment_data/{place_name}_pred_kcbert.csv')
return df
print('### spacing ###')
predict_spac = predict_label(model, test_spac, 'total')
print('### spell ###')
predict_spel = predict_label(model, test_spel, 'total')
```
## Loss (RMSE)
```
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import math
def rmse(y, y_pred):
    print('length check (origin, prediction):', len(y), len(y_pred))
    rmse_label = math.sqrt(mean_squared_error(y, y_pred))
    print('rmse of label :', rmse_label)
```
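For binary labels, this RMSE reduces to the square root of the error rate; a quick hand computation with toy labels:

```python
import math

y = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]  # one of four predictions is wrong

# for 0/1 labels, squared error per item is 0 or 1, so MSE is the error rate
mse = sum((a - b) ** 2 for a, b in zip(y, y_pred)) / len(y)
print(math.sqrt(mse))  # 0.5
```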
## Accuracy
```
def acc(y, y_pred, total):
correct = (y_pred == y).sum().item()
print(f'Accuracy of the network on the {total} test text: %d %%' % (
100 * correct / total))
```
## f1-score
```
from sklearn.metrics import f1_score, classification_report
def f1(y, y_pred):
score = f1_score(y, y_pred)
report = classification_report(y, y_pred)
print('f1 score :', score)
print('===== classification report =====')
print(report)
```
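As a hand check of what `f1_score` computes, the same quantity derived from precision and recall for a toy case (the labels below are made up; here positive-class precision is 1.0 and recall is 0.5):

```python
y = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

tp = sum(1 for a, b in zip(y, y_pred) if a == 1 and b == 1)  # true positives
fp = sum(1 for a, b in zip(y, y_pred) if a == 0 and b == 1)  # false positives
fn = sum(1 for a, b in zip(y, y_pred) if a == 1 and b == 0)  # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(round(f1, 4))  # 0.6667
```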
## calculate performance
- RMSE
- Accuracy
- f1-score
```
def cal_perform(df):
y = df.label
y_pred = df.pred
if len(y) == len(y_pred):
total = len(y)
print('label length :', total)
else:
print('It has different length !')
rmse(y, y_pred)
acc(y, y_pred, total)
f1(y, y_pred)
print('===== spacing =====')
cal_perform(predict_spac)
print('===== spell =====')
cal_perform(predict_spel)
```
### Testing for Interactive use case
```
import mlflow
from azureml.core import Workspace, Experiment, Environment, Datastore, Dataset, ScriptRunConfig
from azureml.core.runconfig import PyTorchConfiguration
# from azureml.widgets import RunDetails
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.runconfig import PyTorchConfiguration
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
from IPython.display import clear_output
import time
import platform
# from ray_on_azureml.ray_on_aml import getRay
import sys
sys.path.append("../") # go to parent dir
import importlib
from src.ray_on_azureml.ray_on_aml import Ray_On_AML
ws = Workspace.from_config()
ray_on_aml = Ray_On_AML(ws=ws, compute_cluster="worker-cpu-v3")
_, ray = ray_on_aml.getRay()
ray.cluster_resources()
ray_on_aml.shutdown()
# import ray
# ray.shutdown()
# ray.init()
```
### Testing with Dask on Ray
```
# import ray
# ray.init()
from ray.util.dask import ray_dask_get
import dask
import dask.array as da
import dask.dataframe as dd
import numpy as np
import pandas as pd
dask.config.set(scheduler=ray_dask_get)
d_arr = da.from_array(np.random.randint(0, 1000, size=(256, 256)))
# The Dask scheduler submits the underlying task graph to Ray.
d_arr.mean().compute(scheduler=ray_dask_get)
# Set the scheduler to ray_dask_get in your config so you don't have to
# specify it on each compute call.
df = dd.from_pandas(
pd.DataFrame(
np.random.randint(0, 10000, size=(1024, 2)), columns=["age", "grade"]),
npartitions=2)
df.groupby(["age"]).mean().compute()
# ray.shutdown()
import dask.dataframe as dd
storage_options = {'account_name': 'azureopendatastorage'}
ddf = dd.read_parquet('az://nyctlc/green/puYear=2019/puMonth=*/*.parquet', storage_options=storage_options)
ddf.count().compute()
#dask
# import ray
from ray.util.dask import ray_dask_get
import dask
import dask.array as da
import dask.dataframe as dd
import numpy as np
import pandas as pd
import dask
import dask.dataframe as dd
import matplotlib.pyplot as plt
from datetime import datetime
from azureml.core import Workspace, Dataset, Model
from adlfs import AzureBlobFileSystem
account_key = ws.get_default_keyvault().get_secret("adls7-account-key")
account_name="adlsgen7"
abfs = AzureBlobFileSystem(account_name="adlsgen7",account_key=account_key, container_name="mltraining")
abfs2 = AzureBlobFileSystem(account_name="azureopendatastorage", container_name="isdweatherdatacontainer")
storage_options={'account_name': account_name, 'account_key': account_key}
# ddf = dd.read_parquet('az://mltraining/ISDWeatherDelta/year2008', storage_options=storage_options)
data = ray.data.read_parquet("az://isdweatherdatacontainer/ISDWeather/year=2009", filesystem=abfs2)
data2 = ray.data.read_parquet("az://mltraining/ISDWeatherDelta/year2008", filesystem=abfs)
data.count()
```
### Testing Ray Tune for distributed ML tunning
```
import numpy as np
import torch
import torch.optim as optim
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import torch.nn.functional as F
# import ray
from ray import tune
from ray.tune.schedulers import ASHAScheduler
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
# In this example, we don't change the model architecture
# due to simplicity.
self.conv1 = nn.Conv2d(1, 3, kernel_size=3)
self.fc = nn.Linear(192, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 3))
x = x.view(-1, 192)
x = self.fc(x)
return F.log_softmax(x, dim=1)
# Change these values if you want the training to run quicker or slower.
EPOCH_SIZE = 512
TEST_SIZE = 256
def train(model, optimizer, train_loader):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# We set this just for the example to run quickly.
if batch_idx * len(data) > EPOCH_SIZE:
return
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
def test(model, data_loader):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (data, target) in enumerate(data_loader):
# We set this just for the example to run quickly.
if batch_idx * len(data) > TEST_SIZE:
break
data, target = data.to(device), target.to(device)
outputs = model(data)
_, predicted = torch.max(outputs.data, 1)
total += target.size(0)
correct += (predicted == target).sum().item()
return correct / total
def train_mnist(config):
# Data Setup
mnist_transforms = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.1307, ), (0.3081, ))])
train_loader = DataLoader(
datasets.MNIST("~/data", train=True, download=True, transform=mnist_transforms),
batch_size=64,
shuffle=True)
test_loader = DataLoader(
datasets.MNIST("~/data", train=False, transform=mnist_transforms),
batch_size=64,
shuffle=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ConvNet()
model.to(device)
optimizer = optim.SGD(
model.parameters(), lr=config["lr"], momentum=config["momentum"])
for i in range(10):
train(model, optimizer, train_loader)
acc = test(model, test_loader)
# Send the current training result back to Tune
tune.report(mean_accuracy=acc)
if i % 5 == 0:
# This saves the model to the trial directory
torch.save(model.state_dict(), "./model.pth")
search_space = {
"lr": tune.sample_from(lambda spec: 10**(-10 * np.random.rand())),
"momentum": tune.uniform(0.01, 0.09)
}
# Uncomment this to enable distributed execution
# ray.shutdown()
# ray.init(address="auto",ignore_reinit_error=True)
# ray.init(address =f'ray://{headnode_private_ip}:10001',allow_multiple=True,ignore_reinit_error=True )
# Download the dataset first
datasets.MNIST("~/data", train=True, download=True)
analysis = tune.run(train_mnist, config=search_space)
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import xgboost as xgb
from ray import tune
def train_breast_cancer(config):
# Load dataset
data, labels = sklearn.datasets.load_breast_cancer(return_X_y=True)
# Split into train and test set
train_x, test_x, train_y, test_y = train_test_split(
data, labels, test_size=0.25)
# Build input matrices for XGBoost
train_set = xgb.DMatrix(train_x, label=train_y)
test_set = xgb.DMatrix(test_x, label=test_y)
# Train the classifier
results = {}
xgb.train(
config,
train_set,
evals=[(test_set, "eval")],
evals_result=results,
verbose_eval=False)
# Return prediction accuracy
accuracy = 1. - results["eval"]["error"][-1]
tune.report(mean_accuracy=accuracy, done=True)
config = {
"objective": "binary:logistic",
"eval_metric": ["logloss", "error"],
"max_depth": tune.randint(1, 9),
"min_child_weight": tune.choice([1, 2, 3]),
"subsample": tune.uniform(0.5, 1.0),
"eta": tune.loguniform(1e-4, 1e-1)
}
analysis = tune.run(
train_breast_cancer,
resources_per_trial={"cpu": 1},
config=config,
num_samples=10)
```
### Testing Spark on Ray
```
import ray
import raydp
import os
ray.shutdown()
ray.init()
os.environ["PYSPARK_PYTHON"]="/anaconda/envs/azureml_py38/bin/python3"
# ray.init(address ='ray://10.0.0.11:6379')
spark = raydp.init_spark(
app_name = "example",
num_executors = 2,
executor_cores = 1,
executor_memory = "1gb"
)
# data =spark.read.format("csv").option("header", True).load("wasbs://ojsales-simulatedcontainer@azureopendatastorage.blob.core.windows.net/oj_sales_data/Store10*.csv")
# # normal data processesing with Spark
# df = spark.createDataFrame([('look',), ('spark',), ('tutorial',), ('spark',), ('look', ), ('python', )], ['word'])
# df.show()
# word_count = df.groupBy('word').count()
# word_count.show()
import pandas as pd
from pyspark.sql.functions import col, pandas_udf
from pyspark.sql.types import LongType
# Declare the function and create the UDF
def multiply_func(a: pd.Series, b: pd.Series) -> pd.Series:
return a * b
multiply = pandas_udf(multiply_func, returnType=LongType())
# The function for a pandas_udf should be able to execute with local Pandas data
x = pd.Series([1, 2, 3])
print(multiply_func(x, x))
# 0 1
# 1 4
# 2 9
# dtype: int64
# Create a Spark DataFrame, 'spark' is an existing SparkSession
df = spark.createDataFrame(pd.DataFrame(x, columns=["x"]))
# Execute function as a Spark vectorized UDF
df.select(multiply(col("x"), col("x"))).show()
# +-------------------+
# |multiply_func(x, x)|
# +-------------------+
# | 1|
# | 4|
# | 9|
# +-------------------+
# stop the spark cluster
raydp.stop_spark()
raydp.stop_spark()
```
## Testing Ray on Job Cluster
```
# pyarrow >=6.0.1
# dask >=2021.11.2
# adlfs >=2021.10.0
# fsspec==2021.10.1
# ray[default]==1.9.0
ws = Workspace.from_config()
# base_conda_dep =['adlfs>=2021.10.0','pytorch','matplotlib','torchvision','pip']
# base_pip_dep = ['sklearn','xgboost','lightgbm','ray[default]==1.9.0', 'xgboost_ray', 'dask','pyarrow>=6.0.1', 'azureml-mlflow']
compute_cluster = 'worker-cpu-v3'
maxnode =5
vm_size='STANDARD_DS3_V2'
vnet='rayvnet'
subnet='default'
exp ='ray_on_aml_job'
ws_detail = ws.get_details()
ws_rg = ws_detail['id'].split("/")[4]
vnet_rg=None
try:
ray_cluster = ComputeTarget(workspace=ws, name=compute_cluster)
print('Found existing cluster, use it.')
except ComputeTargetException:
if vnet_rg is None:
vnet_rg = ws_rg
compute_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
min_nodes=0, max_nodes=maxnode,
vnet_resourcegroup_name=vnet_rg,
vnet_name=vnet,
subnet_name=subnet)
ray_cluster = ComputeTarget.create(ws, compute_cluster, compute_config)
ray_cluster.wait_for_completion(show_output=True)
# python_version = ["python="+platform.python_version()]
# conda_packages = python_version+base_conda_dep
# pip_packages = base_pip_dep
# conda_dep = CondaDependencies()
# rayEnv = Environment(name="rayEnv")
rayEnv = Environment.get(ws, "rayEnv", version=16)
# for conda_package in conda_packages:
# conda_dep.add_conda_package(conda_package)
# for pip_package in pip_packages:
# conda_dep.add_pip_package(pip_package)
# # Adds dependencies to PythonSection of myenv
# rayEnv.python.conda_dependencies=conda_dep
src = ScriptRunConfig(source_directory='job',
script='aml_job.py',
environment=rayEnv,
compute_target=ray_cluster,
distributed_job_config=PyTorchConfiguration(node_count=maxnode),
# arguments = ["--master_ip",master_ip]
)
run = Experiment(ws, exp).submit(src)
```
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
Run the following cell to load the packages and dependencies that are going to be useful for your journey!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank [drive.ai](https://www.drive.ai/) for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.
</center></caption>
<img src="nb_images/driveai.png" style="width:100px;height:100;">
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
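As a quick illustration of the two equivalent label representations (a minimal NumPy sketch, using 0-based indices for simplicity):

```python
import numpy as np

# class 7 in both representations
c_int = 7
c_onehot = np.zeros(80)
c_onehot[c_int] = 1          # 80-dimensional vector: a single 1, rest 0
print(int(c_onehot.argmax()))  # 7
```

`argmax` recovers the integer label from the one-hot vector, which is why the two forms are interchangeable.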
In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
YOLO ("you only look once") is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
First things to know:
- The **input** is a batch of images of shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
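The flattening itself is just a reshape that merges the anchor and prediction dimensions (a minimal NumPy sketch):

```python
import numpy as np

encoding = np.zeros((19, 19, 5, 85))          # per-cell, per-anchor predictions
flattened = encoding.reshape(19, 19, 5 * 85)  # merge the last two dimensions
print(flattened.shape)  # (19, 19, 425)
```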
Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> **Figure 5** </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
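The per-cell maximum described above can be sketched in NumPy (using random scores as a stand-in for real model output):

```python
import numpy as np

np.random.seed(0)
box_scores = np.random.rand(19, 19, 5, 80)   # hypothetical per-anchor class scores
cell_max = box_scores.max(axis=(2, 3))       # best score in each of the 19x19 cells
flat = box_scores.reshape(19, 19, -1)
# flat index = anchor * 80 + class, so the class of the best score is index % 80
cell_class = flat.argmax(axis=-1) % 80
print(cell_max.shape, cell_class.shape)  # (19, 19) (19, 19)
```

`cell_class` is what determines the color of each grid cell in Figure 5.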
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.
### 2.2 - Filtering with a threshold on class scores
You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
**Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
2. For each box, find:
- the index of the class with the maximum box score ([Hint](https://keras.io/backend/#argmax)) (Be careful with what axis you choose; consider using axis=-1)
- the corresponding box score ([Hint](https://keras.io/backend/#max)) (Be careful with what axis you choose; consider using axis=-1)
3. Create a mask by using a threshold. As a reminder, comparing an array elementwise, `np.array([0.9, 0.3, 0.4, 0.5, 0.1]) < 0.4` returns `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. ([Hint](https://www.tensorflow.org/api_docs/python/tf/boolean_mask))
Reminder: to call a Keras function, you should use `K.function(...)`.
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis=-1)
box_class_scores = K.max(box_scores, axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
**Exercise**: Implement iou(). Some hints:
- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)
- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:
- xi1 = maximum of the x1 coordinates of the two boxes
- yi1 = maximum of the y1 coordinates of the two boxes
- xi2 = minimum of the x2 coordinates of the two boxes
- yi2 = minimum of the y2 coordinates of the two boxes
In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
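Working through the arithmetic for the two test boxes used below makes the formula concrete:

```python
box1 = (2, 1, 4, 3)   # area = (4-2) * (3-1) = 4
box2 = (1, 2, 3, 4)   # area = (3-1) * (4-2) = 4
xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])   # (2, 2)
xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])   # (3, 3)
inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)             # 1x1 overlap = 1
union = 4 + 4 - inter                                     # 7
print(inter / union)  # 0.14285714285714285
```

The `max(..., 0)` clamps the intersection at zero, so boxes that don't overlap at all get an IoU of 0 rather than a spurious positive value.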
```
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = max(box1[0], box2[0])
yi1 = max(box1[1], box2[1])
xi2 = min(box1[2], box2[2])
yi2 = min(box1[3], box2[3])
inter_area = max(yi2 - yi1, 0) * max(xi2 - xi1, 0)  # clamp at 0 so disjoint boxes give IoU 0
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3]-box1[1]) * (box1[2]-box1[0])
box2_area = (box2[3]-box2[1]) * (box2[2]-box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = float(inter_area) / float(union_area)
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
```
**Expected Output**:
<table>
<tr>
<td>
**iou = **
</td>
<td>
0.14285714285714285
</td>
</tr>
</table>
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute its overlap with all other boxes, and remove boxes that overlap it more than `iou_threshold`.
3. Go back to step 1 and iterate until there are no boxes left with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
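The greedy loop above can be sketched in plain Python (a minimal illustration of what `tf.image.non_max_suppression` does for you; `iou_xyxy` here is a local helper, not the graded `iou()`):

```python
def iou_xyxy(b1, b2):
    # IoU for (x1, y1, x2, y2) boxes, intersection clamped at 0
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression; returns indices of kept boxes."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)           # step 1: highest remaining score
        keep.append(best)
        # step 2: drop everything that overlaps it too much
        order = [i for i in order if iou_xyxy(boxes[best], boxes[i]) <= iou_threshold]
    return keep                       # step 3: loop until nothing remains

boxes = [(0, 0, 2, 2), (0.1, 0.1, 2, 2), (3, 3, 5, 5)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate box 1 is suppressed
```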
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
```
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors is of course at most max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold = iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.gather(scores, nms_indices)
boxes = tf.gather(boxes, nms_indices)
classes = tf.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 - Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing on images of a different size--for example, the car detection dataset has 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
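Under the hood, both conversions are simple arithmetic. A sketch (assuming an (x1, y1, x2, y2) ordering and boxes expressed relative to a unit square; the provided helpers follow YAD2K's own coordinate ordering, so this is illustrative, not a drop-in replacement):

```python
import numpy as np

def boxes_to_corners(box_xy, box_wh):
    # (center_x, center_y, w, h) -> (x1, y1, x2, y2)
    mins = box_xy - box_wh / 2.0
    maxes = box_xy + box_wh / 2.0
    return np.concatenate([mins, maxes], axis=-1)

def scale_to_image(boxes, image_shape=(720.0, 1280.0)):
    # rescale unit-square boxes to pixel coordinates (height, width) given
    h, w = image_shape
    return boxes * np.array([w, h, w, h])

corners = boxes_to_corners(np.array([0.5, 0.5]), np.array([0.2, 0.4]))
print(corners)                  # [0.4 0.3 0.6 0.7]
print(scale_to_image(corners))  # [512. 216. 768. 504.]
```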
```
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (720., 1280.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
<font color='blue'>
**Summary for YOLO**:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80, where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
## 3 - Test YOLO pretrained model on images
In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by **creating a session to start your graph**. Run the following cell.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a (`sess`) graph that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
```
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
<font color='blue'>
**What you should remember**:
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's github repository. The pretrained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are especially grateful to Brody Huval, Chih Hu and Rahul Patel for collecting and providing this dataset.
***
# COVID-19 Vaccination Progress - Modelling
****
```
#common imports:
import numpy as np
import pandas as pd
from datetime import datetime
import os
from itertools import permutations
#imports for visualization
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.colors
import seaborn as sns
#for modelling/time series:
import statsmodels.formula.api as smf
from statsmodels.tsa.seasonal import seasonal_decompose
from scipy.ndimage import gaussian_filter
from sklearn.metrics import mean_squared_error
from math import sqrt
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.tsa.arima_model import ARIMA
import statsmodels.api as sm
from statsmodels.graphics.tsaplots import plot_pacf
from pmdarima.arima import auto_arima
import statsmodels.graphics.tsaplots as tsaplot
from statsmodels.tsa.holtwinters import Holt, ExponentialSmoothing, SimpleExpSmoothing
sns.set(rc={'figure.figsize':(20,15)})
#suppress pandas future warnings:
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "end_to_end_project"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold
from sklearn import ensemble
from sklearn.preprocessing import OrdinalEncoder
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn import metrics
# load cleaned data time series:
df_time_m = pd.read_csv('timeseriesformodel.csv')
```
## Time series
For the vaccination progress we have daily observations, such as daily vaccinations or people vaccinated per country. Our target is to predict how the vaccination progress will continue over the next weeks.
```
#set DatetimeIndex as index for our DataFrame:
df_daily = pd.read_csv('df2.csv', index_col=None)
#df_daily = df_daily.set_index('date')
df_daily['date'] = pd.to_datetime(df_daily['date'])
df_daily.head(3)
df_daily.info()
#df_daily.reset_index(inplace=True)
df_daily['day'] = df_daily['date'].dt.day
df_daily['month'] = df_daily['date'].dt.month
df_daily['year'] = df_daily['date'].dt.year
df_daily['weekday_name'] = df_daily['date'].dt.day_name()#day_of_week
# Display a random sampling of 5 rows
df_daily.sample(5, random_state=0)
# Define plotting parameters and custom color palette
cmaps_hex = ['#193251','#FF5A36','#1E4485', '#99D04A','#FF5A36', '#DB6668']
#sns.set_palette(palette=cmaps_hex)
sns_c = sns.color_palette(palette=cmaps_hex)
plt.rcParams['figure.figsize'] = [15, 5]
plt.rcParams['figure.dpi'] = 100
df_time = df_daily.copy()
df_time = df_time[['date', 'country', 'daily_vaccinations', 'people_fully_vaccinated_per_hundred']]
df_time.sample(5)
#check for missing values
missing_values = pd.DataFrame(df_time.isnull().sum(), columns=['ID'])
missing_values
#drop rows with missing values
df_time.dropna(inplace=True)
#check for missing values in %
round(100*(df_time.isnull().sum()/len(df_time.index)),0)
# plot the time series for daily vaccinations:
df_time1= df_time[['date','daily_vaccinations']]
df_time1.set_index('date', inplace=True)
ax1 = df_time1.plot()
# Add title and axis names
ax1.ticklabel_format(useOffset=False, style='plain', axis='y')
plt.title('Global daily vaccinations')
plt.xlabel('Date')
plt.ylabel('Daily vaccinations')
plt.show()
```
## Modelling with data for the United States
```
#Select data for the United States only:
df_time_us = df_time[df_time.country == 'United States']
df_time_us
#df_time_us.set_index('date', inplace=True)
ax1 = df_time_us['daily_vaccinations'].plot()
# Add title and axis names
ax1.ticklabel_format(useOffset=False, style='plain', axis='y')
plt.title('Daily vaccinations in the US')
plt.xlabel('Date')
plt.ylabel('Daily vaccinations')
plt.show();
```
## Check for Stationarity
Stationarity: A time series is stationary if its statistical properties (e.g. mean, variance, etc.) are the same throughout the series, independently of the point in time where they were observed. There are no long-term predictable patterns such as trend or seasonality. Plots will show a roughly horizontal trend with constant variance.
We use the decomposition method, which allows us to separately view the seasonality, the trend, and the residual, i.e. the variability left in the data set after removing the effects of seasonality and trend.
Trend: Increase or decrease in the value of the data. It can further be divided into global and local trends.
Seasonality: Repetitive pattern of fixed frequency that is visible in the data.
Noise/Residuals: Random data that remains after extracting the trend and seasonal components.
```
# Check decomposition of trend, seasonality and residue of original time series
decomposition = seasonal_decompose(x=df_time_us['daily_vaccinations'], period=10)# model='multiplicative',
fig, ax = plt.subplots(4, 1, figsize=(12, 12), constrained_layout=True)
decomposition.observed.plot(c=sns_c[0], ax=ax[0])
ax[0].set(title='observed')
decomposition.trend.plot(c=sns_c[1], ax=ax[1])
ax[1].set(title='trend')
decomposition.seasonal.plot(c=sns_c[2], ax=ax[2])
ax[2].set(title='seasonal')
decomposition.resid.plot(c=sns_c[3], ax=ax[3])
ax[3].set(title='residual')
fig.set_size_inches(20, 10);
```
The daily vaccinations for the United States have a clear increasing trend and a weekly seasonality. That means it is not stationary.
----
Statistical tests: To confirm our visual observation from the plot above, we will use the ADF and KPSS tests:
----
ADF (Augmented Dickey-Fuller):
Null Hypothesis: The series is not stationary.
Alternate Hypothesis: The series is stationary.
----
KPSS (Kwiatkowski-Phillips-Schmidt-Shin):
Null Hypothesis: The series is stationary.
Alternate Hypothesis: The series is not stationary.
```
def stationarity_test(daily_vaccinations):
# Calculate rolling mean and rolling standard deviation
rolling_mean = daily_vaccinations.rolling(7).mean()
rolling_std_dev = daily_vaccinations.rolling(7).std()
# Plot the statistics
plt.figure(figsize=(24,6))
plt.plot(rolling_mean, color='#FF5A36', label='Rolling Mean')
plt.plot(rolling_std_dev, color='#1E4485', label = 'Rolling Std Dev')
plt.plot(daily_vaccinations, color='#99D04A',label='Original Time Series')
plt.xticks([])
plt.legend(loc='best')
plt.title('Rolling Mean and Standard Deviation')
# ADF test
print("ADF Test:")
adf_test = adfuller(daily_vaccinations,autolag='AIC')
print('Null Hypothesis: Not Stationary')
print('ADF Statistic: %f' % adf_test[0])
print('p-value: %f' % adf_test[1])
print('----'*10)
# KPSS test
print("KPSS Test:")
kpss_test = kpss(daily_vaccinations, regression='c', nlags="legacy", store=False)
print('Null Hypothesis: Stationary')
print('KPSS Statistic: %f' % kpss_test[0])
print('p-value: %f' % kpss_test[1])
print('----'*10)
stationarity_test(df_time_us['daily_vaccinations'])
```
The p-value of the ADF test is > 0.05, which tells us that we cannot reject its null hypothesis that the time series is non-stationary, and the p-value of the KPSS test is below 0.05, which means we can reject its null hypothesis that the series is stationary. Both tests indicate that the series is not stationary.
We need to de-trend the time series and make the series stationary.
```
# De-trending the time series
df_time_us_diff = df_time_us['daily_vaccinations'].diff(periods =2).dropna()
#re-test stationarity:
def stationarity_test(daily_vaccinations):
# Calculate rolling mean and rolling standard deviation
rolling_mean = daily_vaccinations.rolling(30).mean()
rolling_std_dev = daily_vaccinations.rolling(30).std()
# Plot the statistics
plt.figure(figsize=(24,6))
plt.plot(rolling_mean, color='#FF5A36', label='Rolling Mean')
plt.plot(rolling_std_dev, color='#1E4485', label = 'Rolling Std Dev')
plt.plot(daily_vaccinations, color='#99D04A',label='De-Trended Time Series')
plt.xticks([])
plt.legend(loc='best')
plt.title('Rolling Mean and Standard Deviation')
# ADF test
print("ADF Test:")
adf_test = adfuller(daily_vaccinations,autolag='AIC')
print('Null Hypothesis: Not Stationary')
print('ADF Statistic: %f' % adf_test[0])
print('p-value: %f' % adf_test[1])
print('----'*10)
# KPSS test
print("KPSS Test:")
kpss_test = kpss(daily_vaccinations, regression='c', nlags="legacy", store=False)
print('Null Hypothesis: Stationary')
print('KPSS Statistic: %f' % kpss_test[0])
print('p-value: %f' % kpss_test[1])
print('----'*10)
#stationarity_test(df_time_us['Dailyvac_Detrend'].dropna()) df_time_us_diff
stationarity_test(df_time_us_diff)
# Partial Autocorrelation Plot
#pacf = plot_pacf(df_time_us['Dailyvac_Detrend'].dropna(), lags=30)
pacf = plot_pacf(df_time_us_diff, lags=30)
```
After de-trending the time series, both the ADF test and the KPSS test indicate that our series is now stationary. A look at the partial autocorrelation plot suggests that correlation exists at certain lags.
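As a side note, the plain autocorrelation at a given lag — which the partial autocorrelation refines by removing the influence of intermediate lags — can be sketched with numpy alone. The series below is synthetic and purely illustrative, not our vaccination data:

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given (positive) lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Correlate the series with a lagged copy of itself, normalized by total variance
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# A perfectly periodic series shows strong autocorrelation at its period
period7 = np.tile(np.arange(7, dtype=float), 10)
```

For a series repeating every 7 steps (like a weekly vaccination pattern), the autocorrelation at lag 7 is much higher than at off-period lags.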
## Split the Data
We will split our data and take the first part as our training set.
```
# Split data into train and test set
df_arima = df_time_us['daily_vaccinations']
train_test_split_ratio = int(len(df_arima)*0.8)
train_data, test_data = df_arima[:train_test_split_ratio], df_arima[train_test_split_ratio:]
# Plotting the train and test set
plt.figure(figsize=(10,6))
plt.title('Daily Vaccinations in the United States')
plt.xlabel('Date')
plt.ylabel('Daily Vaccinations')
plt.xticks([])
plt.plot(train_data, 'red', label='Train data')
plt.plot(test_data, 'black', label='Test data')
plt.legend();
```
### Auto-Regressive Integrated Moving Average (ARIMA)
The ARIMA model is a combination of Auto-Regressive and Moving Average models along with the Integration of differencing. The Auto-Regressive part determines the relationship between an observation and a certain number of lagged observations. The Integrated part is the differencing of the actual observations to make the time series stationary. The Moving Average part determines the relationship between an observation and the residual errors obtained by applying a moving average model to the lagged observations.
* *Auto-Regressive (p)*: Number of lag observations in the model. Also called the lag order.
* *Integrated (d)*: Number of times the actual observations are differenced for stationarity. Also called the degree of differencing.
* *Moving Average (q)*: Size of the moving average window. Also called the order of the moving average.
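To build intuition for the *Integrated (d)* component, here is a minimal numpy sketch (on a synthetic series, not our vaccination data) showing how one round of differencing — what ARIMA's "I" step does internally — removes a linear trend:

```python
import numpy as np

# A synthetic series with a linear trend: clearly non-stationary
trend_series = np.array([10, 13, 16, 19, 22, 25], dtype=float)

# First-order differencing (d=1): subtract each value from its successor
diffed = np.diff(trend_series, n=1)
# The differenced series is constant, so the linear trend has been removed
```

Earlier we applied the same idea manually with `df.diff()` to de-trend the daily vaccinations before re-testing stationarity.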
```
# Auto ARIMA Method
arima_model = auto_arima(train_data,
start_p=1, start_q=1,
max_p=5, max_q=5,
test='adf',
trace=True,
alpha=0.05,
scoring='mse',
suppress_warnings=True,
seasonal = True,
stepwise=True, with_intercept=False,
)
# Fit the final model with the order
fitted_model = arima_model.fit(train_data)
print(fitted_model.summary())
# Forecasting values
forecast_values = fitted_model.predict(len(test_data), alpha=0.05)
fcv_series = pd.Series(forecast_values[0], index=test_data.index)
# Plot the actual and predicted daily vaccinations
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train_data, label='training')
plt.plot(test_data, label='Actual daily vaccinations in the US')
plt.plot(fcv_series,label='Predicted daily vaccinations')
plt.title('Prediction of daily vaccination progress in the US')
plt.xlabel('Date')
plt.ylabel('Daily vaccinations')
plt.xticks([])
plt.legend(loc='upper left', fontsize=8)
plt.show()
# Evaluate the model by calculating RMSE
rms_auto_arima = sqrt(mean_squared_error(test_data.values, fcv_series))
print("Auto-Arima RMSE :- " + str(round(rms_auto_arima,3)))
# Plotting diagnostics of the ARIMA model
arima_model.plot_diagnostics(figsize=(15,8))
plt.show()
```
Histogram plus estimated density plot: The red KDE line follows closely with the N(0,1) line. This is a good indication that the residuals are normally distributed.
The Q-Q-plot: Shows that the ordered distribution of residuals (blue dots) follows the linear trend of the samples taken from a standard normal distribution with N(0,1). This is an indication that the residuals are normally distributed.
The Correlogram plot: Shows that the time series residuals have low correlation with lagged versions of itself.
```
# Holt's Exponential Smoothing Method
pred_values = test_data.copy()
pred_values = pd.DataFrame(pred_values)
Holt_smooth_df = pd.DataFrame(columns = ['RMS','Smoothing Level','Smoothing Slope'])
perm = permutations(list(np.linspace(0.05,1,num=20)), 2)
for i in list(perm):
    fit_Holt_smooth = Holt(np.asarray(train_data)).fit(smoothing_level = i[0], smoothing_slope=i[1])
pred_values['Holt_smooth'] = fit_Holt_smooth.forecast(len(test_data))
rms = round(sqrt(mean_squared_error(test_data.values, pred_values.Holt_smooth)),3)
Holt_smooth_df = Holt_smooth_df.append(other = {'RMS' : rms , 'Smoothing Level' : i[0], 'Smoothing Slope':i[1]} , ignore_index=True)
opt_values = Holt_smooth_df.loc[Holt_smooth_df['RMS'] == min(Holt_smooth_df['RMS']),['Smoothing Level','Smoothing Slope']].values
# Using optimised values from the lists.
fit_Holt_smooth = Holt(np.asarray(train_data)).fit(smoothing_level = opt_values[0][0], smoothing_slope=opt_values[0][1])
pred_values['Holt_smooth'] = fit_Holt_smooth.forecast(len(test_data))
plt.figure(figsize=(16,8))
plt.plot(train_data, label='Train')
plt.plot(test_data, label='Test')
plt.plot(pred_values['Holt_smooth'], label='Holt_smooth')
plt.xticks([])
plt.legend(loc='best')
plt.title('Holt Exponential Smoothing')
plt.show()
rms_holt_exp = sqrt(mean_squared_error(test_data.values, pred_values.Holt_smooth))
print("Holt’s Exponential Smoothing RMS :- " + str(round(rms_holt_exp,3)) + " & Smoothing Level :- "+str(round(opt_values[0][0],3)) + " & Smoothing Slope :- "+str(round(opt_values[0][1],3)))
```
## Simple Exponential Smoothing
```
# Simple Exponential Smoothing Method
simple_exponential_df = pd.DataFrame(columns = ['RMS','Smoothing Level'])
from itertools import permutations
perm = permutations(list(np.linspace(0.05,1,num=20)), 1)
for i in list(perm):
fit_sim_exp = SimpleExpSmoothing(np.asarray(train_data)).fit(smoothing_level = i[0])
pred_values['Simple_Exponential'] = fit_sim_exp.forecast(len(test_data))
rms = round(sqrt(mean_squared_error(test_data.values, pred_values.Simple_Exponential)),3)
simple_exponential_df = simple_exponential_df.append(other = {'RMS' : rms , 'Smoothing Level' : i[0]} , ignore_index=True)
opt_values = simple_exponential_df.loc[simple_exponential_df['RMS'] == min(simple_exponential_df['RMS']),['Smoothing Level']].values
# Use optimised values from the lists
fit_sim_exp = SimpleExpSmoothing(np.asarray(train_data)).fit(smoothing_level = opt_values[0][0])
pred_values['Simple_Exponential'] = fit_sim_exp.forecast(len(test_data))
plt.figure(figsize=(16,8))
plt.plot(train_data, label='Train')
plt.plot(test_data, label='Test')
plt.plot(pred_values['Simple_Exponential'], label='Simple_Exponential')
plt.xticks([])
plt.legend(loc='best')
plt.show()
rms_sim_exp = sqrt(mean_squared_error(test_data.values, pred_values.Simple_Exponential))
print("Simple Exponential Smoothing RMS :- " + str(round(rms_sim_exp,3)) + " & Smoothing Level :- "+str(round(opt_values[0][0],3)))
```
## Evaluation of the Models
To evaluate the performance of the model, we will use Root Mean Squared Error (RMSE) and compare which model performed the best.
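For reference, RMSE is simply the square root of the mean squared error; a minimal pure-Python sketch of what `sqrt(mean_squared_error(...))` computes above:

```python
from math import sqrt

def rmse(actual, predicted):
    """Root Mean Squared Error between two equal-length sequences."""
    assert len(actual) == len(predicted)
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Example: per-point errors of (1, -1, 1) give MSE = 1, hence RMSE = 1
example = rmse([2.0, 3.0, 4.0], [1.0, 4.0, 3.0])
```

Because errors are squared before averaging, RMSE penalizes large deviations more heavily than mean absolute error would, which suits forecasting tasks where big misses matter most.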
```
# Printing RMSE of all the methods
print("RMSE of all the methods")
print("Auto-Arima: ", round(rms_auto_arima,3))
print("Simple Exponential Smoothing: ", round(rms_sim_exp,3))
print("Holt’s Exponential Smoothing: ", round(rms_holt_exp,3))
```
Of the three models we trained, Auto-ARIMA reached the smallest RMSE, but none of the three delivers a good prediction yet.
## Future Work:
- Tune models for better predictions
- If required use other models
- Use updated dataset for more data
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/2_transfer_learning_roadmap/5_exploring_model_families/2_vgg/1.1)%20Intro%20to%20vgg%20network%20-%20mxnet%20backend.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### Train an architectural heritage site classifier using vgg16
### Understand what lies inside vgg network
# What is vgg
## Readings on vgg
1) Points from https://towardsdatascience.com/vgg-neural-networks-the-next-step-after-alexnet-3f91fa9ffe2c
- VGG addresses another very important aspect of CNNs: depth
- All of VGG’s hidden layers use ReLU
- Unlike the 11x11 kernels of AlexNet, it uses smaller 1x1 and 3x3 kernels
2) Points from https://becominghuman.ai/what-is-the-vgg-neural-network-a590caa72643
- Intuitively, more layers should be better. However, the authors found that VGG-16 performed better than VGG-19
- The authors introduce multi-scale evaluation in the paper
3) Read more here -
- https://arxiv.org/abs/1409.1556
- https://machinelearningmastery.com/use-pre-trained-vgg-model-classify-objects-photographs/
- https://www.cs.toronto.edu/~frossard/post/vgg16/
- https://d2l.ai/chapter_convolutional-modern/vgg.html
# Table of Contents
## [0. Install](#0)
## [1. Load experiment with vgg base architecture](#1)
## [2. Visualize vgg](#2)
## [3. Train the classifier](#3)
## [4. Run inference on trained classifier](#4)
<a id='0'></a>
# Install Monk
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version)
```
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
# If using Colab install using the commands below
!cd monk_v1/installation/Misc && pip install -r requirements_colab.txt
# If using Kaggle uncomment the following command
#!cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt
# Select the requirements file as per OS and CUDA version when using a local system or cloud
#!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
```
## Dataset - Architectural Heritage site Classification
- https://old.datahub.io/dataset/architectural-heritage-elements-image-dataset
```
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1MFu7cnxwDM7LWKgeLggMLvWIBW_-YCWC' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1MFu7cnxwDM7LWKgeLggMLvWIBW_-YCWC" -O architectural_heritage.zip && rm -rf /tmp/cookies.txt
! unzip -qq architectural_heritage.zip
```
# Imports
```
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
#Using mxnet-gluon backend
from gluon_prototype import prototype
```
<a id='1'></a>
# Load experiment with vgg base architecture
## Creating and managing experiments
- Provide project name
- Provide experiment name
- For a specific data create a single project
- Inside each project multiple experiments can be created
- Every experiment can have different hyper-parameters attached to it
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "vgg-intro");
```
### This creates files and directories as per the following structure
workspace
|
|--------Project
|
|
|-----vgg-intro
|
|-----experiment-state.json
|
|-----output
|
|------logs (All training logs and graphs saved here)
|
|------models (all trained models saved here)
## Set dataset and select the model
## Quick mode training
- Using Default Function
- dataset_path
- model_name
- freeze_base_network
- num_epochs
## Sample Dataset folder structure
architectural_heritage
|
|-----train
|------dome
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------altal
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------.... (and so on)
|
|
|-----val
|------dome
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------altal
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------.... (and so on)
```
gtf.Default(dataset_path="architectural_heritage/train",
model_name="vgg16",
freeze_base_network=False,
num_epochs=5);
```
## From the summary above
- Model Params
Model name: vgg16
Num of potentially trainable layers: 16
Num of actual trainable layers: 16
<a id='2'></a>
# Visualize vgg
```
gtf.Visualize_With_Netron(data_shape=(3, 224, 224), port=8082);
```
## vgg block - 1
- Creating network and blocks using monk from scratch will be dealt in different roadmap series
```
from IPython.display import Image
Image(filename='imgs/vgg_block1_mxnet.png')
```
## Properties
- This block has 3 layers
- conv -> relu
## vgg block - 2
- Creating network and blocks using monk from scratch will be dealt in different roadmap series
```
from IPython.display import Image
Image(filename='imgs/vgg_block2_mxnet.png')
```
## Properties
- This block has 3 layers
- conv -> relu -> max_pool
## vgg fully connected chain
```
from IPython.display import Image
Image(filename='imgs/vgg_block_fc_mxnet.png')
```
## vgg Network
- Creating network and blocks using monk from scratch will be dealt in different roadmap series
```
from IPython.display import Image
Image(filename='imgs/vgg16_mxnet.png')
```
## Properties
- This network
- has 9 type-1 blocks
- has 5 type-2 blocks
- post these blocks the type-3 (fc) block exists
<a id='3'></a>
# Train the classifier
```
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='4'></a>
# Run inference on trained classifier
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "vgg-intro", eval_infer=True);
output = gtf.Infer(img_name = "architectural_heritage/test/test1.jpg");
from IPython.display import Image
Image(filename='architectural_heritage/test/test1.jpg')
output = gtf.Infer(img_name = "architectural_heritage/test/test2.jpg");
from IPython.display import Image
Image(filename='architectural_heritage/test/test2.jpg')
output = gtf.Infer(img_name = "architectural_heritage/test/test3.jpg");
from IPython.display import Image
Image(filename='architectural_heritage/test/test3.jpg')
```
# Use PMML to predict iris species with `ibm-watson-machine-learning`
This notebook contains steps from storing sample PMML model to starting scoring new data.
Some familiarity with python is helpful. This notebook uses Python 3.
You will use an **Iris** data set, which details measurements of iris perianths. Use the details of this data set to predict iris species.
## Learning goals
The learning goals of this notebook are:
- Working with the WML instance
- Online deployment of PMML model
- Scoring of deployed model
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Model upload](#upload)
3. [Web service creation](#deploy)
4. [Scoring](#score)
5. [Clean up](#cleanup)
6. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Contact your Cloud Pak for Data administrator and ask for your account credentials
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username` and `password`.
```
wml_credentials = {
"username": username,
"password": password,
"url": url,
"instance_id": 'openshift',
"version": '3.5'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.
- Click New Deployment Space
- Create an empty space
- Go to space `Settings` tab
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** you will be using.
```
client.set.default_space(space_id)
```
<a id="upload"></a>
## 2. Upload model
In this section you will learn how to upload the model to the Cloud.
**Action**: Download sample PMML model from git project using wget.
```
import os
from wget import download
sample_dir = 'pmml_sample_model'
if not os.path.isdir(sample_dir):
os.mkdir(sample_dir)
filename=os.path.join(sample_dir, 'iris_chaid.xml')
if not os.path.isfile(filename):
filename = download('https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd3.5/models/pmml/iris-species/model/iris_chaid.xml', out=sample_dir)
```
Store downloaded file in Watson Machine Learning repository.
```
sw_spec_uid = client.software_specifications.get_uid_by_name("spark-mllib_2.4")
meta_props = {
client.repository.ModelMetaNames.NAME: "pmmlmodel",
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
client.repository.ModelMetaNames.TYPE: 'pmml_4.2.1'}
published_model = client.repository.store_model(model=filename, meta_props=meta_props)
```
**Note:** You can see that the model is successfully stored in the Watson Machine Learning service.
```
client.repository.list_models()
```
<a id="deploy"></a>
## 3. Create online deployment
You can use the commands below to create an online deployment for the stored model (web service).
```
model_uid = client.repository.get_model_uid(published_model)
deployment = client.deployments.create(
artifact_uid=model_uid,
meta_props={
client.deployments.ConfigurationMetaNames.NAME: "Test deployment",
client.deployments.ConfigurationMetaNames.ONLINE:{}}
)
```
<a id="score"></a>
## 4. Scoring
You can send new scoring records to web-service deployment using `score` method.
```
deployment_id = client.deployments.get_id(deployment)
scoring_data = {
client.deployments.ScoringMetaNames.INPUT_DATA: [
{
'fields': ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'],
'values': [[5.1, 3.5, 1.4, 0.2]]
}]
}
predictions = client.deployments.score(deployment_id, scoring_data)
print(predictions)
```
As we can see, this is an Iris setosa flower.
<a id="cleanup"></a>
## 5. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
please follow this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 6. Summary and next steps
You successfully completed this notebook! You learned how to use Watson Machine Learning for PMML model deployment and scoring.
Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/analyze-data/wml-setup.html) for more samples, tutorials, documentation, how-tos, and blog posts.
### Authors
**Lukasz Cmielowski**, PhD, is a Software Architect and Data Scientist at IBM.
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
# Metadata preprocessing tutorial
Melusine's **prepare_email.metadata_engineering subpackage** provides classes to preprocess the metadata:
- **MetaExtension:** a transformer which creates an 'extension' feature extracted with a regex from metadata. It extracts the extensions of email addresses.
- **MetaDate:** a transformer which creates new features from dates, such as hour, minute and day of week.
- **MetaAttachmentType:** a transformer which creates an 'attachment type' feature extracted with a regex from metadata. It extracts the extensions of attached files.
- **Dummifier:** a transformer to dummify categorical features.
All the classes have **fit_transform** methods.
### Input dataframe
- To use a **MetaExtension** transformer : the dataframe requires a **from** column
- To use a **MetaDate** transformer : the dataframe requires a **date** column
- To use a **MetaAttachmentType** transformer : the dataframe requires a **attachment** column with the list of attached files
```
from melusine.data.data_loader import load_email_data
import ast
df_emails = load_email_data()
df_emails = df_emails[['from','date', 'attachment']]
df_emails['from']
df_emails['date']
df_emails['attachment'] = df_emails['attachment'].apply(ast.literal_eval)
df_emails['attachment']
```
### MetaExtension transformer
A **MetaExtension transformer** creates an *extension* feature extracted with a regex from metadata. It extracts the extensions of email addresses.
```
from melusine.prepare_email.metadata_engineering import MetaExtension
meta_extension = MetaExtension()
df_emails = meta_extension.fit_transform(df_emails)
df_emails.extension
```
### MetaDate transformer
A **MetaDate transformer** creates new features from dates: hour, minute and day of week.
```
from melusine.prepare_email.metadata_engineering import MetaDate
meta_date = MetaDate()
df_emails = meta_date.fit_transform(df_emails)
df_emails.date[0]
df_emails.hour[0]
df_emails.loc[0,'min']
df_emails.dayofweek[0]
```
### MetaAttachmentType transformer
A **MetaAttachmentType transformer** creates an *attachment_type* feature extracted from a list of attachment names. It extracts the file extensions of the attachments.
```
from melusine.prepare_email.metadata_engineering import MetaAttachmentType
meta_pj = MetaAttachmentType()
df_emails = meta_pj.fit_transform(df_emails)
df_emails.attachment_type
```
### Dummifier transformer
A **Dummifier transformer** dummifies categorical features.
Its arguments are:
- **columns_to_dummify**: a list of the metadata columns to dummify.
```
from melusine.prepare_email.metadata_engineering import Dummifier
dummifier = Dummifier(columns_to_dummify=['extension','attachment_type', 'dayofweek', 'hour', 'min'])
df_meta = dummifier.fit_transform(df_emails)
df_meta.columns
df_meta.head()
df_meta.to_csv('./data/metadata.csv', index=False, encoding='utf-8', sep=';')
```
### Custom metadata transformer
A custom transformer can be implemented to extract metadata from a column:
```python
from sklearn.base import BaseEstimator, TransformerMixin
class MetaDataCustom(BaseEstimator, TransformerMixin):
    """Transformer which creates custom metadata.

    Compatible with the scikit-learn API.
    """

    def __init__(self):
        """Arguments of the transformer."""

    def fit(self, X, y=None):
        """Fit method."""
        return self

    def transform(self, X):
        """Transform method."""
        X['custom_metadata'] = X['column'].apply(self.get_metadata)
        return X
```
The name of the output column can then be given as an argument to a Dummifier transformer:
```python
dummifier = Dummifier(columns_to_dummify=['custom_metadata'])
```
| github_jupyter |
<p><img alt="DataOwl" width=150 src="http://gwsolutions.cl/Images/dataowl.png" align="left" hspace=0 vspace=5></p>
<h1 align="center">Applications of the derivative</h1>
<h4 align="center">Single-variable equations and optimization</h4>
<pre><div align="center"> The idea of this notebook is to serve as an introduction to the
mathematical concepts needed to apply the numerical derivative to
solving single-variable equations and to optimization.</div></pre>
# Applications of the derivative
In previous sessions we tackled the problem of finding where a function vanishes. In this notebook we will see that derivatives can also help us with that challenge, besides being applicable to other problems, such as approximating a function by polynomials and optimizing a function.
## 4. Single-variable equations (continued)
### 4.1 The Secant Method
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/92/Secant_method.svg/450px-Secant_method.svg.png" alt="Secant method" width=280 align="center" hspace=0 vspace=5 style="padding:5px" />
From the previous session, we know that a function $f$ can be cut at two of its points by a line called a *secant*. This line has a well-defined equation, this time given by the points $(x_0,f(x_0))$, $(x_1,f(x_1))$ and the formula
$$y\ =\ \frac{f(x_1)-f(x_0)}{x_1-x_0}(x-x_1)+f(x_1)$$
To find a value $x$ at which $f(x)=0$, the desired result can be approximated by setting $y=0$ in the formula above. This yields a partial solution
$$x = x_1-f(x_1)\frac{x_1-x_0}{f(x_1)-f(x_0)}$$
This can be extended iteratively, generating a sequence of values $x_n$ that approach the true solution:
$$x_n = x_{n-1}-f(x_{n-1})\frac{x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}$$
This depends on the choice of two starting points, $x_0$ and $x_1$, as well as on some properties that $f$ must satisfy, which we will mention shortly. This method is known as the **Secant Method**.
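The iteration above translates almost directly into code. Below is a minimal sketch (the function name `secant` and its tolerance-based stopping rule are our own illustrative choices, not part of the course material):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Approximate a root of f with the secant iteration
    x_n = x_{n-1} - f(x_{n-1}) * (x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # horizontal secant: the division is undefined
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: the positive root of f(x) = x**2 - 2 is sqrt(2)
root = secant(lambda x: x ** 2 - 2, 1.0, 2.0)
```

Note that the loop only keeps the last two iterates, which is all the recurrence needs.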
### 4.2 The Newton-Raphson Method
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/e/e0/NewtonIteration_Ani.gif/450px-NewtonIteration_Ani.gif" width=450 alt="Newton-Raphson method" align="center"/>
Likewise, if the difference between $x_{n-1}$ and $x_{n-2}$ is "small", the previous sequence can be approximated by the formula
$$x_n = x_{n-1}-\frac{f(x_{n-1})}{f'(x_{n-1})}$$
where the recurrence now depends on only one previous step, so a single starting point $x_0$ is required. Moreover, this last method converges faster than the Secant Method, and it is known as the **Newton-Raphson Method**.
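For comparison, here is a minimal Newton-Raphson sketch (again, the names and the stopping rule are illustrative choices; note that it needs the derivative as an extra argument):

```python
def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f via x_n = x_{n-1} - f(x_{n-1}) / f'(x_{n-1})."""
    x = x0
    for _ in range(max_iter):
        dfx = dfdx(x)
        if dfx == 0:  # the method requires f'(x) != 0
            break
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = x**2 - 2 with f'(x) = 2x, starting from x0 = 1
root = newton_raphson(lambda x: x ** 2 - 2, lambda x: 2 * x, 1.0)
```

In practice the derivative can also be supplied numerically, at the cost of some accuracy.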
### 4.3 Required hypotheses
Note that both methods have their limitations, the most important being that there is **only one zero** (for the Secant Method), that $f'(x)\neq0$ (for the Newton-Raphson Method), and that the function is twice continuously differentiable.
This last point forces us to define what "twice continuously differentiable" means. The most immediate way to address it, however, is simply to say that the method we used to compute the derivative of $f$ is now applied to compute the derivative of $f'$, and that the result is continuous in the sense seen in previous sessions. What we obtain is called the **second derivative** of $f$, denoted $\frac{d^2f}{dx^2}(x)$ or $f''(x)$.
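Numerically, this second derivative can be approximated in one step with the central-difference formula $(f(x+h) - 2f(x) + f(x-h))/h^2$. A minimal sketch (the step size `h` and the helper name are illustrative choices):

```python
import math

def second_derivative(f, x, h=1e-5):
    # Central-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

# f(x) = sin(x) has f''(x) = -sin(x)
approx = second_derivative(math.sin, 1.0)  # close to -sin(1)
```

Choosing `h` too small amplifies floating-point roundoff, so a moderate step is used here.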
## 5. Optimization
The goal of this branch of mathematics is to find where functions attain their maximum or minimum values, what conditions must hold for these to exist, and how such values can be approximated. In this section we will look at basic ideas of optimization for real differentiable functions with continuous derivative, in unconstrained problems.
Recall that the derivative of a function $f$ at a point $x$ equals the slope of the line tangent to its curve at that point. Hence, when $f'(x)>0$ the function is said to be increasing around that point, and decreasing when $f'(x)<0$. As we saw in the exercise on finding zeros of a continuous function, a sign change in $f'$ implies that there must exist a value $\bar{x}$ at which $f'(\bar{x})=0$. A point $\bar{x}$ with this property is called a **stationary point**, since the function neither increases nor decreases there. This means that if we find such a value $\bar{x}$, it will be a *candidate* for a maximum or minimum (*saddle points* exist!).
We already know ways to find the zeros of a function. We can apply them to $f'$ to find, approximately, where it vanishes, and thus obtain the candidates for an optimum. To better understand the nature of a candidate $\bar{x}$, we need to compute $f''(\bar{x})$. If $f''(\bar{x})>0$, then $\bar{x}$ is certainly a **minimum**, whereas if $f''(\bar{x})<0$, $\bar{x}$ is a **maximum**. The case $f''(\bar{x})=0$ is more problematic, although it can be handled mathematically, and even more easily visually.
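The second-derivative test just described can be sketched as a small classifier of stationary points (the threshold `1e-6`, the step `h`, and the helper name are illustrative choices):

```python
import math

def classify_stationary_point(f, x_bar, h=1e-4):
    """Second-derivative test at a stationary point x_bar."""
    # Central-difference estimate of f''(x_bar)
    f2 = (f(x_bar + h) - 2.0 * f(x_bar) + f(x_bar - h)) / h ** 2
    if f2 > 1e-6:
        return "minimum"
    if f2 < -1e-6:
        return "maximum"
    return "inconclusive"  # f''(x_bar) ~ 0: further analysis needed

# cos(x) has a maximum at x = 0 and a minimum at x = pi
print(classify_stationary_point(math.cos, 0.0))      # maximum
print(classify_stationary_point(math.cos, math.pi))  # minimum
```

The "inconclusive" branch covers exactly the problematic case $f''(\bar{x})=0$ mentioned above.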
```
# Import the libraries
%matplotlib notebook
import numpy as np
import matplotlib.colors as mcolors  # Gives us access to a wider color palette
import matplotlib.pyplot as plt
import experimento5 as ex

def f(x):  # An example function
    return 0.5 - np.sin(2 * x)

def g(x):  # An example function
    return (np.exp(-x ** 2) - x) / ((x + 1) ** 2 + (x - 1) ** 2)

def h(x):
    return np.exp(x)
# Test our derivative functions with different array sizes and dx
x = np.linspace(-np.pi, np.pi, 1000)
y = g(x)
dydx1 = ex.derivadavec(x, y)
dydx2 = ex.derivadafun(0, np.pi/2, g, 0.001)
x2 = np.linspace(0, np.pi/2, len(dydx2))
plt.plot(x, dydx1, color='gold', label='Vector derivative', zorder=0)
plt.plot(x2, dydx2, color='crimson', label='Function derivative', zorder=1)
plt.plot(x, y, color='gray', label='Function', zorder=2)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Comparison of derivative computations')
plt.legend()
plt.grid()
plt.show()
# Find the zeros of g(x)
x0, y0 = ex.ceros(-3, 3, g)
print(x0)
# Test our derivative functions with different array sizes and dx
x = np.linspace(-np.pi, np.pi, 10000)
y = g(x)
dydx = ex.derivadavec(x, y)
d2ydx2 = ex.derivadavec(x, dydx)
x0, y0 = ex.ceros(-5, 5, g, x[1]-x[0])
plt.plot(x, y, color='gray', label='g(x)', zorder=0)
plt.plot(x, dydx, color='red', label="g'(x)", zorder=1)
plt.plot(x, d2ydx2, color='blue', label="g''(x)", zorder=2)
plt.plot(x0, y0, marker='*', color='green', label='Zero of g', linestyle='', zorder=3)
plt.xlabel('x')
plt.ylabel('y')
plt.title('The function and its derivatives')
plt.legend()
plt.grid()
plt.show()
# Also compute the x values at which the derivative vanishes
x1, y1 = ex.cerof(x, dydx, x[1]-x[0])
# Test our derivative functions with different array sizes and dx
x = np.linspace(-np.pi, np.pi, 10000)
y = g(x)
dydx = ex.derivadavec(x, y)
d2ydx2 = ex.derivadavec(x, dydx)
plt.plot(x, y, color='gray', label='g(x)', zorder=0)
plt.plot(x, dydx, color='red', label="g'(x)", zorder=1)
plt.plot(x, d2ydx2, color='blue', label="g''(x)", zorder=2)
plt.plot(x1, y1, marker='*', color='green', label="Zero of g'", linestyle='', zorder=3)
plt.xlabel('x')
plt.ylabel('y')
plt.title('The function and its derivatives')
plt.legend()
plt.grid()
plt.show()
```
How could we evaluate $f''(x)$ at the $x$ values we found?
## Exercises
**1.-** Try writing code for the Secant Method and the Newton-Raphson Method, and apply it to one of the functions we have seen.
**2.-**
**a)** Regarding the problem of finding $x\in[a,b]$ such that $f(x)\ =\ 0$, look up (for example, on Wikipedia) information about Householder's Method. Note that the Newton-Raphson method is one of these models, but that there are cases in which higher-order derivatives are used. Try writing an algorithm for one of these methods (you can even write an algorithm that allows using any of them), and apply it to the function
$$f(x)\ =\ \frac{e^{-x^2}-x^3}{(x+1)^2+(x-1)^2}$$
To do so, plot the function on an interval where it is known to vanish. You can help yourself with a grid, writing
```Python
plt.grid() # To display the grid
plt.show() # To show the plot
```
and take an initial value $x_0$ that visually lies close to the solution.
**b)** Do the same as before, looking up information about Halley's Method.
**3.-** Use the Notebook and any of the methods seen here or defined in class to study the following functions:
<ol style="list-style-type:lower-alpha">
<li>$\qquad f(x) = x^p,\quad p\in\mathbb{R}$. Try different values of $p$ (distinguish between $p\ge0$ and $p<0$) <br><br></li>
<li>$\qquad g(x) = \frac{x}{\sqrt{x^2+1}}$ <br><br></li>
<li>$\qquad h(x) = \frac{\sin^2(x)}{x},\quad x\neq0$</li>
</ol>
**4.-** Try to program an algorithm that finds the minima and maxima of a function $f$, if it has any.
| github_jupyter |
# Results summary
| Logistic Regression | LightGBM Classifier | Logistic Regression + ATgfe |
|-------------------------------------------------------------------------|------------------------------------------------------------------------|--------------------------------------------------------------------|
| <ul> <li>10-CV Accuracy: 0.926</li><li>Test-data Accuracy: 0.911</li><li>ROC_AUC: 0.99</li> </ul> | <ul> <li>10-CV Accuracy: 0.946</li><li>Test-data Accuracy: 0.977</li><li>ROC_AUC: 1.0</li> </ul> | <ul> <li>10-CV Accuracy: **0.98**</li><li>Test-data Accuracy: **1.0**</li><li>ROC_AUC: **1.0**</li> </ul> |
# Import packages
```
from atgfe.GeneticFeatureEngineer import GeneticFeatureEngineer
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.compose import make_column_transformer
from sklearn.metrics import accuracy_score, make_scorer, balanced_accuracy_score, recall_score
from yellowbrick.classifier import ClassificationReport, ConfusionMatrix, ROCAUC, PrecisionRecallCurve
from lightgbm import LGBMClassifier
from sklearn import datasets
def prepare_column_names(columns):
    return [col.replace(' ', '').replace('(cm)', '_cm') for col in columns]
sklearn_data = datasets.load_iris()
columns = prepare_column_names(sklearn_data.feature_names)
df = pd.DataFrame(data=sklearn_data.data, columns=columns)
df['class'] = sklearn_data.target
df['class'] = df['class'].astype(str)
df.head()
target = 'class'
X = df.drop(target, axis=1).copy()
Y = df.loc[:, target].copy()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
classes = ['setosa', 'versicolor', 'virginica']
numerical_features = X.columns.tolist()
def classification_report(model):
    visualizer = ClassificationReport(model, classes=classes, support=True)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.poof()

def roc_auc(model):
    visualizer = ROCAUC(model, classes=classes)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.poof()

def confusion_matrix(model):
    visualizer = ConfusionMatrix(model, classes=classes)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.poof()

def precision_recall_curve(model):
    visualizer = PrecisionRecallCurve(model)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.poof()

def score_model(model, X, y):
    evaluation_metric_scorer = make_scorer(balanced_accuracy_score, greater_is_better=True)
    scores = cross_val_score(estimator=model, X=X, y=y, cv=10, scoring=evaluation_metric_scorer, n_jobs=-1)
    scores_mean = scores.mean()
    score_std = scores.std()
    print('Mean of metric: {}, std: {}'.format(scores_mean, score_std))

def score_test_data_for_model(model, X_test, y_test):
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print('Balanced Accuracy: {}'.format(balanced_accuracy_score(y_test, y_pred)))
    print('Accuracy: {}'.format(accuracy_score(y_test, y_pred)))

def create_new_model():
    model = make_pipeline(StandardScaler(), LogisticRegression(random_state=77, n_jobs=-1, solver='saga'))
    return model
```
# Using LightGBM
```
lgbm_model = LGBMClassifier(n_estimators=100, random_state=7)
score_model(lgbm_model, X, Y)
classification_report(lgbm_model)
confusion_matrix(lgbm_model)
precision_recall_curve(lgbm_model)
roc_auc(lgbm_model)
lgbm_model.fit(X_train, y_train)
score_test_data_for_model(lgbm_model, X_test, y_test)
```
# Using Logistic Regression
```
model = create_new_model()
score_model(model, X, Y)
classification_report(model)
confusion_matrix(model)
precision_recall_curve(model)
roc_auc(model)
score_test_data_for_model(model, X_test, y_test)
```
# Using ATgfe
```
model = create_new_model()
def micro_recall_score(y_true, y_pred):
    return recall_score(y_true, y_pred, average='micro')
gfe = GeneticFeatureEngineer(model, x_train=X_train, y_train=y_train, numerical_features=numerical_features,
number_of_candidate_features=2, number_of_interacting_features=4,
evaluation_metric=micro_recall_score, minimize_metric=False, enable_weights=True,
n_jobs=62, cross_validation_in_objective_func=True, objective_func_cv=3)
gfe.fit(mu=10, lambda_=120, early_stopping_patience=5, mutation_probability=0.4, crossover_probability=0.6)
```
# Apply GFE
```
new_X = gfe.transform(X)
new_X.head(20)
model = create_new_model()
score_model(model, new_X, Y)
X_train, X_test, y_train, y_test = train_test_split(new_X, Y, test_size=0.3, random_state=42)
classification_report(model)
confusion_matrix(model)
precision_recall_curve(model)
roc_auc(model)
score_test_data_for_model(model, X_test, y_test)
```
| github_jupyter |
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from xgboost.sklearn import XGBClassifier
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from collections import Counter
from sklearn import decomposition
from sklearn.metrics import log_loss
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV
import imblearn
from imblearn.over_sampling import RandomOverSampler
from matplotlib import pyplot as plt
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("data"))
# Any results you write to the current directory are saved as output.
!pip install imblearn
# Read training data
data = pd.read_csv('data/train.csv')
from sklearn.model_selection import train_test_split
train_X = data.drop(["target","id"], axis=1)
```
# Representation of the target with numerical values
```
le = LabelEncoder()
le.fit(data["target"])
train_y = le.transform(data["target"])
```
# Splitting the data (train.csv)
```
# split train set into 2 parts with same distribution: 80% train, 20% validation
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
for train_index, test_index in sss.split(train_X.values, train_y):
    X_train = train_X.values[train_index]
    X_val = train_X.values[test_index]
    y_train = train_y[train_index]
    y_val = train_y[test_index]
```
# Preprocessing
## Null values ?
```
missing_val_count_by_column = (data.isnull().sum())
print(missing_val_count_by_column.sum())
data.describe()
```
## Balance in the class ?
```
data["target"].value_counts().plot.bar()
data["target"].value_counts()
ros = RandomOverSampler()
X_ros, y_ros = ros.fit_resample(X_train, y_train)
unique, counts = np.unique(y_ros, return_counts=True)
print(np.asarray((unique, counts)).T)
pd.Series(y_ros).value_counts().plot.bar()
```
## Scaling
```
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_val_scaled = scaler.transform(X_val)
test_data = pd.read_csv('data/test.csv')
test_X = test_data.drop(["id"], axis=1)
scaler_all = StandardScaler()
train_X_scaled = scaler_all.fit_transform(train_X)
test_X_scaled = scaler_all.transform(test_X)
```
## PCA ?
```
pca = decomposition.PCA(n_components=20)
pca.fit(X_train_scaled)
X_train_pca = pca.transform(X_train_scaled)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_)
```
## Determine number of components
```
pca = decomposition.PCA()
pca.fit(X_train_scaled)
X_train_pca = pca.transform(X_train_scaled)
#print(np.cumsum(pca.explained_variance_ratio_))
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
At least 95% of the variance in the data can be explained by 77 components.
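This threshold can also be computed programmatically: with the cumulative explained-variance ratio, the component count is the first index reaching 0.95, and scikit-learn's `PCA` additionally accepts a float `n_components` to select that count directly. A sketch on synthetic stand-in data (the real `X_train_scaled` comes from the competition file, which has 93 features):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for X_train_scaled (illustration only)
rng = np.random.RandomState(0)
X_demo = rng.normal(size=(500, 93))

pca = PCA().fit(X_demo)
cumvar = np.cumsum(pca.explained_variance_ratio_)
# Smallest number of components whose cumulative explained variance reaches 95%
n_components_95 = int(np.searchsorted(cumvar, 0.95) + 1)

# PCA can select this count directly from a float threshold
pca_95 = PCA(n_components=0.95).fit(X_demo)
print(n_components_95, pca_95.n_components_)
```

On the actual training data this reproduces the figure of roughly 77 components quoted above.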
# XGBOOST
```
xgb = XGBClassifier()
xgb.fit(X_train_scaled, y_train)
preds = xgb.predict_proba(X_val_scaled)
score = log_loss(y_val, preds)
print("test data log loss eval : {}".format(log_loss(y_val,preds)))
xgb.get_params()
```
# Fitting and Tuning an Algorithm
```
from sklearn.model_selection import GridSearchCV
"""
param_test = {
'n_estimators': [300],
'n_jobs': [4], #Number of jobs to run in parallel. -1 means using all processors
}
gsearch = GridSearchCV(estimator = XGBClassifier(), param_grid = param_test, scoring='neg_log_loss', n_jobs=-1,iid=False, cv=3,verbose=1, return_train_score=True)
gsearch.fit(X_train_scaled,y_train)
pd.DataFrame(gsearch.cv_results_)
"""
scores = []
n_estimators = [100,200,400,450,500,525,550,600,700]
for nes in n_estimators:
    xgb = XGBClassifier(learning_rate=0.1, n_estimators=nes, max_depth=7, min_child_weight=3, subsample=0.8,
                        colsample_bytree=0.8, nthread=4, seed=42, objective='multi:softprob')
    xgb.fit(X_train_scaled, y_train)
    preds = xgb.predict_proba(X_val_scaled)
    score = log_loss(y_val, preds)
    scores.append(score)
    print("test data log loss eval : {}".format(score))
plt.plot(n_estimators, scores, 'o-')
plt.ylabel("log loss")
plt.xlabel("n_estimator")
print("best n_estimator {}".format(n_estimators[np.argmin(scores)]))
scores_md = []
max_depths = [1,3,5,6,7,8,10]
for md in max_depths:
    xgb = XGBClassifier(learning_rate=0.1, n_estimators=n_estimators[np.argmin(scores)],
                        max_depth=md, min_child_weight=3, subsample=0.8,
                        colsample_bytree=0.8, nthread=4, seed=42, objective='multi:softprob')
    xgb.fit(X_train_scaled, y_train)
    preds = xgb.predict_proba(X_val_scaled)
    score = log_loss(y_val, preds)
    scores_md.append(score)
    print("test data log loss eval : {}".format(score))
plt.plot(max_depths, scores_md, 'o-')
plt.ylabel("log loss")
plt.xlabel("max_depth")
print("best max_depth {}".format(max_depths[np.argmin(scores_md)]))
scores_mcw = []
min_child_weights = [1,2,3,4,5]
for mcw in min_child_weights:
    xgb = XGBClassifier(learning_rate=0.1, n_estimators=n_estimators[np.argmin(scores)],
                        max_depth=max_depths[np.argmin(scores_md)],
                        min_child_weight=mcw, subsample=0.8,
                        colsample_bytree=0.8, nthread=4, seed=42, objective='multi:softprob')
    xgb.fit(X_train_scaled, y_train)
    preds = xgb.predict_proba(X_val_scaled)
    score = log_loss(y_val, preds)
    scores_mcw.append(score)
    print("test data log loss eval : {}".format(score))
plt.plot(min_child_weights, scores_mcw, "o-")
plt.ylabel("log loss")
plt.xlabel("min_child_weight")
print("best min_child_weight {}".format(min_child_weights[np.argmin(scores_mcw)]))
scores_ss = []
subsamples = [0.5,0.6,0.7,0.8,0.9,1]
for ss in subsamples:
    xgb = XGBClassifier(learning_rate=0.1, n_estimators=n_estimators[np.argmin(scores)],
                        max_depth=max_depths[np.argmin(scores_md)],
                        min_child_weight=min_child_weights[np.argmin(scores_mcw)], subsample=ss,
                        colsample_bytree=0.8, nthread=4, seed=42, objective='multi:softprob')
    xgb.fit(X_train_scaled, y_train)
    preds = xgb.predict_proba(X_val_scaled)
    score = log_loss(y_val, preds)
    scores_ss.append(score)
    print("test data log loss eval : {}".format(score))
plt.plot(subsamples, scores_ss, "o-")
plt.ylabel("log loss")
plt.xlabel("subsample")
print("best subsample {}".format(subsamples[np.argmin(scores_ss)]))
scores_cb = []
colsample_bytrees = [0.5,0.6,0.7,0.8,0.9,1]
for cb in colsample_bytrees:
    xgb = XGBClassifier(learning_rate=0.1, n_estimators=n_estimators[np.argmin(scores)],
                        max_depth=max_depths[np.argmin(scores_md)],
                        min_child_weight=min_child_weights[np.argmin(scores_mcw)],
                        subsample=subsamples[np.argmin(scores_ss)],
                        colsample_bytree=cb, nthread=4, seed=42, objective='multi:softprob')
    xgb.fit(X_train_scaled, y_train)
    preds = xgb.predict_proba(X_val_scaled)
    score = log_loss(y_val, preds)
    scores_cb.append(score)
    print("test data log loss eval : {}".format(score))
plt.plot(colsample_bytrees, scores_cb, "o-")
plt.ylabel("log loss")
plt.xlabel("colsample_bytree")
print("best colsample_bytree {}".format(colsample_bytrees[np.argmin(scores_cb)]))
scores_eta = []
etas = [0.001,0.01,0.1,0.2,0.3,0.5,1]
for eta in etas:
    xgb = XGBClassifier(learning_rate=eta, n_estimators=n_estimators[np.argmin(scores)],
                        max_depth=max_depths[np.argmin(scores_md)],
                        min_child_weight=min_child_weights[np.argmin(scores_mcw)],
                        subsample=subsamples[np.argmin(scores_ss)],
                        colsample_bytree=colsample_bytrees[np.argmin(scores_cb)],
                        nthread=4, seed=42, objective='multi:softprob')
    xgb.fit(X_train_scaled, y_train)
    preds = xgb.predict_proba(X_val_scaled)
    score = log_loss(y_val, preds)
    scores_eta.append(score)
    print("test data log loss eval : {}".format(score))
plt.plot(etas, scores_eta, "o-")
plt.ylabel("log loss")
plt.xlabel("eta")
print("best eta {}".format(etas[np.argmin(scores_eta)]))
xgb = XGBClassifier(learning_rate=etas[np.argmin(scores_eta)], n_estimators=n_estimators[np.argmin(scores)],
                    max_depth=max_depths[np.argmin(scores_md)],
                    min_child_weight=min_child_weights[np.argmin(scores_mcw)],
                    subsample=subsamples[np.argmin(scores_ss)],
                    colsample_bytree=colsample_bytrees[np.argmin(scores_cb)],
                    nthread=4, seed=42, objective='multi:softprob')
calibrated_xgb = CalibratedClassifierCV(xgb, cv=5, method='isotonic')
calibrated_xgb.fit(X_train_scaled, y_train)
preds = calibrated_xgb.predict_proba(X_val_scaled)
score = log_loss(y_val, preds)
print("test data log loss eval : {}".format(score))
```
# submission
```
xgb = XGBClassifier(learning_rate =0.1, n_estimators=525, max_depth=8, min_child_weight=3, subsample=0.7,
colsample_bytree=0.7, nthread=4, seed=42, objective='multi:softprob')
my_model = CalibratedClassifierCV(xgb, cv=5, method='isotonic')
my_model.fit(train_X_scaled,train_y)
test_preds = my_model.predict_proba(test_X_scaled)
output = pd.DataFrame(test_preds,columns=["Class_"+str(i) for i in range(1,10)])
output.insert(loc=0, column='id', value=test_data.id)
output.to_csv('submission.csv', index=False)
test_data.head()
output.head()
```
| github_jupyter |
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
!pip install citipy
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
    city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
    if city not in cities:
        cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
#Starting URL for Weather Map API call
url = f"http://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID={weather_api_key}"
#list of city data
city_data = []
#print to logger
print("Beginning Data Retrieval")
print("-" * 15)
#create counters
record_count = 1
set_count = 1
#loop through all the cities in the list
for index, city in enumerate(cities):
    #group cities in sets of 50 for logging purposes
    if (index % 50 == 0 and index >= 50):
        set_count += 1
        record_count = 0
    #create endpoint URL with each city
    city_url = url + "&q=" + city
    #log the url record and set number
    print(f"Processing Record {record_count} of set {set_count} | {city}")
    record_count += 1
    #Run an API request for each of the cities
    try:
        #parse the JSON and retrieve data
        city_weather = requests.get(city_url).json()
        #extract max temp, humidity, and cloudiness
        city_lat = city_weather["coord"]["lat"]
        city_lng = city_weather["coord"]["lon"]
        city_max_temp = city_weather["main"]["temp_max"]
        city_humidity = city_weather["main"]["humidity"]
        city_clouds = city_weather["clouds"]["all"]
        city_wind = city_weather["wind"]["speed"]
        city_country = city_weather["sys"]["country"]
        city_date = city_weather["dt"]
        #append the city info into city_data
        city_data.append({
            "City": city,
            "Lat": city_lat,
            "Lng": city_lng,
            "Max Temp": city_max_temp,
            "Humidity": city_humidity,
            "Cloudiness": city_clouds,
            "Wind Speed": city_wind,
            "Country": city_country,
            "Date": city_date
        })
    except:
        print("City not found. Skipping...")
#indicate that data loading is complete
print("---------------")
print("Data Retrieval Complete")
print("---------------")
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
#convert array of JSON into Pandas
city_data_df = pd.DataFrame(city_data)
#extract relevant fields from the data frame
lats = city_data_df["Lat"]
max_temps = city_data_df["Max Temp"]
humidity = city_data_df["Humidity"]
cloudiness = city_data_df["Cloudiness"]
wind_speed = city_data_df["Wind Speed"]
city_data_df.to_csv(output_data_file, index_label = "City_ID")
city_data_df.count()
#display the city data frame
city_data_df.head()
```
### Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
#### Latitude vs. Temperature Plot
```
#build scatter plot for latitude vs. temperature
plt.scatter(lats,
max_temps,
edgecolor="black", linewidth=1, marker="o",
alpha=0.5, label="Cities")
#incorporate the other graph properties
plt.title("City Latitude vs. Max Temperature (%s)" % time.strftime("%x"))
plt.ylabel("Max Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
#save the figure
plt.savefig("output_data/Fig1.png")
#show plot
plt.show()
```
#### Latitude vs. Humidity Plot
```
#build scatter plot for latitude vs. humidity
plt.scatter(lats,
humidity,
edgecolor="black", linewidth=1, marker="o",
alpha=0.5, label="Cities")
#incorporate the other graph properties
plt.title("City Latitude vs. Humidity (%s)" % time.strftime("%x"))
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
#save the figure
plt.savefig("output_data/Fig2.png")
#show plot
plt.show()
```
#### Latitude vs. Cloudiness Plot
```
#build scatter plot for latitude vs. cloudiness
plt.scatter(lats,
cloudiness,
edgecolor="black", linewidth=1, marker="o",
alpha=0.5, label="Cities")
#incorporate the other graph properties
plt.title("City Latitude vs. Cloudiness (%s)" % time.strftime("%x"))
plt.ylabel("Cloudiness (%)")
plt.xlabel("Latitude")
plt.grid(True)
#save the figure
plt.savefig("output_data/Fig3.png")
#show plot
plt.show()
```
#### Latitude vs. Wind Speed Plot
```
#build scatter plot for latitude vs. wind speed
plt.scatter(lats,
wind_speed,
edgecolor="black", linewidth=1, marker="o",
alpha=0.5, label="Cities")
#incorporate the other graph properties
plt.title("City Latitude vs. Wind Speed (%s)" % time.strftime("%x"))
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.grid(True)
#save the figure
plt.savefig("output_data/Fig4.png")
#show plot
plt.show()
```
## Linear Regression
```
# OPTIONAL: Create a function to create Linear Regression plots
def plot_linear_regression(x_values, y_values, title, text_coordinates):
    #run the linear regression
    (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
    regress_values = x_values * slope + intercept
    line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
    #plot
    plt.scatter(x_values, y_values)
    plt.plot(x_values, regress_values, "r-")
    plt.annotate(line_eq, text_coordinates, fontsize=15, color="red")
    plt.xlabel("Latitude")
    plt.ylabel(title)
    print(f"The r-squared is {rvalue**2}")
    plt.show()
# Create Northern and Southern Hemisphere DataFrames
northern_hemi_df = city_data_df.loc[(city_data_df["Lat"] >= 0)]
southern_hemi_df = city_data_df.loc[(city_data_df["Lat"] < 0)]
```
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Max Temp"]
plot_linear_regression(x_values, y_values, "Max Temp", (6,30))
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Max Temp"]
plot_linear_regression(x_values, y_values, "Max Temp", (-30,40))
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Humidity"]
plot_linear_regression(x_values, y_values, "Humidity", (40,10))
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Humidity"]
plot_linear_regression(x_values, y_values, "Humidity", (-30,150))
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Cloudiness"]
plot_linear_regression(x_values, y_values, "Cloudiness", (40,30))
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Cloudiness"]
plot_linear_regression(x_values, y_values, "Cloudiness", (-30,30))
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Wind Speed"]
plot_linear_regression(x_values, y_values, "Wind Speed", (40,25))
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Wind Speed"]
plot_linear_regression(x_values, y_values, "Wind Speed", (-30,30))
```
| github_jupyter |
# Performance - example
```
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

def Performance(G):
    def capacity_b(Graph):
        return np.array(np.repeat(1, len(Graph)))

    def get_max_flow_demand(G):
        # collect the maximum-flow value between every ordered pair of distinct nodes
        results = []
        for i in G:
            for j in G:
                if i == j:
                    continue
                results.append(int(nx.maximum_flow_value(G, i, j)))
        return results

    def fill_routing_matrix(route_graph):
        routing_matrix = np.array(np.repeat(0, len(route_graph)))
        # for each node in the graph calculate its standard shortest-path routing with Dijkstra
        for n in route_graph:
            all_path = nx.single_source_dijkstra_path(route_graph, n)
            # encode each shortest path as an array with one element per node:
            # the element is 1 when the shortest path flows through that node, otherwise 0
            for path in all_path:
                tmp = np.array(np.repeat(0, len(route_graph)))
                for path_element in all_path[path]:
                    node_idx = 0
                    for link in route_graph:
                        if path_element == link:
                            tmp[node_idx] = 1
                            break  # stop once the node is found (continuing would corrupt the index)
                        node_idx += 1
                # add each shortest path, transformed into the array tmp, to the routing matrix
                if len(tmp) == len(route_graph):
                    routing_matrix = np.vstack([routing_matrix, tmp])
        return routing_matrix

    b = capacity_b(G)
    R_fl = fill_routing_matrix(G)
    R_lf = np.transpose(R_fl)
    x = np.array(np.repeat(1, len(R_fl)))
    Rx = np.dot(R_lf, x)
    rho = 2
    x = get_max_flow_demand(G)
    return rho * sum(x)

G = nx.DiGraph()
for node in range(1, 6):
    G.add_node(node, capacity=1)
G.add_edge(1, 2, capacity=3)
G.add_edge(2, 3, capacity=1)
G.add_edge(3, 4, capacity=3)
G.add_edge(4, 3, capacity=5)
G.add_edge(4, 5, capacity=4)
G.add_edge(1, 4, capacity=2)
G.add_edge(3, 5, capacity=2)
G.add_edge(5, 2, capacity=3)
G.add_edge(1, 5, capacity=3)
G = G.to_undirected()

Performance(G)
nx.draw(G, with_labels=True, node_size=1000, pos=nx.spring_layout(G))
plt.show()
```
**This notebook analyzes text using the Google Cloud Natural Language library.**
**Pricing for using this API:**
Your Natural Language usage is measured in **"units"**, where each document sent to the API for analysis counts as **at least one unit**. Documents containing more than 1,000 Unicode characters (including whitespace characters and any markup characters such as HTML or XML tags) are counted as multiple units, **one unit per 1,000 characters**.
Natural Language pricing is calculated **monthly**, based on which API features you used and how many units were evaluated with those features. The table below gives the price per 1,000 units based on the **total number of units analyzed during the billing month**.
1. - Entity Analysis: identify entities and label by types such as person, organization, location, events, products and media.
    - up to 5K units free; from 5K to 1M: $1 per 1K units
2. - Sentiment Analysis: understand the overall sentiment expressed in a block of text.
    - up to 5K units free; from 5K to 1M: $1 per 1K units
3. - Entity Sentiment Analysis: understand the sentiment for entities identified in a block of text.
    - up to 5K units free; from 5K to 1M: $2 per 1K units
4. - Syntax Analysis: extract tokens and sentences, identify parts of speech (PoS) and create dependency parse trees for each sentence.
    - up to 5K units free; from 5K to 1M: $0.5 per 1K units
5. - Content Classification: identify content categories that apply to a block of text.
    - up to 30K units free; from 5K to 1M: $2 per 1K units
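Based on the rules above, the number of billable units for a document is at least one, plus one unit per started block of 1,000 characters. A small sketch of a cost estimator follows; the per-1K prices and free tiers are taken from the table above and are assumptions tied to this notebook, not current Google pricing:

```python
import math

# price per 1,000 units once past the free tier (from the table above)
PRICES_PER_1K = {
    "entity": 1.00,
    "sentiment": 1.00,
    "entity_sentiment": 2.00,
    "syntax": 0.50,
    "classification": 2.00,
}
FREE_UNITS = {"entity": 5000, "sentiment": 5000, "entity_sentiment": 5000,
              "syntax": 5000, "classification": 30000}

def units_for_document(text: str) -> int:
    # each document is at least one unit; one unit per 1,000 characters
    return max(1, math.ceil(len(text) / 1000))

def estimated_cost(total_units: int, feature: str) -> float:
    # only units beyond the free tier are billed
    billable = max(0, total_units - FREE_UNITS[feature])
    return billable / 1000 * PRICES_PER_1K[feature]
```

For 42,706 one-unit reviews, summing `estimated_cost` over sentiment, entity and syntax analysis gives about $94.3, matching the forecast computed later in this notebook.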
In this task, we will use the following to analyze the text reviews:
- Sentiment Analysis - to assess the polarity and strength of sentiment;
- Entity Analysis - to identify and classify significant words;
- Syntax Analysis - to split the text into words and sentences.
- **After this code was run, the API key was removed from the code for the security of my Google account. All information obtained from processing was saved to a CSV file for further work.**
- **The dataset was split into small chunks so that a random failure would not lose data whose API processing had already been paid for.**
- **The export of the resulting data was also done in parts, to minimize the size of the individual files uploaded to GitHub.**
# Importing libraries and settings
```
import os
import numpy as np
import pandas as pd
from google.cloud import language_v1
from tqdm import tqdm
# this block is for tracking the progress of Pandas operations
from tqdm.notebook import tqdm_notebook
tqdm_notebook.pandas()
PATH = "./data"  # relative path to the data subfolder
pd.set_option('display.max_columns', None)
df = pd.read_csv(os.path.join(PATH, 'order_reviews.csv'), sep=';')
df.shape
df.head()
```
# Dataset preprocessing. Forecasting the processing costs
- Forecast: 42706*3 = 128118 requests and an estimated cost of $94.3.
- Actual: 128082 requests and a total cost of €80.65, of which entity analysis - €32.3, sentiment - €32.3, syntax - €16.13.
```
df['C']=df.progress_apply(lambda x: ("" if pd.isna(x['review_comment_title']) else str(x['review_comment_title']) + '. ')\
+ ("" if pd.isna(x['review_comment_message']) else str(x['review_comment_message'])), axis=1)
count_reviews=df.apply(lambda x: 0 if x['C']=="" else 1, axis=1).sum()
print('Number of reviews (units) to send to the API: ', count_reviews)
print('Estimated cost of API processing, USD: ', (count_reviews-5000)/1000*2.5)
```
# Dataset processing functions
```
# filters the dataset and sends only non-empty values to the API, so we do not pay for empty requests.
# it also calls the three Google API functions
def get_text_for_nlp(message):
if len(message)<2:
#print(0)
return 0
else:
#print(message)
        sent_score, sent_magnitude = analyze_text_sentiment(message)  # two values
        entities_list = analyze_text_entities(message)  # list of entity dictionaries
        sent_count, token_count, sentlist, tokenlist = analyze_text_syntax2(message)  # syntax analysis
return sent_score, sent_magnitude, entities_list, sent_count, token_count, sentlist, tokenlist
def analyze_text_sentiment(text):
try:
client = language_v1.LanguageServiceClient.from_service_account_json('celestial-digit-0000000000.json')
document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
response = client.analyze_sentiment(document=document)
sentiment = response.document_sentiment
return sentiment.score, sentiment.magnitude
except Exception:
        print('Error in the sentiment function')
        return 'NO', 'NO'
def analyze_text_entities(text):
try:
client = language_v1.LanguageServiceClient.from_service_account_json('celestial-digit-00000000000.json')
document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
response = client.analyze_entities(document=document)
sss=[]
for entity in response.entities:
results = dict(
name=entity.name,
type=entity.type_.name,
salience=entity.salience,
wikipedia_url=entity.metadata.get("wikipedia_url", "-"),
mid=entity.metadata.get("mid", "-"),
)
sss.append(results)
return sss
except Exception:
        print('Error in the entities function')
        return 'NO'
def analyze_text_syntax2(text):
try:
client = language_v1.LanguageServiceClient.from_service_account_json('celestial-digit-00000000000.json')
document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
response = client.analyze_syntax(document=document)
sent_count=len(response.sentences)
token_count=len(response.tokens)
sentlist=[]
for sentence in response.sentences:
sentlist.append(sentence.text.content)
tokenlist=[]
for token in response.tokens:
results = dict(
token_text=token.text.content,
token_label=token.dependency_edge.label.name,
token_head_index=token.dependency_edge.head_token_index,
token_tag=token.part_of_speech.tag.name,
token_gender=token.part_of_speech.gender.name,
token_number=token.part_of_speech.number.name,
token_proper=token.part_of_speech.proper.name,
token_lemma=token.lemma,
)
tokenlist.append(results)
return sent_count, token_count, sentlist, tokenlist
except Exception:
        print('Error in the syntax2 function')
        return 'NO', 'NO', 'NO', 'NO'
```
# Running the processing
Since this is a slow process on my hardware, and failures occurred on the API side, the dataset was split into very small chunks before being sent for processing, so that already-processed information would not be lost.
**In total, the processing took almost a day.**
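Since API-side failures were one reason for splitting the dataset into small chunks, another way to harden each request is a retry wrapper with exponential backoff around the API calls. This is a generic sketch, not part of the original notebook; `max_attempts` and the delay values are arbitrary assumptions:

```python
import time
import functools

def with_retries(max_attempts=3, base_delay=1.0):
    """Retry a flaky function, doubling the delay between attempts."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # give up after the last attempt
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

# usage sketch: wrap an API call such as analyze_text_sentiment
# @with_retries(max_attempts=3)
# def analyze_text_sentiment(text): ...
```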
```
#for i in tqdm(range(1,43)):
df['tmp']=np.nan
for i in tqdm(range(1,43)):
    print('from ', (i-1)*1000, ' up to and including ', i*1000)
print(df['review_id'][(i-1)*1000],'---',df['review_id'][i*1000],'---',df['review_id'][(i-1)*1000:i*1000].shape)
df['tmp'][(i-1)*1000:i*1000] = df['C'][(i-1)*1000:i*1000].progress_apply(get_text_for_nlp)
df5=df[['review_id','order_id','tmp']]
filename1='tmp'+str(i)+'.csv'
df5[(i-1)*1000:i*1000].to_csv(os.path.join(PATH, filename1), sep=';', header=True, index=False)
for i in tqdm(range(43,100)):
    print('from ', (i-1)*1000, ' up to and including ', i*1000)
print(df['review_id'][(i-1)*1000],'---',df['review_id'][i*1000],'---',df['review_id'][(i-1)*1000:i*1000].shape)
df['tmp'][(i-1)*1000:i*1000] = df['C'][(i-1)*1000:i*1000].progress_apply(get_text_for_nlp)
df5=df[['review_id','order_id','tmp']]
filename1='tmp'+str(i)+'.csv'
df5[(i-1)*1000:i*1000].to_csv(os.path.join(PATH, filename1), sep=';', header=True, index=False)
#for i in tqdm(range(99000,99225)):
# print('from ', (i-1), ' up to and including ', i)
# print(df['review_id'][(i-1)],'---',df['review_id'][i],'---',df['review_id'][(i-1):i].shape)
df['tmp'][99000:99225] = df['C'][99000:99225].progress_apply(get_text_for_nlp)
df5=df[['review_id','order_id','tmp']]
filename1='tmp'+str(100)+'.csv'
df5[99000:99225].to_csv(os.path.join(PATH, filename1), sep=';', header=True, index=False)
```
# Saving intermediate results, expanding the returned data into columns, and saving different sets of fields to CSV files
```
# save the intermediate API results to an external file, linked to the key fields.
filename1='tmp_all.csv'
df5.to_csv(os.path.join(PATH, filename1), sep=';', header=True, index=False)
df[['C','tmp']][0:10]
# review texts that were not processed. These are single-character values.
df[(df['C']!="") & (df['tmp']==0)][['C','tmp']].shape
df5[['review_id','order_id','tmp']][15:21]
df3=df.tmp.progress_apply(pd.Series) # variant 2
df3.rename(columns={0:'sent_score',1:'sent_magnitude', 2:'entities_list',3:'sentences_count',4:'token_count',5:'sentlist',6:'tokenlist'}, inplace=True)
df3.shape
df3.to_csv(os.path.join(PATH, 'tmp_all_just.csv'), sep=';', header=True, index=False)
df3.head(4)
df4=pd.concat([df,df3],ignore_index=False, axis=1)
print(df4.shape)
```
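The `df.tmp.progress_apply(pd.Series)` step above unpacks each returned tuple into one column per element. A small self-contained illustration of the same idea, using `tolist()` instead of `apply(pd.Series)` (which is much faster) and made-up sentiment values:

```python
import pandas as pd

# a column of tuples, as returned by get_text_for_nlp (values here are made up)
df_demo = pd.DataFrame({"tmp": [(0.5, 1.2), (-0.3, 0.8)]})

# expand the tuple column into one column per tuple element
expanded = pd.DataFrame(df_demo["tmp"].tolist(),
                        columns=["sent_score", "sent_magnitude"])
print(expanded)
```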
## This is the final file for further work - the key fields 'review_id', 'order_id' and the expanded data from the API
Just in case, I save it with two different separators, and also as a full file plus reduced ones. These files will not fit on GitHub in their original form.
```
columns_for_used=['review_id','order_id','sent_score','sent_magnitude', 'entities_list','sentences_count','token_count','sentlist','tokenlist']
columns_for_used_num=['review_id','order_id','sent_score','sent_magnitude', 'sentences_count','token_count']
columns_for_used_entit=['review_id','order_id','entities_list']
columns_for_used_sent=['review_id','order_id','sentlist']
columns_for_used_token=['review_id','order_id','tokenlist']
df4.to_csv(os.path.join(PATH, 'order_reviews_only_Google_full.csv'), sep=';', header=True, index=False)
df4.to_csv(os.path.join(PATH, 'order_reviews_only_Google_full2.csv'), sep='|', header=True, index=False)
df4[columns_for_used].to_csv(os.path.join(PATH, 'order_reviews_only_Google1.csv'), sep=';', header=True, index=False)
df4[columns_for_used].to_csv(os.path.join(PATH, 'order_reviews_only_Google2.csv'), sep='|', header=True, index=False)
df4[columns_for_used_num].to_csv(os.path.join(PATH, 'order_reviews_only_Google_num1.csv'), sep=';', header=True, index=False)
df4[columns_for_used_num].to_csv(os.path.join(PATH, 'order_reviews_only_Google_num2.csv'), sep='|', header=True, index=False)
df4[columns_for_used_entit].to_csv(os.path.join(PATH, 'order_reviews_only_Google_entit1.csv'), sep=';', header=True, index=False)
df4[columns_for_used_entit].to_csv(os.path.join(PATH, 'order_reviews_only_Google_entit2.csv'), sep='|', header=True, index=False)
df4[columns_for_used_sent].to_csv(os.path.join(PATH, 'order_reviews_only_Google_sent1.csv'), sep=';', header=True, index=False)
df4[columns_for_used_sent].to_csv(os.path.join(PATH, 'order_reviews_only_Google_sent2.csv'), sep='|', header=True, index=False)
df4[columns_for_used_token].to_csv(os.path.join(PATH, 'order_reviews_only_Google_token1.csv'), sep=';', header=True, index=False)
df4[columns_for_used_token].to_csv(os.path.join(PATH, 'order_reviews_only_Google_token2.csv'), sep='|', header=True, index=False)
```
```
import h2o
h2o.init(max_mem_size = 2) #uses all cores by default
h2o.remove_all()
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
higgs = h2o.import_file('higgs_boston_train.csv')
higgs.head()
higgs.shape
higgs_df = higgs.as_data_frame(use_pandas=True)
higgs_df['Label'].value_counts()
higgs.describe()
train, valid, test = higgs.split_frame([0.6, 0.2], seed = 2019)
higgs_X = higgs.col_names[1: -1]
higgs_y = higgs.col_names[-1]
higgs_model_v1 = H2ODeepLearningEstimator(model_id = 'higgs_v1', epochs = 1, variable_importances = True)
higgs_model_v1.train(higgs_X, higgs_y, training_frame = train, validation_frame = valid)
print(higgs_model_v1)
var_df = pd.DataFrame(higgs_model_v1.varimp(), columns = ['Variable', 'Relative Importance', 'Scaled Importance', 'Percentage'])
print(var_df.shape)
var_df.head(10)
higgs_v1_df = higgs_model_v1.score_history()
higgs_v1_df
plt.plot(higgs_v1_df['training_classification_error'], label="training_classification_error")
plt.plot(higgs_v1_df['validation_classification_error'], label="validation_classification_error")
plt.title("Higgs Deep Learner")
plt.legend();
pred = higgs_model_v1.predict(test[1:-1]).as_data_frame(use_pandas=True)
test_actual = test.as_data_frame(use_pandas=True)['Label']
(test_actual == pred['predict']).mean()
higgs_model_v2 = H2ODeepLearningEstimator(model_id = 'higgs_v2', hidden = [32, 32, 32], epochs = 1000000,
score_validation_samples = 10000, stopping_rounds = 2, stopping_metric = 'misclassification',
stopping_tolerance = 0.01)
higgs_model_v2.train(higgs_X, higgs_y, training_frame = train, validation_frame = valid)
higgs_v2_df = higgs_model_v2.score_history()
higgs_v2_df
plt.plot(higgs_v2_df['training_classification_error'], label="training_classification_error")
plt.plot(higgs_v2_df['validation_classification_error'], label="validation_classification_error")
plt.title("Higgs Deep Learner (Early Stop)")
plt.legend();
pred = higgs_model_v2.predict(test[1:-1]).as_data_frame(use_pandas=True)
test_actual = test.as_data_frame(use_pandas=True)['Label']
(test_actual == pred['predict']).mean()
higgs_model_v2.varimp_plot();
from h2o.automl import H2OAutoML
aml = H2OAutoML(max_models = 10, max_runtime_secs=100, seed = 1)
aml.train(higgs_X, higgs_y, training_frame = train, validation_frame = valid)
aml.leaderboard
```
AutoML has built 5 models including GLM, DRF (Distributed Random Forest) and XRT (Extremely Randomized Trees) and two stacked ensemble models (the 2nd and 3rd), and the best model is XRT.
It turns out, my proud deep learning models are not even on the leaderboard.
# Residual Networks
Welcome to the first assignment of this week! You'll be building a very deep convolutional network, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously feasible.
**By the end of this assignment, you'll be able to:**
- Implement the basic building blocks of ResNets in a deep neural network using Keras
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification
- Implement a skip connection in your network
For this assignment, you'll use Keras.
Before jumping into the problem, run the cell below to load the required packages.
## Table of Contents
- [1 - Packages](#1)
- [2 - The Problem of Very Deep Neural Networks](#2)
- [3 - Building a Residual Network](#3)
- [3.1 - The Identity Block](#3-1)
- [Exercise 1 - identity_block](#ex-1)
- [3.2 - The Convolutional Block](#3-2)
- [Exercise 2 - convolutional_block](#ex-2)
- [4 - Building Your First ResNet Model (50 layers)](#4)
- [Exercise 3 - ResNet50](#ex-3)
- [5 - Test on Your Own Image (Optional/Ungraded)](#5)
- [6 - Bibliography](#6)
<a name='1'></a>
## 1 - Packages
```
import tensorflow as tf
import numpy as np
import scipy.misc
from tensorflow.keras.applications.resnet_v2 import ResNet50V2
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet_v2 import preprocess_input, decode_predictions
from tensorflow.keras import layers
from tensorflow.keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from tensorflow.keras.models import Model, load_model
from resnets_utils import *
from tensorflow.keras.initializers import random_uniform, glorot_uniform, constant, identity
from tensorflow.python.framework.ops import EagerTensor
from matplotlib.pyplot import imshow
from test_utils import summary, comparator
import public_tests
%matplotlib inline
```
<a name='2'></a>
## 2 - The Problem of Very Deep Neural Networks
Last week, you built your first convolutional neural networks: first manually with numpy, then using Tensorflow and Keras.
In recent years, neural networks have become much deeper, with state-of-the-art networks evolving from having just a few layers (e.g., AlexNet) to over a hundred layers.
* The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the shallower layers, closer to the input) to very complex features (at the deeper layers, closer to the output).
* However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent prohibitively slow.
* More specifically, during gradient descent, as you backpropagate from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode," from gaining very large values).
* During training, you might therefore see the magnitude (or norm) of the gradient for the shallower layers decrease to zero very rapidly as training proceeds, as shown below:
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> <b>Figure 1</b> </u><font color='purple'> : <b>Vanishing gradient</b> <br> The speed of learning decreases very rapidly for the shallower layers as the network trains </center></caption>
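A toy calculation makes the exponential decay concrete. If, hypothetically, each of $L$ layers scales the backpropagated gradient by a constant factor, the magnitude reaching the first layer behaves like that factor raised to the power $L$; the 0.5 and 1.5 factors below are illustrative assumptions, not properties of a real network:

```python
# toy illustration: gradient magnitude after backpropagating through L layers,
# assuming each layer scales the gradient by a constant factor
def gradient_magnitude(factor: float, num_layers: int) -> float:
    return factor ** num_layers

for L in (5, 20, 50, 100):
    print(L, gradient_magnitude(0.5, L))  # shrinks toward zero (vanishing)
    print(L, gradient_magnitude(1.5, L))  # grows without bound (exploding)
```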
Not to worry! You are now going to solve this problem by building a Residual Network!
<a name='3'></a>
## 3 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the model to skip layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> <b>Figure 2</b> </u><font color='purple'> : A ResNet block showing a skip-connection <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
The lecture mentioned that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance.
On that note, there is also some evidence that the ease of learning an identity function accounts for ResNets' remarkable performance even more than skip connections help with vanishing gradients.
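In equations: with the skip connection, the block computes

$$a^{[l+2]} = g\left(z^{[l+2]} + a^{[l]}\right)$$

so if the weights of the two skipped layers decay toward zero ($z^{[l+2]} \approx 0$), the block reduces to $a^{[l+2]} = g(a^{[l]}) = a^{[l]}$ for ReLU applied to non-negative activations, i.e., exactly the identity function.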
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are the same or different. You are going to implement both of them: the "identity block" and the "convolutional block."
<a name='3-1'></a>
### 3.1 - The Identity Block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> <b>Figure 3</b> </u><font color='purple'> : <b>Identity block.</b> Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, notice the CONV2D and ReLU steps in each layer. To speed up training, a BatchNorm step has been added. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> <b>Figure 4</b> </u><font color='purple'> : <b>Identity block.</b> Skip connection "skips over" 3 layers.</center></caption>
These are the individual steps:
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid". Use 0 as the seed for the random uniform initialization: `kernel_initializer = initializer(seed=0)`.
- The first BatchNorm is normalizing the 'channels' axis.
- Then apply the ReLU activation function. This has no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same". Use 0 as the seed for the random uniform initialization: `kernel_initializer = initializer(seed=0)`.
- The second BatchNorm is normalizing the 'channels' axis.
- Then apply the ReLU activation function. This has no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid". Use 0 as the seed for the random uniform initialization: `kernel_initializer = initializer(seed=0)`.
- The third BatchNorm is normalizing the 'channels' axis.
- Note that there is **no** ReLU activation function in this component.
Final step:
- The `X_shortcut` and the output from the 3rd layer `X` are added together.
- **Hint**: The syntax will look something like `Add()([var1,var2])`
- Then apply the ReLU activation function. This has no hyperparameters.
<a name='ex-1'></a>
### Exercise 1 - identity_block
Implement the ResNet identity block. The first component of the main path has been implemented for you already! First, you should read these docs carefully to make sure you understand what's happening. Then, implement the rest.
- To implement the Conv2D step: [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D)
- To implement BatchNorm: [BatchNormalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) `BatchNormalization(axis = 3)(X, training = training)`. If `training` is set to False, its weights are not updated with the new examples, i.e., when the model is used in inference mode.
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [Add](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Add)
We have added the initializer argument to our functions. This parameter receives an initializer function like the ones included in the package [tensorflow.keras.initializers](https://www.tensorflow.org/api_docs/python/tf/keras/initializers) or any other custom initializer. By default it will be set to [random_uniform](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/RandomUniform)
Remember that these functions accept a `seed` argument that can be any value you want, but that in this notebook must be set to 0 for **grading purposes**.
Here is where you're actually using the power of the Functional API to create a shortcut path:
```
# UNQ_C1
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, training=True, initializer=random_uniform):
"""
Implementation of the identity block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
training -- True: Behave in training mode
False: Behave in inference mode
initializer -- to set up the initial weights of a layer. Equals to random uniform initializer
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = 1, strides = (1,1), padding = 'valid', kernel_initializer = initializer(seed=0))(X)
X = BatchNormalization(axis = 3)(X, training = training) # Default axis
X = Activation('relu')(X)
### START CODE HERE
## Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = f, strides = (1,1), padding = 'same', kernel_initializer = initializer(seed=0))(X)
X = BatchNormalization(axis = 3)(X, training = training) # Default axis
X = Activation('relu')(X)
## Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = 1, strides = (1,1), padding = 'valid', kernel_initializer = initializer(seed=0))(X)
X = BatchNormalization(axis = 3)(X, training = training) # Default axis
## Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X_shortcut, X])
X = Activation('relu')(X)
### END CODE HERE
return X
np.random.seed(1)
X1 = np.ones((1, 4, 4, 3)) * -1
X2 = np.ones((1, 4, 4, 3)) * 1
X3 = np.ones((1, 4, 4, 3)) * 3
X = np.concatenate((X1, X2, X3), axis = 0).astype(np.float32)
A3 = identity_block(X, f=2, filters=[4, 4, 3],
initializer=lambda seed=0:constant(value=1),
training=False)
print('\033[1mWith training=False\033[0m\n')
A3np = A3.numpy()
print(np.around(A3.numpy()[:,(0,-1),:,:].mean(axis = 3), 5))
resume = A3np[:,(0,-1),:,:].mean(axis = 3)
print(resume[1, 1, 0])
print('\n\033[1mWith training=True\033[0m\n')
np.random.seed(1)
A4 = identity_block(X, f=2, filters=[3, 3, 3],
initializer=lambda seed=0:constant(value=1),
training=True)
print(np.around(A4.numpy()[:,(0,-1),:,:].mean(axis = 3), 5))
public_tests.identity_block_test(identity_block)
```
**Expected value**
```
With training=False
[[[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]
[[192.71234 192.71234 192.71234 96.85617]
[ 96.85617 96.85617 96.85617 48.92808]]
[[578.1371 578.1371 578.1371 290.5685 ]
[290.5685 290.5685 290.5685 146.78426]]]
96.85617
With training=True
[[[0. 0. 0. 0. ]
[0. 0. 0. 0. ]]
[[0.40739 0.40739 0.40739 0.40739]
[0.40739 0.40739 0.40739 0.40739]]
[[4.99991 4.99991 4.99991 3.25948]
[3.25948 3.25948 3.25948 2.40739]]]
```
<a name='3-2'></a>
### 3.2 - The Convolutional Block
The ResNet "convolutional block" is the second block type. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> <b>Figure 4</b> </u><font color='purple'> : <b>Convolutional block</b> </center></caption>
* The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.)
* For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2.
* The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
* As for the previous exercise, the additional `initializer` argument is required for grading purposes, and it has been set by default to [glorot_uniform](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/GlorotUniform)
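The dimension bookkeeping here follows the standard convolution output-size formula $n_{out} = \lfloor (n_{in} - f + 2p)/s \rfloor + 1$. A quick pure-Python sketch (no Keras required) confirms that a 1x1 kernel with stride 2 and "valid" padding roughly halves a spatial dimension:

```python
import math

def conv_output_size(n_in: int, f: int, stride: int, pad: int = 0) -> int:
    # standard convolution output-size formula
    return math.floor((n_in - f + 2 * pad) / stride) + 1

# a 1x1 kernel with stride 2 and "valid" padding roughly halves height/width
print(conv_output_size(64, f=1, stride=2))
```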
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid". Use 0 as the `glorot_uniform` seed `kernel_initializer = initializer(seed=0)`.
- The first BatchNorm is normalizing the 'channels' axis.
- Then apply the ReLU activation function. This has no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same". Use 0 as the `glorot_uniform` seed `kernel_initializer = initializer(seed=0)`.
- The second BatchNorm is normalizing the 'channels' axis.
- Then apply the ReLU activation function. This has no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid". Use 0 as the `glorot_uniform` seed `kernel_initializer = initializer(seed=0)`.
- The third BatchNorm is normalizing the 'channels' axis. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid". Use 0 as the `glorot_uniform` seed `kernel_initializer = initializer(seed=0)`.
- The BatchNorm is normalizing the 'channels' axis.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no hyperparameters.
<a name='ex-2'></a>
### Exercise 2 - convolutional_block
Implement the convolutional block. The first component of the main path is already implemented; then it's your turn to implement the rest! As before, always use 0 as the seed for the random initialization, to ensure consistency with the grader.
- [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D)
- [BatchNormalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) (axis: Integer, the axis that should be normalized (typically the features axis)) `BatchNormalization(axis = 3)(X, training = training)`. If `training` is set to False, its weights are not updated with the new examples, i.e., when the model is used in inference mode.
- For the activation, use: `Activation('relu')(X)`
- [Add](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Add)
We have added the initializer argument to our functions. This parameter receives an initializer function like the ones included in the package [tensorflow.keras.initializers](https://www.tensorflow.org/api_docs/python/tf/keras/initializers) or any other custom initializer. By default it will be set to [glorot_uniform](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/GlorotUniform)
Remember that these functions accept a `seed` argument that can be any value you want, but that in this notebook must set to 0 for **grading purposes**.
```
# UNQ_C2
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, s = 2, training=True, initializer=glorot_uniform):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
s -- Integer, specifying the stride to be used
training -- True: Behave in training mode
False: Behave in inference mode
initializer -- to set up the initial weights of a layer. Equals to Glorot uniform initializer,
also called Xavier uniform initializer.
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path glorot_uniform(seed=0)
X = Conv2D(filters = F1, kernel_size = 1, strides = (s, s), padding='valid', kernel_initializer = initializer(seed=0))(X)
X = BatchNormalization(axis = 3)(X, training=training)
X = Activation('relu')(X)
### START CODE HERE
## Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = f, strides = (1, 1), padding='same', kernel_initializer = initializer(seed=0))(X)
X = BatchNormalization(axis = 3)(X, training=training)
X = Activation('relu')(X)
## Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = 1, strides = (1, 1), padding='valid', kernel_initializer = initializer(seed=0))(X)
X = BatchNormalization(axis = 3)(X, training=training)
##### SHORTCUT PATH ##### (≈2 lines)
X_shortcut = Conv2D(filters = F3, kernel_size = 1, strides = (s, s), padding='valid', kernel_initializer = initializer(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3)(X_shortcut, training=training)
### END CODE HERE
# Final step: Add shortcut value to main path (Use this order [X, X_shortcut]), and pass it through a RELU activation
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
return X
from outputs import convolutional_block_output1, convolutional_block_output2
np.random.seed(1)
#X = np.random.randn(3, 4, 4, 6).astype(np.float32)
X1 = np.ones((1, 4, 4, 3)) * -1
X2 = np.ones((1, 4, 4, 3)) * 1
X3 = np.ones((1, 4, 4, 3)) * 3
X = np.concatenate((X1, X2, X3), axis = 0).astype(np.float32)
A = convolutional_block(X, f = 2, filters = [2, 4, 6], training=False)
assert type(A) == EagerTensor, "Use only tensorflow and keras functions"
assert tuple(tf.shape(A).numpy()) == (3, 2, 2, 6), "Wrong shape."
assert np.allclose(A.numpy(), convolutional_block_output1), "Wrong values when training=False."
print(A[0])
B = convolutional_block(X, f = 2, filters = [2, 4, 6], training=True)
assert np.allclose(B.numpy(), convolutional_block_output2), "Wrong values when training=True."
print('\033[92mAll tests passed!')
```
**Expected value**
```
tf.Tensor(
[[[0. 0.66683817 0. 0. 0.88853896 0.5274254 ]
[0. 0.65053666 0. 0. 0.89592844 0.49965227]]
[[0. 0.6312079 0. 0. 0.8636247 0.47643146]
[0. 0.5688321 0. 0. 0.85534114 0.41709304]]], shape=(2, 2, 6), dtype=float32)
```
<a name='4'></a>
## 4 - Building Your First ResNet Model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> <b>Figure 5</b> </u><font color='purple'> : <b>ResNet-50 model</b> </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2).
- BatchNorm is applied to the 'channels' axis of the input.
- ReLU activation is applied.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, and "s" is 1.
- The 2 identity blocks use three sets of filters of size [64,64,256], and "f" is 3.
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], "f" is 3 and "s" is 2.
- The 3 identity blocks use three sets of filters of size [128,128,512] and "f" is 3.
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3 and "s" is 2.
- The 5 identity blocks use three sets of filters of size [256, 256, 1024] and "f" is 3.
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3 and "s" is 2.
- The 2 identity blocks use three sets of filters of size [512, 512, 2048] and "f" is 3.
- The 2D Average Pooling uses a window of shape (2,2).
- The 'flatten' layer doesn't have any hyperparameters.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation.
<a name='ex-3'></a>
### Exercise 3 - ResNet50
Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling [see reference](https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D)
Here are some other functions we used in the code below:
- Conv2D: [See reference](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D)
- BatchNorm: [See reference](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: [See reference](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ZeroPadding2D)
- Max pooling: [See reference](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D)
- Fully connected layer: [See reference](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)
- Addition: [See reference](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Add)
```
# UNQ_C3
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
Stage-wise implementation of the architecture of the popular ResNet50:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> FLATTEN -> DENSE
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3)(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], s = 1)
X = identity_block(X, 3, [64, 64, 256])
X = identity_block(X, 3, [64, 64, 256])
### START CODE HERE
## Stage 3 (≈4 lines)
X = convolutional_block(X, f = 3, filters = [128, 128, 512], s = 2)
X = identity_block(X, 3, [128,128,512])
X = identity_block(X, 3, [128,128,512])
X = identity_block(X, 3, [128,128,512])
## Stage 4 (≈6 lines)
X = convolutional_block(X, f = 3, filters = [256, 256, 1024], s = 2)
X = identity_block(X, 3, [256, 256, 1024])
X = identity_block(X, 3, [256, 256, 1024])
X = identity_block(X, 3, [256, 256, 1024])
X = identity_block(X, 3, [256, 256, 1024])
X = identity_block(X, 3, [256, 256, 1024])
## Stage 5 (≈3 lines)
X = convolutional_block(X, f = 3, filters = [512, 512, 2048], s = 2)
X = identity_block(X, 3, [512, 512, 2048])
X = identity_block(X, 3, [512, 512, 2048])
## AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D((2,2))(X)
### END CODE HERE
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X)
return model
```
Run the following code to build the model's graph. If your implementation is incorrect, you'll know it by checking your accuracy when running `model.fit(...)` below.
```
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
print(model.summary())
from outputs import ResNet50_summary
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
comparator(summary(model), ResNet50_summary)
```
As shown in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
```
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
The model is now ready to be trained. The only thing you need now is a dataset!
Let's load your old friend, the SIGNS dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> <b>Figure 6</b> </u><font color='purple'> : <b>SIGNS dataset</b> </center></caption>
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
Run the following cell to train your model for 10 epochs with a batch size of 32. On a GPU, it should take less than 2 minutes.
```
model.fit(X_train, Y_train, epochs = 10, batch_size = 32)
```
**Expected Output**:
```
Epoch 1/10
34/34 [==============================] - 1s 34ms/step - loss: 1.9241 - accuracy: 0.4620
Epoch 2/10
34/34 [==============================] - 2s 57ms/step - loss: 0.6403 - accuracy: 0.7898
Epoch 3/10
34/34 [==============================] - 1s 24ms/step - loss: 0.3744 - accuracy: 0.8731
Epoch 4/10
34/34 [==============================] - 2s 44ms/step - loss: 0.2220 - accuracy: 0.9231
Epoch 5/10
34/34 [==============================] - 2s 57ms/step - loss: 0.1333 - accuracy: 0.9583
Epoch 6/10
34/34 [==============================] - 2s 52ms/step - loss: 0.2243 - accuracy: 0.9444
Epoch 7/10
34/34 [==============================] - 2s 48ms/step - loss: 0.2913 - accuracy: 0.9102
Epoch 8/10
34/34 [==============================] - 1s 30ms/step - loss: 0.2269 - accuracy: 0.9306
Epoch 9/10
34/34 [==============================] - 2s 46ms/step - loss: 0.1113 - accuracy: 0.9630
Epoch 10/10
34/34 [==============================] - 2s 57ms/step - loss: 0.0709 - accuracy: 0.9778
```
The exact values may not match, but don't worry about that. The important thing is that the loss decreases and the accuracy increases over the first few epochs.
Let's see how this model (trained for only a few epochs) performs on the test set.
```
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
**Expected Output**:
<table>
<tr>
<td>
<b>Test Accuracy</b>
</td>
<td>
>0.80
</td>
</tr>
</table>
For the purposes of this assignment, you've been asked to train the model for just a few epochs. You can see that it performs pretty poorly, but that's ok! The online grader will only run your code for a small number of epochs as well. Please go ahead and submit your assignment as is.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. It tends to get much better performance when trained for ~20 epochs, but this does take more than an hour when training on a CPU.
This ResNet50 model's weights were trained on the SIGNS dataset using a GPU. You can load and run the trained model on the test set in the cells below. It may take ≈1 min to load the model. Have fun!
```
pre_trained_model = tf.keras.models.load_model('resnet50.h5')
preds = pre_trained_model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
**Congratulations** on finishing this assignment! You've now implemented a state-of-the-art image classification system! Woo hoo!
ResNet50 is a powerful model for image classification when it's trained for an adequate number of iterations. Hopefully, from this point, you can use what you've learned and apply it to your own classification problem to achieve state-of-the-art accuracy.
<font color = 'blue'>
**What you should remember**:
- Very deep "plain" networks don't work in practice because vanishing gradients make them hard to train.
- Skip connections help address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: The **identity block** and the **convolutional block**.
- Very deep Residual Networks are built by stacking these blocks together.
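The core of these takeaways can be sketched outside Keras as well. Below is a hedged, numpy-only illustration (not the graded code; the function name is ours): a residual block computes `relu(F(x) + x)`, so when the weights of `F` are near zero the block simply passes `x` through, which is why learning the identity is easy.

```python
# Numpy sketch of a residual block: main path F(x) plus a shortcut.
import numpy as np

def tiny_residual_block(x, W1, W2):
    relu = lambda z: np.maximum(z, 0.0)
    main = relu(x @ W1) @ W2   # main path F(x)
    return relu(main + x)      # add the shortcut, then ReLU

x = np.array([[1.0, 2.0, 3.0, 4.0]])
W_zero = np.zeros((4, 4))
out = tiny_residual_block(x, W_zero, W_zero)  # F(x) = 0, so the block returns x
```

With zero weights the block reduces to the identity on a positive input, which is exactly the behavior that makes very deep stacks of these blocks trainable.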
<a name='5'></a>
## 5 - Test on Your Own Image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x/255.0
print('Input image shape:', x.shape)
imshow(img)
prediction = pre_trained_model.predict(x)
print("Class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ", prediction)
print("Class:", np.argmax(prediction))
```
You can also print a summary of your model by running the following code.
```
pre_trained_model.summary()
```
<a name='6'></a>
## 6 - Bibliography
This notebook presents the ResNet algorithm from He et al. (2015). The implementation here also takes significant inspiration from, and follows the structure of, the GitHub repository of Francois Chollet:
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)
- Francois Chollet's GitHub repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
# Sentiment Analysis
```
from keras.datasets import imdb # import the built-in imdb dataset in Keras
# Set the vocabulary size
vocabulary_size = 5000
# Load in training and test data (note the difference in convention compared to scikit-learn)
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocabulary_size)
print("Loaded dataset with {} training samples, {} test samples".format(len(X_train), len(X_test)))
# Inspect a sample review and its label
print("--- Review ---")
print(X_train[7])
print("--- Label ---")
print(y_train[7])
```
Notice that the label is an integer (0 for negative, 1 for positive), and the review itself is stored as a sequence of integers. These are word IDs that have been preassigned to individual words. To map them back to the original words, you can use the dictionary returned by `imdb.get_word_index()`.
```
# Map word IDs back to words
word2id = imdb.get_word_index()
id2word = {i: word for word, i in word2id.items()}
print("--- Review (with words) ---")
print([id2word.get(i, " ") for i in X_train[7]])
print("--- Label ---")
print(y_train[7])
from keras.preprocessing import sequence
# Set the maximum number of words per document (for both training and testing)
max_words = 500
#Pad sequences in X_train and X_test
X_train = sequence.pad_sequences(X_train, maxlen=max_words)
X_test = sequence.pad_sequences(X_test, maxlen=max_words)
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout
# TODO: Design your model
embedding_size = 32
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_size, input_length=max_words))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
# TODO: Compile your model, specifying a loss function, optimizer, and metrics
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# TODO: Specify training parameters: batch size and number of epochs
batch_size = 64
num_epochs = 3
# TODO(optional): Reserve/specify some training data for validation (not to be used for training)
X_valid, y_valid = X_train[:batch_size], y_train[:batch_size] # first batch_size samples
X_train2, y_train2 = X_train[batch_size:], y_train[batch_size:] # rest for training
# TODO: Train your model
model.fit(X_train2, y_train2,
validation_data=(X_valid, y_valid),
batch_size=batch_size, epochs=num_epochs)
# Evaluate your model on the test set
scores = model.evaluate(X_test, y_test, verbose=0) # returns loss and other metrics specified in model.compile()
print("Test accuracy:", scores[1]) # scores[1] should correspond to accuracy if you passed in metrics=['accuracy']
```
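The `pad_sequences` call above relies on Keras's defaults of pre-padding and pre-truncating. A minimal pure-Python sketch of that behaviour (an illustration, not the library implementation; the helper name is ours):

```python
# Sketch of pre-padding / pre-truncating, mirroring the defaults of
# keras.preprocessing.sequence.pad_sequences (padding='pre', truncating='pre').
def pad_sequence(ids, maxlen, value=0):
    ids = ids[-maxlen:]                          # keep only the last maxlen word IDs
    return [value] * (maxlen - len(ids)) + ids   # pad on the left with zeros

short = pad_sequence([11, 12, 13], 5)        # padded on the left
long = pad_sequence([1, 2, 3, 4, 5, 6], 5)   # truncated from the front
```

Pre-padding keeps the most recent tokens at the end of each sequence, which is convenient for the LSTM since its final state then depends on real words rather than padding.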
# Objective
* 20181225:
* Predict stock price in next day using XGBoost
* Given prices and other features for the last N days, we do prediction for day N+1
* Here we split 3 years of data into train(60%), dev(20%) and test(20%)
* 20190110 - Diff from StockPricePrediction_v1_xgboost.ipynb:
* Here we scale the train set to have mean 0 and variance 1, and apply the same transformation to dev and test sets
* 20190111 - Diff from StockPricePrediction_v1a_xgboost.ipynb:
* Here, for the dev set, we scale each window of the past N values to have mean 0 and variance 1, and do prediction on it
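The per-window scaling described above can be sketched as follows (a minimal numpy illustration; the helper name is ours, and keeping the window's mean and standard deviation lets a scaled prediction be mapped back to price scale):

```python
# Scale the past N values of one window to mean 0 and variance 1.
import numpy as np

def scale_window(window):
    mu, sigma = window.mean(), window.std()
    if sigma == 0:                        # guard against a flat window
        return window - mu, mu, 1.0
    return (window - mu) / sigma, mu, sigma

w = np.array([10.0, 11.0, 12.0, 13.0])
scaled, mu, sigma = scale_window(w)
pred_scaled = 1.5                         # a model prediction in scaled space
pred_price = pred_scaled * sigma + mu     # invert the transform back to prices
```

Because each dev-set window is scaled independently, the model only ever sees values near mean 0 and variance 1, even when absolute prices drift above anything seen during training.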
```
import math
import numpy as np
import pandas as pd
import seaborn as sns
import time
from matplotlib import pyplot as plt
from pylab import rcParams
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from tqdm import tqdm_notebook
from xgboost import XGBRegressor
%matplotlib inline
#### Input params ##################
stk_path = "./data/VTI.csv"
test_size = 0.2 # proportion of dataset to be used as test set
cv_size = 0.2 # proportion of dataset to be used as cross-validation set
N = 7 # for feature at day t, we use lags from t-1, t-2, ..., t-N as features
n_estimators = 100 # for the initial model before tuning. default = 100
max_depth = 3 # for the initial model before tuning. default = 3
learning_rate = 0.1 # for the initial model before tuning. default = 0.1
min_child_weight = 1 # for the initial model before tuning. default = 1
subsample = 1 # for the initial model before tuning. default = 1
colsample_bytree = 1 # for the initial model before tuning. default = 1
colsample_bylevel = 1 # for the initial model before tuning. default = 1
train_test_split_seed = 111 # 111
model_seed = 100
fontsize = 14
ticklabelsize = 14
####################################
```
# Load data
```
df = pd.read_csv(stk_path, sep = ",")
# Convert Date column to datetime
df.loc[:, 'Date'] = pd.to_datetime(df['Date'],format='%Y-%m-%d')
# Change all column headings to be lower case, and remove spacing
df.columns = [str(x).lower().replace(' ', '_') for x in df.columns]
# Get month of each sample
df['month'] = df['date'].dt.month
# Sort by datetime
df.sort_values(by='date', inplace=True, ascending=True)
df.head()
# Plot adjusted close over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='adj_close', style='b-', grid=True)
ax.set_xlabel("date")
ax.set_ylabel("USD")
```
# Split into train, dev and test set
```
# Get sizes of each of the datasets
num_cv = int(cv_size*len(df))
num_test = int(test_size*len(df))
num_train = len(df) - num_cv - num_test
print("num_train = " + str(num_train))
print("num_cv = " + str(num_cv))
print("num_test = " + str(num_test))
# Split into train, cv, and test
train = df[:num_train]
cv = df[num_train:num_train+num_cv]
train_cv = df[:num_train+num_cv]
test = df[num_train+num_cv:]
print("train.shape = " + str(train.shape))
print("cv.shape = " + str(cv.shape))
print("train_cv.shape = " + str(train_cv.shape))
print("test.shape = " + str(test.shape))
```
# Scale the train, dev and test set and combine them to do feature engineering
```
# Converting dataset into x_train and y_train
# Here we only scale the train dataset, and not the entire dataset to prevent information leak
scaler = StandardScaler()
train_scaled = scaler.fit_transform(train[['open', 'high', 'low', 'close', 'adj_close', 'volume']])
print("scaler.mean_ = " + str(scaler.mean_))
print("scaler.var_ = " + str(scaler.var_))
print("train_scaled.shape = " + str(train_scaled.shape))
# Convert the numpy array back into pandas dataframe
train_scaled = pd.DataFrame(train_scaled, columns=['open', 'high', 'low', 'close', 'adj_close', 'volume'])
train_scaled[['date', 'month']] = train[['date', 'month']]
print("train_scaled.shape = " + str(train_scaled.shape))
train_scaled.head()
# Do scaling for dev set
cv_scaled = scaler.transform(cv[['open', 'high', 'low', 'close', 'adj_close', 'volume']])
# Convert the numpy array back into pandas dataframe
cv_scaled = pd.DataFrame(cv_scaled, columns=['open', 'high', 'low', 'close', 'adj_close', 'volume'])
cv_scaled[['date', 'month']] = cv.reset_index()[['date', 'month']]
print("cv_scaled.shape = " + str(cv_scaled.shape))
cv_scaled.head()
# Do scaling for test set
test_scaled = scaler.transform(test[['open', 'high', 'low', 'close', 'adj_close', 'volume']])
# Convert the numpy array back into pandas dataframe
test_scaled = pd.DataFrame(test_scaled, columns=['open', 'high', 'low', 'close', 'adj_close', 'volume'])
test_scaled[['date', 'month']] = test.reset_index()[['date', 'month']]
print("test_scaled.shape = " + str(test_scaled.shape))
test_scaled.head()
# Combine back train_scaled, cv_scaled, test_scaled together
df_scaled = pd.concat([train_scaled, cv_scaled, test_scaled], axis=0)
df_scaled.head()
```
# Feature Engineering
We will generate the following features:
* Mean 'adj_close' of each month (shown below but left commented out)
* Difference between high and low of each day
* Difference between open and close of each day
* Mean volume of each month (shown below but left commented out)
```
# Get difference between high and low of each day
df_scaled['range_hl'] = df_scaled['high'] - df_scaled['low']
df_scaled.drop(['high', 'low'], axis=1, inplace=True)
# Get difference between open and close of each day
df_scaled['range_oc'] = df_scaled['open'] - df_scaled['close']
df_scaled.drop(['open', 'close'], axis=1, inplace=True)
df_scaled.head()
```
Now we create lags of up to N days to use as features.
```
# Add a column 'order_day' to indicate the order of the rows by date
df_scaled['order_day'] = [x for x in list(range(len(df_scaled)))]
# merging_keys
merging_keys = ['order_day']
# List of columns that we will use to create lags
lag_cols = ['adj_close', 'range_hl', 'range_oc', 'volume']
lag_cols
shift_range = [x+1 for x in range(N)]
for shift in tqdm_notebook(shift_range):
train_shift = df_scaled[merging_keys + lag_cols].copy()
# E.g. order_day of 0 becomes 1, for shift = 1.
# So when this is merged with order_day of 1 in df_scaled, this will represent lag of 1.
train_shift['order_day'] = train_shift['order_day'] + shift
foo = lambda x: '{}_lag_{}'.format(x, shift) if x in lag_cols else x
train_shift = train_shift.rename(columns=foo)
df_scaled = pd.merge(df_scaled, train_shift, on=merging_keys, how='left') #.fillna(0)
del train_shift
# Remove the first N rows which contain NaNs
df_scaled = df_scaled[N:]
df_scaled.head()
df_scaled.info()
# # Get mean of adj_close of each month
# df_gb = df.groupby(['month'], as_index=False).agg({'adj_close':'mean'})
# df_gb = df_gb.rename(columns={'adj_close':'adj_close_mean'})
# df_gb
# # Merge to main df
# df = df.merge(df_gb,
# left_on=['month'],
# right_on=['month'],
# how='left').fillna(0)
# # Merge to main df
# shift_range = [x+1 for x in range(2)]
# for shift in tqdm_notebook(shift_range):
# train_shift = df[merging_keys + lag_cols].copy()
# # E.g. order_day of 0 becomes 1, for shift = 1.
# # So when this is merged with order_day of 1 in df, this will represent lag of 1.
# train_shift['order_day'] = train_shift['order_day'] + shift
# foo = lambda x: '{}_lag_{}'.format(x, shift) if x in lag_cols else x
# train_shift = train_shift.rename(columns=foo)
# df = pd.merge(df, train_shift, on=merging_keys, how='left') #.fillna(0)
# del train_shift
# df
# # Get mean of volume of each month
# df_gb = df.groupby(['month'], as_index=False).agg({'volume':'mean'})
# df_gb = df_gb.rename(columns={'volume':'volume_mean'})
# df_gb
# # Merge to main df
# df = df.merge(df_gb,
# left_on=['month'],
# right_on=['month'],
# how='left').fillna(0)
# df.head()
```
# Split the scaled features back into train, dev and test set
```
features = [
"adj_close_lag_1",
"range_hl_lag_1",
"range_oc_lag_1",
"volume_lag_1",
"adj_close_lag_2",
"range_hl_lag_2",
"range_oc_lag_2",
"volume_lag_2",
"adj_close_lag_3",
"range_hl_lag_3",
"range_oc_lag_3",
"volume_lag_3",
"adj_close_lag_4",
"range_hl_lag_4",
"range_oc_lag_4",
"volume_lag_4",
"adj_close_lag_5",
"range_hl_lag_5",
"range_oc_lag_5",
"volume_lag_5",
"adj_close_lag_6",
"range_hl_lag_6",
"range_oc_lag_6",
"volume_lag_6",
"adj_close_lag_7",
"range_hl_lag_7",
"range_oc_lag_7",
"volume_lag_7"
]
target = "adj_close"
# Split into train, cv, and test
train = df_scaled[:num_train]
cv = df_scaled[num_train:num_train+num_cv]
train_cv = df_scaled[:num_train+num_cv]
test = df_scaled[num_train+num_cv:]
# Split into X and y
X_train = train[features]
y_train = train[target]
X_cv = cv[features]
y_cv = cv[target]
X_train_cv = train_cv[features]
y_train_cv = train_cv[target]
X_sample = test[features]
y_sample = test[target]
print("X_train.shape = " + str(X_train.shape))
print("y_train.shape = " + str(y_train.shape))
print("X_cv.shape = " + str(X_cv.shape))
print("y_cv.shape = " + str(y_cv.shape))
print("X_train_cv.shape = " + str(X_train_cv.shape))
print("y_train_cv.shape = " + str(y_train_cv.shape))
print("X_sample.shape = " + str(X_sample.shape))
print("y_sample.shape = " + str(y_sample.shape))
```
# EDA
```
# Plot adjusted close over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = train.plot(x='date', y='adj_close', style='b-', grid=True)
ax = cv.plot(x='date', y='adj_close', style='y-', grid=True, ax=ax)
ax = test.plot(x='date', y='adj_close', style='g-', grid=True, ax=ax)
ax.legend(['train', 'dev', 'test'])
ax.set_xlabel("date")
ax.set_ylabel("USD (scaled)")
```
# Train the model using XGBoost
```
# Create the model
model = XGBRegressor(seed=model_seed,
n_estimators=n_estimators,
max_depth=max_depth,
learning_rate=learning_rate,
min_child_weight=min_child_weight)
# Train the regressor
model.fit(X_train, y_train)
```
# Predict on train set
```
# Do prediction on train set
est = model.predict(X_train)
# Calculate RMSE
math.sqrt(mean_squared_error(y_train, est))
# Plot adjusted close over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
est_df = pd.DataFrame({'est': est,
'date': train['date']})
ax = train.plot(x='date', y='adj_close', style='b-', grid=True)
ax = cv.plot(x='date', y='adj_close', style='y-', grid=True, ax=ax)
ax = test.plot(x='date', y='adj_close', style='g-', grid=True, ax=ax)
ax = est_df.plot(x='date', y='est', style='r-', grid=True, ax=ax)
ax.legend(['train', 'dev', 'test', 'est'])
ax.set_xlabel("date")
ax.set_ylabel("USD (scaled)")
```
# Predict on dev set
```
# Do prediction on dev set
est = model.predict(X_cv)
# Calculate RMSE
math.sqrt(mean_squared_error(y_cv, est))
# Plot adjusted close over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
est_df = pd.DataFrame({'est': est,
'y_cv': y_cv,
'date': cv['date']})
ax = train.plot(x='date', y='adj_close', style='b-', grid=True)
ax = cv.plot(x='date', y='adj_close', style='y-', grid=True, ax=ax)
ax = test.plot(x='date', y='adj_close', style='g-', grid=True, ax=ax)
ax = est_df.plot(x='date', y='est', style='r-', grid=True, ax=ax)
ax.legend(['train', 'dev', 'test', 'est'])
ax.set_xlabel("date")
ax.set_ylabel("USD (scaled)")
```
# Findings
* Doesn't work well
* Likely because the model was trained on prices below ~1.7 and so when it saw prices above 1.7 for the dev set, it could not generalize well
## In this notebook:
- Using a pre-trained convnet to do feature extraction
- Use ConvBase only for feature extraction, and use a separate machine learning classifier
- Adding ```Dense``` layers to top of a frozen ConvBase, allowing us to leverage data augmentation
- Fine-tuning a pre-trained convnet (Skipped, because I am tired now)
### In previous notebook:
- Training your own small convnets from scratch
- Using data augmentation to mitigate overfitting
```
from datetime import date
date.today()
author = "NirantK. https://github.com/NirantK/keras-practice"
print(author)
import keras
print('Keras Version:', keras.__version__)
import os
if os.name=='nt':
print('We are on Windows')
import os, shutil
pwd = os.getcwd()
```
Feature extraction
---
This consists of using the representations learned by a previous network to extract interesting features from new samples. These features are then run through a new classifier, which is trained from scratch.

Warning: The line below triggers a download. You need good speed Internet!
```
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(150, 150, 3))
```
We passed three arguments to the constructor:
- **```weights```**, to specify which weight checkpoint to initialize the model from
- **```include_top```**, which refers to including or not the densely-connected classifier on top of the network. By default, this densely-connected classifier would correspond to the 1000 classes from ImageNet. Since we intend to use our own densely-connected classifier (with only two classes, cat and dog), we don’t need to include it.
- **```input_shape```**, the shape of the image tensors that we will feed to the network. This argument is purely optional: if we don’t pass it, then the network will be able to process inputs of any size.
(from *Deep Learning in Python by F. Chollet*)
What does the **VGG16** thing look like?
```
conv_base.summary()
```
Feature Extraction
---
Pros:
- Fast, and cheap
- Works on CPU
Cons:
- Does not allow us to use data augmentation
- Because we do feature extraction and classification in separate steps
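The two-step idea — run inputs through a frozen base, then train a separate classifier on the extracted features — can be sketched with a stand-in for the conv base. This is a hedged illustration using numpy and scikit-learn: the fixed random projection merely plays the role of the frozen `conv_base`, and the toy data is ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
W_frozen = rng.randn(64, 16)                 # stand-in for a frozen conv base
X = rng.randn(200, 64)                       # toy inputs
y = (X[:, 0] > 0).astype(int)                # toy labels

features = np.maximum(X @ W_frozen, 0)       # step 1: extract features (no training)
clf = LogisticRegression().fit(features, y)  # step 2: train a separate classifier
```

Step 1 is a pure forward pass, so it runs once and cheaply; only the small classifier in step 2 is ever trained, which is exactly why this approach is fast but cannot benefit from data augmentation.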
```
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
base_dir = os.path.join(pwd, 'data/cats_and_dogs_small/')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 1
def extract_features(directory, sample_count):
features = np.zeros(shape=(sample_count, 4, 4, 512))
labels = np.zeros(shape=(sample_count))
generator = datagen.flow_from_directory(
directory,
target_size=(150, 150),
batch_size=batch_size,
class_mode='binary')
i = 0
for inputs_batch, labels_batch in generator:
features_batch = conv_base.predict(inputs_batch)
try:
features[i * batch_size : (i + 1) * batch_size] = features_batch
except ValueError:
print(i)
raise ValueError
labels[i * batch_size : (i + 1) * batch_size] = labels_batch
i += 1
if i * batch_size >= sample_count:
# Note that since generators yield data indefinitely in a loop,
# we must `break` after every image has been seen once.
break
return features, labels
%time train_features, train_labels = extract_features(train_dir, 2000)
%time validation_features, validation_labels = extract_features(validation_dir, 1000)
%time test_features, test_labels = extract_features(test_dir, 1000)
train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features = np.reshape(test_features, (1000, 4 * 4 * 512))
```
**Model Training:**
```
from keras import models
from keras import layers
from keras import optimizers
model = models.Sequential()
model.add(layers.Dense(256, activation='relu', input_dim=4 * 4 * 512))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=2e-5),
loss='binary_crossentropy',
metrics=['acc'])
%time history = model.fit(train_features, train_labels, epochs=15, batch_size=20, validation_data=(validation_features, validation_labels))
model.save('cats_and_dogs_small_feature_extraction.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
This is Overfitting!
---
We can see that the training and validation accuracy curves diverge from each other rather quickly. This alone might not be a sure sign of overfitting, but we also observe that the training loss drops smoothly while the validation loss actually increases. Taken together, the two graphs indicate overfitting.
**Why did this overfit despite dropout?**
We did *NOT* do data augmentation
Extending the ConvBase Model!
---
Pros:
- Better performance (accuracy)
- Better Generalization (less overfitting)
- Because we can use data augmentation
Cons:
- Expensive compute
**Warning: Do not attempt this without a GPU. Your Python process can/will crash after a few hours**
```
from keras import models
from keras import layers
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
```
### Freezing ConvBase model: VGG16
Freezing means we do not update the weights of those particular layers during training. This matters here: without freezing, the large gradient updates produced by the randomly initialized dense layers on top would destroy the representations the convolutional base learned during pre-training.
```
print('This is the number of trainable weights '
'before freezing the conv base:', len(model.trainable_weights))
conv_base.trainable = False
print('This is the number of trainable weights '
'after freezing the conv base:', len(model.trainable_weights))
model.summary()
# compare the Trainable Params value from the previous model summary
from keras.preprocessing.image import ImageDataGenerator
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# just for reference, let's calculate the test accuracy
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
%time test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
```
# Gradient Descent Optimizations
Mini-batch and stochastic gradient descent are widely used in deep learning, where the large number of parameters and limited memory make more sophisticated optimization methods impractical. Many methods have been proposed to accelerate gradient descent in this context; here we sketch the ideas behind some of the most popular algorithms.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
## Smoothing with exponentially weighted averages
```
n = 50
x = np.arange(n) * np.pi
y = np.cos(x) * np.exp(x/100) - 10*np.exp(-0.01*x)
```
### Exponentially weighted average
The exponentially weighted average adds a fraction $\beta$ of the current value to a leaky running sum of past values. Effectively, the contribution from the $t-n$th value is scaled by
$$
\beta^n(1 - \beta)
$$
For example, here are the contributions to the current value after 5 iterations (iteration 5 is the current iteration)
| iteration | contribution |
| --- | --- |
| 1 | $\beta^4(1 - \beta)$ |
| 2 | $\beta^3(1 - \beta)$ |
| 3 | $\beta^2(1 - \beta)$ |
| 4 | $\beta^1(1 - \beta)$ |
| 5 | $(1 - \beta)$ |
Since $\beta \lt 1$, the contribution decreases exponentially with the passage of time. Effectively, this acts as a smoother for a function.
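As a quick numerical check of the table above (a small standalone snippet, not part of the original notebook): the weights $\beta^{k}(1-\beta)$ over the last $T$ values sum to $1-\beta^{T}$, so they behave like a normalized average once $T$ is large.

```python
import numpy as np

beta, T = 0.9, 5
# weight on the value seen k steps ago (k = 0 is the current iteration)
weights = np.array([(1 - beta) * beta**k for k in range(T)])
print(weights)        # most recent value gets the largest weight, (1 - beta)
print(weights.sum())  # 1 - beta**T, approaching 1 as T grows
```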
```
def ewa(y, beta):
"""Exponentially weighted average."""
n = len(y)
zs = np.zeros(n)
z = 0
for i in range(n):
z = beta*z + (1 - beta)*y[i]
zs[i] = z
return zs
```
### Exponentially weighted average with bias correction
Since the EWA starts from 0, there is an initial bias. This can be corrected by scaling with
$$
\frac{1}{1 - \beta^t}
$$
where $t$ is the iteration number.
```
def ewabc(y, beta):
"""Exponentially weighted average with bias correction."""
n = len(y)
zs = np.zeros(n)
z = 0
for i in range(n):
z = beta*z + (1 - beta)*y[i]
zc = z/(1 - beta**(i+1))
zs[i] = zc
return zs
beta = 0.9
plt.plot(x, y, 'o-')
plt.plot(x, ewa(y, beta), c='red', label='EWA')
plt.plot(x, ewabc(y, beta), c='orange', label='EWA with bias correction')
plt.legend()
pass
```
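To see the effect of the correction numerically (an inline re-computation, so it runs standalone): for a constant signal $y_t = c$, the plain EWA starts at $(1-\beta)c$ and only approaches $c$ gradually, while the bias-corrected version recovers $c$ at every step.

```python
beta, c, T = 0.9, 5.0, 20
z = 0.0
plain, corrected = [], []
for t in range(1, T + 1):
    z = beta * z + (1 - beta) * c
    plain.append(z)                       # biased low early on
    corrected.append(z / (1 - beta**t))   # recovers c immediately
print(plain[0], corrected[0])
```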
## Momentum in 1D
Momentum comes from physics, where the contribution of the gradient is to the velocity, not the position. Hence we create an auxiliary variable $v$ and increment it with the gradient. The position is then updated with the velocity in place of the gradient. The analogy is that we can think of the parameter $x$ as a particle in an energy well with potential energy $U = mgh$, where $h$ is given by our objective function $f$. The force generated is a function of the rate of change of potential energy, $F \propto \nabla U \propto \nabla f$, and we use $F = ma$ to get that the acceleration $a \propto \nabla f$. Finally, we integrate $a$ over time to get the velocity $v$ and integrate $v$ to get the displacement $x$. Note that we need to damp the velocity, otherwise the particle would just oscillate forever.
We use a version of the update that simply treats the velocity as an exponentially weighted average popularized by Andrew Ng in his Coursera course. This is the same as the momentum scheme motivated by physics with some rescaling of constants.
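Written out (using the bias correction $1/(1-\beta^t)$ defined earlier), the update is:
$$
v_t = \beta v_{t-1} + (1 - \beta)\,\nabla f(x_{t-1}), \qquad \hat{v}_t = \frac{v_t}{1 - \beta^t}, \qquad x_t = x_{t-1} - \alpha\, \hat{v}_t
$$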
```
def f(x):
return x**2
def grad(x):
return 2*x
def gd(x, grad, alpha, max_iter=10):
xs = np.zeros(1 + max_iter)
xs[0] = x
for i in range(max_iter):
x = x - alpha * grad(x)
xs[i+1] = x
return xs
def gd_momentum(x, grad, alpha, beta=0.9, max_iter=10):
xs = np.zeros(1 + max_iter)
xs[0] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)
vc = v/(1-beta**(i+1))
x = x - alpha * vc
xs[i+1] = x
return xs
```
### Gradient descent with moderate step size
```
alpha = 0.1
x0 = 1
xs = gd(x0, grad, alpha)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x, y+0.2, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
```
### Gradient descent with large step size
When the step size is too large, gradient descent can oscillate and even diverge.
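For $f(x) = x^2$ the update is linear, $x_{k+1} = (1 - 2\alpha)x_k$, so the iterates shrink only when $|1 - 2\alpha| < 1$, i.e. $0 < \alpha < 1$. With $\alpha = 0.95$ the multiplier is $-0.9$: the sign flips every step (oscillation) while the magnitude shrinks slowly; any $\alpha > 1$ diverges. A quick standalone check:

```python
def gd_path(alpha, x0=1.0, n=10):
    # for f(x) = x**2 the update x <- x - alpha*2*x is just x <- (1 - 2*alpha)*x
    xs = [x0]
    for _ in range(n):
        xs.append((1 - 2 * alpha) * xs[-1])
    return xs

print(gd_path(0.95)[-1])  # small magnitude, but signs alternated along the way
print(gd_path(1.05)[-1])  # magnitude has grown: divergence
```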
```
alpha = 0.95
xs = gd(1, grad, alpha)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x*1.2, y, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
```
### Gradient descent with momentum
Momentum results in cancellation of gradient changes in opposite directions, and hence damps out oscillations while amplifying consistent changes in the same direction. This is perhaps clearer in the 2D example below.
```
alpha = 0.95
xs = gd_momentum(1, grad, alpha, beta=0.9)
xp = np.linspace(-1.2, 1.2, 100)
plt.plot(xp, f(xp))
plt.plot(xs, f(xs), 'o-', c='red')
for i, (x, y) in enumerate(zip(xs, f(xs)), 1):
plt.text(x, y+0.2, i,
bbox=dict(facecolor='yellow', alpha=0.5), fontsize=14)
pass
```
## Momentum and RMSprop in 2D
```
def f2(x):
return x[0]**2 + 100*x[1]**2
def grad2(x):
return np.array([2*x[0], 200*x[1]])
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
pass
def gd2(x, grad, alpha, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0,:] = x
for i in range(max_iter):
x = x - alpha * grad(x)
xs[i+1,:] = x
return xs
def gd2_momentum(x, grad, alpha, beta=0.9, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)
vc = v/(1-beta**(i+1))
x = x - alpha * vc
xs[i+1, :] = x
return xs
```
### Gradient descent with large step size
The function is poorly conditioned (the curvature in $y$ is 100 times that in $x$), so a step size that is fine for $x$ is too large for $y$, and we get severe oscillations.
```
alpha = 0.01
x0 = np.array([-1,-1])
xs = gd2(x0, grad2, alpha, max_iter=75)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Vanilla gradient descent')
pass
```
### Gradient descent with momentum
The damping effect is clear.
```
alpha = 0.01
x0 = np.array([-1,-1])
xs = gd2_momentum(x0, grad2, alpha, beta=0.9, max_iter=75)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with momentum')
pass
```
### Gradient descent with RMSprop
RMSprop scales the learning rate in each direction by the square root of the exponentially weighted sum of squared gradients. Near a saddle point or on a plateau, there are directions where the gradient is very small; RMSprop encourages larger steps in those directions, allowing faster escape.
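In symbols, the update below keeps an exponentially weighted average of the squared gradients (elementwise) and divides each step by its square root:
$$
v_t = \beta v_{t-1} + (1 - \beta)\,\nabla f(x_{t-1})^2, \qquad x_t = x_{t-1} - \frac{\alpha\,\nabla f(x_{t-1})}{\sqrt{v_t} + \epsilon}
$$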
```
def gd2_rmsprop(x, grad, alpha, beta=0.9, eps=1e-8, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
v = 0
for i in range(max_iter):
v = beta*v + (1-beta)*grad(x)**2
x = x - alpha * grad(x) / (eps + np.sqrt(v))
xs[i+1, :] = x
return xs
alpha = 0.1
x0 = np.array([-1,-1])
xs = gd2_rmsprop(x0, grad2, alpha, beta=0.9, max_iter=10)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with RMSprop')
pass
```
### ADAM
ADAM (Adaptive Moment Estimation) combines the ideas of momentum, RMSprop and bias correction. It is probably the most popular gradient descent method in current deep learning practice.
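In symbols, the full update maintains bias-corrected estimates of the first moment (momentum) and second moment (RMSprop) of the gradient:
$$
m_t = \beta_1 m_{t-1} + (1-\beta_1)\,\nabla f(x_{t-1}), \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,\nabla f(x_{t-1})^2
$$
$$
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad x_t = x_{t-1} - \frac{\alpha\,\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
$$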
```
def gd2_adam(x, grad, alpha, beta1=0.9, beta2=0.999, eps=1e-8, max_iter=10):
xs = np.zeros((1 + max_iter, x.shape[0]))
xs[0, :] = x
m = 0
v = 0
for i in range(max_iter):
m = beta1*m + (1-beta1)*grad(x)
v = beta2*v + (1-beta2)*grad(x)**2
mc = m/(1-beta1**(i+1))
vc = v/(1-beta2**(i+1))
x = x - alpha * mc / (eps + np.sqrt(vc))
xs[i+1, :] = x
return xs
alpha = 0.1
x0 = np.array([-1,-1])
xs = gd2_adam(x0, grad2, alpha, beta1=0.9, beta2=0.9, max_iter=10)
x = np.linspace(-1.2, 1.2, 100)
y = np.linspace(-1.2, 1.2, 100)
X, Y = np.meshgrid(x, y)
levels = [0.1,1,2,4,9, 16, 25, 36, 49, 64, 81, 100]
Z = x**2 + 100*Y**2
c = plt.contour(X, Y, Z, levels)
plt.plot(xs[:, 0], xs[:, 1], 'o-', c='red')
plt.title('Gradient descent with ADAM')
pass
```
# The Stick and Ball Geometry
The ``SpheresAndCylinders`` class contains an assortment of pore-scale models that generate geometrical information assuming the pores are spherical and throats are cylindrical.
The ``SpheresAndCylinders`` is a perfect starting point for generating your own custom geometry. In fact, it's likely that only the calculation of 'pore.diameter' would need to be changed. By default the 'pore.diameter' values are drawn from a random distribution which is not very realistic. Luckily, it's easy to update the model used to calculate diameter, and then propagate this change to all the dependent values (i.e. 'pore.volume'), as illustrated below.
```
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
import matplotlib.pyplot as plt
pn = op.network.Cubic(shape=[20, 20, 20], spacing=100)
```
> The spacing of the above network is in µm for this example, to make values easier to read, but in general you should always use SI units.
Now we can create a geometry object based on the ``SpheresAndCylinders``:
```
geo = op.geometry.SpheresAndCylinders(network=pn, pores=pn.Ps, throats=pn.Ts)
```
As can be seen by printing it, there are quite a few geometrical properties already added to this object. Defining these manually would have been a pain, so it's a good idea to start with this class then alter the few models that need it:
```
print(geo)
```
The pore size distribution on the ``SpheresAndCylinders`` is probably the first thing you'll want to change, since by default it is drawn from a uniform random distribution, as shown below:
```
fig = plt.hist(geo['pore.diameter'], bins=25, edgecolor='k')
```
The models on the ``geo`` object can be seen by printing them:
```
print(geo.models)
```
In this tutorial we will change how pore sizes are calculated, by assigning a new pore-scale model for 'pore.diameter'. Let's use a Gaussian (normal) distribution:
```
f = op.models.geometry.pore_size.normal
geo.add_model(propname='pore.diameter',
model=f,
loc=50, scale=10)
```
This model is automatically run when it's assigned, so we can inspect the new pore diameter values:
```
fig = plt.hist(geo['pore.diameter'], bins=25, edgecolor='k')
```
The above distribution does not look very much like a Gaussian distribution. This is because the 'pore.seed' values are truncated between 0.2 and 0.7:
```
print(geo.models['pore.seed'])
```
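The truncation can be reproduced with plain NumPy/SciPy (an illustration of the idea, not OpenPNM's internal code): pushing uniform seeds restricted to $[0.2, 0.7]$ through the normal quantile function clips both tails of the resulting distribution.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
seeds = rng.uniform(0.2, 0.7, size=10_000)    # truncated seed range, as in 'pore.seed'
diam = norm.ppf(seeds, loc=50, scale=10)      # same loc/scale as the model above
# all values are confined between the 20th and 70th percentiles of the normal
print(diam.min(), diam.max())
```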
We should change this to a wider range to capture more pores on the "tails", then call ``regenerate_models``, which will not only regenerate the random numbers, but all the other properties that depend on it such as 'pore.diameter', 'pore.volume', and so on:
```
geo.models['pore.seed']['num_range'] = [0.001, 0.999]
geo.regenerate_models()
fig = plt.hist(geo['pore.diameter'], bins=25, edgecolor='k')
```
A detailed example of adjusting pore-size distributions is given [here](./adjusting_pore_size_distributions.ipynb)
```
import os
os.chdir('D:/IIM/Competitions/Resolvr') # changing working directory to required file location
os.getcwd()
# Importing libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pal = ['#009786','#7CCC4E', '#1E2A39']
sns.set(style="white", color_codes=True)
sns.set_palette(pal)
%matplotlib inline
import warnings # current version of seaborn generates a bunch of warnings that we'll ignore
warnings.filterwarnings("ignore")
raw_data = pd.read_excel('Worksheet in Analytics_Case_Resolvr2020.xlsx', sheet_name ="Case Study 2020")
raw_data.head().T
raw_data.info()
raw_data = raw_data.drop('Customer ID', axis = 1) # insignificant column
```
### Missing Value Analysis
```
# Finding proportion of missing variables - overall
# Size and shape of the dataframe
print("Size of the dataframe:", raw_data.size)
print("Shape of the dataframe:", raw_data.shape)
# Overall dataframe
print("Count of all missing values in dataframe: ", raw_data.isnull().sum().sum())
# Overall % of missing values in the dataframe
print("% of missing values in dataframe: ", round((raw_data.isnull().sum().sum()/raw_data.size)*100,2),"%")
# Overall missing values is < 10%
# Finding proportion of missing cases
# number of rows
print(len(raw_data.index))
# number of rows with missing values
raw_data.isnull().any(axis=1).sum() # Axis 1 = Rows
# proportion of rows with missing values
raw_data.isnull().any(axis=1).sum()/len(raw_data.index)*100
# Overall proportion of cases with missing values is < 10%
print("Percentage of Missing values in each Row (Case):")
print(np.array(raw_data.isnull().mean(axis=1)))
print("")
# Extracting cases (rows) with missing values
values = np.array(raw_data.isnull().mean(axis=1)*100)
print("Rows with more than 10% percent")
print(np.where(values > 10))
print("")
print("Values in Rows with more than 10% percent")
print(values[values > 10])
# Inference
# None of the rows have missing value more than 10%
def missing_values_table(df):
# Total missing values
mis_val = df.isnull().sum()
# Percentage of missing values
mis_val_percent = 100 * df.isnull().sum() / len(df)
# Make a table with the results
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
# Rename the columns
mis_val_table_ren_columns = mis_val_table.rename(
columns = {0 : 'Missing Values', 1 : '% of Total Values'})
# Sort the table by percentage of missing descending
mis_val_table_ren_columns = mis_val_table_ren_columns[
mis_val_table_ren_columns.iloc[:,1] != 0].sort_values(
'% of Total Values', ascending=False).round(1)
# Print some summary information
print ("The selected dataframe has " + str(df.shape[1]) + " columns.\n"
"There are " + str(mis_val_table_ren_columns.shape[0]) +
" columns that have missing values.")
# Return the dataframe with missing information
return mis_val_table_ren_columns
raw_data_missing= missing_values_table(raw_data)
raw_data_missing
```
The % of missing values is very low, the maximum being 0.3% in Arrival Delay in Minutes
```
raw_data_nmv = raw_data.dropna() # preparing a dataset with no missing values to perform correlation test
from scipy.stats import pearsonr
#Null hypothesis: There is no relation between departure and arrival delay
#Alt hypothesis: There is relation between departure and arrival delay
# Plotting Scatterplot
plt.scatter(x = raw_data_nmv['Departure Delay in Minutes'], y = raw_data_nmv['Arrival Delay in Minutes']);
plt.xlabel('dep');
plt.ylabel('arr');
plt.show()
corr, p_val = pearsonr(raw_data_nmv['Departure Delay in Minutes'], raw_data_nmv['Arrival Delay in Minutes'])
print(corr,p_val)
if p_val < 0.05:
print ('Reject Null Hypothesis')
else:
print ('Retain Null Hypothesis')
eq= raw_data_nmv['Departure Delay in Minutes'] == raw_data_nmv['Arrival Delay in Minutes']
eq.value_counts() # Approximately 50%
raw_data_nmv['diff'] = raw_data_nmv['Arrival Delay in Minutes'] - raw_data_nmv['Departure Delay in Minutes']
raw_data_nmv['diff'].describe()
raw_data_nmv['diff'].hist(bins=20,range = [-54,54],figsize=(12,8))
sns.scatterplot(raw_data_nmv['Departure Delay in Minutes'], raw_data_nmv['diff'])
arr_delay_eq = raw_data_nmv.loc[(raw_data_nmv['Departure Delay in Minutes'] == 0)
& (raw_data_nmv['Arrival Delay in Minutes'] == 0)]
len(arr_delay_eq.index)
print("% of rows where Depature Delay and Arrival delay equal to zero:", round(len(arr_delay_eq.index)/len(raw_data_nmv.index)*100,2),"%")
arr_delay_eq['Flight Distance'].mean()
arr_delay_eq['Flight Distance'].describe()
arr_delay_mv = raw_data.loc[(raw_data['Departure Delay in Minutes'] == 0) & (raw_data['Arrival Delay in Minutes'].isnull())]
arr_delay_mv[arr_delay_mv['Flight Distance'] <= 1169]
# Imputing Arrival Delay with zero where flight distance <= 1169
idx = np.where((raw_data['Departure Delay in Minutes'] == 0) & (raw_data['Flight Distance'] <= 1169)
& (raw_data['Arrival Delay in Minutes'].isnull()))
data_imp_0 = raw_data.copy()
data_imp_0['Arrival Delay in Minutes'].loc[idx] = data_imp_0['Arrival Delay in Minutes'].loc[idx].fillna(0)
raw_data_missing= missing_values_table(data_imp_0)
raw_data_missing
# 97 missing values in arrival delay imputed with 0
# predicting arrival delay based on departure delay
from sklearn.linear_model import LinearRegression
X = raw_data_nmv['Departure Delay in Minutes'].values.reshape(-1,1)
Y = raw_data_nmv['Arrival Delay in Minutes'].values.reshape(-1,1)
linreg = LinearRegression()
model = linreg.fit(X,Y)
Yhat = model.predict(X)
from sklearn.metrics import r2_score
print("The R-squared for the imputer is :", round(r2_score(Y, Yhat),2))
idx = np.where(data_imp_0['Arrival Delay in Minutes'].isnull())
X_dep = data_imp_0['Departure Delay in Minutes'].loc[idx].values.reshape(-1,1)
Y_pred = model.predict(X_dep)
arr_pred = Y_pred.reshape(296,)
plt.scatter(X,Y)
fig = plt.plot(X_dep,Y_pred, lw=4, c='orange', label ='regression line')
plt.xlabel('Departure Delay',fontsize=20)
plt.ylabel('Arrival Delay',fontsize=20)
plt.show()
# our imputer will do fine as the model is a great fit
my_list = map(lambda x: x[0], Y_pred)
arr = pd.Series(my_list)
arr_val = pd.DataFrame({'Predicted Arrival': arr[:]})
arr_val.head()
# imputing with predictions according to linreg
data_imp_0['Arrival Delay in Minutes'].loc[idx] = arr_pred
data_imp_0['Arrival Delay in Minutes'].isnull().sum()
# dropping rest of the missing values
data_nmv = data_imp_0.dropna()
data_nmv.isnull().sum()
data_nmv.head().T
data_nmv.info()
data_nmv.describe().T
# segregating on the basis of measurement scale
ratio = data_nmv[['Age','Flight Distance','Departure Delay in Minutes', 'Arrival Delay in Minutes']]
nominal = data_nmv[['Gender','Customer Type','Type of Travel']]
ordinal = data_nmv[['Class']]
interval = data_nmv.iloc[:,6:20]
data_nmv[ratio.columns] = data_nmv[ratio.columns].astype('int64')
```
### Exploratory Data Analysis
```
sns.FacetGrid(data_nmv, hue="satisfaction", size=7) \
.map(plt.hist, "Inflight wifi service") \
.add_legend()
plt.title('Good Inflight wifi service - more satisfied customers')
plt.show()
ratio.hist(bins=50,figsize = (14,8))
fig, ax = plt.subplots(2, 2, figsize=(14, 8))
for variable, subplot in zip(ratio, ax.flatten()):
sns.boxplot(y = ratio[variable], ax=subplot)
# Apart from age all others have outliers. Number of outliers is huge in Departure and Arrival Delay.
sns.boxplot(data_nmv['satisfaction'],data_nmv['Departure Delay in Minutes'])
x = np.log10((data_nmv['Departure Delay in Minutes'])+1)
sns.boxplot(x)
#data_nmv['ln(Departure Delay in Minutes)'] = np.log10((data_nmv['Departure Delay in Minutes'])+1)
box = plt.boxplot(x , patch_artist=True);
plt.xlabel('Departure delay', fontsize=10);
plt.grid();
[whiskers.get_ydata()[0] for whiskers in box["whiskers"]]
[item.get_ydata()[0] for item in box['caps']]
[median.get_ydata()[0] for median in box["medians"]]
[flier.get_ydata()[0] for flier in box["fliers"]]
[flier.get_ydata() for flier in box["fliers"]]
#sns.distplot(data_nmv[data_nmv['satisfaction'] == 1]['ln(Departure Delay in Minutes)'],color = 'y',label = 'satisfaction: Yes')
#sns.distplot(data_nmv[data_nmv['satisfaction'] == 0]['ln(Departure Delay in Minutes)'],color = 'r',label = 'satisfaction: No');
#plt.legend();
data_nmv.to_csv("resolvr.csv", index = False)
counts = data_nmv.groupby(['Class', 'satisfaction']).satisfaction.count().unstack()
counts.plot(kind='bar', stacked=True,figsize = (10,8), fontsize = 12)
plt.xticks(rotation=0,fontsize = 15)
plt.xlabel('Class', fontsize=18)
plt.rcParams['legend.title_fontsize'] = 'xx-large'
plt.legend(title="satisfaction",fontsize = "x-large",loc = 1)
```
# Welcome to Exkaldi
In this section, we will train a n-grams language model and query it.
Although __Srilm__ is available in exkaldi, we recommend the __Kenlm__ toolkit.
```
import exkaldi
import os
dataDir = "librispeech_dummy"
```
Firstly, prepare the lexicons. We have already generated and saved a __LexiconBank__ object to file (in 3_prepare_lexicons), so we simply restore it.
```
lexFile = os.path.join(dataDir, "exp", "lexicons.lex")
lexicons = exkaldi.load_lex(lexFile)
lexicons
```
We will use the training text corpus to train the LM. Although we have prepared a transcription file in the data directory, we do not need the utterance-ID information at the head of each line, so a little work is needed to produce a new text file.
The exkaldi __Transcription__ class can lend a hand here.
```
textFile = os.path.join(dataDir, "train", "text")
trans = exkaldi.load_transcription(textFile)
trans
newTextFile = os.path.join(dataDir, "exp", "train_lm_text")
trans.save(fileName=newTextFile, discardUttID=True)
```
But actually, you don't need to do this. If you use a __Transcription__ object to train the language model, the utterance-ID information is discarded automatically.
Now we train a 2-gram model with the __Kenlm__ backend (the __srilm__ backend is also available).
```
arpaFile = os.path.join(dataDir, "exp", "2-gram.arpa")
exkaldi.lm.train_ngrams_kenlm(lexicons, order=2, text=trans, outFile=arpaFile, config={"-S":"20%"})
```
An ARPA model can be transformed to binary format in order to accelerate loading and reduce memory cost.
Although the __KenLm__ Python API supports reading the ARPA format, exkaldi expects the KenLM binary format.
```
binaryLmFile = os.path.join(dataDir, "exp", "2-gram.binary")
exkaldi.lm.arpa_to_binary(arpaFile, binaryLmFile)
```
Use the binary LM file to initialize a Python KenLM n-grams object.
```
model = exkaldi.lm.KenNGrams(binaryLmFile)
model
```
__KenNGrams__ is a simple wrapper around the KenLM Python model. Check the model information:
```
model.info
```
You can query this model with a sentence.
```
model.score_sentence("HELLO WORLD", bos=True, eos=True)
```
Here is an example that computes the perplexity of the test corpus in order to evaluate the language model.
```
evalTrans = exkaldi.load_transcription( os.path.join(dataDir, "test", "text") )
score = model.perplexity(evalTrans)
score
type(score)
```
___score___ is an exkaldi __Metric__ object (a subclass of Python dict).
We design a group of classes to hold Kaldi text-format tables and exkaldi's own text-format data:
- __ListTable__: spk2utt, utt2spk, words, phones and so on.
- __Transcription__: transcription corpora, n-best decoding results and so on.
- __Metric__: AM scores, LM scores, LM perplexity, sentence lengths and so on.
- __ArkIndexTable__: the index of binary data.
- __WavSegment__: the wave information.
All these classes are subclasses of Python dict. They have some common and respective methods and attributes.
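As a rough mental model of these classes (a hypothetical sketch, not exkaldi's actual implementation), a dict subclass with a `mean` method might look like:

```python
class ToyMetric(dict):
    """Toy stand-in for an exkaldi-style Metric: a dict of per-utterance scores."""
    def mean(self, weight=None):
        # unweighted average of all stored values
        if weight is None:
            return sum(self.values()) / len(self)
        # weighted average, e.g. weighting each utterance by its sentence length
        total = sum(weight[k] for k in self)
        return sum(self[k] * weight[k] for k in self) / total

m = ToyMetric({"utt1": 2.0, "utt2": 4.0})
print(m.mean())                                # 3.0
print(m.mean(weight={"utt1": 3, "utt2": 1}))   # 2.5
```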
In this case, for example, we can compute the average value of __Metric__.
```
score.mean()
```
More precisely, the weighted average, weighted by the lengths of the sentences.
```
score.mean( weight= evalTrans.sentence_length() )
```
Back to the language model. If you want to query an ARPA model directly, you can use this function.
```
model = exkaldi.load_ngrams(arpaFile)
model.info
```
To conclude this section, we generate the grammar FST for further steps.
```
Gfile = os.path.join(dataDir, "exp", "G.fst")
exkaldi.decode.graph.make_G(lexicons, arpaFile, outFile=Gfile, order=2)
```
```
require(cowplot)
require(data.table)
require(ggplot2)
require(ggpubr)
require(pbapply)
pboptions(type="timer")
nthreads=10
x_breaks = c(0, .01, .02, .03, .05, .07, .1, .2, .3, .5, .7, 1, 2, 3, 5, 7, 10, 20, 30, 50)
```
# Read spot data
```
thresholds = c(seq(0, .1, by=.01), seq(.2, 1, by=.1), seq(2, 50))
dw__root = "../data/single_FoV_different_thresholds/data/dw/"
raw_root = "../data/single_FoV_different_thresholds/data/raw/"
dw__data = rbindlist(pblapply(thresholds, function(thr) {
d = fread(file.path(dw__root, sprintf("new_decoded_human_cortex_threshold_%05.2f_with_QC_metrics_type_of_unassigned.csv.gz", thr)))
d$thr = thr
d$image_type = "dw"
return(d)
}, cl=nthreads))
raw_data = rbindlist(pblapply(thresholds, function(thr) {
d = fread(file.path(raw_root, sprintf("new_decoded_human_cortex_before_deconvolution_threshold_%05.2f_with_QC_metrics_type_of_unassigned.csv.gz", thr)))
d$thr = thr
d$image_type = "raw"
return(d)
}, cl=nthreads))
ddata = rbindlist(list(dw__data, raw_data))
ddata[, V1 := NULL]
ddata[, target_assigned := "unassigned"]
ddata['nan' != target, target_assigned := "assigned"]
colnames(ddata)
gene_counts = dcast(ddata["assigned" == target_assigned, .N, by=c("image_type", "target", "thr")],
target+thr~image_type, value.var="N")[order(dw, decreasing=T)]
colnames(gene_counts)
```
# Read strip data
```
cell_data = rbindlist(pblapply(c("dw", "raw"), function(image_type) {
d = fread(file.path("../data/strip_of_tissue", image_type, "MP_snRNAseq_filt_subclass.csv"))
d$image_type = image_type
return(d)
}, cl=nthreads))
cell_data[, V1 := NULL]
cell_data[, annotated := "unannotated"]
cell_data["Zero" != ClassName, annotated := "annotated"]
cell_data["dw" == image_type, image_type := "DW"]
colnames(cell_data)
```
# Panel 4A
Counts of assigned/unassigned/total dots per threshold.
```
pdata = rbindlist(list(
ddata[, .N, by=c("image_type", "thr", "target_assigned")],
ddata[, .(target_assigned="total", .N), by=c("image_type", "thr")]
))
setnames(pdata, "target_assigned", "variable")
pdata[, variable := factor(variable, levels=c("total", "assigned", "unassigned"))]
pdata["dw" == image_type, image_type := "DW"]
options(repr.plot.width=9, repr.plot.height=6)
p = ggplot(pdata[variable != "total"], aes(x=thr+.001, y=N, linetype=variable, color=image_type)) +
geom_vline(xintercept=c(2), color="orange", linetype="solid") +
geom_line() + geom_point(size=.5) +
theme_bw() + scale_y_log10() + scale_x_log10(breaks=x_breaks+.001, labels=x_breaks) +
theme(axis.text.x=element_text(angle=90, hjust=1, vjust=.5), legend.position="top") +
scale_fill_grey() + labs(x="Threshold", y="Absolute dot count", linetype="Dot type", color="Image type") +
scale_color_brewer(palette="Set1") +
geom_vline(xintercept=c(.15, 1.5), color="black", linetype="dotted")
print(p)
ggsave(plot=p, file="panels/fig_4a.png", width=4, height=4)
saveRDS(p, "panels_rds/fig_4a.rds")
```
# Panel 4b-c
Visualization of transcript spots in a field of view, RAW and DW.
Field #13, with highest delta (804). ALL transcripts, QC_score > .6.
```
pdata = ddata[thr==2]
pdata[, target := factor(target)]
pdata[, target_nr := as.numeric(target)]
require(viridis)
options(repr.plot.width=6, repr.plot.height=6)
pB = ggplot(pdata[FOV %in% sprintf("fov_%03d", c(3)) & QC_score >= .6 & !is.na(target) & image_type == "raw", .(x, y, target_nr, FOV, image_type)],
aes(x=x, y=y, color=target_nr)) + geom_point(size=1) +
theme_bw() + theme(
legend.position="top",
panel.grid.major=element_blank(), panel.grid.minor=element_blank(),
panel.background=element_rect(fill="black"),
axis.ticks.x=element_blank(), axis.ticks.y=element_blank(),
axis.text.x=element_blank(), axis.text.y=element_blank()
) +
labs(x="", y="", color="Transcript Nr.") +
coord_fixed() + scale_color_gradient(low="white", high="red") +
guides(color=guide_colorbar(title.position="top", barwidth=20))
print(pB)
ggsave(plot=pB, file="panels/fig_4b.png", width=4, height=4)
saveRDS(pB, "panels_rds/fig_4b.rds")
require(viridis)
options(repr.plot.width=6, repr.plot.height=6)
pC = ggplot(pdata[FOV %in% sprintf("fov_%03d", c(3)) & QC_score >= .6 & !is.na(target) & image_type == "dw", .(x, y, target_nr, FOV, image_type)],
aes(x=x, y=y, color=target_nr)) + geom_point(size=1) +
theme_bw() + theme(
legend.position="top",
panel.grid.major=element_blank(), panel.grid.minor=element_blank(),
panel.background=element_rect(fill="black"),
axis.ticks.x=element_blank(), axis.ticks.y=element_blank(),
axis.text.x=element_blank(), axis.text.y=element_blank()
) +
labs(x="", y="", color="Transcript Nr.") +
coord_fixed() + scale_color_gradient(low="white", high="red") +
guides(color=guide_colorbar(title.position="top", barwidth=20))
print(pC)
ggsave(plot=pC, file="panels/fig_4c.png", width=4, height=4)
saveRDS(pC, "panels_rds/fig_4c.rds")
```
# Panel 4d
Transcript counts of DW vs Raw.
```
options(repr.plot.width=6, repr.plot.height=6)
p = ggplot(gene_counts[thr==2], aes(raw, dw)) + geom_point() +
scale_x_log10() + scale_y_log10() + theme_bw() +
geom_abline(slope=1, linetype="dashed", color="red") +
labs(x="Absolute transcript count (raw)", y="Absolute transcript count (DW)")
print(p)
ggsave(plot=p, file="panels/fig_4d.png", width=4, height=4)
saveRDS(p, "panels_rds/fig_4d.rds")
summary(gene_counts[thr==2, (dw/raw-1)*100])
summary(gene_counts[thr==2 & dw/raw>1, (dw/raw-1)*100])
gene_counts[thr==2 & dw/raw>1, .N]
gene_counts[thr==2 & dw/raw==1, .N]
gene_counts[thr==2 & dw/raw<1, .N]
gene_counts[thr==2 & is.na(dw/raw)]
```
# Panel 4e
Number of un/annotated cells
```
cell_data[, .N, by=c("annotated", "image_type")][, .(image_type, annotated, N, N/2183)]
options(repr.plot.width=6, repr.plot.height=4)
p = ggplot(cell_data[, .N, by=c("annotated", "image_type")], aes(x=annotated, y=N, fill=image_type)) +
geom_col(position="dodge", color="#323232") +
theme_bw() + theme(legend.position="bottom") +
scale_fill_brewer("Image type", palette="Set2") +
labs(x="Cell type", y="Absolute cell count")
print(p)
ggsave(plot=p, file="panels/fig_4e.png", width=4, height=4)
saveRDS(p, "panels_rds/fig_4e.rds")
```
<h1 style="text-align:center">Chapter 2</h1>
---
###### Words
---
Take a look at this sentence:
'The quick brown fox jumps over the lazy fox, and took his meal.'
* The sentence has 13 _words_ if you don't count punctuation, and 15 if you count the punctuation marks as words.
* Whether to count punctuation as a word depends on the task at hand.
* For some tasks like P-O-S tagging & speech synthesis, punctuations are treated as words. (Hello! and Hello? are different in speech synthesis)
```
len('The quick brown fox jumps over the lazy fox, and took his meal.'.split())
```
##### Utterance
> An utterance is a spoken correlate of a sentence. (Speaking a sentence is utterance)
Take a look at this sentence:
'I am goi- going to the market to buy ummm fruits.'
* This utterance has two kinds of <strong>disfluencies</strong> (disruptions in the smooth flow of speech).
 1. Fragment - The broken-off word 'goi' is a fragment.
 2. Fillers - Words like ummm and uhhh are called fillers or filled pauses.
##### Lemma
> A lemma is a set of lexical forms having the same stem, the same major part-of-speech, and the same word sense.
* Wordform is the full inflected or derived form of the word.
Example,
Wordforms - cats, cat
Lemma - cat
Wordforms - moving, move
Lemma - move
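The grouping can be sketched with a hand-written wordform-to-lemma table. The table below is a hypothetical stand-in for this example only; a real system would use a morphological analyzer such as nltk's WordNetLemmatizer.

```python
# Hypothetical lookup table for this example: wordform -> lemma.
LEMMA_OF = {
    "cats": "cat", "cat": "cat",
    "moving": "move", "moved": "move", "move": "move",
}

def group_by_lemma(wordforms):
    """Group wordforms under their shared lemma (unknown forms map to themselves)."""
    groups = {}
    for form in wordforms:
        lemma = LEMMA_OF.get(form.lower(), form.lower())
        groups.setdefault(lemma, []).append(form)
    return groups

print(group_by_lemma(["cats", "cat", "Moving", "move"]))
# {'cat': ['cats', 'cat'], 'move': ['Moving', 'move']}
```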
##### Vocabulary, Wordtypes, and Wordtokens
* Vocabulary - It is the set of distinct words in a corpus.
* Wordtypes - It is the size of the vocabulary V i.e. |V|
* Wordtokens - It is the total number of running words.
Take a look at this sentence:
'They picnicked by the pool, then lay back on the grass and looked at the stars.'
Here,
* Vocabulary = V = {They, picnicked, by, the, pool, then, lay, back, on, grass, and, looked, at, stars}
* Wordtypes = |V| = 14
* Wordtokens(ignoring punctuation) = 16
```
def vocTypeToken(sentence):
    # Strip surrounding punctuation so that 'pool,' and 'pool' count as one wordtype.
    tokens = [token.strip('.,!?;:') for token in sentence.split()]
    vocabulary = list(set(tokens))
    wordtypes = len(vocabulary)
    wordtokens = len(tokens)
    print("Sentence = {}\n".format(sentence))
    print("Tokens = {}\n".format(tokens))
    print("Vocabulary = {}\n".format(sorted(vocabulary)))
    print("Wordtypes = {}\n".format(wordtypes))
    print("Wordtokens = {}".format(wordtokens))
sentence = 'They picnicked by the pool, then lay back on the grass and looked at the stars.'
vocTypeToken(sentence)
```
###### Herdan's Law
> The larger the corpora we look at, the more wordtypes we find. The relationship between wordtypes and tokens is called <strong>Herdan's Law</strong>
\begin{equation*}
|V| = kN^\beta
\end{equation*}
where k and \\(\beta\\) are positive constants.
The value of \\(\beta\\) depends on the corpus size and is in the range of 0 to 1.
* We can say that the vocabulary size for a text goes up significantly faster than the square root of its length in words.
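To get a feel for the law, here is a small numeric sketch. The constants \\(k = 10\\) and \\(\beta = 0.6\\) are illustrative values chosen for this example, not fitted to any corpus.

```python
# Herdan's law |V| = k * N^beta with illustrative (not corpus-fitted) constants.
k, beta = 10, 0.6

for n_tokens in (10_000, 1_000_000, 100_000_000):
    vocab_size = k * n_tokens ** beta
    print(f"N = {n_tokens:>11,d}  ->  |V| ~ {vocab_size:,.0f}")
```

Note that with \\(\beta = 0.6 > 0.5\\), the vocabulary grows faster than \\(\sqrt{N}\\) but still far slower than \\(N\\) itself.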
---
- Another rough measure of the number of words in a corpus is the number of lemmas.
##### Code switching
> The phenomenon of switching between languages while speaking or writing is called code switching.
Example,
'Tu mera dost hai or rahega, don't worry.'
---
## Text Normalization
---
Before any type of natural language processing, the text has to be brought to a normal condition or state.
The three tasks mentioned below are common to almost every normalization process.
1. Tokenizing ( breaking into words )
2. Normalizing word formats
3. Segmenting sentences
### Word tokenization
---
> The task of segmenting text into words.
<p style="color:red">Why you should not use split() for tokenization.</p>
If split() is used on the text, titles like 'Mr. Randolf' are broken down as ['Mr.', 'Randolf'], and naive punctuation splitting can shred an email like 'hello@internet.com' into ['hello', '@', 'internet', '.', 'com'].
This is not what we generally want, hence special tokenization algorithms must be used.
* Commas are generally used as word boundaries but also in large numbers (540,000).
* Periods are generally used as sentence boundaries but also in emails, urls, salutation.
##### Clitic
> Clitics are words that can't stand on their own; they are attached to other words. A tokenizer can be used to expand clitics.
Example of clitics,
What're, Who's, You're.
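A minimal sketch of clitic expansion with a hand-made lookup table. The table and function name are illustrative assumptions for this example; real tokenizers such as nltk's word_tokenize instead split clitics off as separate tokens (e.g. "What're" becomes ['What', "'re"]).

```python
# Hypothetical lookup table of a few common English clitic contractions.
CLITICS = {"what're": "what are", "who's": "who is", "you're": "you are"}

def expand_clitics(text):
    # Matched words are lowercased, so capitalization is not preserved.
    return " ".join(CLITICS.get(word.lower(), word) for word in text.split())

print(expand_clitics("Who's there"))  # -> who is there
```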
- Tokenization algorithms can also be used to tokenize multiwords like 'New York', 'rock N roll'.
This tokenization is used in conjunction with <strong>Named Entity Detection</strong> (the task of detecting names, places, dates, and organizations).
Python code for tokenization is shown below.
```
from nltk.tokenize import word_tokenize
text = 'The San Francisco-based restaurant," they said, "doesn’t charge $10".'
print(word_tokenize(text))
from nltk.tokenize import wordpunct_tokenize
print(wordpunct_tokenize(text))
```
Since tokenization needs to run before any other language processing, it needs to be fast.
Regex-based tokenization is fast, but not that smart at handling punctuation and language-specific edge cases.
There are many tokenization algorithms, like ByteLevelBPETokenizer, CharBPETokenizer, and SentencePieceBPETokenizer.
The exercise below shows a step-by-step guide to the modern way of tokenization using [huggingface's](https://huggingface.co/) ultrafast tokenization library - [Tokenizer](https://github.com/huggingface/tokenizers)
---
#### Notice the speed of huggingface tokenizer and nltk tokenizer
```
!python3 -m pip install tokenizers #install tokenizer
from tokenizers import (BertWordPieceTokenizer)
tokenizer = BertWordPieceTokenizer("bert-base-uncased-vocab.txt", lowercase=True)
from datetime import datetime

def textTokenizer(text):
    start = datetime.now()
    print(tokenizer.encode(text).tokens)
    end = datetime.now()
    print("Time taken - {}".format((end - start).total_seconds()))

textTokenizer('Number expressions introduce other complications as well; while commas normally appear at word boundaries, commas are used inside numbers in English, every three digits.')
```
* We will discuss about [CLS] and [SEP] later
```
from datetime import datetime

def nltkTokenizer(text):
    start = datetime.now()
    print(word_tokenize(text))
    end = datetime.now()
    print("Time taken - {}".format((end - start).total_seconds()))

nltkTokenizer('Number expressions introduce other complications as well; while commas normally appear at word boundaries, commas are used inside numbers in English, every three digits.')
```
##### Word segmentation
> Some languages (like Chinese) don't have words separated by spaces, so tokenization is not easily done. Instead, word segmentation is performed using sequence models trained on hand-segmented datasets.
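As a toy sketch of what segmentation involves, here is the classic greedy maximum-matching ("MaxMatch") baseline against a hand-made dictionary. The dictionary and the max_len parameter are illustrative assumptions; real systems use trained sequence models as noted above.

```python
# Greedy maximum matching: at each position, take the longest dictionary word
# that matches; fall back to a single character if nothing matches.
def max_match(text, dictionary, max_len=4):
    words = []
    i = 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:  # single-char fallback
                words.append(text[i:j])
                i = j
                break
    return words

toy_dict = {"我", "爱", "北京", "天安门"}
print(max_match("我爱北京天安门", toy_dict))  # -> ['我', '爱', '北京', '天安门']
```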
```
import torch
import torch.nn.functional as F
import torchsde
from torchvision import datasets, transforms
import math
import numpy as np
import pandas as pd
from tqdm import tqdm
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
import functorch
import matplotlib.pyplot as plt
import cfollmer.functional as functional
from cfollmer.objectives import relative_entropy_control_cost
from cfollmer.drifts import SimpleForwardNet, SimpleForwardNetBN, ResNetScoreNetwork
from cfollmer.sampler_utils import FollmerSDE
device = "cuda" if torch.cuda.is_available() else "cpu"
class DNN(torch.nn.Module):
def __init__(self, input_dim=1, output_dim=1):
super(DNN, self).__init__()
self.output_dim = output_dim
self.input_dim = input_dim
self.nn = torch.nn.Sequential(
torch.nn.Linear(input_dim, 100),
torch.nn.ReLU(),
torch.nn.Linear(100, 100),
torch.nn.ReLU(),
torch.nn.Linear(100, output_dim)
)
def forward(self, x):
return self.nn(x)
class LinModel(torch.nn.Module):
def __init__(self, input_dim=1, output_dim=1):
super(LinModel, self).__init__()
self.output_dim = output_dim
self.input_dim = input_dim
self.nn = torch.nn.Sequential(
torch.nn.Linear(input_dim, 1),
)
def forward(self, x):
return self.nn(x)
device = "cuda" if torch.cuda.is_available() else "cpu"
def lin_reg_data_gen(dim, sigma_n, device, num_samps=30):
w = np.ones((dim,1))
b = 1
func = lambda x: np.dot(x, w) + 1
# Test inputs
num_test_samples = 30
if dim == 1:
X_test = np.linspace(-16, 16, num_samps).reshape(num_samps,1)
X_train = np.linspace(-3.5, 3.5, num_samps).reshape(-1,1)
else:
X_test = np.random.randn(num_samps, dim)
X_train = np.random.randn(num_samps, dim)
# Noise free training inputs
#f_train = np.cos(X_train)
f_train = func(X_train)
# Noise-free training outputs
#f = np.cos(X_test)
f = func(X_test)
y_test = f
# Noisy training Inputs with additive Gaussian noise (zero-mean, variance sigma_n)
mu = np.zeros(X_train.size)
epsilon = np.random.multivariate_normal(mu, sigma_n**2 * np.eye(X_train.size))
# Noisy targets
y_train = f_train + epsilon.reshape(X_train.size,1)
return X_train, y_train, X_test, y_test, f
dim = dim_data = 1
sigma_n = 0.5
X_train, y_train, X_test, y_test, f = lin_reg_data_gen(dim, sigma_n, device)
N_train , _ = X_train.shape
N_test , _ = X_test.shape
# if dim == 1:
# Noisy observations
fig, ax = plt.subplots(figsize=(6, 4), tight_layout=True)
ax.plot(X_test[:,[0]], f, 'b', label = 'f(x)')
ax.plot(X_train[:,[0]], y_train,".", label = 'y(x) = f(x) + $\epsilon$')
ax.legend(loc = 'upper left')
ax.set_title('Target function with noisy observations ')
plt.show()
X_train = torch.tensor(X_train, device=device, dtype=torch.float)
X_test = torch.tensor(X_test, device=device, dtype=torch.float)
y_train = torch.tensor(y_train, device=device, dtype=torch.float)
y_test = torch.tensor(y_test, device=device, dtype=torch.float)
# dim
model = LinModel().to(device)
func_model, params = functorch.make_functional(model)
size_list = functional.params_to_size_tuples(params)
dim = functional.get_number_of_params(size_list)
sigma2 = 1
def log_prior(params):
return -torch.sum(params**2) / (2 * sigma2)
def log_likelihood(x, y, params):
preds = func_model(functional.get_params_from_array(params, size_list), x)
diff = preds - y
return - torch.sum(diff**2) / (2 * sigma_n**2)
def log_likelihood_batch(x, y, params_batch):
func = lambda params: log_likelihood(x, y, params)
func = functorch.vmap(func)
return func(params_batch)
def log_posterior(x, y, params):
return log_prior(params) + (N_train / x.shape[0]) * log_likelihood(x, y, params)
def log_posterior_batch(x, y, params_batch):
func = lambda params: log_posterior(x, y, params)
func = functorch.vmap(func)
return func(params_batch)
gamma = 0.1**2
n_steps = 300
data_batch_size = 50
param_batch_size = 32
def train(gamma, n_steps, data_batch_size, param_batch_size, dt=0.05, stl=False):
sde = FollmerSDE(gamma, SimpleForwardNetBN(input_dim=dim, width=300)).to(device)
optimizer = torch.optim.Adam(sde.parameters(), lr=1e-4)
losses = []
for _ in tqdm(range(n_steps)):
perm = torch.randperm(N_train)
x = X_train[perm[:data_batch_size], :]
y = y_train[perm[:data_batch_size], :]
optimizer.zero_grad()
partial_log_p = lambda params_batch: log_posterior_batch(x, y, params_batch)
loss = relative_entropy_control_cost(sde, partial_log_p, param_batch_size=param_batch_size, dt=dt, device=device)
loss.backward()
losses.append(loss.detach().cpu().numpy())
optimizer.step()
if stl: # double check theres no references left
sde.drift_network_detatched.load_state_dict((sde.drift_network.state_dict()))
losses = np.array(losses)
return sde, losses
def predict(param_samples, x, y):
with torch.no_grad():
predict_func = lambda params : func_model(functional.get_params_from_array(params, size_list), x)
predict_func = functorch.vmap(predict_func)
preds = predict_func(param_samples)
std, mean = torch.std_mean(preds, dim=0)
mse = torch.mean((y_test - mean)**2)
logp = torch.mean(log_likelihood_batch(x, y, param_samples))
return std, mean, logp, mse
def plot_fit(mean, std, title="", fn=None):
x = X_test.cpu().squeeze()
std = std.cpu().squeeze()
mean = mean.cpu().squeeze()
plt.plot(x, mean)
plt.fill_between(x, mean - 2 * std, mean + 2 * std, alpha=0.2)
plt.plot(X_train.cpu(), y_train.cpu(), 'kP', ms = 9)
plt.title(title)
plt.legend(["mean prediction", "data", r"$\pm 2\sigma^2$"], loc="upper left")
if fn is not None:
plt.savefig(fn, bbox_inches="tight", dpi=600)
plt.close()
sde, losses = train(gamma, n_steps, data_batch_size, param_batch_size)
plt.plot(losses)
param_samples = sde.sample(100, dt=0.01, device=device)
std, mean, logp, mse = predict(param_samples, X_test, y_test)
std = torch.sqrt(std**2 + sigma_n**2)
plot_fit(mean, std, title="SBP fit", fn=None)
plt.show()
plot_fit(mean, std, title="SBP fit", fn="plots/step_func/sbp_fit.png")
# plt.show()
param_samples.std(dim=0), sfs_samps.std(dim=0)
class MCFollmerDrift:
def __init__(self, log_posterior, X,y, dim, device, n_samp=300, gamma=torch.tensor(1), debug=False):
self.log_posterior = log_posterior
self.debug = debug
self.log_posterior = log_posterior
self.device = device
self.X = X
self.dim = dim
self.y = y
self.gamma = gamma
self.n_samp = n_samp
self.distrib = torch.distributions.multivariate_normal.MultivariateNormal(
loc=torch.zeros(dim),
covariance_matrix=torch.eye(dim) * torch.sqrt(gamma)
)
def g(self, thet):
func = lambda params: self.log_posterior(self.X, self.y, params)
func = functorch.vmap(func)
lp = func(thet)
reg = 0.5 * (thet**2).sum(dim=-1) / self.gamma
# if torch.any(torch.isinf(torch.exp(lp + reg))):
out = torch.exp(lp + reg)
isnan = torch.isinf(torch.abs(out)) | torch.isnan(out)
if self.debug and torch.any(isnan):
import pdb; pdb.set_trace()
# import pdb; pdb.set_trace()
return out # nans exp(reg)
def ln_g(self, thet):
func = lambda params: self.log_posterior(self.X, self.y, params)
func = functorch.vmap(func)
lp = func(thet)
reg = 0.5 * (thet**2).sum(dim=-1) / self.gamma
out = lp + reg
isnan = torch.isinf(torch.abs(out)) | torch.isnan(out)
if self.debug and torch.any(isnan):
import pdb; pdb.set_trace()
return out # nans exp(reg)
def mc_follmer_drift_(self, t, params, Z):
# Using Stein Estimator for SFS drift
g_YZt = self.g(params[None, ...] + torch.sqrt(1-t) * Z)
num = (Z * g_YZt[..., None]).mean(dim=0)
denom = torch.sqrt(1-t) * (g_YZt).mean(dim=0)
out = num / denom[...,None]
isnan = torch.isinf(torch.abs(out)) | torch.isnan(out)
return out
def mc_follmer_drift_stable(self, t, params, Z):
# Using Stein Estimator for SFS drift
N, d = Z.shape
lnN = torch.log(torch.tensor(N)).to(self.device)
ln_g_YZt = self.ln_g(params[None, ...] + torch.sqrt(1-t) * Z)
Z_plus = torch.nn.functional.relu(Z)
Z_minus = torch.nn.functional.relu(-Z)
ln_num_plus = torch.logsumexp(
(torch.log(Z_plus) + ln_g_YZt[..., None]) - lnN,
dim=0,
)
ln_num_minus = torch.logsumexp(
(torch.log(Z_minus) + ln_g_YZt[..., None]) - lnN,
dim=0
)
ln_denom = torch.logsumexp(
torch.log(torch.sqrt(1-t)) + (ln_g_YZt) - lnN,
dim=0
)
out = torch.exp(ln_num_plus-ln_denom) - torch.exp(ln_num_minus-ln_denom)
isnan = torch.isinf(torch.abs(out)) | torch.isnan(out)
return out
def mc_follmer_drift_debug(self, t, params):
# Using Stein Estimator for SFS drift
Z = self.distrib.rsample((self.n_samp,)).to(self.device)
params = params[0]
g_YZt = self.g(params[None, ...] + torch.sqrt(1-t) * Z)
num = (Z * g_YZt[..., None]).mean(dim=0)
denom = torch.sqrt(1-t) * (g_YZt).mean(dim=0)
out = num / denom[...,None]
isnan = torch.isinf(torch.abs(out)) | torch.isnan(out)
if self.debug and torch.any(isnan):
import pdb; pdb.set_trace()
return out.reshape(1,-1)
def mc_follmer_drift(self, t , params_batch):
Z = self.distrib.rsample((params_batch.shape[0], self.n_samp)).to(self.device)
func = lambda params, z: self.mc_follmer_drift_stable(t, params, z)
func = functorch.vmap(func, in_dims=(0,0) )
out = func(params_batch, Z)
# import pdb; pdb.set_trace()
return out
class MCFollmerSDE(torch.nn.Module):
def __init__(self, gamma, dim, log_posterior, X_train, y_train, device, debug=False):
super().__init__()
self.noise_type = 'diagonal'
self.sde_type = 'ito'
self.gamma = gamma
if debug:
self.drift = MCFollmerDrift(log_posterior, X_train, y_train, dim, device, gamma=gamma, debug=debug).mc_follmer_drift_debug
else:
self.drift = MCFollmerDrift(log_posterior, X_train, y_train, dim, device, gamma=gamma).mc_follmer_drift
self.dim = dim
def f(self, t, y, detach=False):
return self.drift(t, y)
def g(self, t, y):
return torch.sqrt(self.gamma )* torch.ones_like(y)
def sample_trajectory(self, batch_size, dt=0.05, device=None):
param_init = torch.zeros((batch_size, self.dim), device=device)
n_steps = int(1.0 / dt)
ts = torch.linspace(0, 1, n_steps, device=device)
param_trajectory = torchsde.sdeint(self, param_init, ts, method="euler", dt=dt)
return param_trajectory, ts
def sample(self, batch_size, dt=0.05, device=None):
return self.sample_trajectory(batch_size, dt=dt, device=device)[0][-1]
# mcfol = MCFollmerDrift(log_posterior, X_train, y_train, dim, device)
sde_sfs = MCFollmerSDE(torch.tensor(gamma), dim, log_posterior, X_train, y_train, device)
# sfs_samps
sfs_samps = sde_sfs.sample(130, dt=0.01, device=device)
sfs_samps
(~torch.isnan(sfs_samps).sum(dim=1).bool()).sum()
sfs_samps = sfs_samps[~torch.isnan(sfs_samps).sum(dim=1).bool()]
std_sfs, mean_sfs, logp_sfs, mse_sfs = predict(sfs_samps, X_test, y_test)
std_sfs = torch.sqrt(std_sfs**2 + sigma_n**2)
plot_fit(mean_sfs, std_sfs, title="SBP fit", fn=None)
# plot_fit(mean, std, title="SBP fit", fn=None)
# plot_fit(torch.tensor(mean_true), torch.tensor(std_true), title="Exact fit", fn=None)
plt.show()
plot_fit(mean_sfs, std_sfs, title="SBP fit", fn="plots/step_func/sbp_sfs_mc_fit.png")
def pred_true_std(X_train, X_test, sigma_n, sigma2, dim):
# https://github.com/probml/pml-book/releases/latest/download/book1.pdf
# See Eq 11.124 in the above link page 430 on pdf viewer (page 400 on page number in pdf)
X_trainnp = X_train.cpu().detach().numpy()
n_, d = X_trainnp.shape
X_trainnp = np.concatenate((X_trainnp, np.ones((n_, 1))), axis=1)
X_testnp = X_test.cpu().detach().numpy()
n_, d = X_testnp.shape
X_testnp = np.concatenate((X_testnp, np.ones((n_, 1))), axis=1)
print(X_trainnp.shape)
Sigma_post = sigma_n**2 * np.linalg.inv(sigma_n**2 * np.eye(dim) / sigma2 + np.dot(X_trainnp.T,X_trainnp))
sigma_pred = []
for i in range(n_):
sigma_pred += [np.dot(X_testnp[i,:].dot(Sigma_post), X_testnp[i,:]) + sigma_n**2 ]
std_true = np.sqrt(sigma_pred)
return std_true
std_true = pred_true_std(X_train, X_test, sigma_n, sigma2, dim)
def pred_true_mean(y_train, X_train, X_test, sigma_n, sigma2, dim):
# https://github.com/probml/pml-book/releases/latest/download/book1.pdf
# See Eq 11.124 in the above link page 430 on pdf viewer (page 400 on page number in pdf)
X_trainnp = X_train.cpu().detach().numpy()
n_, d = X_trainnp.shape
lambda_ = sigma_n**2 / sigma2
X_trainnp = np.concatenate((X_trainnp, np.ones((n_, 1))), axis=1)
X_testnp = X_test.cpu().detach().numpy()
n_, d = X_testnp.shape
X_testnp = np.concatenate((X_testnp, np.ones((n_, 1))), axis=1)
Xty = np.dot(X_trainnp.T, y_train)
Sigma_post = np.linalg.inv(sigma_n**2 * np.eye(dim) / sigma2 + np.dot(X_trainnp.T,X_trainnp))
w = np.dot(Sigma_post, Xty)
print(w.shape)
return np.dot(X_testnp,w)
mean_true = pred_true_mean(y_train.detach().cpu(), X_train, X_test, sigma_n, sigma2, dim)
mean_true.shape
param_samples = sde.sample(100, dt=0.01, device=device)
plot_fit(torch.tensor(mean_true), torch.tensor(std_true), title="Exact fit", fn=None)
plt.show()
# plot_fit(mean, std, title="SBP fit", fn="plots/step_func/sbp_fit.png")
np.abs(std_sfs.detach().cpu().numpy()- std_true).mean(), np.abs(std.detach().cpu().numpy()- std_true).mean()
np.abs(mean_sfs.detach().cpu().numpy()- mean_true).mean(), np.abs(mean.detach().cpu().numpy()- mean_true).mean()
std_sfs = torch.sqrt(std_sfs**2 + sigma_n**2)
plot_fit(mean_sfs, std_sfs, title="SBP fit", fn=None)
plot_fit(mean, std, title="SBP fit", fn=None)
plot_fit(torch.tensor(mean_true), torch.tensor(std_true), title="Exact fit", fn=None)
plt.show()
plot_fit(mean_sfs, std_sfs, title="SBP fit", fn="plots/step_func/sbp_sfs_mc_fit.png")
# plot_fit(mean_sfs, std_sfs, title="SBP fit", fn=None)
plot_fit(mean, std_sfs, title="SBP fit", fn=None)
plot_fit(torch.tensor(mean_true), torch.tensor(std_true), title="Exact fit", fn=None)
n_runs = 5
sbp_mse = []
sbp_logp = []
for i in range(n_runs):
sde, losses = train(gamma, n_steps, data_batch_size, param_batch_size)
with torch.no_grad():
param_samples = sde.sample(100, dt=0.01, device=device)
std, mean, logp, mse = predict(param_samples, X_test, y_test)
std = torch.sqrt(std**2 + sigma_n**2)
plot_fit(mean, std, title="SBP fit", fn=None)
plt.show()
plot_fit(mean, std, title="SBP fit #{}".format(i+1), fn="plots/step_func/sbp_fit_#{:d}.png".format(i+1))
sbp_mse.append(mse.cpu().numpy())
sbp_logp.append(logp.cpu().numpy())
sbp_mse = np.array(sbp_mse)
sbp_logp = np.array(sbp_logp)
@torch.enable_grad()
def gradient(x, y, params):
params_ = params.clone().requires_grad_(True)
loss = log_posterior(x, y, params_)
grad, = torch.autograd.grad(loss, params_)
return loss.detach().cpu().numpy(), grad
def step_size(n):
return 1e-4/ (1 + n)**0.1
def sgld(n_steps, last_n, data_batch_size):
losses = []
param_samples = []
params = torch.zeros(dim).float().to(device)
for step in tqdm(range(n_steps)):
perm = torch.randperm(N_train)
x = X_train[perm[:data_batch_size], :]
y = y_train[perm[:data_batch_size], :]
eps = step_size(step)
loss, grad = gradient(x, y, params)
params = params + 0.5 * eps * grad + np.sqrt(eps) * torch.randn_like(params)
if n_steps <= step + last_n:
param_samples.append(params)
losses.append(loss)
param_samples = torch.stack(param_samples)
return param_samples, losses
param_samples, losses = sgld(10000, 2000, data_batch_size)
plt.plot(losses)
std, mean, logp, mse = predict(param_samples[:100], X_test, y_test)
std = torch.sqrt(std**2 + sigma_n**2)
plot_fit(mean, std, title="SBP fit", fn=None)
plt.show()
plot_fit(mean, std, title="SGLD fit", fn="plots/step_func/sgld_fit.png")
n_runs = 5
n_steps = 10000
sgld_mse = []
sgld_logp = []
for i in range(n_runs):
param_samples, losses = sgld(n_steps, 1000, data_batch_size)
std, mean, logp, mse = predict(param_samples[:100], X_test, y_test)
plot_fit(mean, std, title="SGLD fit #{} (100 samples)".format(i+1), fn="plots/step_func/sgld_fit_#{:d}_100.png".format(i+1))
std, mean, _, _ = predict(param_samples[:500], X_test, y_test)
plot_fit(mean, std, title="SGLD fit #{} (500 samples)".format(i+1), fn="plots/step_func/sgld_fit_#{:d}_500.png".format(i+1))
std, mean, _, _ = predict(param_samples, X_test, y_test)
plot_fit(mean, std, title="SGLD fit #{} (1000 samples)".format(i+1), fn="plots/step_func/sgld_fit_#{:d}_1000.png".format(i+1))
sgld_mse.append(mse.cpu().numpy())
sgld_logp.append(logp.cpu().numpy())
sgld_mse = np.array(sgld_mse)
sgld_logp = np.array(sgld_logp)
SBP_df = pd.DataFrame({"mse": sbp_mse, "logp": sbp_logp})
SGLD_df = pd.DataFrame({"mse": sgld_mse, "logp": sgld_logp})
SBP_df
SBP_df.describe()
SGLD_df
SGLD_df.describe()
```
# Assignment 2 - Q-Learning and Expected Sarsa
Welcome to Course 2 Programming Assignment 2. In this notebook, you will:
- Implement Q-Learning with $\epsilon$-greedy action selection
- Implement Expected Sarsa with $\epsilon$-greedy action selection
- Investigate how these two algorithms behave on Cliff World (described on page 132 of the textbook)
We will provide you with the environment and infrastructure to run an experiment (called the experiment program in RL-Glue). This notebook will provide all the code you need to run your experiment and visualise learning performance.
This assignment will be graded automatically by comparing the behavior of your agent to our implementations of Expected Sarsa and Q-learning. The random seed will be set to avoid different behavior due to randomness. **You should not call any random functions in this notebook.** It will affect the agent's random state and change the results.
## Packages
You will need the following libraries for this assignment. We are using:
1. numpy: the fundamental package for scientific computing with Python.
2. scipy: a Python library for scientific and technical computing.
3. matplotlib: library for plotting graphs in Python.
4. RL-Glue: library for reinforcement learning experiments.
**Please do not import other libraries** — this will break the autograder.
```
%matplotlib inline
import numpy as np
from scipy.stats import sem
import matplotlib.pyplot as plt
from rl_glue import RLGlue
import agent
import cliffworld_env
from tqdm import tqdm
import pickle
plt.rcParams.update({'font.size': 15})
plt.rcParams.update({'figure.figsize': [10,5]})
```
## Section 1: Q-Learning
In this section you will implement and test a Q-Learning agent with $\epsilon$-greedy action selection (Section 6.5 in the textbook).
### Implementation
Your job is to implement the updates in the methods agent_step and agent_end. We provide detailed comments in each method describing what your code should do.
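For reference, the Q-learning update from Section 6.5 of the textbook that agent_step performs is

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\big[R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t)\big]$$

and in agent_end, where $S_{t+1}$ is terminal, the $\gamma \max_{a} Q(S_{t+1}, a)$ term drops out because the value of the terminal state is zero.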
```
# [Graded]
# Q-Learning agent here
class QLearningAgent(agent.BaseAgent):
def agent_init(self, agent_init_info):
"""Setup for the agent called when the experiment first starts.
Args:
agent_init_info (dict), the parameters used to initialize the agent. The dictionary contains:
{
num_states (int): The number of states,
num_actions (int): The number of actions,
epsilon (float): The epsilon parameter for exploration,
step_size (float): The step-size,
discount (float): The discount factor,
seed (int): The seed for random number generation,
}
"""
# Store the parameters provided in agent_init_info.
self.num_actions = agent_init_info["num_actions"]
self.num_states = agent_init_info["num_states"]
self.epsilon = agent_init_info["epsilon"]
self.step_size = agent_init_info["step_size"]
self.discount = agent_init_info["discount"]
self.rand_generator = np.random.RandomState(agent_init_info["seed"])
# Create an array for action-value estimates and initialize it to zero.
self.q = np.zeros((self.num_states, self.num_actions)) # The array of action-value estimates.
def agent_start(self, state):
"""The first method called when the episode starts, called after
the environment starts.
Args:
state (int): the state from the
environment's env_start function.
Returns:
action (int): the first action the agent takes.
"""
# Choose action using epsilon greedy.
current_q = self.q[state,:]
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.randint(self.num_actions)
else:
action = self.argmax(current_q)
self.prev_state = state
self.prev_action = action
return action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (int): the state from the
environment's step based on where the agent ended up after the
last step.
Returns:
action (int): the action the agent is taking.
"""
# Choose action using epsilon greedy.
current_q = self.q[state, :]
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.randint(self.num_actions)
else:
action = self.argmax(current_q)
# Perform an update (1 line)
### START CODE HERE ###
self.q[self.prev_state,self.prev_action] = self.q[self.prev_state,self.prev_action] + self.step_size*(
reward + self.discount*np.max(self.q[state,:]) - self.q[self.prev_state,self.prev_action] )
### END CODE HERE ###
self.prev_state = state
self.prev_action = action
return action
def agent_end(self, reward):
"""Run when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# Perform the last update in the episode (1 line)
### START CODE HERE ###
self.q[self.prev_state,self.prev_action] = self.q[self.prev_state,self.prev_action] + self.step_size*(
reward - self.q[self.prev_state,self.prev_action] )
### END CODE HERE ###
def argmax(self, q_values):
"""argmax with random tie-breaking
Args:
q_values (Numpy array): the array of action-values
Returns:
action (int): an action with the highest value
"""
top = float("-inf")
ties = []
for i in range(len(q_values)):
if q_values[i] > top:
top = q_values[i]
ties = []
if q_values[i] == top:
ties.append(i)
return self.rand_generator.choice(ties)
```
### Test
Run the cells below to test the implemented methods. The output of each cell should match the expected output.
Note that passing this test does not guarantee correct behavior on the Cliff World.
```
# Do not modify this cell!
## Test Code for agent_start() ##
agent_info = {"num_actions": 4, "num_states": 3, "epsilon": 0.1, "step_size": 0.1, "discount": 1.0, "seed": 0}
current_agent = QLearningAgent()
current_agent.agent_init(agent_info)
action = current_agent.agent_start(0)
print("Action Value Estimates: \n", current_agent.q)
print("Action:", action)
```
**Expected Output:**
```
Action Value Estimates:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Action: 1
```
```
# Do not modify this cell!
## Test Code for agent_step() ##
actions = []
agent_info = {"num_actions": 4, "num_states": 3, "epsilon": 0.1, "step_size": 0.1, "discount": 1.0, "seed": 0}
current_agent = QLearningAgent()
current_agent.agent_init(agent_info)
actions.append(current_agent.agent_start(0))
actions.append(current_agent.agent_step(2, 1))
actions.append(current_agent.agent_step(0, 0))
print("Action Value Estimates: \n", current_agent.q)
print("Actions:", actions)
```
**Expected Output:**
```
Action Value Estimates:
[[ 0. 0.2 0. 0. ]
[ 0. 0. 0. 0.02]
[ 0. 0. 0. 0. ]]
Actions: [1, 3, 1]
```
```
# Do not modify this cell!
## Test Code for agent_end() ##
actions = []
agent_info = {"num_actions": 4, "num_states": 3, "epsilon": 0.1, "step_size": 0.1, "discount": 1.0, "seed": 0}
current_agent = QLearningAgent()
current_agent.agent_init(agent_info)
actions.append(current_agent.agent_start(0))
actions.append(current_agent.agent_step(2, 1))
current_agent.agent_end(1)
print("Action Value Estimates: \n", current_agent.q)
print("Actions:", actions)
```
**Expected Output:**
```
Action Value Estimates:
[[0. 0.2 0. 0. ]
[0. 0. 0. 0.1]
[0. 0. 0. 0. ]]
Actions: [1, 3]
```
## Section 2: Expected Sarsa
In this section you will implement an Expected Sarsa agent with $\epsilon$-greedy action selection (Section 6.6 in the textbook).
### Implementation
Your job is to implement the updates in the methods agent_step and agent_end. We provide detailed comments in each method describing what your code should do.
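For reference, the Expected Sarsa update from Section 6.6 replaces the max in Q-learning with an expectation over the agent's own $\epsilon$-greedy policy $\pi$:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\Big[R_{t+1} + \gamma \sum_{a}\pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t)\Big]$$

Under $\epsilon$-greedy, every action receives probability $\epsilon/|\mathcal{A}|$, and the remaining $1-\epsilon$ probability mass goes to the greedy action (split evenly among tied maxima).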
```
# [Graded]
# Expected Sarsa agent here
class ExpectedSarsaAgent(agent.BaseAgent):
def agent_init(self, agent_init_info):
"""Setup for the agent called when the experiment first starts.
Args:
agent_init_info (dict), the parameters used to initialize the agent. The dictionary contains:
{
num_states (int): The number of states,
num_actions (int): The number of actions,
epsilon (float): The epsilon parameter for exploration,
step_size (float): The step-size,
discount (float): The discount factor,
}
"""
# Store the parameters provided in agent_init_info.
self.num_actions = agent_init_info["num_actions"]
self.num_states = agent_init_info["num_states"]
self.epsilon = agent_init_info["epsilon"]
self.step_size = agent_init_info["step_size"]
self.discount = agent_init_info["discount"]
        self.rand_generator = np.random.RandomState(agent_init_info["seed"])
# Create an array for action-value estimates and initialize it to zero.
self.q = np.zeros((self.num_states, self.num_actions)) # The array of action-value estimates.
def agent_start(self, state):
"""The first method called when the episode starts, called after
the environment starts.
Args:
state (int): the state from the
            environment's env_start function.
Returns:
action (int): the first action the agent takes.
"""
# Choose action using epsilon greedy.
current_q = self.q[state, :]
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.randint(self.num_actions)
else:
action = self.argmax(current_q)
self.prev_state = state
self.prev_action = action
return action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (int): the state from the
environment's step based on where the agent ended up after the
last step.
Returns:
action (int): the action the agent is taking.
"""
# Choose action using epsilon greedy.
current_q = self.q[state,:]
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.randint(self.num_actions)
else:
action = self.argmax(current_q)
"""
pi(any action) = epsilon / num_actions # any action might be chosen in the non-greedy case
pi(greedy action) = pi(any action) + (1 - epsilon) / num_greedy_actions
"""
# Perform an update (~5 lines)
### START CODE HERE ###
max_q = np.max(current_q)
num_greedy_actions = np.sum(current_q==max_q)
non_greedy_actions_prob = (self.epsilon / self.num_actions)
greedy_actions_prob = ((1 - self.epsilon) / num_greedy_actions) + (self.epsilon / self.num_actions)
expected_q = 0
for a in range(self.num_actions):
if current_q[a] == max_q: # This is a greedy action
expected_q += current_q[a] * greedy_actions_prob
else: # This is a non-greedy action
expected_q += current_q[a] * non_greedy_actions_prob
self.q[self.prev_state,self.prev_action] = self.q[self.prev_state,self.prev_action] + self.step_size*(
reward + self.discount*expected_q - self.q[self.prev_state,self.prev_action] )
### END CODE HERE ###
self.prev_state = state
self.prev_action = action
return action
def agent_end(self, reward):
"""Run when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# Perform the last update in the episode (1 line)
### START CODE HERE ###
self.q[self.prev_state,self.prev_action] = self.q[self.prev_state,self.prev_action] + self.step_size*(
reward - self.q[self.prev_state,self.prev_action] )
### END CODE HERE ###
def argmax(self, q_values):
"""argmax with random tie-breaking
Args:
q_values (Numpy array): the array of action-values
Returns:
action (int): an action with the highest value
"""
top = float("-inf")
ties = []
for i in range(len(q_values)):
if q_values[i] > top:
top = q_values[i]
ties = []
if q_values[i] == top:
ties.append(i)
return self.rand_generator.choice(ties)
```
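As a side check (not part of the graded code), the expected action value used in the update above can also be computed in vectorized form; this helper mirrors the loop in `agent_step`:

```python
import numpy as np

def expected_value(q_row, epsilon):
    """Expected action value under an epsilon-greedy policy over q_row.

    Mirrors the loop in agent_step above, but vectorized."""
    n = len(q_row)
    probs = np.full(n, epsilon / n)            # exploration mass on every action
    greedy = q_row == q_row.max()              # greedy actions share the rest
    probs[greedy] += (1 - epsilon) / greedy.sum()
    return float(probs @ q_row)

q = np.array([0.0, 0.2, 0.0, 0.0])
print(round(expected_value(q, 0.1), 3))  # 0.185
```

With a step-size of 0.1 this reproduces the `0.0185` entry in the test output above: $0 + 0.1\,(0 + 1 \cdot 0.185 - 0) = 0.0185$.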
### Test
Run the cells below to test the implemented methods. The output of each cell should match the expected output.
Note that passing this test does not guarantee correct behavior on the Cliff World.
```
# Do not modify this cell!
## Test Code for agent_start() ##
agent_info = {"num_actions": 4, "num_states": 3, "epsilon": 0.1, "step_size": 0.1, "discount": 1.0, "seed": 0}
current_agent = ExpectedSarsaAgent()
current_agent.agent_init(agent_info)
action = current_agent.agent_start(0)
print("Action Value Estimates: \n", current_agent.q)
print("Action:", action)
```
**Expected Output:**
```
Action Value Estimates:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Action: 1
```
```
# Do not modify this cell!
## Test Code for agent_step() ##
actions = []
agent_info = {"num_actions": 4, "num_states": 3, "epsilon": 0.1, "step_size": 0.1, "discount": 1.0, "seed": 0}
current_agent = ExpectedSarsaAgent()
current_agent.agent_init(agent_info)
actions.append(current_agent.agent_start(0))
actions.append(current_agent.agent_step(2, 1))
actions.append(current_agent.agent_step(0, 0))
print("Action Value Estimates: \n", current_agent.q)
print("Actions:", actions)
```
**Expected Output:**
```
Action Value Estimates:
[[0. 0.2 0. 0. ]
[0. 0. 0. 0.0185]
[0. 0. 0. 0. ]]
Actions: [1, 3, 1]
```
```
# Do not modify this cell!
## Test Code for agent_end() ##
actions = []
agent_info = {"num_actions": 4, "num_states": 3, "epsilon": 0.1, "step_size": 0.1, "discount": 1.0, "seed": 0}
current_agent = ExpectedSarsaAgent()
current_agent.agent_init(agent_info)
actions.append(current_agent.agent_start(0))
actions.append(current_agent.agent_step(2, 1))
current_agent.agent_end(1)
print("Action Value Estimates: \n", current_agent.q)
print("Actions:", actions)
```
**Expected Output:**
```
Action Value Estimates:
[[0. 0.2 0. 0. ]
[0. 0. 0. 0.1]
[0. 0. 0. 0. ]]
Actions: [1, 3]
```
## Section 3: Solving the Cliff World
We described the Cliff World environment in the video "Expected Sarsa in the Cliff World" in Lesson 3. This is an undiscounted episodic task and thus we set $\gamma$=1. The agent starts in the bottom left corner of the gridworld below and takes actions that move it in the four directions. Actions that would move the agent off of the cliff incur a reward of -100 and send the agent back to the start state. The reward for all other transitions is -1. An episode terminates when the agent reaches the bottom right corner.
<img src="cliffworld.png" alt="Drawing" style="width: 600px;"/>
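For reference, the reward logic just described can be sketched in a few lines (this is an illustration only, not the `cliffworld_env` implementation used below; the grid layout and indexing are assumptions):

```python
# Sketch of the Cliff World reward logic (an illustration, not the
# course's cliffworld_env; grid layout and indexing are assumptions).
ROWS, COLS = 4, 12
START, GOAL = (3, 0), (3, 11)            # bottom-left start, bottom-right goal
CLIFF = {(3, c) for c in range(1, 11)}   # the cells between start and goal

def step(state, move):
    """move is a (d_row, d_col) pair; returns (next_state, reward, done)."""
    r = min(max(state[0] + move[0], 0), ROWS - 1)   # moves off the grid
    c = min(max(state[1] + move[1], 0), COLS - 1)   # leave the agent in place
    if (r, c) in CLIFF:
        return START, -100, False        # fell off: -100 and back to the start
    return (r, c), -1, (r, c) == GOAL    # every other transition costs -1
```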
Using the experiment program in the cell below we now compare the agents on the Cliff World environment and plot the sum of rewards during each episode for the two agents.
The result of this cell will be graded. If you make any changes to your algorithms, you have to run this cell again before submitting the assignment.
```
# Do not modify this cell!
agents = {
"Q-learning": QLearningAgent,
"Expected Sarsa": ExpectedSarsaAgent
}
env = cliffworld_env.Environment
all_reward_sums = {} # Contains sum of rewards during episode
all_state_visits = {} # Contains state visit counts during the last 10 episodes
agent_info = {"num_actions": 4, "num_states": 48, "epsilon": 0.1, "step_size": 0.5, "discount": 1.0}
env_info = {}
num_runs = 100 # The number of runs
num_episodes = 500 # The number of episodes in each run
for algorithm in ["Q-learning", "Expected Sarsa"]:
all_reward_sums[algorithm] = []
all_state_visits[algorithm] = []
for run in tqdm(range(num_runs)):
agent_info["seed"] = run
rl_glue = RLGlue(env, agents[algorithm])
rl_glue.rl_init(agent_info, env_info)
reward_sums = []
state_visits = np.zeros(48)
# last_episode_total_reward = 0
for episode in range(num_episodes):
if episode < num_episodes - 10:
# Runs an episode
rl_glue.rl_episode(0)
else:
# Runs an episode while keeping track of visited states
state, action = rl_glue.rl_start()
state_visits[state] += 1
is_terminal = False
while not is_terminal:
reward, state, action, is_terminal = rl_glue.rl_step()
state_visits[state] += 1
reward_sums.append(rl_glue.rl_return())
# last_episode_total_reward = rl_glue.rl_return()
all_reward_sums[algorithm].append(reward_sums)
all_state_visits[algorithm].append(state_visits)
# save results
import os
import shutil
os.makedirs('results', exist_ok=True)
np.save('results/q_learning.npy', all_reward_sums['Q-learning'])
np.save('results/expected_sarsa.npy', all_reward_sums['Expected Sarsa'])
shutil.make_archive('results', 'zip', '.', 'results')
for algorithm in ["Q-learning", "Expected Sarsa"]:
plt.plot(np.mean(all_reward_sums[algorithm], axis=0), label=algorithm)
plt.xlabel("Episodes")
plt.ylabel("Sum of\n rewards\n during\n episode",rotation=0, labelpad=40)
plt.xlim(0,500)
plt.ylim(-100,0)
plt.legend()
plt.show()
```
To see why these two agents behave differently, let's inspect the states they visit most. Run the cell below to generate plots showing the number of timesteps that the agents spent in each state over the last 10 episodes.
```
# Do not modify this cell!
for algorithm, position in [("Q-learning", 211), ("Expected Sarsa", 212)]:
plt.subplot(position)
average_state_visits = np.array(all_state_visits[algorithm]).mean(axis=0)
grid_state_visits = average_state_visits.reshape((4,12))
grid_state_visits[0,1:-1] = np.nan
plt.pcolormesh(grid_state_visits, edgecolors='gray', linewidth=2)
plt.title(algorithm)
plt.axis('off')
cm = plt.get_cmap()
cm.set_bad('gray')
plt.subplots_adjust(bottom=0.0, right=0.7, top=1.0)
cax = plt.axes([0.85, 0.0, 0.075, 1.])
cbar = plt.colorbar(cax=cax)
cbar.ax.set_ylabel("Visits during\n the last 10\n episodes", rotation=0, labelpad=70)
plt.show()
```
The Q-learning agent learns the optimal policy, one that moves along the cliff and reaches the goal in as few steps as possible. However, since the agent uses $\epsilon$-greedy exploration rather than always following that optimal policy, it occasionally falls off the cliff. The Expected Sarsa agent takes its own exploration into account and follows a safer path. Note that this is different from the book, which shows that Sarsa learns the even safer path.
Previously we used a fixed step-size of 0.5 for the agents. What happens with other step-sizes? Does this difference in performance persist?
In the next experiment we will try 10 different step-sizes from 0.1 to 1.0 and compare the sum of rewards per episode averaged over the first 100 episodes (similar to the interim performance curves in Figure 6.3 of the textbook). Shaded regions show standard errors.
This cell takes around 10 minutes to run. The result of this cell will be graded. If you make any changes to your algorithms, you have to run this cell again before submitting the assignment.
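The shaded regions are computed with the standard error of the mean (the experiment cell uses the equivalent `sem` from `scipy.stats`); as a reminder, it is just the sample standard deviation divided by $\sqrt{n}$:

```python
# Standard error of the mean, as used for the shaded regions below
# (the experiment cell imports the equivalent `sem` from scipy.stats).
import numpy as np

def sem(xs):
    xs = np.asarray(xs, dtype=float)
    return xs.std(ddof=1) / np.sqrt(len(xs))  # sample std divided by sqrt(n)

print(round(sem([1.0, 2.0, 3.0, 4.0]), 4))  # 0.6455
```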
```
# Do not modify this cell!
agents = {
"Q-learning": QLearningAgent,
"Expected Sarsa": ExpectedSarsaAgent
}
env = cliffworld_env.Environment
all_reward_sums = {}
step_sizes = np.linspace(0.1,1.0,10)
agent_info = {"num_actions": 4, "num_states": 48, "epsilon": 0.1, "discount": 1.0}
env_info = {}
num_runs = 100
num_episodes = 100
all_reward_sums = {}
for algorithm in ["Q-learning", "Expected Sarsa"]:
for step_size in step_sizes:
all_reward_sums[(algorithm, step_size)] = []
agent_info["step_size"] = step_size
for run in tqdm(range(num_runs)):
agent_info["seed"] = run
rl_glue = RLGlue(env, agents[algorithm])
rl_glue.rl_init(agent_info, env_info)
return_sum = 0
for episode in range(num_episodes):
rl_glue.rl_episode(0)
return_sum += rl_glue.rl_return()
all_reward_sums[(algorithm, step_size)].append(return_sum/num_episodes)
for algorithm in ["Q-learning", "Expected Sarsa"]:
algorithm_means = np.array([np.mean(all_reward_sums[(algorithm, step_size)]) for step_size in step_sizes])
algorithm_stds = np.array([sem(all_reward_sums[(algorithm, step_size)]) for step_size in step_sizes])
plt.plot(step_sizes, algorithm_means, marker='o', linestyle='solid', label=algorithm)
plt.fill_between(step_sizes, algorithm_means + algorithm_stds, algorithm_means - algorithm_stds, alpha=0.2)
plt.legend()
plt.xlabel("Step-size")
plt.ylabel("Sum of\n rewards\n per episode",rotation=0, labelpad=50)
plt.xticks(step_sizes)
plt.show()
```
## Wrapping up
Expected Sarsa shows an advantage over Q-learning in this problem across a wide range of step-sizes.
Congratulations! Now you have:
- implemented Q-Learning with $\epsilon$-greedy action selection
- implemented Expected Sarsa with $\epsilon$-greedy action selection
- investigated the behavior of these two algorithms on Cliff World
# CTEs Products Lab
### Introduction
In this lesson, we'll practice working with CTEs. As we know, CTEs allow us to break our queries into multiple steps by creating a temporary table. We write a CTE with the following syntax:
```SQL
WITH table_name AS (
SELECT ...
)
SELECT ... FROM table_name;
```
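Here is a minimal, self-contained illustration of this pattern using an in-memory SQLite database (the toy table and values are made up; the lab itself uses the Northwind database):

```python
# A minimal, self-contained CTE demo on an in-memory SQLite database.
# The toy table and values are made up; the lab uses Northwind below.
import sqlite3

demo_conn = sqlite3.connect(':memory:')
demo_conn.execute("CREATE TABLE product (category TEXT, price REAL)")
demo_conn.executemany("INSERT INTO product VALUES (?, ?)",
                      [("tea", 4.0), ("tea", 6.0), ("coffee", 10.0)])

sql = """
WITH avg_by_category AS (
    SELECT category, AVG(price) AS avg_price
    FROM product GROUP BY category
)
SELECT category, MAX(avg_price) FROM avg_by_category;
"""
print(demo_conn.execute(sql).fetchall())  # [('coffee', 10.0)]
```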
Ok, let's get started.
### Getting set up
In this lesson we'll work with the northwind database, which is a sample ecommerce database.
We'll start by connecting to the database.
```
import sqlite3
conn = sqlite3.connect('Northwind_small.sqlite')
cursor = conn.cursor()
```
And then we can see the various tables with the following.
```
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
cursor.fetchall()
```
Now we'll only use a subset of the above tables -- focusing on the `Product`, `Supplier` and `Category` tables. Ok, let's take a look at those tables.
```
import pandas as pd
pd.read_sql('SELECT * FROM Product LIMIT 2;', conn)
pd.read_sql("SELECT * FROM supplier LIMIT 2;", conn)
pd.read_sql("SELECT * FROM category LIMIT 2;", conn)
```
### Our First CTEs
Ok, now it's time to write our first CTE. Let's use a CTE to find the highest average unit price by category and supplier.
In doing so, first create a temporary table called `avg_category_supplier` that computes the average unit price per category and supplier, and then find the category and supplier combination with the highest average price.
```
sql = """
WITH avg_category_supplier as (
SELECT CategoryId, SupplierId, AVG(UnitPrice) as avg_price
FROM Product GROUP BY CategoryId, SupplierId
)
SELECT CategoryId, SupplierId, max(avg_price) as highest_avg_price from avg_category_supplier;
"""
pd.read_sql(sql, conn)
```
Now let's use a CTE to find just the category with the lowest average price.
```
sql = """
WITH avg_category as (
SELECT CategoryId, AVG(UnitPrice) as avg_price, AVG(UnitsInStock) as avg_units_stocked
FROM Product GROUP BY CategoryId
)
SELECT CategoryId, min(avg_price) as lowest_avg_price from avg_category;
"""
pd.read_sql(sql, conn)
```
Ok, so in this section we used CTEs to perform multiple aggregations: we performed an initial aggregation in a temporary table, and then queried from that temporary table.
### CTEs for Preprocessing
Another use case for CTEs is when joining together multiple tables. Remember that in general, when coding, we often perform some initial pre-processing, and then act on that preprocessed data. With CTEs, this may mean using temporary tables to first select just the columns that we need from a couple of individual tables, and then performing the query from there.
For example, if we want to find the different categories of products made in the `British Isles`, we only need a few columns from each table. So we'll use CTEs to first select columns from each of those individual tables, and then combine these temporary tables together.
```
pd.read_sql("SELECT * FROM supplier LIMIT 2;", conn)
sql = """
WITH select_supplier as (
SELECT Id, CompanyName, Region FROM supplier
),
select_category as (
SELECT Id, CategoryName as name FROM Category
),
select_product as (
SELECT SupplierId, CategoryId, UnitPrice FROM Product
)
SELECT * FROM select_product
JOIN select_supplier ON select_product.SupplierId = select_supplier.Id
JOIN select_category ON select_product.CategoryId = select_category.Id
WHERE Region = 'British Isles';"""
pd.read_sql(sql, conn)
```
So we can see that the products made in the British Isles fall into the Beverages, Condiments and Confections categories.
Now, take another look at the CTE above. Notice that there is only a single `WITH` statement, and we separate each temporary table by a comma.
```SQL
WITH select_supplier as (
SELECT Id, CompanyName, Region FROM supplier
),
select_category as ( ...
```
Ok, so now it's your turn to practice using CTEs in this pattern. Use CTEs to find the average product price per city, and order from highest average price to lowest.
So the first step is to use CTEs to select just the necessary columns from each of the needed tables. And then from there we can join the tables together and perform the aggregation.
```
sql = """
WITH select_supplier as (
SELECT Id, City FROM supplier
),
select_product as (
SELECT SupplierId, UnitPrice FROM Product
)
SELECT City, AVG(UnitPrice) as avg_price FROM select_supplier
JOIN select_product ON select_supplier.Id = select_product.SupplierId
GROUP BY City ORDER BY avg_price DESC LIMIT 5
"""
pd.read_sql(sql, conn)
```
### Summary
In this lesson we practiced using CTEs. We use CTEs to create one or more temporary tables, and then query from those tables. We saw two use cases for CTEs. With the first, we used CTEs to chain aggregate queries.
And with the second one, we used the temporary tables for a sort of pre-processing on each table, before then joining these tables together. We separated each of the temporary tables with a comma.
```sql
WITH select_supplier as (
SELECT Id, CompanyName, Region FROM supplier
),
select_category as ( ...
```
# 2. Imperative Programming Languages
First we'll look at the material up through Section 2.5 (leaving some parts out), and fix the problems in the compiler code we wrote last time in `CMa01.ipynb`.
---
A simplified version of the VM that is the compilation target, implemented in Haskell.
```
-- {-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE NoMonomorphismRestriction #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}
data Instr pa
= HALT | NEG | ADD | SUB | MUL | DIV
| AND | OR | EQU | NEQ | GR | GEQ | LE | LEQ
| POP | DUP
| LOADc Int | LOAD -- | LOADr | LOADrc
| STORE -- | STOREr
| JUMP pa | JUMPz pa | JUMPi pa
-- | CALL | RETURN | ENTER | ALLOC | SLIDE | MARK
-- | NEW
deriving (Eq, Ord, Show, Functor)
type CMa = (Code, Stack)
type Stack = [Value]
type Value = Int
-- stack address as reverse index of stack
type SA = Int
type Code = [Instr PA]
-- program address representation
newtype PA = PA Code deriving (Eq,Ord,Show)
import Data.List
data DotDotDot = DotDotDot
instance Show DotDotDot where
show _ = "..."
-- to prevent infinite printing
instance {-# OVERLAPS #-} Show Code where
show is = "["++intercalate "," (show . fmap (\(PA _) -> DotDotDot) <$> is)++"]"
-- to prevent infinite printing
instance {-# OVERLAPS #-} Show CMa where
show (is,vs) = "{ stack = "++show vs++"\n , code = "++show is++" }"
-- load and store operation for Stack
load :: SA -> Stack -> Value
load i vs = reverse vs !! i
store :: SA -> Value -> Stack -> Stack
store i x vs = vs1++x:vs2
where
(vs1,_:vs2) = splitAt (length vs - 1 - i) vs
import Data.Bits
step :: CMa -> CMa
step (HALT : _, vs) = ([], vs)
step (NEG : is, v : vs) = (is, (-v):vs)
step (ADD : is, v2:v1:vs) = (is, v1 + v2 : vs)
step (SUB : is, v2:v1:vs) = (is, v1 - v2 : vs)
step (MUL : is, v2:v1:vs) = (is, v1 * v2 : vs)
step (DIV : is, v2:v1:vs) = (is, v1 `div` v2 : vs)
step (AND : is, v2:v1:vs) = (is, (v1 .&. v2) : vs)
step (OR : is, v2:v1:vs) = (is, (v1 .|. v2) : vs)
step (EQU : is, v2:v1:vs) = (is, b2i(v1 == v2) : vs)
step (NEQ : is, v2:v1:vs) = (is, b2i(v1 /= v2) : vs)
step (GR : is, v2:v1:vs) = (is, b2i(v1 > v2) : vs)
step (GEQ : is, v2:v1:vs) = (is, b2i(v1 >= v2) : vs)
step (LE : is, v2:v1:vs) = (is, b2i(v1 < v2) : vs)
step (LEQ : is, v2:v1:vs) = (is, b2i(v1 <= v2) : vs)
step (POP : is, _:vs) = (is, vs)
step (DUP : is, v:vs) = (is, v:v:vs)
step (LOADc v : is, vs) = (is, v:vs)
step (LOAD : is, a:vs) = (is, v:vs) where v = load a vs
step (STORE : is, a:n:vs) = (is, n:vs') where vs' = store a n vs
step (JUMP (PA c) : _, vs) = (c, vs)
step (JUMPz (PA c) : _, 0:vs) = (c, vs)
step (JUMPz _ : is, _:vs) = (is, vs)
step vm = error $ "VM is stuck: "++show vm
i2b 0 = False
i2b 1 = True
b2i False = 0
b2i True = 1
exec :: CMa -> [CMa]
exec vm@([],_) = [vm]
exec vm = vm : exec (step vm)
run :: CMa -> CMa
run = last . exec
type LabeledCode = [LabeledInstr]
data LabeledInstr = Label :. Instr Label deriving Show
type Label = String
lbis1 :: LabeledCode
lbis1 =
[ "" :. LOADc 3
, "loop" :. LOADc 1
, "" :. SUB
, "" :. DUP
, "" :. JUMPz "end"
, "" :. JUMP "loop"
, "end" :. HALT
]
import Data.Maybe
assemble :: LabeledCode -> Code
assemble lbis = is'
where
is' = map (fmap lb2a) is
(lbs,is) = unzip [(lb,i) | lb :. i <- lbis]
lb2a "" = error "empty string label"
lb2a lb = PA $ tails is' !! elemIndex' lb lbs
elemIndex' x xs = fromJust (elemIndex x xs)
is1 :: Code
is1 = [ LOADc 3 ] ++ loop
loop = [ LOADc 1
, SUB
, DUP
, JUMPz (PA end)
, JUMP (PA loop) ] ++ end
end = [ HALT ]
assemble lbis1
is1
mapM_ print . exec $ (is1,[])
mapM_ print . exec $ (assemble lbis1,[])
```
<br>
Now let's implement for ourselves the functions that compile the C code shown in Fig. 2.8 (p. 13) of the book into CMa instruction code.
We'll write, in Haskell, `codeR` and `codeL`, which compile **expressions**, and
`code`, which compiles **statements**.
```
data Expr
= Lit Int -- n (integer literal)
| Var String -- x
| Neg Expr -- -e
  | Add   Expr Expr  -- e1 + e2
| Sub Expr Expr -- e1 - e2
| Mul Expr Expr -- e1 * e2
| Div Expr Expr -- e1 / e2
  | And   Expr Expr  -- e1 && e2
| Or Expr Expr -- e1 || e2
| Equ Expr Expr -- e1 == e2
| Neq Expr Expr -- e1 /= e2
| Gr Expr Expr -- e1 > e2
| Geq Expr Expr -- e1 >= e2
  | Le    Expr Expr  -- e1 < e2
  | Leq   Expr Expr  -- e1 <= e2
  | Assign Expr Expr -- eL <- eR (assignment expression; in actual C syntax, eL = eR)
deriving (Eq,Ord,Show)
data Stmt
= EStmt Expr -- e; (expression as statement)
| Block [Stmt] -- { s1; ...; sn; }
  | If Expr Stmt (Maybe Stmt) -- if (e) s or if (e) s1 else s0
| While Expr Stmt -- while (e) s
| For (Expr,Expr,Expr) Stmt -- for (e1;e2;e3) s
deriving (Eq,Ord,Show)
[1,2,3] ++ [4,5,6]
(4 :) [5,6,7]
import Data.Map (Map, (!), (!?))
import qualified Data.Map as Map
type AEnv = Map String SA
codeR :: Expr -> AEnv -> (Code -> Code)
codeR (Lit q) _ = (LOADc q :)
codeR (Var x) ρ = codeL (Var x) ρ . (LOAD :)
codeR (Neg e) ρ = codeR e ρ . (NEG :)
codeR (Add e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (ADD :)
codeR (Sub e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (SUB :)
codeR (Mul e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (MUL :)
codeR (Div e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (DIV :)
codeR (And e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (AND :)
codeR (Or e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (OR :)
codeR (Equ e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (EQU :)
codeR (Neq e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (NEQ :)
codeR (Gr e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (GR :)
codeR (Geq e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (GEQ :)
codeR (Le e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (LE :)
codeR (Leq e1 e2) ρ = codeR e1 ρ . codeR e2 ρ . (LEQ :)
codeR (Assign eL eR) ρ = codeR eR ρ . codeL eL ρ . (STORE :)
codeR e _ = error $ "R-value not defined: "++show e
codeL :: Expr -> AEnv -> (Code -> Code)
codeL (Var x) ρ = (LOADc (ρ ! x) :)
codeL e _ = error $ "L-value not defined: "++show e
code :: Stmt -> AEnv -> (Code -> Code)
code (EStmt e) ρ = codeR e ρ . (POP :)
code (Block ss) ρ = foldr (.) id [code s ρ | s <- ss]
code (If e s Nothing) ρ =
\k -> codeR e ρ . (JUMPz (PA k) :)
. code s ρ
$ k
code (If e s1 (Just s0)) ρ =
\k -> codeR e ρ . (JUMPz (PA (c0 k)) :)
. c1 . (JUMP (PA k) :)
. c0
$ k
where
c1 = code s1 ρ
c0 = code s0 ρ
code (While e s) ρ = c
where
c = \k -> codeR e ρ
. (JUMPz (PA k) :)
. code s ρ
. (JUMP (PA (c k)) :)
$ k
code (For (e1,e2,e3) s) ρ = code (Block ss) ρ
where ss = [ EStmt e1
, While e2 $ Block [s, EStmt e3]
]
```
For now, we assume that memory for variables has already been allocated.
That is, we start from an appropriate *address environment* and a stack whose size matches it.
For example, to compile the code below, we fix in advance the addresses where the values of
$x$ and $i$ will be stored, using the address environment
$\rho = \{x\mapsto 0,\, i\mapsto 1\}$, and size the initial stack accordingly.
```c
int x = 1000;
int i = 1;
x <- x + i;
i <- i + 1;
```
If we start with the address environment and initial stack set up appropriately, compiling the code above is effectively the same as compiling the code below.
```c
x <- 1000;
i <- 1;
x <- x + i;
i <- i + 1;
```
```
stmt3 = Block
[ EStmt $ Assign (Var "x") (Lit 1000)
, EStmt $ Assign (Var "i") (Lit 1)
, EStmt $ Assign (Var "x") (Add (Var "x") (Var "i"))
, EStmt $ Assign (Var "i") (Add (Var "i") (Lit 1))
]
is3 = code stmt3 (Map.fromList [("x",0),("i",1)])
is3 []
is3 [HALT]
is3 [DUP,POP,HALT]
mapM_ print $ exec (is3 [],[0,0])
run (is3 [],[1,1000])
```
<br>
This time, let's compile the following program.
```c
int x = 1000;
int i = 1;
while (i < 5) {
x <- x + i;
i <- i + 1;
}
```
Likewise, assuming we start with the appropriate address environment $\{x\mapsto 0,\,i\mapsto 1\}$ for $x$ and $i$ and a matching initial stack, we just need to compile the code below.
```c
x <- 1000;
i <- 1;
while (i < 5) {
x <- x + i;
i <- i + 1;
}
```
```
stmt41 = Block
[ EStmt $ Assign (Var "x") (Lit 1000) -- x <- 1000;
, EStmt $ Assign (Var "i") (Lit 1) -- i <- 1;
]
stmt42 = Block
[ While (Le (Var "i") (Lit 5)) $ Block -- while (i < 5) {
[ EStmt $ Assign (Var "x") (Add (Var "x") (Var "i")) -- x <- x + i;
, EStmt $ Assign (Var "i") (Add (Var "i") (Lit 1)) -- i <- i + 1;
] -- }
]
stmt43 = Block
[ EStmt $ Assign (Var "x") (Add (Var "x") (Lit 100)) -- x <- x + 100;
, EStmt $ Assign (Var "i") (Add (Var "i") (Lit 100)) -- i <- i + 100;
]
rho4 = Map.fromList [("x",0),("i",1)]
is41 = code stmt41 rho4
is42 = code stmt42 rho4
is43 = code stmt43 rho4
is41 . is42 $ []
is41 . is42 . is43 $ []
run (is41 . is41 $ [], [0,0])
run (is41 . is42 $ [], [0,0])
run (is41 . is42 . is43 $ [], [0,0])
run (is41 . is43 . is43 $ [], [0,0]) -- stmt43 runs twice, so 100 is added twice and each value increases by 200
```
<br>
To summarize: instead of having the compilation functions `codeR`, `codeL`, and `code` take an *expression* (`Expr`) or *statement* (`Stmt`) together with an *address environment* (`AEnv`) and compute a fixed `Code` value as the result, we changed them to return a code-transforming function (`Code -> Code`) that takes the code for **the rest of the work** that follows as an argument and computes the complete code.
This makes it easy to emit, in conditionals and loops, jumps to code positions that have not yet been determined.
This notion of **the rest of the work** is called a *continuation* in technical terms. It is a concept for expressing computations that do not proceed sequentially, and it is used in many settings.
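A rough Python analogue of the `Code -> Code` idea (an illustration only; the names `emit` and `seq` are made up): each compile step returns a function that prepends its instructions onto the code for the rest of the work, so the complete code is only assembled once the continuation is supplied.

```python
# Rough Python analogue of the `Code -> Code` builders above
# (illustration only; `emit`/`seq` are made-up names).
def emit(*instrs):
    # One compile step: prepend these instructions onto "the rest".
    return lambda rest: list(instrs) + rest

def seq(*parts):
    # Compose code builders left to right (like `.` chains in the Haskell code).
    def build(rest):
        for p in reversed(parts):
            rest = p(rest)
        return rest
    return build

prog = seq(emit("LOADc 1"), emit("LOADc 2"), emit("ADD"))
print(prog(["HALT"]))  # ['LOADc 1', 'LOADc 2', 'ADD', 'HALT']
```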
```
stmt5 = Block
[ EStmt $ Assign (Var "i") (Lit 1) -- i <- 1;
, While (Le (Var "i") (Lit 5)) $ -- while (i < 5)
EStmt $ Assign (Var "i") (Add (Var "i") (Lit 1)) -- i <- i + 1;
, EStmt $ Assign (Var "i") (Add (Var "i") (Lit 1)) -- i <- i + 1;
]
c5 = code stmt5 (Map.fromList [("i",0)])
:type c5
c5 [HALT]
mapM_ print $ exec (c5 [HALT], [0])
```
## Hi, I was having a hard time trying to load this huge data set as a pandas data frame on my PC, so I searched for alternative ways of doing this, as I don't want to pay for cloud services and don't have access to better machines.
### Actually the solution was pretty simple, so I'm sharing what I ended up with; maybe I can help others struggling with the same problem.
Note: this approach won't let you analyse or summarize the data the way pandas data frames would (at least not easily).
Any criticism or tips are welcome.
```
import csv
from datetime import datetime
def clean_data(input_data_path='../input/train.csv', output_data_path='../data/train_cleaned.csv'):
"""
Clean the data set, removing any row with missing values,
    limiting longitudes and latitudes to NY-city ranges,
    keeping only fares greater than 0
    and passenger counts between 1 and 6 (inclusive).
    The header row is also removed, since TensorFlow is used to load the data.
:param input_data_path: path containing the raw data set.
:param output_data_path: path to write the cleaned data.
"""
with open(input_data_path, 'r') as inp, open(output_data_path, 'w', newline='') as out:
writer = csv.writer(out)
count = 0
for row in csv.reader(inp):
# Remove header
if count > 0:
# Only rows with non-null values
if len(row) == 8:
try:
fare_amount = float(row[1])
pickup_longitude = float(row[3])
pickup_latitude = float(row[4])
dropoff_longitude = float(row[5])
dropoff_latitude = float(row[6])
passenger_count = float(row[7])
if ((-76 <= pickup_longitude <= -72) and (-76 <= dropoff_longitude <= -72) and
(38 <= pickup_latitude <= 42) and (38 <= dropoff_latitude <= 42) and
(1 <= passenger_count <= 6) and fare_amount > 0):
writer.writerow(row)
except:
pass
count += 1
def pre_process_train_data(input_data_path='data/train_cleaned.csv', output_data_path='data/train_processed.csv'):
"""
Pre process the train data, deriving, year, month, day and hour for each row.
:param input_data_path: path containing the full data set.
:param output_data_path: path to write the pre processed set.
"""
with open(input_data_path, 'r') as inp, open(output_data_path, 'w', newline='') as out:
writer = csv.writer(out)
for row in csv.reader(inp):
pickup_datetime = datetime.strptime(row[2], '%Y-%m-%d %H:%M:%S %Z')
row.append(pickup_datetime.year)
row.append(pickup_datetime.month)
row.append(pickup_datetime.day)
row.append(pickup_datetime.hour)
row.append(pickup_datetime.weekday())
writer.writerow(row)
def pre_process_test_data(input_data_path='data/test.csv', output_data_path='data/test_processed.csv'):
"""
Pre process the test data, deriving, year, month, day and hour for each row.
:param input_data_path: path containing the full data set.
:param output_data_path: path to write the pre processed set.
"""
with open(input_data_path, 'r') as inp, open(output_data_path, 'w', newline='') as out:
writer = csv.writer(out)
count = 0
for row in csv.reader(inp):
if count > 0:
pickup_datetime = datetime.strptime(row[1], '%Y-%m-%d %H:%M:%S %Z')
row.append(pickup_datetime.year)
row.append(pickup_datetime.month)
row.append(pickup_datetime.day)
row.append(pickup_datetime.hour)
row.append(pickup_datetime.weekday())
writer.writerow(row)
else:
# Only the header
writer.writerow(row)
count += 1
def split_data(input_data_path, train_data_path, validation_data_path, ratio=30):
"""
Splits the csv file (meant to generate train and validation sets).
:param input_data_path: path containing the full data set.
:param train_data_path: path to write the train set.
:param validation_data_path: path to write the validation set.
    :param ratio: ratio used to split train and validation sets (default: 1 of every 30 rows goes to validation, i.e. ~3.3%)
"""
with open(input_data_path, 'r') as inp, open(train_data_path, 'w', newline='') as out1, \
open(validation_data_path, 'w', newline='') as out2:
writer1 = csv.writer(out1)
writer2 = csv.writer(out2)
count = 0
for row in csv.reader(inp):
if count % ratio == 0:
writer2.writerow(row)
else:
writer1.writerow(row)
count += 1
```
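As a quick standalone check of the timestamp format parsed above (the example value is made up; the real rows use the same `'%Y-%m-%d %H:%M:%S %Z'` layout):

```python
# Standalone check of the timestamp parsing used above.
# The example value is made up; real rows use the same format.
from datetime import datetime

pickup = datetime.strptime('2014-03-15 17:30:00 UTC', '%Y-%m-%d %H:%M:%S %Z')
print(pickup.year, pickup.month, pickup.day, pickup.hour, pickup.weekday())
# 2014 3 15 17 5   (weekday(): Monday is 0, so 5 means Saturday)
```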
# Self-Driving Car Engineer Nanodegree
## Deep Learning
### Project: Build a Traffic Sign Recognition Classifier
---
### Step 0: Imports
```
# Packages used in this project
import pickle
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.layers import flatten
import matplotlib.pyplot as plt
import cv2
from scipy import ndimage, misc
import numpy as np
from collections import Counter
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
```
### Step 1: Load the data
```
training_file = './traffic-signs-data/train.p'
validation_file='./traffic-signs-data/valid.p'
testing_file = './traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
```
### Step 2: Dataset Summary
```
n_train = len(X_train)
n_validation = len(X_valid)
n_test = len(X_test)
image_shape = X_train[0].shape
n_classes = len(set(train['labels']))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
### Step 3: Exploratory Visualization of data
```
%matplotlib inline
l = Counter(y_train)
print()
plt.figure(figsize=(13, 6))
plt.ylabel('Training examples',rotation='horizontal')
plt.title('Graphical representation of number of training example of each classes')
plt.xlabel('Classes')
for i in range(n_classes):
plt.bar(i,l.get(i),0.8)
fig=plt.figure(figsize=(15, 6))
columns = 5
rows = 4
print()
plt.title('Visualization of different images from training set')
for i in range(1, columns*rows +1):
#img = np.random.randint(10, size=(h,w))
fig.add_subplot(rows, columns, i)
plt.imshow(X_train[np.where(y_train==i)[0][0]])
plt.show()
```
* The first visualization shows how the training examples are distributed across the various classes.
* The second visualization shows traffic signs from different classes.
### Step 4: Architecture Design
#### Pre-process the Data Set
```
#Pre-processing training data
X_train = 0.299*X_train[:,:,:,0] + 0.587*X_train[:,:,:,1] + 0.114*X_train[:,:,:,2]
X_train = X_train.reshape(X_train.shape + (1,))
X_train = (X_train - 128.0) / 128.0
#Pre-processing validation data
X_valid = 0.299*X_valid[:,:,:,0] + 0.587*X_valid[:,:,:,1] + 0.114*X_valid[:,:,:,2]
X_valid = X_valid.reshape(X_valid.shape + (1,))
X_valid = (X_valid - 128.0) / 128.0
#Pre-processing test data
X_test = 0.299*X_test[:,:,:,0] + 0.587*X_test[:,:,:,1] + 0.114*X_test[:,:,:,2]
X_test = X_test.reshape(X_test.shape + (1,))
X_test = (X_test - 128.0) / 128.0
#Shuffling training set
X_train,y_train = shuffle(X_train,y_train)
```
#### Model Architecture
```
EPOCHS = 40
BATCH_SIZE = 128
def LeNet(x, keep_prob):
    # Hyperparameters for tf.truncated_normal: random initialization of weights and biases
    mu = 0
    sigma = 0.1
    # Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean=mu, stddev=sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
    # Activation.
    conv1 = tf.nn.relu(conv1)
    # Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    # Activation.
    conv2 = tf.nn.relu(conv2)
    # Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # Flatten. Input = 5x5x16. Output = 400.
    fc0 = flatten(conv2)
    # Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean=mu, stddev=sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1 = tf.matmul(fc0, fc1_W) + fc1_b
    # Activation, followed by dropout.
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, keep_prob)
    # Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean=mu, stddev=sigma))
    fc2_b = tf.Variable(tf.zeros(84))
    fc2 = tf.matmul(fc1, fc2_W) + fc2_b
    # Activation.
    fc2 = tf.nn.relu(fc2)
    # Layer 5: Fully Connected. Input = 84. Output = n_classes (43).
    fc3_W = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean=mu, stddev=sigma))
    fc3_b = tf.Variable(tf.zeros(n_classes))
    logits = tf.matmul(fc2, fc3_W) + fc3_b
    return logits
```
#### Defining operations
```
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32, (None))
one_hot_y = tf.one_hot(y, n_classes)
rate = 0.002
logits = LeNet(x,keep_prob)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
### Step 5: Train and test
#### Evaluating a model
```
saver = tf.train.Saver()
def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples
```
#### Training a model
```
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    print("Training...")
    print()
    plt.figure(figsize=(15, 6))
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
        validation_accuracy = evaluate(X_valid, y_valid)
        train_accuracy = evaluate(X_train, y_train)
        plt.plot(i, validation_accuracy, 'ro')
        plt.plot(i, train_accuracy, 'b+')
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print("Train Accuracy = {:.3f}".format(train_accuracy))
        print()
    saver.save(sess, './lenet')
    print("Model saved")
```
#### Testing pipeline
```
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    test_accuracy = evaluate(X_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
```
### Step 6: Test a Model on New Images
#### Load and Output the Images
```
X_new = []
for i in range(1, 9):
    im = cv2.imread("./examples/Image" + str(i) + ".jpg")
    im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)  # cv2 loads images in BGR order
    im = misc.imresize(im, (32, 32))
    X_new.append(im)
X_new = np.asarray(X_new)
fig=plt.figure(figsize=(15, 6))
columns = 4
rows = 2
plt.title('New data set')
j = 0
for i in range(1, columns*rows + 1):
    fig.add_subplot(rows, columns, i)
    plt.imshow(X_new[j])
    j += 1
plt.show()
#pre-process
X_newt = 0.299*X_new[:,:,:,0] + 0.587*X_new[:,:,:,1] + 0.114*X_new[:,:,:,2]
X_newt = X_newt.reshape(X_newt.shape + (1,))
X_newt = (X_newt - 128.0) / 128.0
```
#### Predict the Sign Type for Each Image
```
#y_new = [13,33,25,28,44,]
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    scores = sess.run(logits, feed_dict={x: X_newt, keep_prob: 1})
    scores = tf.nn.softmax(scores)
    scores = tf.nn.top_k(scores, k=5)
    scores = sess.run(scores)
```
#### Analyze Performance
```
#Importing CSV file
import csv
with open('signnames.csv', mode='r') as infile:
    reader = csv.reader(infile)
    mydict = {rows[0]: rows[1] for rows in reader}
```
#### Output Top 5 Softmax Probabilities For Each Image Found on the Web
```
fig=plt.figure(figsize=(20, 15))
columns = 4
rows = 2
j = 0
for i in range(1, columns*rows + 1):
    fig.add_subplot(rows, columns, i)
    plt.title('Top Predicted : ' + str(mydict[str(scores[1][j][0])]) + ' \n\n top 5 softmax probabilities'
              + '\n' + str(scores[0][j][0]) + ' : ' + str(mydict[str(scores[1][j][0])])
              + '\n' + str(scores[0][j][1]) + ' : ' + str(mydict[str(scores[1][j][1])])
              + '\n' + str(scores[0][j][2]) + ' : ' + str(mydict[str(scores[1][j][2])])
              + '\n' + str(scores[0][j][3]) + ' : ' + str(mydict[str(scores[1][j][3])])
              + '\n' + str(scores[0][j][4]) + ' : ' + str(mydict[str(scores[1][j][4])])
              )
    plt.imshow(X_new[j])
    j += 1
plt.show()
plt.show()
```
# European Soccer Database Analysis Using SQLite
- In this report, I will analyse the European Soccer Dataset to answer some questions. This dataset comes from Kaggle and is well suited for data analysis and machine learning. It contains data for 25k+ soccer matches, 10k+ players, and teams from several European countries from 2008 to 2016.
https://www.kaggle.com/hugomathien/soccer
```
import sqlite3
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```
### Data Wrangling
```
conn= sqlite3.connect('European_soccer_database.sqlite')
dataframe= pd.read_sql("""SELECT * FROM sqlite_master
GROUP BY tbl_name;""", conn)
dataframe
dataframe.info()
dataframe.describe()
df_table= pd.read_sql("""SELECT * FROM sqlite_master
WHERE TYPE='table';""", conn)
df_table
df_country= pd.read_sql("""SELECT * FROM Country;""", conn)
df_country
df_player= pd.read_sql("""SELECT * FROM Player;""", conn)
df_player
```
### Now I want to join the 'Country' and 'Player' tables to see how many players we have from each country
```
df_country_player= pd.read_sql(""" SELECT * FROM Player
JOIN Country ON Country.id= Player.id;""", conn)
df_country_player
# rename the 'name' column to 'country' (rename returns a new DataFrame, so reassign it)
df_country_player = df_country_player.rename(columns={'name': 'country'})
df_player_attribute= pd.read_sql("""SELECT * FROM Player_Attributes
LIMIT 10;""", conn)
df_player_attribute
# List of teams
df_team= pd.read_sql("""SELECT * FROM Team ;""", conn)
df_team
df_league= pd.read_sql("""SELECT * FROM League;""", conn)
df_league
df_match= pd.read_sql("""SELECT * FROM Match;""", conn)
df_match
df_league_matches= pd.read_sql("""SELECT * FROM Match
JOIN League ON League.id= Match.id
ORDER BY season DESC
""", conn)
df_league_matches
```
## Obtaining Basic Information About DataFrame
```
#data frame index
df_league_matches.index
df_league_matches.info
# name of columns
df_league_matches.columns
```
### SQLite foreign key constraint actions; https://www.sqlitetutorial.net/sqlite-foreign-key/
Now, I will get the foreign key list of each table
```
pd.read_sql("PRAGMA foreign_key_list(Team);",conn)
pd.read_sql("PRAGMA foreign_key_list(Match);",conn)
```
### Now lets get the details of matches in Belgium
```
pd.read_sql("""SELECT Match.id,
Country.name AS country_name,
League.name AS league_name,
season,
stage,
date,
HT.team_long_name AS home_team,
AT.team_long_name AS away_team,
home_team_goal,
away_team_goal
FROM Match
JOIN Country on Country.id = Match.country_id
JOIN League on League.id = Match.league_id
LEFT JOIN Team AS HT on HT.team_api_id = Match.home_team_api_id
LEFT JOIN Team AS AT on AT.team_api_id = Match.away_team_api_id
WHERE country_name = 'Belgium'
ORDER by date
LIMIT 10;""", conn)
```
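The query above hard-codes `'Belgium'` into the SQL string. With `sqlite3` and pandas, values can instead be bound with `?` placeholders via the `params` argument, which avoids quoting issues and SQL injection. A minimal sketch; since the soccer database file isn't bundled here, it builds a tiny in-memory stand-in for the `Country` table with made-up ids:

```python
import sqlite3
import pandas as pd

# Tiny in-memory stand-in for the Country table (the real file isn't bundled here)
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE Country (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO Country (id, name) VALUES (?, ?)",
                 [(1, 'Belgium'), (1729, 'England'), (4769, 'France')])

# Parameter binding keeps the country name out of the SQL string itself
df = pd.read_sql("SELECT * FROM Country WHERE name = ?;", conn, params=('Belgium',))
print(df)
```

The same `params` pattern works for the big match query above: replace `WHERE country_name = 'Belgium'` with `WHERE country_name = ?` and pass the name separately.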
# Train XGboost Model With Hyper-Params Using Serverless Functions
```
# nuclio: ignore
import nuclio
```
### Define function dependencies
```
%%nuclio cmd
pip install sklearn
pip install xgboost
pip install matplotlib
pip install mlrun
%nuclio config spec.build.baseImage = "python:3.6-jessie"
```
### Function code
```
import xgboost as xgb
import os
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn.metrics import accuracy_score
from mlrun.artifacts import TableArtifact, PlotArtifact
import pandas as pd
def iris_generator(context, target=''):
    iris = load_iris()
    iris_dataset = pd.DataFrame(data=iris.data, columns=iris.feature_names)
    iris_labels = pd.DataFrame(data=iris.target, columns=['label'])
    iris_dataset = pd.concat([iris_dataset, iris_labels], axis=1)
    context.logger.info('saving iris dataframe to {}'.format(target))
    context.log_artifact(TableArtifact('iris_dataset', df=iris_dataset, target_path=target))
def xgb_train(context,
              dataset='',
              model_name='model.bst',
              max_depth=6,
              num_class=10,
              eta=0.2,
              gamma=0.1,
              steps=20):
    df = pd.read_csv(dataset)
    X = df.drop(['label'], axis=1)
    y = df['label']
    X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2)
    dtrain = xgb.DMatrix(X_train, label=Y_train)
    dtest = xgb.DMatrix(X_test, label=Y_test)
    # Get params from event
    param = {"max_depth": max_depth,
             "eta": eta, "nthread": 4,
             "num_class": num_class,
             "gamma": gamma,
             "objective": "multi:softprob"}
    # Train model
    xgb_model = xgb.train(param, dtrain, steps)
    preds = xgb_model.predict(dtest)
    best_preds = np.asarray([np.argmax(line) for line in preds])
    # log results and artifacts
    context.log_result('accuracy', float(accuracy_score(Y_test, best_preds)))
    context.log_artifact('model', body=bytes(xgb_model.save_raw()),
                         target_path=model_name, labels={'framework': 'xgboost'})
import matplotlib
import matplotlib.pyplot as plt
from io import BytesIO
def plot_iter(context, iterations, col='accuracy', num_bins=10):
    df = pd.read_csv(BytesIO(iterations.get()))
    x = df['output.{}'.format(col)]
    fig, ax = plt.subplots(figsize=(6, 6))
    n, bins, patches = ax.hist(x, num_bins, density=1)
    ax.set_xlabel('Accuracy')
    ax.set_ylabel('Count')
    context.log_artifact(PlotArtifact('myfig', body=fig))
# nuclio: end-code
# marks the end of a code section
```
## Import MLRUN, and run the data collection and training locally
```
from mlrun import new_function, code_to_function, NewTask, mount_v3io, new_model_server, mlconf, get_run_db
# for local DB path use 'User/mlrun' instead
mlconf.dbpath = 'http://mlrun-db:8080'
```
### Generate the iris dataset and store in a CSV
```
df_path = '/User/mlrun/df.csv'
gen = new_function().run(name='iris_gen', handler=iris_generator, params={'target': df_path})
```
### Define a training task with Hyper parameters (GridSearch) and run locally
```
# create a task and test our function locally with multiple parameters
parameters = {
"eta": [0.05, 0.10, 0.20],
"max_depth": [3, 4, 6, 8, 10],
"gamma": [0.0, 0.1, 0.3],
}
task = NewTask(handler=xgb_train, out_path='/User/mlrun/data', inputs={'dataset': df_path}).with_hyper_params(parameters, 'max.accuracy')
run = new_function().run(task)
```
## Deploy the XGB function to Nuclio (with parallelism), and run remotely
```
# create the function from the notebook code + annotations, add volumes and parallel HTTP trigger
xgbfn = code_to_function('xgb', runtime='nuclio:mlrun')
xgbfn.add_volume('User','~/').with_http(workers=16).with_v3io()
# deploy the function to the cluster
xgbfn.deploy(project='iris')
nrun = xgbfn.run(task, handler='xgb_train')
```
## Create a multi-stage KubeFlow Pipeline from our functions
* Load Iris dataset into a CSV
* Train a model using XGBoost with Hyper-parameter
* Deploy the model using Nuclio-serving
* Generate a plot of the training results
```
import kfp
from kfp import dsl
artifacts_path = 'v3io:///users/admin/mlrun/kfp/{{workflow.uid}}/'
@dsl.pipeline(
name='My XGBoost training pipeline',
description='Shows how to use mlrun.'
)
def xgb_pipeline(
    eta=[0.1, 0.2, 0.3], gamma=[0.0, 0.1, 0.2, 0.3]
):
    ingest = xgbfn.as_step(name='ingest_iris', handler='iris_generator',
                           params={'target': df_path},
                           outputs=['iris_dataset'], out_path=artifacts_path).apply(mount_v3io())
    train = xgbfn.as_step(name='xgb_train', handler='xgb_train',
                          hyperparams={'eta': eta, 'gamma': gamma},
                          selector='max.accuracy',
                          inputs={'dataset': ingest.outputs['iris_dataset']},
                          outputs=['model'], out_path=artifacts_path).apply(mount_v3io())
    plot = xgbfn.as_step(name='plot', handler='plot_iter',
                         inputs={'iterations': train.outputs['iteration_results']},
                         outputs=['iris_dataset'], out_path=artifacts_path).apply(mount_v3io())
    # define a nuclio-serving function, generated from a notebook file
    srvfn = new_model_server('iris-serving', model_class='XGBoostModel', filename='nuclio_serving.ipynb')
    # deploy the model serving function with inputs from the training stage
    deploy = srvfn.with_v3io('User', '~/').deploy_step(project='iris', models={'iris_v1': train.outputs['model']})
```
### Create a KubeFlow client and submit the pipeline with parameters
```
# for debug generate the pipeline dsl
#kfp.compiler.Compiler().compile(xgb_pipeline, 'mlrunpipe.yaml')
client = kfp.Client(namespace='default-tenant')
arguments = {'eta': [0.05, 0.10, 0.30], 'gamma': [0.0, 0.1, 0.2, 0.3]}
run_result = client.create_run_from_pipeline_func(xgb_pipeline, arguments, run_name='xgb 1', experiment_name='xgb')
# connect to the run db
db = get_run_db().connect()
# query the DB with filter on workflow ID (only show this workflow)
db.list_runs('', labels=f'workflow={run_result.run_id}').show()
# use this to suppress the XGBoost FutureWarning
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
```
# KNN - Accuracy estimation
```
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn import metrics
pd.options.mode.chained_assignment = None
# Load dataset
df = pd.read_excel('HCCI CFR data.xlsx', sheet_name = 'Data Compiled', index_col=0)
target = df['Output']
features = df[df.columns[0:9]]
# Define search space
n_neighbors = [2,3,5,6,7,9,10,11,13,14,15,17,18,19]
# Setup the grid to be searched over
param_grid = dict(n_neighbors=n_neighbors)
# Define outer folds
kFolds = KFold(n_splits=10, shuffle=True, random_state=1).split(X=features.values, y=target.values)
# Define inner folds
grid_search = GridSearchCV(KNeighborsClassifier(weights='uniform'), param_grid, cv=KFold(n_splits=10, shuffle=True, random_state=1),
n_jobs=19, verbose=1, scoring='precision_micro')
# Open results file and write out headers
out_file = open("grid_search_KNN.csv", 'w')
wr = csv.writer(out_file, dialect='excel')
headers = ['neighbours', 'micro_precision']
wr.writerow(headers)
out_file.flush()
KNN_file = open("KNN_results.csv", 'w',newline='')
wrr = csv.writer(KNN_file, dialect='excel')
headers = ['Actual', 'Predicted']
wrr.writerow(headers)
KNN_file.flush()
for index_train, index_test in kFolds:
    # Get train and test splits
    x_train, x_test = features.iloc[index_train].values, features.iloc[index_test].values
    y_train, y_test = target.iloc[index_train].values, target.iloc[index_test].values
    # Apply min-max normalization (fit on the training split only)
    scaler = MinMaxScaler().fit(x_train)
    x_train = scaler.transform(x_train)
    x_test = scaler.transform(x_test)
    # Inner-loop grid search on the training split
    grid_search.fit(x_train, y_train)
    # Get best params
    best_params = grid_search.best_params_
    # Testing
    knn = KNeighborsClassifier(n_neighbors=best_params['n_neighbors'], weights='uniform')
    knn.fit(x_train, y_train)
    Y_pred = knn.predict(x_test)
    print("precision:", metrics.precision_score(y_test, Y_pred, average='micro'))
    # Write results
    row = [best_params['n_neighbors'], metrics.precision_score(y_test, Y_pred, average='micro')]
    wr.writerow(row)
    out_file.flush()
    # Write per-sample predictions
    for i in range(len(y_test)):
        row = (y_test[i], Y_pred[i])
        wrr.writerow(row)
    KNN_file.flush()
out_file.close()
KNN_file.close()
```
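Both scripts in this notebook score with `precision_score(..., average='micro')`. For single-label multi-class predictions, micro-averaging sums true positives and false positives over all classes, so every prediction is counted exactly once and the score reduces to plain accuracy. A dependency-free sketch of that identity (the label arrays are made-up examples):

```python
# Micro-averaged precision for single-label, multi-class predictions:
# summed TP = number of correct predictions, summed TP + FP = total predictions,
# so the ratio is just the fraction of correct predictions (accuracy).
def micro_precision(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
print(micro_precision(y_true, y_pred))  # 4 correct out of 6
```

This is also why swapping the argument order in the micro-averaged case does not change the number, although `precision_score(y_true, y_pred, ...)` is the documented order.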
# SVM - Accuracy estimation
```
import numpy as np
import pandas as pd
import csv
import pickle
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn import metrics
pd.options.mode.chained_assignment = None
# Load dataset
df = pd.read_excel('HCCI CFR data.xlsx', sheet_name = 'Data Compiled', index_col=0)
target = df['Output']
# features = df[df.columns[0:7]]
#for sensitivity analysis remove a feature and run this block -- repeat this for all the seven features
features = df[['RON','S','Fuel rate','O2','Intake Temperature','Intake Pressure','Compression ratio']]
# Define search space
C = [1, 10, 100, 1000, 10000]
# Setup the grid to be searched over
param_grid = dict(C=C)
####Test sets accuracy
# Define outer folds
kFolds = KFold(n_splits=10, shuffle=True, random_state=1).split(X=features.values, y=target.values)
# Define inner folds
grid_search = GridSearchCV(SVC(gamma='auto', kernel = 'rbf'), param_grid, cv=KFold(n_splits=10, shuffle=True, random_state=1),
n_jobs=19, verbose=1, scoring='precision_micro')
# Open results file and write out headers
out_file = open("grid_search_SVM.csv", 'w')
wr = csv.writer(out_file, dialect='excel')
headers = ['C', 'micro_precision']
wr.writerow(headers)
out_file.flush()
SVM_file = open("SVM_results.csv", 'w',newline='')
wrr = csv.writer(SVM_file, dialect='excel')
headers = ['Actual', 'Predicted']
wrr.writerow(headers)
SVM_file.flush()
for index_train, index_test in kFolds:
    # Get train and test splits
    x_train, x_test = features.iloc[index_train].values, features.iloc[index_test].values
    y_train, y_test = target.iloc[index_train].values, target.iloc[index_test].values
    # Apply min-max normalization (fit on the training split only)
    scaler = MinMaxScaler().fit(x_train)
    x_train = scaler.transform(x_train)
    x_test = scaler.transform(x_test)
    # Inner-loop grid search on the training split
    grid_search.fit(x_train, y_train)
    # Get best params
    best_params = grid_search.best_params_
    # Testing
    svm = SVC(gamma='auto', kernel='rbf', C=best_params['C'])
    svm.fit(x_train, y_train)
    Y_pred = svm.predict(x_test)
    print("precision:", metrics.precision_score(y_test, Y_pred, average='micro'))
    # Write results
    row = [best_params['C'], metrics.precision_score(y_test, Y_pred, average='micro')]
    wr.writerow(row)
    out_file.flush()
    # Write per-sample predictions
    for i in range(len(y_test)):
        row = (y_test[i], Y_pred[i])
        wrr.writerow(row)
    SVM_file.flush()
out_file.close()
SVM_file.close()
```
# SVM - Final Model
```
import numpy as np
import pandas as pd
import csv
import pickle
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn import metrics
pd.options.mode.chained_assignment = None
# Load dataset
df = pd.read_excel('HCCI CFR data.xlsx', sheet_name = 'Data Compiled', index_col=0)
target = df['Output']
# features = df[df.columns[0:9]]
features = df[['RON','S','Fuel rate','O2','Intake Temperature','Intake Pressure','Compression ratio']]
# features = df[['RON','S','Fuel rate','O2','Intake Temperature','Intake Pressure','Compression ratio', 'M.W', 'LHV(KJ/kg)']]
# Define search space
C = [1, 10, 100, 1000, 10000]
# Setup the grid to be searched over
param_grid = dict(C=C)
#####Save Final Model
# Define grid search
grid_search = GridSearchCV(SVC(kernel='rbf', gamma='auto'), param_grid, cv=KFold(n_splits=10, shuffle=True, random_state=1),
n_jobs=19, verbose=1, scoring='precision_micro')
# Split data in to features and target
x_train = features.values
y_train = target.values
# Apply min max normalization
scaler = MinMaxScaler().fit(x_train)
x_train = scaler.transform(x_train)
# Find best parameters
grid_search.fit(x_train, y_train)
print(grid_search.best_params_)
print(grid_search.best_score_)
# Retrain model with best parameters found from grid search
best_params = grid_search.best_params_
model = SVC(kernel='rbf', gamma='auto', C=best_params['C'])
model.fit(x_train, y_train)
# save the model
filename = 'final_SVR_model.sav'
pickle.dump(model, open(filename, 'wb'))
```
# Data for contour diagrams
```
import numpy as np
import pandas as pd
import pickle
import csv
# Load SVR model
filename = 'final_SVR_model.sav'
model = pickle.load(open(filename, 'rb'))
out_file = open("Counter_diagram_data.csv", 'w')
wr = csv.writer(out_file, dialect='excel', lineterminator = '\n')
headers = ['RON','S','Fuel rate','O2','Intake Temperature','Intake Pressure','CR','Output']
wr.writerow(headers)
out_file.flush()
# Sweep normalized RON (i) and fuel rate (j) for each combination of
# intake pressure and compression ratio used in the contour plots
for pressure in [0, 0.2]:
    for cr in [0, 0.3333, 0.6666, 1]:
        for i in range(0, 101):
            for j in range(0, 101):
                inp = [[i/100, 0.34, j/100, 1, 0.3333, pressure, cr]]
                res = model.predict(inp)
                # Write results
                row = inp[0] + [res[0]]
                wr.writerow(row)
        out_file.flush()
out_file.close()
```
# [ NLP DAY 2 ] :
```
import nltk #natural language tool kit
```
1. Preprocessing:
    a. segmentation
    b. tokenization
    c. POS tagging (parts of speech)
2. Word level processing:
    a. wordnet
    b. lemmatization
    c. stemming
    d. NGrams
3. Utilities:
    a. Tree
    b. FreqDist
    c. ConditionalFreqDist
    d. Streaming CorpusReaders
4. Classification:
    a. Maximum Entropy
    b. Naive Bayes
    c. Decision Tree
5. Chunking
6. Named Entity Recognition
7. Parsers Galore
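Stemming and lemmatization from the outline above can be previewed with a short sketch. The `PorterStemmer` needs no extra data downloads; `WordNetLemmatizer` would additionally require `nltk.download('wordnet')`, so it is only mentioned in the comments here:

```python
from nltk.stem import PorterStemmer

# The Porter stemmer strips suffixes by rule; results need not be dictionary words.
# (WordNetLemmatizer maps words to dictionary lemmas instead, but needs the 'wordnet' corpus.)
stemmer = PorterStemmer()
words = ["running", "studies", "wolves", "awesome"]
print([stemmer.stem(w) for w in words])  # e.g. ['run', 'studi', 'wolv', ...]
```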
# Corpora from nltk
```
for name in dir(nltk.corpus):
    # print only the public, lowercase corpus names
    if name.islower() and not name.startswith('_'):
        print(name)
len(dir(nltk))
```
# Changing the Jupyter notebook setting to print all intermediate variables:
```
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
a,b = 10,20
a
b
```
### Here the notebook prints both `a` and `b`; with the default setting it would display only `b`, the last expression in the cell
# Text files which the nltk corpus provides by default:
```
nltk.corpus.gutenberg.fileids()
nltk.corpus.shakespeare.fileids()
```
# Create variables to read the default corpus text files:
### 1. Get the individual words of the text in a list format:
```
nltk.corpus.gutenberg.words("shakespeare-hamlet.txt")
```
### 2. Create the variable:
```
hamlet = nltk.text.Text(nltk.corpus.gutenberg.words("shakespeare-hamlet.txt"))
hamlet
```
# Searching :
Word-Search: "king"
Word-Limit: 55
Line-Limit: 10
```
hamlet.concordance("king",55,10)
hamlet.concordance("king",1,1)
hamlet.concordance("king",55,1)
len(" elfe Bar . Long liue the King Fran . Barnardo ? Bar . ")
```
# [NLP DAY 3]
```
import nltk
import os
dir(nltk)
len(dir(nltk))
nltk.corpus.gutenberg.fileids()
```
# 1. Finds the zip files of corpora:
```
print(os.listdir(nltk.data.find("corpora")))
```
# Get the text file in a variable:
```
hamlet = nltk.corpus.gutenberg.words("shakespeare-hamlet.txt")
hamlet
```
# Get the first 500 words:
```
for i in hamlet[:500]:
    print(i, sep=" ", end=" ")
for i in hamlet[:10]:
    print(i, sep=" ", end=" ")
ai = "How are you Hello I am awesome bro How are you I am straight up not having a good time bro"
type(ai)
```
# Topic: Tokenization:
## word_tokenize
```
from nltk.tokenize import word_tokenize
ai_tokens = word_tokenize(ai)
ai_tokens
len(hamlet)
type(hamlet)
```
# Check the frequency of the words in the tokens:
```
from nltk.probability import FreqDist
fdist = FreqDist()
for i in ai_tokens:
    fdist[i.lower()] += 1
fdist
type(fdist)
```
## Indexing and Slicing:
```
fdist["bro"]
```
# Check : The most common words in the object text file
```
fdist_top2 = fdist.most_common(2)
fdist_top2
ai = "Bro help bro \n no bro this is hopeless \n bro be optimistic bro \n ok bro"
ai_tokens = word_tokenize(ai)
fdist2 = FreqDist()
for i in ai_tokens:
    fdist2[i.lower()] += 1
fdist2
fdist2_top2 = fdist2.most_common(2)
fdist2_top2
```
# Check the number of paragraphs:
```
from nltk.tokenize import blankline_tokenize
ai_blank = blankline_tokenize(ai)
len(ai_blank)
```
# TOPIC: Bigrams, Trigrams, ngrams
## USES: chatbots, search engines, etc
Google uses the n-grams concept in its search engine
```
from nltk.util import bigrams,trigrams,ngrams
string = "The Tragedie of Hamlet by William Shakespeare 1599 Actus Primus . Scoena Prima "
quotes_token = nltk.word_tokenize(string)
quotes_token
```
## Bigrams:
```
quotes_bigrams = nltk.bigrams(quotes_token)
bi = list(quotes_bigrams)
bi
```
## Trigrams:
```
quotes_trigrams = nltk.trigrams(quotes_token)
tri = list(quotes_trigrams)
tri
```
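The generic `ngrams` helper generalizes both of the above to any window size `n`. A small sketch, tokenizing with a plain `split()` so no extra downloads are needed:

```python
from nltk.util import ngrams

# ngrams(sequence, n) yields sliding windows of n consecutive tokens
tokens = "The Tragedie of Hamlet by William Shakespeare".split()
four_grams = list(ngrams(tokens, 4))
print(four_grams[0])   # ('The', 'Tragedie', 'of', 'Hamlet')
print(len(four_grams)) # 7 tokens -> 4 windows of size 4
```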
# Randomized Benchmarking
## Contents
1. [Introduction](#intro)
2. [The Randomized Benchmarking Protocol](#protocol)
3. [The Intuition Behind RB](#intuition)
4. [Simultaneous Randomized Benchmarking](#simultaneousrb)
5. [Predicted Gate Fidelity](#predicted-gate-fidelity)
6. [References](#references)
## 1. Introduction <a id='intro'></a>
One of the main challenges in building a quantum information processor is the non-scalability of completely
characterizing the noise affecting a quantum system via process tomography. In addition, process tomography is sensitive to noise in the pre- and post-rotation gates plus the measurements (state preparation and measurement, or SPAM, errors). Gateset tomography can take these errors into account, but the scaling is even worse. A complete characterization
of the noise is useful because it allows for the determination of good error-correction schemes, and thus
the possibility of reliable transmission of quantum information.
Since complete process tomography is infeasible for large systems, there is growing interest in scalable
methods for partially characterizing the noise affecting a quantum system. A scalable (in the number $n$ of qubits comprising the system) and robust algorithm for benchmarking the full set of Clifford gates by a single parameter using randomization techniques was presented in [1]. The concept of using randomization methods for benchmarking quantum gates is commonly called **Randomized Benchmarking
(RB)**.
## 2. The Randomized Benchmarking Protocol <a id='protocol'></a>
We should first import the relevant qiskit classes for the demonstration:
```
# Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
# Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
# Import Qiskit classes
import qiskit
from qiskit import assemble, transpile
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
```
A RB protocol (see [1,2]) consists of the following steps:
### Step 1: Generate RB sequences
The RB sequences consist of random Clifford elements chosen uniformly from the Clifford group on $n$-qubits,
including a computed reversal element,
that should return the qubits to the initial state.
More precisely, for each length $m$, we choose $K_m$ RB sequences.
Each such sequence contains $m$ random elements $C_{i_j}$ chosen uniformly from the Clifford group on $n$-qubits, and the $(m+1)$-th element is defined as follows: $C_{i_{m+1}} = (C_{i_1}\cdot ... \cdot C_{i_m})^{-1}$. It can be found efficiently by the Gottesman-Knill theorem.
For example, we generate below several sequences of 2-qubit Clifford circuits.
```
# Generate RB circuits (2Q RB)
# number of qubits
nQ = 2
rb_opts = {}
#Number of Cliffords in the sequence
rb_opts['length_vector'] = [1, 10, 20, 50, 75, 100, 125, 150, 175, 200]
# Number of seeds (random sequences)
rb_opts['nseeds'] = 5
# Default pattern
rb_opts['rb_pattern'] = [[0, 1]]
rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
```
As an example, we print the circuit corresponding to the first RB sequence
```
rb_circs[0][0].draw()
```
One can verify that the Unitary representing each RB circuit should be the identity (with a global phase).
We simulate this using Aer unitary simulator.
```
# Create a new circuit without the measurement
qregs = rb_circs[0][-1].qregs
cregs = rb_circs[0][-1].cregs
qc = qiskit.QuantumCircuit(*qregs, *cregs)
for i in rb_circs[0][-1][0:-nQ]:
    qc.data.append(i)
# The Unitary is an identity (with a global phase)
sim = qiskit.Aer.get_backend('aer_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
qc.save_unitary()
unitary = sim.run(qc).result().get_unitary()
from qiskit.visualization import array_to_latex
array_to_latex(unitary, prefix="\\text{Unitary} = ")
```
### Step 2: Execute the RB sequences (with some noise)
We can execute the RB sequences either using Qiskit Aer Simulator (with some noise model) or using IBMQ provider, and obtain a list of results.
By assumption each operation $C_{i_j}$ is allowed to have some error, represented by $\Lambda_{i_j,j}$, and each sequence can be modeled by the operation:
$$\textit{S}_{\textbf{i}_\textbf{m}} = \bigcirc_{j=1}^{m+1} (\Lambda_{i_j,j} \circ C_{i_j})$$
where ${\textbf{i}_\textbf{m}} = (i_1,...,i_m)$ and $i_{m+1}$ is uniquely determined by ${\textbf{i}_\textbf{m}}$.
```
# Run on a noisy simulator
noise_model = NoiseModel()
# Depolarizing error on the gates u2, u3 and cx (assuming the u1 is virtual-Z gate and no error)
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2 * p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
backend = qiskit.Aer.get_backend('aer_simulator')
```
### Step 3: Get statistics about the survival probabilities
For each of the $K_m$ sequences the survival probability $Tr[E_\psi \textit{S}_{\textbf{i}_\textbf{m}}(\rho_\psi)]$
is measured.
Here $\rho_\psi$ is the initial state taking into account preparation errors and $E_\psi$ is the
POVM element that takes into account measurement errors.
In the ideal (noise-free) case $\rho_\psi = E_\psi = | \psi {\rangle} {\langle} \psi |$.
In practice one can measure the probability to go back to the exact initial state, i.e. all the qubits in the ground state $ {|} 00...0 {\rangle}$ or just the probability for one of the qubits to return back to the ground state. Measuring the qubits independently can be more convenient if a correlated measurement scheme is not possible. Both measurements will fit to the same decay parameter according to the properties of the *twirl*.
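The ground-state survival probability described above can be sketched in plain Python; the helper below is illustrative (not part of the original tutorial) and is applied to made-up counts:

```python
# Estimate the survival probability Tr[E_psi S(rho_psi)] from a measured
# counts dictionary: the fraction of shots in which all qubits returned
# to the all-zeros state |00...0>.
def survival_probability(counts, n_qubits):
    """Fraction of shots that returned to the ground state."""
    shots = sum(counts.values())
    ground = '0' * n_qubits
    return counts.get(ground, 0) / shots

# Illustrative 2-qubit counts for one RB sequence (200 shots)
counts = {'00': 180, '01': 8, '10': 7, '11': 5}
print(survival_probability(counts, 2))  # 0.9
```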
### Step 4: Find the averaged sequence fidelity
Average over the $K_m$ random realizations of the sequence to find the averaged sequence **fidelity**,
$$F_{seq}(m,|\psi{\rangle}) = Tr[E_\psi \textit{S}_{K_m}(\rho_\psi)]$$
where
$$\textit{S}_{K_m} = \frac{1}{K_m} \sum_{\textbf{i}_\textbf{m}} \textit{S}_{\textbf{i}_\textbf{m}}$$
is the average sequence operation.
### Step 5: Fit the results
Repeat Steps 1 through 4 for different values of $m$ and fit the results for the averaged sequence fidelity to the model:
$$ \textit{F}_{seq}^{(0)} \big(m,{|}\psi {\rangle} \big) = A_0 \alpha^m +B_0$$
where $A_0$ and $B_0$ absorb state preparation and measurement errors as well as an edge effect from the
error on the final gate.
$\alpha$ determines the average error-rate $r$, which is also called **Error per Clifford (EPC)**
according to the relation
$$ r = 1-\alpha-\frac{1-\alpha}{2^n} = \frac{2^n-1}{2^n}(1-\alpha)$$
(where $n=nQ$ is the number of qubits).
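As a quick numeric check of this relation (the helper and the value of $\alpha$ are illustrative, not taken from the fit below):

```python
# Evaluate r = (2**n - 1)/2**n * (1 - alpha) for an illustrative
# depolarizing parameter alpha on n = 2 qubits.
def error_per_clifford(alpha, n):
    d = 2 ** n
    return (d - 1) / d * (1 - alpha)

print(error_per_clifford(0.99, 2))  # approximately 0.0075
```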
As an example, we calculate the average sequence fidelity for each of the RB sequences, fit the results to the exponential curve, and compute the parameters $\alpha$ and EPC.
```
# Create the RB fitter
backend = qiskit.Aer.get_backend('aer_simulator')
basis_gates = ['u1','u2','u3','cx']
shots = 200
transpiled_circs_list = []
rb_fit = rb.RBFitter(None, xdata, rb_opts['rb_pattern'])
for rb_seed, rb_circ_seed in enumerate(rb_circs):
    print(f'Compiling seed {rb_seed}')
    new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
    transpiled_circs_list.append(new_rb_circ_seed)
    print(f'Simulating seed {rb_seed}')
    qobj = assemble(new_rb_circ_seed, shots=shots)
    job = backend.run(qobj,
                      noise_model=noise_model,
                      max_parallel_experiments=0)
    # Add data to the fitter
    rb_fit.add_data(job.result())
    print('After seed %d, alpha: %f, EPC: %f' % (rb_seed, rb_fit.fit[0]['params'][1], rb_fit.fit[0]['epc']))
```
### Extra Step: Plot the results
```
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rb_fit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(nQ), fontsize=18)
plt.show()
```
## 3. The Intuition Behind RB <a id='intuition'></a>
The depolarizing quantum channel with parameter $\alpha$ acts as follows: with probability $\alpha$ the state is left unchanged, and with probability $1-\alpha$ it is replaced by the totally mixed state:
$$\rho_f = \alpha \rho_i + \frac{1-\alpha}{2^n} \mathbf{I}$$
Suppose that we have a sequence of $m$ gates, not necessarily Clifford gates,
where the error channel of the gates is a depolarizing channel with parameter $\alpha$
(same $\alpha$ for all the gates).
Then with probability $\alpha^m$ the state is correct at the end of the sequence,
and with probability $1-\alpha^m$ it becomes the totally mixed state, therefore:
$$\rho_f^m = \alpha^m \rho_i + \frac{1-\alpha^m}{2^n} \mathbf{I}$$
Now suppose that in addition we start with the ground state;
that the entire sequence amounts to the identity;
and that we measure the state at the end of the sequence with the standard basis.
We derive that the probability of success at the end of the sequence is:
$$\alpha^m + \frac{1-\alpha^m}{2^n} = \frac{2^n-1}{2^n}\alpha^m + \frac{1}{2^n} = A_0\alpha^m + B_0$$
It follows that the probability of success, aka fidelity, decays exponentially with the sequence length, with exponent $\alpha$.
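The derivation above can be verified numerically. This sketch (with illustrative values of $n$ and $\alpha$) checks that the direct success probability and the $A_0\alpha^m + B_0$ form agree for every sequence length:

```python
import numpy as np

# Success probability alpha**m + (1 - alpha**m)/2**n rewritten as
# A0 * alpha**m + B0 with A0 = (2**n - 1)/2**n and B0 = 1/2**n.
n, alpha = 2, 0.98          # illustrative values
m = np.arange(1, 101)       # sequence lengths
p_direct = alpha**m + (1 - alpha**m) / 2**n
A0, B0 = (2**n - 1) / 2**n, 1 / 2**n
p_model = A0 * alpha**m + B0
assert np.allclose(p_direct, p_model)
```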
The last statement is not necessarily true for channels other than the depolarizing channel. However, it turns out that if the gates are uniformly random Clifford gates, then the noise of each gate behaves on average as if it were a depolarizing channel, with a parameter that can be computed from the channel, and we again obtain the exponential decay of the fidelity.
Formally, averaging a quantum channel $\bar \Lambda$ over a finite group $G$ (such as the Clifford group) is called a *twirl*:
$$ W_G(\bar \Lambda) = \frac{1}{|G|} \sum_{U \in G} U^{\dagger} \circ \bar \Lambda \circ U$$
Twirling over the Clifford group yields exactly the same result as twirling over the entire unitary group, because the Clifford group is a *2-design* of the unitary group.
## 4. Simultaneous Randomized Benchmarking <a id='simultaneousrb'></a>
RB is designed to address fidelities in multiqubit systems in two ways. For one, RB over the full $n$-qubit space
can be performed by constructing sequences from the $n$-qubit Clifford group. Additionally, the $n$-qubit space
can be subdivided into sets of qubits $\{n_i\}$ and $n_i$-qubit RB performed in each subset simultaneously [4].
Both methods give metrics of fidelity in the $n$-qubit space.
For example, it is common to perform 2Q RB on the two qubits defining a CNOT gate while the other qubits are quiescent. As explained in [4], this RB data will not necessarily decay exponentially because the other qubit subspaces are not twirled. Subsets are more rigorously characterized by simultaneous RB, which also measures some level of crosstalk error since all qubits are active.
An example of simultaneous RB (1Q RB and 2Q RB) can be found in:
https://github.com/Qiskit/qiskit-tutorials/blob/master/tutorials/noise/4_randomized_benchmarking.ipynb
## 5. Predicted Gate Fidelity <a id='predicted-gate-fidelity'></a>
If we know the errors on the underlying gates (the gateset), we can predict the EPC without running an RB experiment. This calculation verifies that the RB experiment, followed by fitting, yields the correct EPC value. First we need to count the number of each basis gate per Clifford.
Then the two-qubit Clifford error function ``calculate_2q_epc`` gives the error per 2Q Clifford. It assumes that the error in the underlying gates is depolarizing. This function is derived in the supplement to [5].
```
# count the number of single and 2Q gates in the 2Q Cliffords
qubits = rb_opts['rb_pattern'][0]
gate_per_cliff = rb.rb_utils.gates_per_clifford(
transpiled_circuits_list=transpiled_circs_list,
clifford_lengths=xdata[0],
basis=basis_gates,
qubits=qubits)
for basis_gate in basis_gates:
    print("Number of %s gates per Clifford: %f" % (
        basis_gate,
        np.mean([gate_per_cliff[qubit][basis_gate] for qubit in qubits])))
# convert from depolarizing error to epg (1Q)
epg_q0 = {'u1': 0, 'u2': p1Q/2, 'u3': 2 * p1Q/2}
epg_q1 = {'u1': 0, 'u2': p1Q/2, 'u3': 2 * p1Q/2}
# convert from depolarizing error to epg (2Q)
epg_q01 = 3/4 * p2Q
# calculate the predicted epc from underlying gate errors
pred_epc = rb.rb_utils.calculate_2q_epc(
gate_per_cliff=gate_per_cliff,
epg_2q=epg_q01,
qubit_pair=qubits,
list_epgs_1q=[epg_q0, epg_q1])
print("Predicted 2Q Error per Clifford: %e (qasm simulator result: %e)" % (pred_epc, rb_fit.fit[0]['epc']))
```
Conversely, we can estimate the errors on the underlying gates (the gateset) from the experimentally obtained EPC. Given the errors on every single-qubit gate in the RB sequence, we can predict the 2Q gate error from the EPC of a two-qubit RB experiment.
The two-qubit gate error function ``calculate_2q_epg`` gives an estimate of the error per 2Q gate. In this section we prepare the single-qubit errors using the depolarizing error model. If the error model is unknown, the EPGs of those gates, for example [``u1``, ``u2``, ``u3``], can be estimated with a separate 1Q RB experiment using the utility function ``calculate_1q_epg``.
```
# use 2Q EPC from qasm simulator result and 1Q EPGs from depolarizing error model
pred_epg = rb.rb_utils.calculate_2q_epg(
gate_per_cliff=gate_per_cliff,
epc_2q=rb_fit.fit[0]['epc'],
qubit_pair=qubits,
list_epgs_1q=[epg_q0, epg_q1])
print("Predicted 2Q Error per gate: %e (gate error model: %e)" % (pred_epg, epg_q01))
```
## 6. References <a id='references'></a>
1. Easwar Magesan, J. M. Gambetta, and Joseph Emerson, *Robust randomized benchmarking of quantum processes*,
https://arxiv.org/pdf/1009.3639
2. Easwar Magesan, Jay M. Gambetta, and Joseph Emerson, *Characterizing Quantum Gates via Randomized Benchmarking*,
https://arxiv.org/pdf/1109.6887
3. A. D. Córcoles, Jay M. Gambetta, Jerry M. Chow, John A. Smolin, Matthew Ware, J. D. Strand, B. L. T. Plourde, and M. Steffen, *Process verification of two-qubit quantum gates by randomized benchmarking*, https://arxiv.org/pdf/1210.7011
4. Jay M. Gambetta, A. D. Córcoles, S. T. Merkel, B. R. Johnson, John A. Smolin, Jerry M. Chow,
Colm A. Ryan, Chad Rigetti, S. Poletto, Thomas A. Ohki, Mark B. Ketchen, and M. Steffen,
*Characterization of addressability by simultaneous randomized benchmarking*, https://arxiv.org/pdf/1204.6308
5. David C. McKay, Sarah Sheldon, John A. Smolin, Jerry M. Chow, and Jay M. Gambetta, *Three Qubit Randomized Benchmarking*, https://arxiv.org/pdf/1712.06550
```
import qiskit.tools.jupyter
%qiskit_version_table
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
## R50-FPN-3x
```
test_results = {'AP': 30.183924421787978,
'AP-Bathtub': 53.39773724295068,
'AP-Bed': 52.754509322305175,
'AP-Billiard table': 67.45412317841321,
'AP-Ceiling fan': 23.310566361650434,
'AP-Coffeemaker': 51.375885917263,
'AP-Couch': 25.679265150881648,
'AP-Countertop': 11.176039237004812,
'AP-Dishwasher': 0.0,
'AP-Fireplace': 23.42211115504908,
'AP-Fountain': 38.38767930113601,
'AP-Gas stove': 12.07564382601228,
'AP-Jacuzzi': 0.0,
'AP-Kitchen & dining room table': 13.75287501981837,
'AP-Microwave oven': 40.84644399475737,
'AP-Mirror': 35.00769838477773,
'AP-Oven': 25.496395102413526,
'AP-Pillow': 20.544141258990937,
'AP-Porch': 4.3426715044379485,
'AP-Refrigerator': 55.40458899535102,
'AP-Shower': 9.888904481361353,
'AP-Sink': 19.07946534850723,
'AP-Sofa bed': 50.07526874397189,
'AP-Stairs': 22.491392352723764,
'AP-Swimming pool': 72.9971033812601,
'AP-Television': 60.842008749251285,
'AP-Toilet': 48.78555603130734,
'AP-Towel': 27.24336336476925,
'AP-Tree house': 0.0,
'AP-Washing machine': 28.857248056840657,
'AP-Wine rack': 10.829047190433332,
'AP50': 48.5871640675183,
'AP75': 32.5724886725335,
'APl': 30.740800398603614,
'APm': 4.45571372903077,
'APs': 0.0}
val_results = {'AP': 29.035,
'AP-Bathtub': 24.578,
'AP-Bed': 51.695,
'AP-Billiard table': 66.619,
'AP-Ceiling fan': 33.442,
'AP-Coffeemaker': 38.939,
'AP-Couch': 28.291,
'AP-Countertop': 14.857,
'AP-Dishwasher': 0.0,
'AP-Fireplace': 27.116,
'AP-Fountain': 43.165,
'AP-Gas stove': 15.449,
'AP-Jacuzzi': 0.0,
'AP-Kitchen & dining room table': 12.335,
'AP-Microwave oven': 34.291,
'AP-Mirror': 40.240,
'AP-Oven': 19.135,
'AP-Pillow': 18.432,
'AP-Porch': 6.291,
'AP-Refrigerator': 65.017,
'AP-Shower': 0.356,
'AP-Sink': 30.484,
'AP-Sofa bed': 47.172,
'AP-Stairs': 22.639,
'AP-Swimming pool': 66.461,
'AP-Television': 62.537,
'AP-Toilet': 40.841,
'AP-Towel': 21.892,
'AP-Tree house': 0.0,
'AP-Washing machine': 26.884,
'AP-Wine rack': 11.881,
'AP50': 47.212,
'AP75': 29.823,
'APl': 29.521,
'APm': 8.174,
'APs': 0.0}
```
## R101-FPN-3x
```
test_results = {'AP': 28.766909850927302,
'AP-Bathtub': 52.8782402573615,
'AP-Bed': 50.70832689983663,
'AP-Billiard table': 66.13464702811605,
'AP-Ceiling fan': 27.428841247258944,
'AP-Coffeemaker': 47.91662006356297,
'AP-Couch': 24.70202737120508,
'AP-Countertop': 13.138098151057076,
'AP-Dishwasher': 0.0,
'AP-Fireplace': 24.470721462337696,
'AP-Fountain': 36.17654043036692,
'AP-Gas stove': 12.488702772678968,
'AP-Jacuzzi': 0.0,
'AP-Kitchen & dining room table': 14.653719514085903,
'AP-Microwave oven': 40.50083273011718,
'AP-Mirror': 38.41435891370926,
'AP-Oven': 20.873738830025143,
'AP-Pillow': 20.05872752064416,
'AP-Porch': 3.4498669435768674,
'AP-Refrigerator': 58.293186422096944,
'AP-Shower': 7.062300368552515,
'AP-Sink': 18.582545225580418,
'AP-Sofa bed': 45.59211340725856,
'AP-Stairs': 23.649185867447596,
'AP-Swimming pool': 66.10703995176956,
'AP-Television': 51.687249913558084,
'AP-Toilet': 55.12556639625279,
'AP-Towel': 15.76906387979253,
'AP-Tree house': 0.0,
'AP-Washing machine': 20.827402196393372,
'AP-Wine rack': 6.317631763176316,
'AP50': 46.61545312066519,
'AP75': 31.415690542064546,
'APl': 29.406371855761982,
'APm': 5.212065007401241,
'APs': 0.0}
val_results = {'AP': 29.029,
'AP-Bathtub': 19.976,
'AP-Bed': 48.495,
'AP-Billiard table': 61.895,
'AP-Ceiling fan': 26.736,
'AP-Coffeemaker': 38.939,
'AP-Couch': 21.837,
'AP-Countertop': 16.060,
'AP-Dishwasher': 0.0,
'AP-Fireplace': 29.547,
'AP-Fountain': 44.420,
'AP-Gas stove': 20.592,
'AP-Jacuzzi': 0.0,
'AP-Kitchen & dining room table': 10.273,
'AP-Microwave oven': 40.946,
'AP-Mirror': 41.844,
'AP-Oven': 24.474,
'AP-Pillow': 14.924,
'AP-Porch': 10.364,
'AP-Refrigerator': 71.404,
'AP-Shower': 1.814,
'AP-Sink': 23.286,
'AP-Sofa bed': 36.370,
'AP-Stairs': 25.209,
'AP-Swimming pool': 64.806,
'AP-Television': 58.349,
'AP-Toilet': 57.587,
'AP-Towel': 18.631,
'AP-Tree house': 0.0,
'AP-Washing machine': 30.103,
'AP-Wine rack': 14.851,
'AP50': 46.871,
'AP75': 29.029,
'APl': 29.522,
'APm': 7.737,
'APs': 0.0}
```
## RN-50-3x
```
test_results = {'AP': 38.492384449355676,
'AP-Bathtub': 60.115406677858815,
'AP-Bed': 57.83057924335149,
'AP-Billiard table': 75.51729046140261,
'AP-Ceiling fan': 45.18986833756713,
'AP-Coffeemaker': 51.970384613792795,
'AP-Couch': 23.824464499510597,
'AP-Countertop': 7.5333804114591345,
'AP-Dishwasher': 0.2611187892244602,
'AP-Fireplace': 42.060241857955674,
'AP-Fountain': 44.873704286774775,
'AP-Gas stove': 16.439547201431257,
'AP-Jacuzzi': 15.70258672546233,
'AP-Kitchen & dining room table': 11.599905718477451,
'AP-Microwave oven': 45.02085736870218,
'AP-Mirror': 43.812744467203615,
'AP-Oven': 42.11172757937511,
'AP-Pillow': 27.273736611804427,
'AP-Porch': 5.555847755821968,
'AP-Refrigerator': 75.04729718629045,
'AP-Shower': 10.573880914087361,
'AP-Sink': 26.0970404108146,
'AP-Sofa bed': 53.07206951762474,
'AP-Stairs': 30.971772175630193,
'AP-Swimming pool': 74.96574053773193,
'AP-Television': 65.65506318778338,
'AP-Toilet': 56.7543593099767,
'AP-Towel': 33.37522247440722,
'AP-Tree house': 15.550969305550783,
'AP-Washing machine': 39.793207405905434,
'AP-Wine rack': 56.221518447691665,
'AP50': 56.20747908075276,
'AP75': 40.936661522293235,
'APl': 39.232677787456346,
'APm': 7.138823519721613,
'APs': 0.0}
val_results = {'AP': 39.412,
'AP-Bathtub': 17.3,
'AP-Bed': 57.507,
'AP-Billiard table': 69.881,
'AP-Ceiling fan': 74.183,
'AP-Coffeemaker': 54.816,
'AP-Couch': 26.903,
'AP-Countertop': 7.952,
'AP-Dishwasher': 3.922,
'AP-Fireplace': 35.853,
'AP-Fountain': 44.115,
'AP-Gas stove': 22.093,
'AP-Jacuzzi': 12.610,
'AP-Kitchen & dining room table': 10.115,
'AP-Microwave oven': 52.319,
'AP-Mirror': 56.331,
'AP-Oven': 40.307,
'AP-Pillow': 23.396,
'AP-Porch': 12.546,
'AP-Refrigerator': 70.455,
'AP-Shower': 2.472,
'AP-Sink': 33.583,
'AP-Sofa bed': 55.140,
'AP-Stairs': 28.852,
'AP-Swimming pool': 71.250,
'AP-Television': 63.350,
'AP-Toilet': 53.052,
'AP-Towel': 42.210,
'AP-Tree house': 46.378,
'AP-Washing machine': 39.166,
'AP-Wine rack': 54.297,
'AP50': 58.921,
'AP75': 45.521,
'APl': 39.918,
'APm': 11.587,
'APs': 0.0}
```
## RN-101-3x
```
test_results = {'AP': 38.629005503971094,
'AP-Bathtub': 57.88445796297448,
'AP-Bed': 59.37857224325405,
'AP-Billiard table': 76.94003410603575,
'AP-Ceiling fan': 43.559413116751685,
'AP-Coffeemaker': 48.989599570274635,
'AP-Couch': 28.408037999550057,
'AP-Countertop': 9.371518046833943,
'AP-Dishwasher': 0.3057563835101482,
'AP-Fireplace': 33.22536116260668,
'AP-Fountain': 51.67562280533134,
'AP-Gas stove': 12.330097602716778,
'AP-Jacuzzi': 12.350546301821613,
'AP-Kitchen & dining room table': 18.68710022505696,
'AP-Microwave oven': 53.34352009331587,
'AP-Mirror': 42.89079611781959,
'AP-Oven': 36.861902837366436,
'AP-Pillow': 27.396564938073226,
'AP-Porch': 5.103006680850462,
'AP-Refrigerator': 76.09240465640137,
'AP-Shower': 13.597115846657411,
'AP-Sink': 21.370478428246212,
'AP-Sofa bed': 52.38685348876891,
'AP-Stairs': 25.14933002641884,
'AP-Swimming pool': 78.33067628089339,
'AP-Television': 66.9673571851024,
'AP-Toilet': 59.34648419020555,
'AP-Towel': 30.715535811238592,
'AP-Tree house': 16.56763331964142,
'AP-Washing machine': 49.20082443485014,
'AP-Wine rack': 50.44356325656495,
'AP50': 56.89461825223869,
'AP75': 40.42469217370782,
'APl': 39.27304034450038,
'APm': 8.877459691321537,
'APs': 0.0}
val_results = {'AP': 38.994,
'AP-Bathtub': 21.001,
'AP-Bed': 58.507,
'AP-Billiard table': 74.140,
'AP-Ceiling fan': 58.704,
'AP-Coffeemaker': 56.840,
'AP-Couch': 31.043,
'AP-Countertop': 11.130,
'AP-Dishwasher': 1.226,
'AP-Fireplace': 38.402,
'AP-Fountain': 52.095,
'AP-Gas stove': 18.639,
'AP-Jacuzzi': 5.965,
'AP-Kitchen & dining room table': 12.709,
'AP-Microwave oven': 58.799,
'AP-Mirror': 49.530,
'AP-Oven': 32.037,
'AP-Pillow': 18.782,
'AP-Porch': 16.070,
'AP-Refrigerator': 79.243,
'AP-Shower': 2.042,
'AP-Sink': 27.710,
'AP-Sofa bed': 47.664,
'AP-Stairs': 25.455,
'AP-Swimming pool': 73.408,
'AP-Television': 69.934,
'AP-Toilet': 49.344,
'AP-Towel': 48.058,
'AP-Tree house': 39.197,
'AP-Washing machine': 46.957,
'AP-Wine rack': 45.206,
'AP50': 58.644,
'AP75': 41.617,
'APl': 39.375,
'APm': 11.715,
'APs': 0.0}
```
## Plotting
```
result = pd.DataFrame([test_results], index=[0])
result = result.T
result.columns = ['result']
result.sort_values('result',inplace=True, ascending = True)
result.head()
plotted = []
for i in result.index:
    if i not in ['APl', 'APm', 'APs', 'AP50', 'AP75']:
        plotted.append(i)
plt.figure(figsize=(10,8))
chart = plt.barh([i.split('-')[1] if i not in ['AP'] else i for i in plotted], result.loc[plotted,'result'])
chart[plotted.index('AP')].set_color('orange')
plt.title('Average Precision by Class for RN-101-3x (Test Evaluation)')
plt.xlabel('Average Precision')
plt.ylabel('Classes')
plt.show()
```
```
# default_exp datasets
#export
from fastai.text import *
from tse.preprocessing import *
from tse.tokenizers import *
```
### Prepare Data Inputs for Q/A
For each training example, the following inputs are needed:
`input_ids`, `attention_mask`, `token_type_ids`, `offsets`, `answer_text`, `start_tok_idx`, `end_tok_idx`
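Purely as an illustration of these fields (all values below are made up, not real tokenizer output), one training example has the shape:

```python
# Hypothetical example for the context "I love it" with answer "love it";
# ids and offsets are invented for illustration only.
example = {
    "input_ids": [0, 7974, 2, 2, 100, 657, 24, 2],   # made-up token ids
    "attention_mask": [1, 1, 1, 1, 1, 1, 1, 1],
    "token_type_ids": [0, 0, 0, 0, 0, 0, 0, 0],
    "offsets": [(0, 0), (0, 8), (0, 0), (0, 0), (0, 1), (1, 6), (6, 9), (0, 0)],
    "answer_text": "love it",
    "start_tok_idx": 5,   # first context token is at index 4
    "end_tok_idx": 6,
}
print(example["answer_text"])  # love it
```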
Preprocess
```
train_df = pd.read_csv("../data/train.csv").dropna().reset_index(drop=True)
test_df = pd.read_csv("../data/test.csv")
strip_text(train_df, "text")
strip_text(train_df, "selected_text")
strip_text(test_df, "text")
replace_whitespace(train_df, "text")
replace_whitespace(train_df, "selected_text")
replace_whitespace(test_df, "text")
replace_URLs(train_df, "text")
replace_URLs(train_df, "selected_text")
replace_URLs(test_df, "text")
replace_user(train_df, "text")
replace_user(train_df, "selected_text")
replace_user(test_df, "text")
is_wrong = train_df.apply(lambda o: is_wrong_selection(o['text'], o['selected_text']), 1)
train_df = train_df[~is_wrong].reset_index(drop=True)
list(train_df['text'])
train_df.shape
```
Tokenizer
```
tokenizer = init_roberta_tokenizer("../roberta-base/vocab.json", "../roberta-base/merges.txt", max_length=192)
train_df.head()
#export
def get_start_end_idxs(context, answer):
    "Get string start and end char for answer span"
    len_a = len(answer)
    for i, _ in enumerate(context):
        if context[i:i+len_a] == answer:
            start_idx, end_idx = i, i+len_a-1
            return start_idx, end_idx
    raise Exception("No overlapping segment found")
#export
def get_start_end_tok_idxs(offsets, start_idx, end_idx):
    "Generate target from tokens - first 4 tokens belong to question"
    start_tok_idx, end_tok_idx = None, None
    for tok_idx, off in enumerate(offsets[4:]):
        if (off[0] <= start_idx) & (off[1] > start_idx): start_tok_idx = tok_idx + 4
        if (off[0] <= end_idx) & (off[1] > end_idx): end_tok_idx = tok_idx + 4
    return (start_tok_idx, end_tok_idx)
trn_stxt, trn_txt, trn_sent = train_df.selected_text.values, train_df.text.values, train_df.sentiment
test_txt, test_sent = test_df.text.values, test_df.sentiment.values
train_tok_input = list(tuple(zip(trn_sent, trn_txt)))
test_tok_input = list(tuple(zip(test_sent, test_txt)))
# encode batch
train_outputs = tokenizer.encode_batch(train_tok_input)
test_outputs = tokenizer.encode_batch(test_tok_input)
start_end_idxs = [get_start_end_idxs(s1,s2) for (s1,s2) in zip(trn_txt, trn_stxt)]
#export
class QAInputGenerator:
    def __init__(self, contexts, questions, text_ids=None, answers=None, tokenizer=None):
        self.contexts, self.questions, self.answers = contexts, questions, answers
        self.outputs = tokenizer.encode_batch(list(tuple(zip(questions, contexts))))
        self.text_ids = text_ids  # may be None; checked in __getitem__
        if self.answers is not None:
            self.start_end_idxs = [get_start_end_idxs(s1,s2) for (s1,s2) in zip(self.contexts, self.answers)]

    @classmethod
    def from_df(cls, df,
                ctx_col='text', q_col='sentiment', id_col='textID', ans_col='selected_text',
                is_test=False, tokenizer=None):
        contexts = df[ctx_col].values
        questions = df[q_col].values
        text_ids = None if id_col is None else df[id_col].values
        answers = None if is_test else df[ans_col].values
        return cls(contexts, questions, text_ids, answers, tokenizer)

    def __getitem__(self, i):
        input_ids = array(self.outputs[i].ids)
        attention_mask = array(self.outputs[i].attention_mask)
        offsets = array(self.outputs[i].offsets)
        tokens = array(self.outputs[i].tokens)
        res = {"input_ids": input_ids, "attention_mask": attention_mask, "offsets": offsets,
               "tokens": tokens, "context_text": self.contexts[i]}
        if self.answers is not None:
            answer_text = self.answers[i]
            start_tok_idx, end_tok_idx = get_start_end_tok_idxs(offsets, *self.start_end_idxs[i])
            res["answer_text"] = answer_text
            res["start_end_tok_idxs"] = (start_tok_idx, end_tok_idx)
        if self.text_ids is not None:
            res["text_id"] = self.text_ids[i]
        return res

    def __len__(self): return len(self.contexts)
train_inputs = QAInputGenerator.from_df(train_df, tokenizer=tokenizer)
test_inputs = QAInputGenerator.from_df(test_df, is_test=True, tokenizer=tokenizer)
i = np.random.choice(range(len(train_inputs)))
print(train_inputs[i].keys())
print(train_inputs[i]['tokens'][train_inputs[i]['start_end_tok_idxs'][0]:train_inputs[i]['start_end_tok_idxs'][1]+1])
print(train_inputs[i]['answer_text'])
i = np.random.choice(range(len(test_inputs)))
print(test_inputs[i].keys())
print(test_inputs[i]['tokens'][test_inputs[i]['attention_mask'].astype(bool)])
train_inputs = list(train_inputs)
test_inputs = list(test_inputs)
len(train_inputs), len(test_inputs)
```
### TSEDataAugmentor
#### 1) Random Left - Right Truncate
```
-> tok3 anstok anstok anstok tok7 (rand left and right idxs)
-> tok3 anstok anstok anstok tok7 tok8 (rand left idx)
-> Tok1 tok2 tok3 anstok anstok anstok tok7 (rand right idx)
```
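The idea can be sketched without tensors; this illustrative helper (plain Python lists, not the implementation used below) drops random context to the right of the answer span while always preserving the answer:

```python
import random

def random_right_truncate(tokens, ans_end):
    """Drop tokens after a random cut point to the right of the answer span."""
    if ans_end >= len(tokens) - 1:
        return tokens  # no context to the right, nothing to truncate
    cut = random.randrange(ans_end, len(tokens))
    return tokens[:cut + 1]

tokens = ["tok1", "tok2", "tok3", "anstok", "anstok", "anstok", "tok7", "tok8"]
print(random_right_truncate(tokens, ans_end=5))
```

Left truncation works symmetrically, except the answer indices must then be shifted left by the number of dropped tokens.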
#### 2) Random Mask
```
-> Tok1 tok2 <MASK> anstok anstok anstok tok7 <MASK>
-> Tok1 tok2 <UNK> anstok anstok anstok tok7 <UNK>
```
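Similarly, a framework-free sketch of random masking (an illustrative helper, not the implementation below) replaces a fraction of non-answer positions with a mask token:

```python
import random

def random_mask(tokens, protected, mask_p=0.2, mask_tok="<MASK>"):
    """Replace a fraction of non-answer token positions with a mask token."""
    candidates = [i for i in range(len(tokens)) if i not in protected]
    n_mask = int(len(candidates) * mask_p)
    for i in random.sample(candidates, n_mask):
        tokens[i] = mask_tok
    return tokens

toks = ["Tok1", "tok2", "tok3", "anstok", "anstok", "anstok", "tok7", "tok8"]
print(random_mask(toks, protected={3, 4, 5}, mask_p=0.4))
```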
#### 3) Replace with pseudolabel
```
#export
class TSEDataAugmentor:
    def __init__(self, tokenizer, input_ids, attention_mask, start_position, end_position):
        self.tokenizer = tokenizer
        self.input_ids = input_ids
        self.attention_mask = attention_mask
        # initial answer start and end positions
        self.ans_start_pos, self.ans_end_pos = start_position.item(), end_position.item()
        # context token start and end excluding bos - eos tokens
        self.context_start_pos = 4
        self.context_end_pos = torch.where(attention_mask)[0][-1].item() - 1

    # left and right indexes excluding answer tokens and eos token
    @property
    def left_idxs(self): return np.arange(self.context_start_pos, self.ans_start_pos)

    @property
    def right_idxs(self): return np.arange(self.ans_end_pos+1, self.context_end_pos+1)

    @property
    def left_right_idxs(self): return np.concatenate([self.left_idxs, self.right_idxs])

    @property
    def rand_left_idx(self): return np.random.choice(self.left_idxs) if self.left_idxs.size > 0 else None

    @property
    def rand_right_idx(self): return np.random.choice(self.right_idxs) if self.right_idxs.size > 0 else None

    def right_truncate(self, right_idx):
        """
        Truncate context from random right index to beginning, answer pos doesn't change
        Note: token_type_ids NotImplemented
        """
        if right_idx is None: raise ValueError("Right index can't be None")
        # clone for debugging
        new_input_ids = self.input_ids.clone()
        nopad_input_ids = new_input_ids[self.attention_mask.bool()]
        # truncate from right idx to beginning - add eos_token_id to end
        truncated = torch.cat([nopad_input_ids[:right_idx+1], tensor([self.tokenizer.eos_token_id])])
        # pad new context until sizes are equal
        # replace original input context with new
        n_pad = len(nopad_input_ids) - len(truncated)
        new_context = F.pad(truncated, (0,n_pad), value=self.tokenizer.pad_token_id)
        new_input_ids[:self.context_end_pos+2] = new_context
        # find new attention mask, update new context end position (exclude eos token)
        # Note: context start doesn't change since we don't manipulate question
        new_attention_mask = tensor([1 if i != self.tokenizer.pad_token_id else 0 for i in new_input_ids])
        new_context_end_pos = torch.where(new_attention_mask)[0][-1].item() - 1
        self.context_end_pos = new_context_end_pos
        # update input_ids and attention_masks
        self.input_ids = new_input_ids
        self.attention_mask = new_attention_mask
        return self.input_ids, self.attention_mask, (tensor(self.ans_start_pos), tensor(self.ans_end_pos))

    def random_right_truncate(self):
        right_idx = self.rand_right_idx
        if right_idx is not None: self.right_truncate(right_idx)

    def left_truncate(self, left_idx):
        """
        Truncate context from random left index to end, answer pos changes too
        Note: token_type_ids NotImplemented
        """
        if left_idx is None: raise ValueError("Left index can't be None")
        # clone for debugging
        new_input_ids = self.input_ids.clone()
        # pad new context until sizes are equal
        # replace original input context with new
        n_pad = len(new_input_ids[self.context_start_pos:]) - len(new_input_ids[left_idx:])
        new_context = F.pad(new_input_ids[left_idx:], (0,n_pad), value=self.tokenizer.pad_token_id)
        new_input_ids[self.context_start_pos:] = new_context
        # find new attention mask, update new context end position (exclude eos token)
        # Note: context start doesn't change since we don't manipulate question
        new_attention_mask = tensor([1 if i != self.tokenizer.pad_token_id else 0 for i in new_input_ids])
        new_context_end_pos = torch.where(new_attention_mask)[0][-1].item() - 1
        self.context_end_pos = new_context_end_pos
        # shift answer start and end positions left by the number of dropped tokens
        ans_shift = left_idx - self.context_start_pos
        self.ans_start_pos, self.ans_end_pos = self.ans_start_pos-ans_shift, self.ans_end_pos-ans_shift
        # update input_ids and attention_masks
        self.input_ids = new_input_ids
        self.attention_mask = new_attention_mask
        return self.input_ids, self.attention_mask, (tensor(self.ans_start_pos), tensor(self.ans_end_pos))

    def random_left_truncate(self):
        left_idx = self.rand_left_idx
        if left_idx is not None: self.left_truncate(left_idx)

    def replace_with_mask(self, idxs_to_mask):
        """
        Replace given input ids with tokenizer.mask_token_id
        """
        # clone for debugging
        new_input_ids = self.input_ids.clone()
        new_input_ids[idxs_to_mask] = tensor([self.tokenizer.mask_token_id]*len(idxs_to_mask))
        self.input_ids = new_input_ids

    def random_replace_with_mask(self, mask_p=0.2):
        """
        mask_p: Proportion of tokens to replace with mask token id
        """
        idxs_to_mask = np.random.choice(self.left_right_idxs, int(len(self.left_right_idxs)*mask_p))
        if idxs_to_mask.size > 0: self.replace_with_mask(idxs_to_mask)
i = np.random.choice(range(len(train_inputs)))
input_ids = tensor(train_inputs[i]['input_ids'])
attention_mask = tensor(train_inputs[i]['attention_mask'])
start_position, end_position = train_inputs[i]['start_end_tok_idxs']
start_position, end_position = tensor(start_position), tensor(end_position)
answer_text = train_inputs[i]['answer_text']
context_text = train_inputs[i]['context_text']
offsets = train_inputs[i]['offsets']
input_ids[attention_mask.bool()]
start_position, end_position
answer_text, context_text, start_position.item(), end_position.item()
" ".join([tokenizer.id_to_token(o) for o in input_ids[attention_mask.bool()]])
" ".join([tokenizer.id_to_token(o) for o in input_ids[start_position.item(): end_position.item()+1]])
char_start = min(np.concatenate([offsets[start_position.item()], offsets[end_position.item()]]))
char_end = max(np.concatenate([offsets[start_position.item()], offsets[end_position.item()]]))
context_text[char_start:char_end]
def convert_ids_to_tokens(toks):
return [tokenizer.id_to_token(o) for o in toks]
tokenizer.convert_ids_to_tokens = convert_ids_to_tokens
```
### demo right truncate
```
da = TSEDataAugmentor(tokenizer, input_ids, attention_mask, start_position, end_position)
da.random_right_truncate()
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.attention_mask.bool()])))
print()
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.ans_start_pos :da.ans_end_pos+1])))
```
### demo left truncate
```
da = TSEDataAugmentor(tokenizer, input_ids, attention_mask, start_position, end_position)
da.random_left_truncate()
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.attention_mask.bool()])))
print()
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.ans_start_pos :da.ans_end_pos+1])))
da.ans_start_pos, da.ans_end_pos
```
### demo replace with mask
```
da = TSEDataAugmentor(tokenizer, input_ids, attention_mask, start_position, end_position)
da.random_replace_with_mask(0.2)
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.attention_mask.bool()])))
print()
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.ans_start_pos :da.ans_end_pos+1])))
da.left_idxs, da.right_idxs
```
### demo all
```
da = TSEDataAugmentor(tokenizer, input_ids, attention_mask, start_position, end_position)
da.random_left_truncate()
da.random_right_truncate()
da.random_replace_with_mask(0.3)
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.attention_mask.bool()])))
print()
print(" ".join(tokenizer.convert_ids_to_tokens(da.input_ids[da.ans_start_pos :da.ans_end_pos+1])))
```
### TSEDataset
```
#export
do_tfms = {}
do_tfms["random_left_truncate"] = {"p":.3}
do_tfms["random_right_truncate"] = {"p":.3}
do_tfms["random_replace_with_mask"] = {"p":.3, "mask_p":0.2}
do_tfms["random_replace_with_pseudo"] = {"p":.3}
do_tfms
pseudo_df = pd.read_csv("../data/pseudo_labels/pseudo_labelled_sample.csv")
pseudo_df = pseudo_df[['ids', 'text', 'target', 'predicted_answer']]
pseudo_df.head()
pseudo_df.shape
#export
class TSEDataset(Dataset):
def __init__(self, inputs, tokenizer=None, is_test=False, do_tfms:Dict=None, pseudo_inputs=None):
# eval
self.inputs = inputs
# augmentation
self.is_test = is_test
self.tokenizer = tokenizer
self.do_tfms = do_tfms
self.pseudo_inputs = pseudo_inputs
if self.pseudo_inputs: self.pseudo_idxs = list(range(len(self.pseudo_inputs)))
def __getitem__(self, i):
'fastai requires (xb, yb) to be returned'
input_ids = tensor(self.inputs[i]['input_ids'])
attention_mask = tensor(self.inputs[i]['attention_mask'])
if not self.is_test:
start_position, end_position = self.inputs[i]['start_end_tok_idxs']
start_position, end_position = tensor(start_position), tensor(end_position)
if self.do_tfms:
if self.pseudo_inputs and (np.random.uniform() < self.do_tfms["random_replace_with_pseudo"]["p"]):
rand_idx = np.random.choice(self.pseudo_idxs)
input_ids = tensor(self.pseudo_inputs[rand_idx]['input_ids'])
attention_mask = tensor(self.pseudo_inputs[rand_idx]['attention_mask'])
start_position, end_position = self.pseudo_inputs[rand_idx]['start_end_tok_idxs']
start_position, end_position = tensor(start_position), tensor(end_position)
else:
augmentor = TSEDataAugmentor(self.tokenizer,
input_ids,
attention_mask,
start_position, end_position)
if np.random.uniform() < self.do_tfms["random_left_truncate"]["p"]:
augmentor.random_left_truncate()
if np.random.uniform() < self.do_tfms["random_right_truncate"]["p"]:
augmentor.random_right_truncate()
if np.random.uniform() < self.do_tfms["random_replace_with_mask"]["p"]:
augmentor.random_replace_with_mask(self.do_tfms["random_replace_with_mask"]["mask_p"])
input_ids = augmentor.input_ids
attention_mask = augmentor.attention_mask
start_position, end_position = tensor(augmentor.ans_start_pos), tensor(augmentor.ans_end_pos)
xb = (input_ids, attention_mask)
if not self.is_test: yb = (start_position, end_position)
else: yb = (0,0)
return xb, yb
def __len__(self): return len(self.inputs)
#export
do_tfms = {}
do_tfms["random_left_truncate"] = {"p":.3}
do_tfms["random_right_truncate"] = {"p":.3}
do_tfms["random_replace_with_mask"] = {"p":.3, "mask_p":0.2}
do_tfms["random_replace_with_pseudo"] = {"p":1.}
do_tfms
pseudo_inputs = QAInputGenerator.from_df(pseudo_df,
tokenizer=tokenizer,
q_col='target', id_col='ids', ans_col='predicted_answer')
len(pseudo_inputs)
train_ds = TSEDataset(train_inputs, tokenizer, is_test=False, do_tfms=do_tfms, pseudo_inputs=pseudo_inputs)
test_ds = TSEDataset(test_inputs, tokenizer, is_test=True, do_tfms=None)
do_tfms
(input_ids, att_masks), (start_idx, end_idx) = train_ds[0]
" ".join(tokenizer.convert_ids_to_tokens(input_ids[att_masks.bool()]))
" ".join(tokenizer.convert_ids_to_tokens(input_ids[att_masks.bool()][start_idx:end_idx+1]))
# ### `predict_answer_text`
# TODO: Migrate to proper notebook
# #export
# def predict_answer_text(start_logits, end_logits, attention_mask,
# context_text, char_to_word_offset, token_to_orig_map):
# "Find best answer from context"
# # find best start and end
# context_start, context_end = min(token_to_orig_map), max(token_to_orig_map)
# truncated_start_logits = start_logits[attention_mask.bool()][context_start:context_end+1]
# truncated_end_logits = end_logits[attention_mask.bool()][context_start:context_end+1]
# best_start_idx, best_end_idx = find_best_start_end_idxs(truncated_start_logits, truncated_end_logits)
# # generate answer
# tok_orig_char_start = token_to_orig_map[best_start_idx+context_start]
# tok_orig_char_end = token_to_orig_map[best_end_idx+context_start]
# return answer_from_orig_context(context_text, char_to_word_offset, tok_orig_char_start, tok_orig_char_end)
# predict_answer_text(start_logits, end_logits, attention_mask,
# context_text, char_to_word_offset, token_to_orig_map)
```
### export
```
from nbdev.export import notebook2script
notebook2script()
```
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Feature Sets
**Learning Objective:** Create a minimal set of features that performs just as well as a more complex feature set.
So far, we've thrown all of our features into the model. Models with fewer features use fewer resources and are easier to maintain. Let's see if we can build a model on a minimal set of housing features that will perform as well as one that uses all the features in the data set.
## Setup
As before, let's load and prepare the California housing data.
```
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print "Training examples summary:"
display.display(training_examples.describe())
print "Validation examples summary:"
display.display(validation_examples.describe())
print "Training targets summary:"
display.display(training_targets.describe())
print "Validation targets summary:"
display.display(validation_targets.describe())
```
## Task 1: Develop a Good Feature Set
**What's the best performance you can get with just 2 or 3 features?**
A **correlation matrix** shows pairwise correlations, both for each feature compared to the target and for each feature compared to the others.
Here, correlation is defined as the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient). You don't have to understand the mathematical details for this exercise.
Correlation values have the following meanings:
* `-1.0`: perfect negative correlation
* `0.0`: no correlation
* `1.0`: perfect positive correlation
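As a tiny illustration of these values on synthetic data (not the housing set):

```python
import pandas as pd

df = pd.DataFrame({
    "x": [1.0, 2.0, 3.0, 4.0],
    "y": [2.0, 4.0, 6.0, 8.0],   # y = 2x: perfect positive correlation with x
    "z": [4.0, 3.0, 2.0, 1.0],   # z decreases as x increases: perfect negative
})
corr = df.corr()  # Pearson by default
# corr.loc["x", "y"] is 1.0 and corr.loc["x", "z"] is -1.0
```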
```
correlation_dataframe = training_examples.copy()
correlation_dataframe["target"] = training_targets["median_house_value"]
correlation_dataframe.corr()
```
Ideally, we'd like to have features that are strongly correlated with the target.
We'd also like to have features that aren't so strongly correlated with each other, so that each adds independent information.
Use this information to try removing features. You can also try developing additional synthetic features, such as ratios of two raw features.
For convenience, we've included the training code from the previous exercise.
```
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model of one feature.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=construct_feature_columns(training_examples),
optimizer=my_optimizer
)
# Create input functions
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print "Training model..."
print "RMSE (on training data):"
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print " period %02d : %0.2f" % (period, training_root_mean_squared_error)
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print "Model training finished."
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
return linear_regressor
```
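As a reminder, the RMSE reported by `train_model` is just the square root of the mean squared error:

```python
import math
from sklearn.metrics import mean_squared_error

targets = [1.0, 2.0, 4.0]
preds = [1.0, 2.0, 3.0]
rmse = math.sqrt(mean_squared_error(targets, preds))
# one error of 1.0 over three examples: rmse == sqrt(1/3), roughly 0.577
```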
Spend 5 minutes searching for a good set of features and training parameters. Then check the solution to see what we chose. Don't forget that different features may require different learning parameters.
```
#
# Your code here: add your features of choice as a list of quoted strings.
#
minimal_features = [
]
assert minimal_features, "You must select at least one feature!"
minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]
#
# Don't forget to adjust these parameters.
#
train_model(
learning_rate=0.001,
steps=500,
batch_size=5,
training_examples=minimal_training_examples,
training_targets=training_targets,
validation_examples=minimal_validation_examples,
validation_targets=validation_targets)
```
### Solution
Click below for a solution.
```
minimal_features = [
"median_income",
"latitude",
]
minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]
_ = train_model(
learning_rate=0.01,
steps=500,
batch_size=5,
training_examples=minimal_training_examples,
training_targets=training_targets,
validation_examples=minimal_validation_examples,
validation_targets=validation_targets)
```
## Task 2: Make Better Use of Latitude
Plotting `latitude` vs. `median_house_value` shows that there really isn't a linear relationship there.
Instead, there are a couple of peaks, which roughly correspond to Los Angeles and San Francisco.
```
plt.scatter(training_examples["latitude"], training_targets["median_house_value"])
```
**Try creating some synthetic features that do a better job with latitude.**
For example, you could have a feature that maps `latitude` to a value of `|latitude - 38|`, and call this `distance_from_san_francisco`.
Or you could break the space into 10 different buckets. Features such as `latitude_32_to_33`, `latitude_33_to_34`, etc., would show a value of `1.0` if `latitude` is within that bucket range and a value of `0.0` otherwise.
Use the correlation matrix to help guide development, and then add the new features to your model if you find something that seems to work.
What's the best validation performance you can get?
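Before coding, here's a minimal sketch of the one-hot bucketing idea on a few synthetic latitudes (the solution below does the equivalent with `Series.apply` over the real data):

```python
import pandas as pd

latitudes = pd.Series([32.5, 33.7, 34.2])
buckets = pd.DataFrame()
for lo, hi in zip(range(32, 35), range(33, 36)):
    buckets["latitude_%d_to_%d" % (lo, hi)] = latitudes.apply(
        lambda l, lo=lo, hi=hi: 1.0 if lo <= l < hi else 0.0)
# every latitude falls in exactly one bucket, so each row has a single 1.0
```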
```
#
# YOUR CODE HERE: Train on a new data set that includes synthetic features based on latitude.
#
```
### Solution
Click below for a solution.
Aside from `latitude`, we'll also keep `median_income`, to compare with previous results.
We decided to bucketize the latitude. This is fairly straightforward with `Series.apply` in Pandas.
```
LATITUDE_RANGES = list(zip(range(32, 44), range(33, 45)))  # list() so the ranges survive repeated iteration in Python 3
def select_and_transform_features(source_df):
selected_examples = pd.DataFrame()
selected_examples["median_income"] = source_df["median_income"]
for r in LATITUDE_RANGES:
selected_examples["latitude_%d_to_%d" % r] = source_df["latitude"].apply(
lambda l: 1.0 if l >= r[0] and l < r[1] else 0.0)
return selected_examples
selected_training_examples = select_and_transform_features(training_examples)
selected_validation_examples = select_and_transform_features(validation_examples)
_ = train_model(
learning_rate=0.01,
steps=500,
batch_size=5,
training_examples=selected_training_examples,
training_targets=training_targets,
validation_examples=selected_validation_examples,
validation_targets=validation_targets)
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Automated Machine Learning
_**Prepare Data using `azureml.dataprep` for Local Execution**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Results](#Results)
1. [Test](#Test)
## Introduction
In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).
Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.
In this notebook you will learn how to:
1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.
2. Pass the `Dataflow` to AutoML for a local run.
3. Pass the `Dataflow` to AutoML for a remote run.
## Setup
Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more Linux distros.
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
```
import logging
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
import azureml.dataprep as dprep
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-dataprep-local'
# project folder
project_folder = './sample_projects/automl-dataprep-local'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## Data
```
# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.
# The data referenced here is a 1MB simple random sample of the Chicago Crime data.
# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)
# and convert column types manually.
example_data = 'https://dprepdata.blob.core.windows.net/demo/crime0-random.csv'
dflow = dprep.auto_read_file(example_data).skip(1) # Remove the header row.
dflow.get_profile()
# As `Primary Type` is our y data, we need to drop rows where this column is null.
dflow = dflow.drop_nulls('Primary Type')
dflow.head(5)
```
### Review the Data Preparation Result
You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets.
`Dataflow` objects are immutable and are composed of a list of data preparation steps. A `Dataflow` object can be branched at any point for further usage.
```
X = dflow.drop_columns(columns=['Primary Type', 'FBI Code'])
y = dflow.keep_columns(columns=['Primary Type'], validate_column_exists=True)
```
## Train
This creates a general AutoML settings object applicable for both local and remote runs.
```
automl_settings = {
"iteration_timeout_minutes" : 10,
"iterations" : 2,
"primary_metric" : 'AUC_weighted',
"preprocess" : True,
"verbosity" : logging.INFO
}
```
### Pass Data with `Dataflow` Objects
The `Dataflow` objects captured above can be passed to the `submit` method for a local run. AutoML will retrieve the results from the `Dataflow` for model training.
```
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
X = X,
y = y,
**automl_settings)
local_run = experiment.submit(automl_config, show_output = True)
local_run
```
## Results
#### Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.
**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
```
from azureml.widgets import RunDetails
RunDetails(local_run).show()
```
#### Retrieve All Child Runs
You can also use SDK methods to fetch all the child runs and see individual metrics that we log.
```
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```
### Retrieve the Best Model
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
```
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
```
#### Best Model Based on Any Other Metric
Show the run and the model that has the smallest `log_loss` value:
```
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
```
#### Model from a Specific Iteration
Show the run and the model from the first iteration:
```
iteration = 0
best_run, fitted_model = local_run.get_output(iteration = iteration)
print(best_run)
print(fitted_model)
```
## Test
#### Load Test Data
The test data must go through the same preparation steps as the training data; otherwise, the preprocessing step may fail.
```
dflow_test = dprep.auto_read_file(path='https://dprepdata.blob.core.windows.net/demo/crime0-test.csv').skip(1)
dflow_test = dflow_test.drop_nulls('Primary Type')
```
#### Testing Our Best Fitted Model
We will use a confusion matrix to see how well our model performs.
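Note that `pandas_ml` is unmaintained and its import may fail with recent pandas; scikit-learn's `confusion_matrix` is a close substitute. A minimal sketch with made-up crime labels:

```python
from sklearn.metrics import confusion_matrix

y_true = ["THEFT", "BATTERY", "THEFT", "ASSAULT"]
y_pred = ["THEFT", "THEFT", "THEFT", "ASSAULT"]
labels = ["ASSAULT", "BATTERY", "THEFT"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
# rows are true labels, columns are predictions;
# the BATTERY row shows its one example mispredicted as THEFT
```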
```
from pandas_ml import ConfusionMatrix
y_test = dflow_test.keep_columns(columns=['Primary Type']).to_pandas_dataframe()
X_test = dflow_test.drop_columns(columns=['Primary Type', 'FBI Code']).to_pandas_dataframe()
ypred = fitted_model.predict(X_test)
cm = ConfusionMatrix(y_test['Primary Type'], ypred)
print(cm)
cm.plot()
```
# Introduction to AlTar/Pyre applications
### 1. Introduction
An AlTar application is based on the [pyre](https://github.com/pyre/pyre) framework. Compared with traditional Python programming, the `pyre` framework provides enhanced features for developing high performance scientific applications, including
- It introduces a new programming model based on configurable components. A configurable component can be an attribute/parameter, or a method/protocol that may have different implementations. The latter is especially helpful for swapping between different algorithms/methods for a given procedure at runtime.
- Configurable components also offer users an easy way to configure parameters and settings in an application. Passing parameters through the command line (e.g., with `argparse`) and property setters is usually a formidable task for applications with a large parameter set. In pyre, this can be done with a single `json`-type configuration file.
- An AlTar/pyre application can deploy itself automatically to different computing platforms, such as a standalone computer, GPU workstations, computer clusters, or clouds, with a simple change of the `shell` configuration, which is itself a configurable component.
- Pyre also integrates high-performance scientific libraries such as the [GNU Scientific Library](https://www.gnu.org/software/gsl/) (for linear algebra and statistics) and [CUDA](https://developer.nvidia.com/cuda-downloads) (for GPU-accelerated computing). It also offers an easy procedure for users to develop their own applications with mixed Python/C/C++/Fortran/CUDA programming, achieving both high performance and user-friendly interfaces.
In this tutorial, we will use a `Hello world!` application to demonstrate how an AlTar application, with configurable components, is constructed and runs slightly differently from conventional Python scripts.
### 2. The Hello application
We create below an application to say "Hello" to someone (attribute `who`) several times (attribute `times`).
```
# import the altar module
import altar
# create an application based on altar.application
class HelloApp(altar.application, family='altar.applications.hello'):
"""
A specialized AlTar application to say hello
"""
# user configurable components
who = altar.properties.str(default='world')
who.doc = "the person to say hello to"
times = altar.properties.int(default=1)
times.validators = altar.constraints.isPositive()
times.doc = "how many times you want to say hello"
# define methods
def main(self):
"""
The main method
"""
for i in range(self.times):
print(f"Hello {self.who}!")
# all done
return
```
The `HelloApp` application is derived from the `altar.application` base class in order to inherit various features offered by the pyre framework. It has two attributes, `who` and `times`, which are defined as configurable components. A component can be one of the basic Python data types, specified by altar.properties.[int, float, str, list, dict ...], or a user-defined component class.
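`altar` isn't needed to see the idea. Conceptually, a configurable property behaves like a Python descriptor carrying a default and validators; here's a rough plain-Python analogy (all names are hypothetical, not pyre's implementation):

```python
class Property:
    """A validated attribute with a default, loosely mimicking altar.properties."""
    def __init__(self, default, validate=None):
        self.default, self.validate = default, validate
    def __set_name__(self, owner, name):
        self.name = "_" + name
    def __get__(self, obj, objtype=None):
        return self.default if obj is None else getattr(obj, self.name, self.default)
    def __set__(self, obj, value):
        if self.validate and not self.validate(value):
            raise ValueError(f"invalid value for {self.name[1:]}: {value!r}")
        setattr(obj, self.name, value)

class HelloApp:
    who = Property(default="world")
    times = Property(default=1, validate=lambda v: v > 0)  # like isPositive()

app = HelloApp()
app.times = 3       # passes the positivity check
# app.times = -1 would raise ValueError
```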
To run the HelloApp, we create an instance with the name `'hello'`. We pass the settings of `who` and `times` through a configuration file, [hello.pfg](hello.pfg) (by default, the app instance searches for a `NAME.pfg` configuration file, where `NAME` matches the instance name):
```
; application instance name
hello:
; components configuration
who = AlTar users ; no start/end quotes for strings are needed in pfg file
times = 3
```
In a `pfg` (pyre config) configuration file, indents are used to show the hierarchy of each configurable component. An alternative is to use the dot notation in Python, e.g.,
```
; an alternative way to write configurations
hello.who = AlTar users
hello.times = 3
```
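The two spellings describe the same configuration tree. As a rough illustration of that equivalence (this is not pyre's actual parser), dotted keys expand into nesting like so:

```python
def nest(flat):
    """Expand dotted keys into a nested dict, mirroring pfg indentation."""
    root = {}
    for dotted, value in flat.items():
        node = root
        *parents, leaf = dotted.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return root

cfg = nest({"hello.who": "AlTar users", "hello.times": 3})
# → {"hello": {"who": "AlTar users", "times": 3}}
```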
```
# create a HelloApp instance with a name
helloapp = HelloApp(name='hello')
# when it is created, it searches for settings in hello.pfg to initialize configurable components
# run the instance main method
helloapp.run()
```
Once an instance is created (registered), all its components are processed into regular Python objects that you may access and modify.
```
print(f"'{helloapp.who}' is going to be changed")
helloapp.who='pyre users'
helloapp.main()
```
You may also modify the [hello.pfg](hello.pfg) file for new configurations and re-run the program. Caveat: for jupyter/ipython, you may need to restart the kernel for new settings to be accepted.
### 3. Run HelloApp from command line
AlTar/pyre applications are designed to run as regular shell applications, which offers more options for running with command-line arguments. We create a [hello.py](hello.py) script that includes the `HelloApp` class definition as well as a `__main__` section to create an instance and invoke `main()`.
```
# bootstrap
if __name__ == "__main__":
# build an instance of the default app
app = HelloApp(name="hello")
# invoke the main entry point
status = app.main()
# share
raise SystemExit(status)
```
```
# run hello app from a shell with cmdLine settings
!python3 hello.py --who="World" --times=1
```
By default, the app instance searches for the configuration file named `hello.pfg`, matching its instance name `'hello'`. It is also possible to use a different configuration file via the `--config` option.
```
; hello2.pfg
; application instance name (still need to be the same as the instance name)
hello:
; configurations
who = pyre users
times = 1
```
```
# run hello app with a specified configuration file
!python3 hello.py --config=hello2.pfg
# run hello app with both a configuration file and cmdLine settings
# pfg file settings will be overridden by the cmdLine ones
!python3 hello.py --config=hello2.pfg --times=2
```
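The override order in the last command is: built-in defaults, then the pfg file, then command-line flags, with later sources winning. A toy sketch of that precedence (not pyre's actual machinery):

```python
def effective_settings(defaults, pfg, cmdline):
    # later sources override earlier ones
    merged = dict(defaults)
    merged.update(pfg)
    merged.update(cmdline)
    return merged

cfg = effective_settings(
    defaults={"who": "world", "times": 1},
    pfg={"who": "pyre users", "times": 1},   # hello2.pfg
    cmdline={"times": 2},                    # --times=2
)
# → {"who": "pyre users", "times": 2}
```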
```
import os
import random
```
# get speech reps for all utts in lj
```
fp = "/home/s1785140/fairseq/examples/speech_audio_corrector/lj_speech_quantized.txt"
# load file contents
with open(fp, 'r') as f:
lines = f.readlines()
# return dict mapping from id to speech rep codes
ids2speechreps = {}
for l in lines:
utt_id, codes = l.split('|')
codes = codes.rstrip() # strip trailing newline char
codes = [int(s) for s in codes.split(' ')] # convert from str of ints to list of ints
ids2speechreps[utt_id] = codes
len(ids2speechreps['LJ004-0001'])
```
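The quantized-codes file parsed above stores one `utt_id|codes` pair per line; the parsing step can be exercised on a synthetic line:

```python
def parse_quantized_line(line):
    # mirrors the id|codes parsing above
    utt_id, codes = line.split("|")
    codes = codes.rstrip()  # strip trailing newline char
    return utt_id, [int(s) for s in codes.split(" ")]

utt_id, codes = parse_quantized_line("LJ001-0001|3 14 15 9\n")
# → ("LJ001-0001", [3, 14, 15, 9])
```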
# get all word alignments
```
import textgrid
from collections import Counter
def get_word_alignments(
textgrid_path,
utt_dur_from_last_word=False,
ignore_list=['<unk>'],
):
"""
extract word alignments from textgrid file corresponding to one utterance
utt_dur_from_last_word: whether to take utt_dur from the end timestamp of the last real wordtype,
or from the very last alignment in the utterance (which likely corresponds to silence)
"""
tg = textgrid.TextGrid.fromFile(textgrid_path)
words_intervaltier, _phones_intervaltier = tg
words = []
counter = Counter()
for word in words_intervaltier:
if word.mark and word.mark not in ignore_list: # if word.mark is False then it is SILENCE
counter[word.mark] += 1
words.append({
"wordtype": word.mark,
"utt_id": textgrid_path.split('/')[-1].split('.')[0],
"example_no": counter[word.mark], # the number of times we have seen this word in this utterance
"start": word.minTime,
"end": word.maxTime,
})
if utt_dur_from_last_word:
# use last real word end time as the utt_dur
utt_dur = words[-1]['end']
else:
# at this point word is the last item in words_intervaltier (most likely sil / None)
utt_dur = word.maxTime
# add utt_dur to all words
for w in words:
w["utt_dur"] = utt_dur
return words
alignment_dir = "/home/s1785140/data/ljspeech_MFA_alignments_from_fb"
count = 0
MAX_TO_PROCESS = 100
ids = list(ids2speechreps.keys())[:MAX_TO_PROCESS]
ids2word_alignments = {}
for utt_id in ids:
words_align = get_word_alignments(textgrid_path=f"{alignment_dir}/{utt_id}.TextGrid", utt_dur_from_last_word=False)
ids2word_alignments[utt_id] = words_align
# check for words that contain non alphabet chars such as <unk> token
# print(words_align)
for w in words_align:
if '<' in w['wordtype']:
# print(w, words_align)
if w['wordtype'] != '<unk>':
print("not <unk>:", w['wordtype'])
count += 1
if not w['wordtype']:
print('word is FALSE', w, words_align)
print("count is", count)
len(ids2word_alignments)
```
# create wordtype to speech aligned feats data structure
```
def get_wordlevel_reprs(speechreps, word_align):
"""
extract subsequence of 'repr' that corresponds to a particular word
function expects input to be of dimension 2: (timesteps, hidden_size)
"""
start_fraction = word_align['start'] / word_align['utt_dur']
end_fraction = word_align['end'] / word_align['utt_dur']
timesteps = len(speechreps)
start_idx = round(start_fraction * timesteps)
end_idx = round(end_fraction * timesteps)
return speechreps[start_idx:end_idx]
word2speechreps = {}
for utt_id in ids:
speech_reps = ids2speechreps[utt_id]
word_aligns = ids2word_alignments[utt_id]
for word_align in word_aligns:
word_align['speech_reps'] = get_wordlevel_reprs(speech_reps, word_align)
# following info to debug whether alignments are consistent in len
# word_align['speech_reps_len'] = len(word_align['speech_reps'])
# word_align['speech_reps_len_dur_ratio'] = word_align['speech_reps_len'] / (word_align['end']-word_align['start'])
wordtype = word_align['wordtype']
example_no = word_align['example_no']
unique_id = utt_id + '|' + str(example_no)
if wordtype not in word2speechreps:
word2speechreps[wordtype] = {}
word2speechreps[wordtype][unique_id] = word_align['speech_reps']
```
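The proportional slicing in `get_wordlevel_reprs` can be sanity-checked with made-up numbers; this standalone copy mirrors the logic above:

```python
def get_wordlevel_reprs(speechreps, word_align):
    # same proportional slicing as the function above
    start_fraction = word_align["start"] / word_align["utt_dur"]
    end_fraction = word_align["end"] / word_align["utt_dur"]
    timesteps = len(speechreps)
    return speechreps[round(start_fraction * timesteps):round(end_fraction * timesteps)]

codes = list(range(10))                              # 10 frames of codes
word = {"start": 2.0, "end": 5.0, "utt_dur": 10.0}   # word spans 20%-50% of the utterance
# → codes[2:5] == [2, 3, 4]
```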
# implement fn to get position of each word in the text seq
```
def get_mfa_text(word_align):
return " ".join(w['wordtype'] for w in word_align)
def get_mfa_text_from_utt_id(utt_id):
word_align = ids2word_alignments[utt_id]
return get_mfa_text(word_align)
tg = textgrid.TextGrid.fromFile(f"{alignment_dir}/{utt_id}.TextGrid")
words_intervaltier, _phones_intervaltier = tg
words_intervaltier
mfa_text = get_mfa_text(word_aligns)
mfa_text
def get_word_pos_2(text, whitespace_tok="_", boundary_same_pos=True, with_eos=True, boundary_pos=0):
"""
return words and their word pos
and also word pos of each grapheme in the seq
"""
graphemes = text.split(' ')
# double check that we are dealing with a seq output by bpe tokenizer
assert graphemes[0] == whitespace_tok
word_count = 0
word_and_word_pos = []
word_pos_of_graphemes = []
current_word = ""
for i, c in enumerate(graphemes):
# reached the last char of the utt
if i == len(graphemes) - 1:
current_word += c # add last char
word_and_word_pos.append((current_word, word_count)) # add last word
word_pos_of_graphemes.append(word_count)
# whitespace
elif c == whitespace_tok:
if current_word: # at a whitespace token AFTER processing at least one word
word_and_word_pos.append((current_word, word_count))
current_word = ""
if boundary_same_pos:
word_pos_of_graphemes.append(boundary_pos)
else:
word_count += 1 # because we count each whitespace_tok as a new word position
word_pos_of_graphemes.append(word_count)
# processing a grapheme in a word
else:
if graphemes[i-1] == whitespace_tok:
word_count += 1 # only increment word position if we are at the beginning of a new word, not within it
word_pos_of_graphemes.append(word_count)
current_word += c
if with_eos:
word_pos_of_graphemes.append(word_count+1)
return word_and_word_pos, word_pos_of_graphemes
words = ["how" , "are", "you"]
txt = "_"+ "_".join(words)
txt = " ".join([c for c in txt])
print(txt)
get_word_pos_2(txt, whitespace_tok="_", boundary_same_pos=True, with_eos=True, boundary_pos=0)
# create some input text for testing
words = ["how" , "are", "you"]
txt = "_"+ "_".join(words)
txt = " ".join([c for c in txt])
txt
get_word_pos_2(txt, boundary_same_pos=True, with_eos=False)
get_word_pos_2(txt, boundary_same_pos=True, with_eos=True)
get_word_pos_2(txt, boundary_same_pos=False, with_eos=False)
get_word_pos_2(txt, boundary_same_pos=False, with_eos=True)
```
# implement getting speech reps for words in an utterance
Also add ability for regularisation:
* shuffling word examples
* removing duplicates
```
def run_len_encoding(seq):
"""encode a seq using run length encoding
e.g. [1,2,2,2,2,2,3,3,3,3,3] -> [(1, 1), (2, 5), (3, 5)]
"""
encoding = []
    prev_char = None  # sentinel: the codes themselves may be falsy (e.g. the cluster id 0)
count = 1
if not seq: return []
for char in seq:
# If the prev and current characters
# don't match...
if char != prev_char:
# ...then add the count and character
# to our encoding
            if prev_char is not None:
encoding.append((prev_char, count))
count = 1
prev_char = char
else:
# Or increment our counter
# if the characters do match
count += 1
else:
# Finish off the encoding
encoding.append((prev_char, count))
return encoding
speechreps = [1,2,2,2,2,2,3,3,3,3,3]
run_len_encoding(speechreps)
def remove_dups_random(rle, min_count=1):
"""return a rle where each char's count is reduced a random amount"""
compressed_rle = []
for char, count in rle:
new_count = random.randint(min_count, count)
compressed_rle.append((char, new_count))
return compressed_rle
speechreps = [1,2,2,2,2,2,3,3,3,3,3]
rle = run_len_encoding(speechreps)
remove_dups_random(rle)
def expand_rle(rle):
"""expand an RLE back to a list"""
expanded_rle = []
for char, count in rle:
expanded_rle.extend(count*[char])
return expanded_rle
speechreps = [1,2,2,2,2,2,3,3,3,3,3]
rle = run_len_encoding(speechreps)
print("compressed", rle)
expand_rle(rle)
def collapse_dups(speechreps, remove_dup_prob, remove_dup_rand_num):
"""take a list of elements and remove duplicates
optionally do not remove all duplicates but remove a random amount
TODO add option of sometimes ADDING codes? to make neural model more robust to duration changes
"""
if remove_dup_prob > 0.0 and random.random() > (1.0 - remove_dup_prob):
rle = run_len_encoding(speechreps)
if remove_dup_rand_num:
compressed_rle = remove_dups_random(rle)
else:
            # remove all duplicates of each code (i.e. set each count to 1)
            compressed_rle = [(char, 1) for char, count in rle]
speechreps = expand_rle(compressed_rle)
return speechreps
speechreps = [1,1,1,1,1,2,2,2,2,2,3,3,3,3,3]
print("original", speechreps)
for _ in range(10):
    print(collapse_dups(speechreps, remove_dup_prob=0.5, remove_dup_rand_num=False))
def dropout_timesteps(seq, p):
"""randomly dropout timesteps seq"""
if p > 0.0 :
new_seq = []
for c in seq:
if random.random() < (1.0 - p):
new_seq.append(c)
else:
pass
return new_seq
else:
return seq
def get_speechreps_for_word(word, utt_id, count_of_word, word2speechreps, randomise,
remove_dup_prob, remove_dup_rand_num, dropout_p):
"""return the speechreps for a wordtype
optionally remove duplicates"""
unique_id = f"{utt_id}|{count_of_word}"
# get speechreps corresponding to word
if not randomise and unique_id in word2speechreps[word]:
word_reps = word2speechreps[word][unique_id]
else:
        random_unique_id = random.choice(list(word2speechreps[word].keys()))  # random.sample on dict keys fails on newer Python
word_reps = word2speechreps[word][random_unique_id]
# optionally collapse duplicate codes
word_reps = collapse_dups(word_reps, remove_dup_prob=remove_dup_prob, remove_dup_rand_num=remove_dup_rand_num)
# optionally randomly dropout codes
word_reps = dropout_timesteps(word_reps, p=dropout_p)
return word_reps
word = "the"
utt_id = "LJ033-0206"
count_of_word = 1
# unique_id = "LJ033-0206" + "|" + "1"
get_speechreps_for_word(word, utt_id, count_of_word, word2speechreps, randomise=False,
remove_dup_prob=0.0, remove_dup_rand_num=False, dropout_p=0.0)
def get_speechreps_for_utt(word_and_word_pos, utt_id, word2speechreps,
randomise_examples=False, remove_dup_prob=0.0,
remove_dup_rand_num=False, dropout_p=0.0):
"""
get speech reps for all the words in an utterance
optionally:
- randomly retrieve speech reps for different examples of the word
- remove duplicate codes
- dropout codes
"""
speechreps, speechreps_word_pos, word_counter = [], [], Counter()
for word, word_pos in word_and_word_pos:
word_counter[word] += 1
word_speechreps = get_speechreps_for_word(word, utt_id, word_counter[word], word2speechreps, randomise=randomise_examples,
remove_dup_prob=remove_dup_prob, remove_dup_rand_num=remove_dup_rand_num,
dropout_p=dropout_p)
speechreps.extend(word_speechreps)
speechreps_word_pos.extend(len(word_speechreps)*[word_pos])
# TODO add interword separator tokens
# TODO <sep> or "_" according to tgt_dict
return speechreps, speechreps_word_pos
utt_id = "LJ033-0206"
mfa_text = get_mfa_text_from_utt_id(utt_id)
print(mfa_text)
mfa_text = mfa_text.split(" ")
mfa_text = "_"+ "_".join(mfa_text)
mfa_text = " ".join([c for c in mfa_text])
print(mfa_text)
word_and_word_pos, word_pos_of_graphemes = get_word_pos_2(mfa_text, boundary_same_pos=True, with_eos=False)
print(word_and_word_pos)
print(word_pos_of_graphemes)
speechreps, speechreps_word_pos = get_speechreps_for_utt(word_and_word_pos, utt_id, word2speechreps,
randomise_examples=False, remove_dup_prob=1.0, remove_dup_rand_num=True)
print(speechreps_word_pos)
```
# helpers for dictionary encoding of speech reps
```
def prep_speechreps_for_dict_encoding(speechreps):
"""
take hubert codes (int from 0 to K-1 where K is number of k-means clusters)
return a string version suitable for dictionary encoding
"""
new_speechreps = []
for x in speechreps:
new_speechreps.append(f"HUB{x}")
return " ".join(new_speechreps)
prep_speechreps_for_dict_encoding([1,2,3,2,3,2,2,2,2,1,2,3])
```
# helpers for generating word masks
```
def two_random_partitions(indices, p=0.5):
"""given a list of indices (indicating word positions)
partition into two sets
p is probability of entering set1
"""
set1, set2 = set(), set()
for idx in indices:
if random.random() > (1.0 - p):
set1.add(idx)
else:
set2.add(idx)
return set1, set2
two_random_partitions(list(range(1,11)))
def get_word_pos(graphemes, padding_idx, bpe_whitespace_tok="▁", boundary_same_pos=True,
append_eos=False, eos_symbol = "</s>", boundary_start_pos=None):
"""
for some space delimited sequence of symbols (e.g. text)
return words and their word pos
and also word pos of each grapheme in the seq (a list of the same length,
of ints representing the words that each symbol / whitespace corresponds to)
by default the boundary start position is initiated as padding_idx + 1
and then word counts start from that value
args:
        graphemes: list of graphemes in the utterance ('▁' denotes whitespace in the original utterance)
            e.g. "▁ h o w ▁ a r e ▁ y o u" split on spaces -- this is the format returned by the sentencepiece tokeniser
e.g.
_ h o w _ a r e _ y o u
padding_idx == 1
boundary_start_pos == 2
boundary_same_pos == True
before padding:
[('how', 3), ('are', 4), ('you', 5)]
[2, 3, 3, 3, 2, 4, 4, 4, 2, 5, 5, 5, 6]
after concat with speechreps and padding (not performed in this fn, performed in SAC dataset collater):
[2, 3, 3, 3, 2, 4, 4, 4, 2, 5, 5, 5, 6, <speechreps>, 1, 1, 1, ...]
_ h o w _ a r e _ y o u
padding_idx == 1
boundary_start_pos == 2
boundary_same_pos == False
before padding:
[('how', 3), ('are', 5), ('you', 7)]
[2, 3, 3, 3, 4, 5, 5, 5, 6, 7, 7, 7, 8]
after concat with speechreps and padding (not performed in this fn, performed in SAC dataset collater):
[2, 3, 3, 3, 4, 5, 5, 5, 6, 7, 7, 7, 8, <speechreps>, 1, 1, 1, ...]
"""
# double check that we are dealing with a seq output by bpe tokenizer
assert graphemes[0] == bpe_whitespace_tok, f"graphemes == {graphemes}"
if boundary_start_pos is None:
boundary_start_pos = padding_idx + 1
if boundary_same_pos:
word_count = boundary_start_pos
else:
word_count = padding_idx
word_and_word_pos = []
word_pos_of_graphemes = []
current_word = ""
for i, c in enumerate(graphemes):
# reached the last symbol of the utt
if c == eos_symbol:
word_and_word_pos.append((current_word, word_count)) # add last word
word_pos_of_graphemes.append(word_count+1)
# whitespace
elif c == bpe_whitespace_tok:
if current_word: # at a whitespace token AFTER processing at least one word
word_and_word_pos.append((current_word, word_count))
current_word = ""
if boundary_same_pos:
word_pos_of_graphemes.append(boundary_start_pos)
else:
word_count += 1 # because we count each whitespace_tok as a new word position
word_pos_of_graphemes.append(word_count)
# processing a grapheme in a word
else:
if graphemes[i - 1] == bpe_whitespace_tok:
word_count += 1 # only increment word position if we are at the beginning of a new word, not within it
word_pos_of_graphemes.append(word_count)
current_word += c
if append_eos:
word_pos_of_graphemes.append(word_count + 1)
return word_and_word_pos, word_pos_of_graphemes
graphemes = '▁ h o w ▁ a r e ▁ y o u </s>'.split(' ')
print(graphemes)
padding_idx = 1
get_word_pos(graphemes, padding_idx=padding_idx, bpe_whitespace_tok="▁", boundary_same_pos=True,
append_eos=False, eos_symbol = "</s>")
get_word_pos(graphemes, padding_idx=padding_idx, bpe_whitespace_tok="▁", boundary_same_pos=False,
append_eos=False, eos_symbol = "</s>")
print("should be [2, 3, 3, 3, 4, 5, 5, 5, 6, 7, 7, 7, 8]")
```
# adapt sinusoidal positional embedding to take in positions as an argument rather than just building one per timestep
```
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import math
from typing import Any, Optional
import torch
import torch.onnx.operators
from fairseq import utils
from torch import Tensor, nn
class SinusoidalPositionalEmbedding(nn.Module):
"""This module produces sinusoidal positional embeddings of any length.
Padding symbols are ignored.
"""
def __init__(self, embedding_dim, padding_idx, init_size=1024):
super().__init__()
self.embedding_dim = embedding_dim
self.padding_idx = padding_idx if padding_idx is not None else 0
self.weights = SinusoidalPositionalEmbedding.get_embedding(
init_size, embedding_dim, padding_idx
)
self.onnx_trace = False
self.register_buffer("_float_tensor", torch.FloatTensor(1))
self.max_positions = int(1e5)
def prepare_for_onnx_export_(self):
self.onnx_trace = True
@staticmethod
def get_embedding(
num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None
):
"""Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly
from the description in Section 3.5 of "Attention Is All You Need".
"""
half_dim = embedding_dim // 2
emb = math.log(10000) / (half_dim - 1)
emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb)
emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze(
1
) * emb.unsqueeze(0)
emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(
num_embeddings, -1
)
if embedding_dim % 2 == 1:
# zero pad
emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1)
if padding_idx is not None:
emb[padding_idx, :] = 0
return emb
def forward(
self,
input,
incremental_state: Optional[Any] = None,
timestep: Optional[Tensor] = None,
positions: Optional[Any] = None,
):
"""Input is expected to be of size [bsz x seqlen]."""
bspair = torch.onnx.operators.shape_as_tensor(input)
bsz, seq_len = bspair[0], bspair[1]
max_pos = self.padding_idx + 1 + seq_len
if self.weights is None or max_pos > self.weights.size(0):
# recompute/expand embeddings if needed
self.weights = SinusoidalPositionalEmbedding.get_embedding(
max_pos, self.embedding_dim, self.padding_idx
)
self.weights = self.weights.to(self._float_tensor)
if incremental_state is not None:
# positions is the same for every token when decoding a single step
pos = timestep.view(-1)[0] + 1 if timestep is not None else seq_len
if self.onnx_trace:
return (
self.weights.index_select(index=self.padding_idx + pos, dim=0)
.unsqueeze(1)
.repeat(bsz, 1, 1)
)
return self.weights[self.padding_idx + pos, :].expand(bsz, 1, -1)
if positions is None:
positions = utils.make_positions(
input, self.padding_idx, onnx_trace=self.onnx_trace
)
if self.onnx_trace:
flat_embeddings = self.weights.detach().index_select(0, positions.view(-1))
embedding_shape = torch.cat(
(bsz.view(1), seq_len.view(1), torch.tensor([-1], dtype=torch.long))
)
embeddings = torch.onnx.operators.reshape_from_tensor_shape(
flat_embeddings, embedding_shape
)
return embeddings
return (
self.weights.index_select(0, positions.view(-1))
.view(bsz, seq_len, -1)
.detach()
)
padding_idx = 1
max_source_positions = 1024
num_embeddings = max_source_positions
pos_emb = SinusoidalPositionalEmbedding(embedding_dim=128, padding_idx=padding_idx, init_size=num_embeddings + padding_idx + 1,)
positions = torch.Tensor([2, 3, 3, 3, 4, 5, 5, 5, 6, 7, 7, 7, 8, 1, 1, 1]).long()
positions.size()
# introduce batch dim
positions = positions.unsqueeze(0)
positions.size()
positions.view(-1)
rv = pos_emb(positions, positions=positions)
rv
rv.size()
rv[0,:,0]
rv[0,:,1]
def make_positions(tensor, padding_idx: int, onnx_trace: bool = False):
"""Replace non-padding symbols with their position numbers.
Position numbers begin at padding_idx+1. Padding symbols are ignored.
"""
# The series of casts and type-conversions here are carefully
# balanced to both work with ONNX export and XLA. In particular XLA
# prefers ints, cumsum defaults to output longs, and ONNX doesn't know
# how to handle the dtype kwarg in cumsum.
mask = tensor.ne(padding_idx).int()
return (torch.cumsum(mask, dim=1).type_as(mask) * mask).long() + padding_idx
make_positions(positions, 1)
positions = torch.Tensor([2,2,2,2,2,2,2,2,2]).long()
positions = positions.unsqueeze(0)
pos_emb(positions, positions=positions)[0,:,:]
```
# Hacking Into FasterRcnn in Pytorch
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/chart-preview.png
# Brief Intro
In this post I will show how to tweak some of the internals of FasterRcnn in Pytorch. I am assuming the reader is someone who has already trained an object detection model using pytorch. If not, there is an excellent tutorial on the [pytorch website](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html).
## Small Insight into the model
Basically Faster Rcnn is a two stage detector
1. The first stage is the Region Proposal Network, which is responsible for objectness scores and corresponding bounding boxes. So essentially the RegionProposalNetwork gives proposals of whether an object is there or not
2. These proposals are used by the RoIHeads, which outputs the detections
 * Inside the RoIHeads roi align is done
 * There is a box head and a box predictor
 * The losses for the predictions
3. In this post I will try to show how we can add custom parts to the torchvision FasterRcnn
```
#collapse-hide
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
import torch.nn as nn
import torch.nn.functional as F
print(f'torch version {torch.__version__}')
print(f'torchvision version {torchvision.__version__}')
```
# Custom Backbone
1. The backbone can be without FeaturePyramidNetwork
2. With FeaturePyramidNetwork
## Custom Backbone without FPN
This is pretty well written in the pytorch tutorials section; I will additionally add some comments to it.
```
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
#we need to specify the out_channels of this backbone explicitly because it will be
#used as the in_channels for the RPNHead, which produces the output of the RegionProposalNetwork
#we can find the number of out channels by inspecting the backbone ("backbone??")
backbone.out_channels = 1280
#by default the anchor generator FasterRcnn assigns is for an FPN backbone, so
#we need to specify a different anchor generator
anchor_generator = AnchorGenerator(sizes=((128, 256, 512),),
aspect_ratios=((0.5, 1.0, 2.0),))
#here at each position in the grid there will be 3x3=9 anchors
#and if our backbone is not an FPN then the forward method will assign the name '0' to the feature map
#so we need to specify '0' as the feature map name
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
output_size=9,
sampling_ratio=2)
#the output size is the output shape of the roi pooled features which will be used by the box head
model = FasterRCNN(backbone,num_classes=2,rpn_anchor_generator=anchor_generator)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 600)]
predictions = model(x)
```
## Custom Backbone with FPN
The Resnet50Fpn available in torchvision
```
# load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
# replace the classifier with a new one, that has
# num_classes which is user-defined
num_classes = 2 # 1 class (person) + background
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
### Adding a different resnet backbone
1. Just change to a different resnet
2. Shows how we should change the roi_pooler and anchor_generator along with the backbone if we are not using all the layers from the FPN
### Using all layers from FPN
```
#the returned layers are layer1,layer2,layer3,layer4 in returned_layers
backbone = torchvision.models.detection.backbone_utils.resnet_fpn_backbone('resnet101',pretrained=True)
model = FasterRCNN(backbone,num_classes=2)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
### Using not all layers from FPN
Below is the size of the last feature map of a plain Resnet50. Later I will show the sizes of the feature maps we use when we use FPN.
```
#collapse-hide
#just to show the output of a normal resnet without fpn
res = torchvision.models.resnet50()
pure = nn.Sequential(*list(res.children())[:-2])
temp = torch.rand(1,3,400,400)
pure(temp).shape
```
The required layers can be obtained by specifying the returned_layers parameter. Also, a resnet backbone of different depth can be used.
```
#the returned layers are layer1,layer2,layer3,layer4 in returned_layers
backbone = torchvision.models.detection.backbone_utils.resnet_fpn_backbone('resnet101',pretrained=True,
returned_layers=[2,3,4])
```
Here we are using feature maps of the following shapes.
```
#collapse-hide
out = backbone(temp)
for i in out.keys():
print(i,' ',out[i].shape)
#from the above we can see that the feature map names are 0,1,2,pool
#where pool comes from the default extra block
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0','1','2','pool'],
output_size=7,
sampling_ratio=2)
```
So essentially what we did was select the last three layers of the FPN by specifying them in returned_layers; by default the backbone adds a pool layer on top of the last layer, so we are left with four feature maps. The RoIAlign now needs to be done over these four maps. If we don't specify the roi_pooler, torchvision will by default assume we used all layers of the FPN, so we need to explicitly list the feature maps we used. Which feature maps to use can be application specific: sometimes you need to detect small objects, sometimes only large objects are of interest.
```
#we need to give an anchor_generator because the default one assumes we use all layers in the fpn
#since we have four feature maps here we need to specify 4 anchor sizes
anchor_sizes = ((32,), (64,), (128,), (256,))
aspect_ratios = ((0.5, 1.0, 1.5, 2.0),) * len(anchor_sizes)
anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)
```
Since we have four feature maps in our FPN, we need four anchor specifications. Here each feature map gets four anchors at each position: the first feature map gets anchors of size 32, four of them at each position, with aspect ratios (0.5, 1.0, 1.5, 2.0). Now we can pass these to the FasterRCNN class.
```
model = FasterRCNN(backbone,num_classes=2,rpn_anchor_generator=anchor_generator,box_roi_pool=roi_pooler)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
# Custom Predictor
The predictor is what outputs the classes and the corresponding bboxes. By default it has two layers, one for classes and one for bboxes, but we can add more layers before them if we want to. If you have a ton of data this might come in handy (remember there is already a box head before the predictor head, so you might not need this).
```
class Custom_predictor(nn.Module):
def __init__(self,in_channels,num_classes):
super(Custom_predictor,self).__init__()
self.additional_layer = nn.Linear(in_channels,in_channels) #this is the additional layer
self.cls_score = nn.Linear(in_channels, num_classes)
self.bbox_pred = nn.Linear(in_channels, num_classes * 4)
def forward(self,x):
if x.dim() == 4:
assert list(x.shape[2:]) == [1, 1]
x = x.flatten(start_dim=1)
x = self.additional_layer(x)
scores = self.cls_score(x)
bbox_deltas = self.bbox_pred(x)
return scores, bbox_deltas
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
#we need the out channels of the box head to pass to the custom predictor
in_features = model.roi_heads.box_head.fc7.out_features
#now we can add the custom predictor to the model
num_classes =2
model.roi_heads.box_predictor = Custom_predictor(in_features,num_classes)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
# Custom BoxHead
The outputs of the roi_align are first passed through the box head before they reach the predictor. There are two linear layers and we can customize them as we want; be careful with the dimensions, since mistakes there can break the pipeline.
```
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
class CustomHead(nn.Module):
def __init__(self,in_channels,roi_outshape,representation_size):
super(CustomHead,self).__init__()
        self.conv = nn.Conv2d(in_channels,in_channels,kernel_size=3,padding=1) #this is the additional layer added
        #we will be sending a flattened input; its size will be in_channels*w*h, where roi_outshape represents w (== h)
self.fc6 = nn.Linear(in_channels*roi_outshape**2, representation_size)
self.fc7 = nn.Linear(representation_size, representation_size)
    def forward(self,x):
        x = self.conv(x)
        x = x.flatten(start_dim=1)
        x = F.relu(self.fc6(x))
        x = F.relu(self.fc7(x))
        return x
```
1. We need in_channels and representation_size; remember the output of this head is the input of the box_predictor, so we can get the representation size of the box_head from the input of the box_predictor.
2. The in_channels can be obtained from the backbone's out_channels.
3. After flattening, the width and height also need to be considered, which we will get from the roi_pool output.
```
in_channels = model.backbone.out_channels
roi_outshape = model.roi_heads.box_roi_pool.output_size[0]
representation_size=model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_head = CustomHead(in_channels,roi_outshape,representation_size)
num_classes=2
model.roi_heads.box_predictor = FastRCNNPredictor(representation_size, num_classes)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
```
# CustomLoss Function
This is a modification of the loss of the FasterRcnn predictor.
1. You can modify the loss by defining your own fastrcnn_loss and making changes where you want.
2. Then assign it, e.g. model.roi_heads.fastrcnn_loss = Custom_loss
3. Usually we replace the F.cross_entropy loss by, say, focal loss or label smoothing loss.
```
import torchvision.models.detection._utils as det_utils
import torch.nn.functional as F
```
The below loss function is taken from [Aman Arora's blog](https://amaarora.github.io/2020/07/18/label-smoothing.html).
```
# Helper functions from fastai
def reduce_loss(loss, reduction='mean'):
return loss.mean() if reduction=='mean' else loss.sum() if reduction=='sum' else loss
# Implementation from fastai https://github.com/fastai/fastai2/blob/master/fastai2/layers.py#L338
class LabelSmoothingCrossEntropy(nn.Module):
def __init__(self, ε:float=0.1, reduction='mean'):
super().__init__()
self.ε,self.reduction = ε,reduction
def forward(self, output, target):
# number of classes
c = output.size()[-1]
log_preds = F.log_softmax(output, dim=-1)
loss = reduce_loss(-log_preds.sum(dim=-1), self.reduction)
nll = F.nll_loss(log_preds, target, reduction=self.reduction)
# (1-ε)* H(q,p) + ε*H(u,p)
return (1-self.ε)*nll + self.ε*(loss/c)
custom_loss = LabelSmoothingCrossEntropy()
#torchvision.models.detection.roi_heads.fastrcnn_loss??
def custom_fastrcnn_loss(class_logits, box_regression, labels, regression_targets):
# type: (Tensor, Tensor, List[Tensor], List[Tensor]) -> Tuple[Tensor, Tensor]
"""
Computes the loss for Faster R-CNN.
Arguments:
class_logits (Tensor)
box_regression (Tensor)
labels (list[BoxList])
regression_targets (Tensor)
Returns:
classification_loss (Tensor)
box_loss (Tensor)
"""
labels = torch.cat(labels, dim=0)
regression_targets = torch.cat(regression_targets, dim=0)
classification_loss = custom_loss(class_logits, labels) #ADDING THE CUSTOM LOSS HERE
# get indices that correspond to the regression targets for
# the corresponding ground truth labels, to be used with
# advanced indexing
sampled_pos_inds_subset = torch.where(labels > 0)[0]
labels_pos = labels[sampled_pos_inds_subset]
N, num_classes = class_logits.shape
box_regression = box_regression.reshape(N, -1, 4)
box_loss = det_utils.smooth_l1_loss(
box_regression[sampled_pos_inds_subset, labels_pos],
regression_targets[sampled_pos_inds_subset],
beta=1 / 9,
size_average=False,
)
box_loss = box_loss / labels.numel()
return classification_loss, box_loss
```
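To see what the smoothing formula actually does numerically, here is a torch-free toy version of the same computation (my own sketch; the function name `label_smoothing_ce` is made up for illustration):

```python
import math

def label_smoothing_ce(logits, target, eps=0.1):
    # log-softmax over the class logits
    m = max(logits)
    logZ = m + math.log(sum(math.exp(l - m) for l in logits))
    log_preds = [l - logZ for l in logits]
    c = len(logits)
    nll = -log_preds[target]          # H(q, p): usual cross entropy with the hard target
    uniform = -sum(log_preds) / c     # H(u, p): cross entropy with the uniform distribution
    # (1 - eps) * H(q, p) + eps * H(u, p), matching the class above
    return (1 - eps) * nll + eps * uniform

print(label_smoothing_ce([2.0, 0.5, -1.0], target=0))
```

With eps=0 this reduces to plain cross entropy; a very confident (correct) prediction is penalised slightly more as eps grows, which is the regularising effect.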
# Note on how to vary the anchor generator
The way in which anchor generators are assigned differs depending on whether the backbone has an FPN. When we are not using an FPN there is only one feature map, and for that feature map we need to specify anchors of different sizes.
```
anchor_generator = AnchorGenerator(sizes=((128, 256, 512),),
aspect_ratios=((0.5, 1.0, 2.0),))
```
In the above case, suppose we have a feature map of shape 7x7; then at each cell in it there will be 9 anchors, three each of sizes 128, 256 and 512, with the corresponding aspect ratios. But when we are using FPN we have several feature maps, so it is more effective to use different anchors for different layers. Small objects are detected using the earlier feature maps, so for those we can specify a small anchor size, say 32, and for the later layers we can specify larger anchors.
```
anchor_sizes = ((32,), (64,), (128,), (256,))
aspect_ratios = ((0.5, 1.0, 1.5, 2.0),) * len(anchor_sizes)
anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)
```
In the above I am using the same aspect ratios for all the sizes, so I just multiply by the length of anchor_sizes, but if we want to specify different aspect ratios that is entirely possible. Just be careful to specify the same number of aspect-ratio tuples as anchor sizes.
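For instance, a per-layer configuration could look like the following (a hypothetical example just to illustrate the argument shapes; AnchorGenerator expects one sizes tuple and one aspect-ratios tuple per feature map):

```python
# one entry per FPN feature map; inner tuples hold the per-map settings
anchor_sizes = ((32,), (64,), (128,), (256,))
aspect_ratios = (
    (0.5, 1.0, 2.0),       # earlier maps: small objects
    (0.5, 1.0, 2.0),
    (0.5, 1.0, 1.5, 2.0),  # later maps: an extra ratio for larger objects
    (0.5, 1.0, 1.5, 2.0),
)
assert len(anchor_sizes) == len(aspect_ratios)  # one tuple per feature map
# then, as before: anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)
```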
# Credits
All the above hacks are just modification of the existing wonderful torchvision library.
<!-- dom:TITLE: PHY321: Harmonic Oscillations, Damping, Resonances and time-dependent Forces -->
# PHY321: Harmonic Oscillations, Damping, Resonances and time-dependent Forces
<!-- dom:AUTHOR: [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/) at Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA & Department of Physics, University of Oslo, Norway -->
<!-- Author: -->
**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway
Date: **Mar 1, 2021**
Copyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license
## Aims and Overarching Motivation
### Monday
Damped oscillations. Analytical and numerical solutions
**Reading suggestion**: Taylor sections 5.4-5.5.
### Wednesday
No lecture, study day
### Friday
Driven oscillations and resonances with examples.
**Reading suggestion**: Taylor sections 5.5-5.6.
## Damped Oscillators
We consider only the case where the damping force is proportional to
the velocity. This is counter to dragging friction, where the force is
proportional in strength to the normal force and independent of
velocity, and is also inconsistent with wind resistance, where the
magnitude of the drag force is proportional to the square of the
velocity. Rolling resistance does seem to be mainly proportional to
the velocity. However, the main motivation for considering damping
forces proportional to the velocity is that the math is more
friendly. This is because the differential equation is linear,
i.e. each term is of order $x$, $\dot{x}$, $\ddot{x}\cdots$, or even
terms with no mention of $x$, and there are no terms such as $x^2$ or
$x\ddot{x}$. The equations of motion for a spring with damping force
$-b\dot{x}$ are
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
m\ddot{x}+b\dot{x}+kx=0.
\label{_auto1} \tag{1}
\end{equation}
$$
## Harmonic Oscillator, Damping
Just to make the solution a bit less messy, we rewrite this equation as
<!-- Equation labels as ordinary links -->
<div id="eq:dampeddiffyq"></div>
$$
\begin{equation}
\label{eq:dampeddiffyq} \tag{2}
\ddot{x}+2\beta\dot{x}+\omega_0^2x=0,~~~~\beta\equiv b/2m,~\omega_0\equiv\sqrt{k/m}.
\end{equation}
$$
Both $\beta$ and $\omega_0$ have dimensions of inverse time. To find solutions (see appendix C in the text) you must make an educated guess at the form of the solution. To do this, first realize that the solution will need an arbitrary normalization $A$ because the equation is linear. Secondly, realize that if the form is
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
x=Ae^{rt}
\label{_auto2} \tag{3}
\end{equation}
$$
that each derivative simply brings out an extra power of $r$. This
means that the $Ae^{rt}$ factors out and one can simply solve for an
equation for $r$. Plugging this form into Eq. ([2](#eq:dampeddiffyq)),
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
r^2+2\beta r+\omega_0^2=0.
\label{_auto3} \tag{4}
\end{equation}
$$
## Harmonic Oscillator, Solutions of Damped Motion
Because this is a quadratic equation there will be two solutions,
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
r=-\beta\pm\sqrt{\beta^2-\omega_0^2}.
\label{_auto4} \tag{5}
\end{equation}
$$
We refer to the two solutions as $r_1$ and $r_2$ corresponding to the
$+$ and $-$ roots. As expected, there should be two arbitrary
constants involved in the solution,
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
x=A_1e^{r_1t}+A_2e^{r_2t},
\label{_auto5} \tag{6}
\end{equation}
$$
where the coefficients $A_1$ and $A_2$ are determined by initial
conditions.
The term under the square root in the roots above, $\beta^2-\omega_0^2$,
will be negative if the damping is small and $\beta<\omega_0$. In that case,
$r$ is complex and the factor $\exp{(rt)}$ will have some oscillatory
behavior. If the roots are real, there will only be exponentially
decaying solutions. There are three cases:
## Underdamped: $\beta<\omega_0$
$$
\begin{eqnarray}
x&=&A_1e^{-\beta t}e^{i\omega't}+A_2e^{-\beta t}e^{-i\omega't},~~\omega'\equiv\sqrt{\omega_0^2-\beta^2}\\
\nonumber
&=&(A_1+A_2)e^{-\beta t}\cos\omega't+i(A_1-A_2)e^{-\beta t}\sin\omega't.
\end{eqnarray}
$$
Here we have made use of the identity
$e^{i\omega't}=\cos\omega't+i\sin\omega't$. Because the constants are
arbitrary, and because the real and imaginary parts are both solutions
individually, we can simply consider the real part of the solution
alone:
<!-- Equation labels as ordinary links -->
<div id="eq:homogsolution"></div>
$$
\begin{eqnarray}
\label{eq:homogsolution} \tag{7}
x&=&B_1e^{-\beta t}\cos\omega't+B_2e^{-\beta t}\sin\omega't,\\
\nonumber
\omega'&\equiv&\sqrt{\omega_0^2-\beta^2}.
\end{eqnarray}
$$
## Critical damping: $\beta=\omega_0$
In this case the two terms involving $r_1$ and $r_2$ are identical
because $\omega'=0$. Because we need two arbitrary constants, there
needs to be another solution. This is found by simply guessing, or by
taking the limit of $\omega'\rightarrow 0$ from the underdamped
solution. The solution is then
<!-- Equation labels as ordinary links -->
<div id="eq:criticallydamped"></div>
$$
\begin{equation}
\label{eq:criticallydamped} \tag{8}
x=Ae^{-\beta t}+Bte^{-\beta t}.
\end{equation}
$$
The critically damped solution is interesting because the solution
approaches zero quickly, but does not oscillate. For a problem with
zero initial velocity, the solution never crosses zero. This is a good
choice for designing shock absorbers or swinging doors.
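As a quick sanity check, one can verify symbolically that the critically damped form in Eq. ([8](#eq:criticallydamped)) solves the damped equation of motion when $\omega_0=\beta$. The sketch below uses sympy for this; it is only an illustration, not part of the derivation.

```
import sympy as sp

t, beta, A, B = sp.symbols('t beta A B', positive=True)
# Critically damped solution, Eq. (8)
x = A*sp.exp(-beta*t) + B*t*sp.exp(-beta*t)
# Insert into x'' + 2*beta*x' + omega_0^2*x with omega_0 = beta
residual = sp.simplify(sp.diff(x, t, 2) + 2*beta*sp.diff(x, t) + beta**2*x)
print(residual)  # -> 0
```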
## Overdamped: $\beta>\omega_0$
$$
\begin{eqnarray}
x&=&A_1\exp\left[-\left(\beta+\sqrt{\beta^2-\omega_0^2}\right)t\right]+A_2\exp\left[-\left(\beta-\sqrt{\beta^2-\omega_0^2}\right)t\right]
\end{eqnarray}
$$
This solution will also never cross the origin more than once, and then
only if the initial velocity is large and directed toward the origin.
Given $b$, $m$ and $\omega_0$, find $x(t)$ for a particle whose
initial position is $x=0$ and has initial velocity $v_0$ (assuming an
underdamped solution).
The solution is of the form,
$$
\begin{eqnarray*}
x&=&e^{-\beta t}\left[A_1\cos(\omega' t)+A_2\sin\omega't\right],\\
\dot{x}&=&-\beta x+\omega'e^{-\beta t}\left[-A_1\sin\omega't+A_2\cos\omega't\right].\\
\omega'&\equiv&\sqrt{\omega_0^2-\beta^2},~~~\beta\equiv b/2m.
\end{eqnarray*}
$$
From the initial conditions, $A_1=0$ because $x(0)=0$ and $\omega'A_2=v_0$. So
$$
x=\frac{v_0}{\omega'}e^{-\beta t}\sin\omega't.
$$
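This analytic result can be checked against a direct numerical integration. The sketch below uses the Euler-Cromer scheme with illustrative parameter values (assumed here: $m=1$, $b=0.4$, $k=1$, $v_0=1$) and compares with the analytic underdamped solution.

```
import numpy as np

m, b, k, v0 = 1.0, 0.4, 1.0, 1.0   # assumed illustrative values
beta = b/(2*m)
omega0 = np.sqrt(k/m)
omegap = np.sqrt(omega0**2 - beta**2)   # omega', underdamped case

dt = 1e-4
t = np.arange(0.0, 10.0, dt)
x_num = np.zeros_like(t)
v_num = np.zeros_like(t)
v_num[0] = v0
# Euler-Cromer integration of x'' + 2*beta*x' + omega0^2*x = 0
for i in range(len(t)-1):
    a = -2*beta*v_num[i] - omega0**2*x_num[i]
    v_num[i+1] = v_num[i] + dt*a
    x_num[i+1] = x_num[i] + dt*v_num[i+1]

# Analytic solution with x(0)=0, v(0)=v0
x_exact = (v0/omegap)*np.exp(-beta*t)*np.sin(omegap*t)
print(np.max(np.abs(x_num - x_exact)))   # small; decreases with dt
```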
## Harmonic Oscillator, Solutions
Consider now a damped oscillator driven by an external force $F(t)$. A
single solution with no arbitrary constants, which we will call a
**particular solution**, $x_p(t)$, solves the full driven
equation. It should be emphasized that this is **a** particular
solution, because there exists an infinite number of such solutions,
while the general solution must have two arbitrary constants. Now
consider solutions to the same
equation without the driving term, which include two arbitrary
constants. These are called either **homogenous solutions** or
**complementary solutions**, and were given in the previous section,
e.g. Eq. ([7](#eq:homogsolution)) for the underdamped case. The
homogenous solution already incorporates the two arbitrary constants,
so any sum of a homogenous solution and a particular solution will
represent the **general solution** of the equation. The general
solution incorporates the two arbitrary constants $A$ and $B$ to
accommodate the two initial conditions. One could have picked a
different particular solution, i.e. the original particular solution
plus any homogenous solution with the arbitrary constants $A_p$ and
$B_p$ chosen at will. When one adds in the homogenous solution, with
its arbitrary constants $A'$ and $B'$, to the new particular solution,
one recovers the same general solution by simply adjusting the new
constants such that $A'+A_p=A$ and
$B'+B_p=B$. Thus, the choice of $A_p$ and $B_p$ is irrelevant, and
when choosing the particular solution it is best to make the simplest
choice possible.
## Harmonic Oscillator, Particular Solution
To find a particular solution, one first guesses at the form,
<!-- Equation labels as ordinary links -->
<div id="eq:partform"></div>
$$
\begin{equation}
\label{eq:partform} \tag{9}
x_p(t)=D\cos(\omega t-\delta),
\end{equation}
$$
and rewrite the differential equation as
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
D\left\{-\omega^2\cos(\omega t-\delta)-2\beta\omega\sin(\omega t-\delta)+\omega_0^2\cos(\omega t-\delta)\right\}=\frac{F_0}{m}\cos(\omega t).
\label{_auto6} \tag{10}
\end{equation}
$$
One can now use angle addition formulas to get
$$
\begin{eqnarray}
D\left\{(-\omega^2\cos\delta+2\beta\omega\sin\delta+\omega_0^2\cos\delta)\cos(\omega t)\right.&&\\
\nonumber
\left.+(-\omega^2\sin\delta-2\beta\omega\cos\delta+\omega_0^2\sin\delta)\sin(\omega t)\right\}
&=&\frac{F_0}{m}\cos(\omega t).
\end{eqnarray}
$$
Both the $\cos$ and $\sin$ terms need to equate if the expression is to hold at all times. Thus, this becomes two equations
$$
\begin{eqnarray}
D\left\{-\omega^2\cos\delta+2\beta\omega\sin\delta+\omega_0^2\cos\delta\right\}&=&\frac{F_0}{m}\\
\nonumber
-\omega^2\sin\delta-2\beta\omega\cos\delta+\omega_0^2\sin\delta&=&0.
\end{eqnarray}
$$
After dividing by $\cos\delta$, the lower expression leads to
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
\tan\delta=\frac{2\beta\omega}{\omega_0^2-\omega^2}.
\label{_auto7} \tag{11}
\end{equation}
$$
## Solving with Driven Oscillations
Using the identities $\tan^2\delta+1=\sec^2\delta$ and $\sin^2\delta+\cos^2\delta=1$, one can also express $\sin\delta$ and $\cos\delta$,
$$
\begin{eqnarray}
\sin\delta&=&\frac{2\beta\omega}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}},\\
\nonumber
\cos\delta&=&\frac{(\omega_0^2-\omega^2)}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}}
\end{eqnarray}
$$
Inserting these expressions for $\cos\delta$ and $\sin\delta$ into the first of the two equations above and solving for $D$,
<!-- Equation labels as ordinary links -->
<div id="eq:Ddrive"></div>
$$
\begin{equation}
\label{eq:Ddrive} \tag{12}
D=\frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}}.
\end{equation}
$$
For a given initial condition, e.g. initial displacement and velocity,
one must add the homogenous solution then solve for the two arbitrary
constants. However, because the homogenous solutions decay with time
as $e^{-\beta t}$, the particular solution is all that remains at
large times, and is therefore the steady state solution. Because the
arbitrary constants are all in the homogenous solution, all memory of
the initial conditions are lost at large times, $t>>1/\beta$.
The amplitude of the motion, $D$, is linearly proportional to the
driving force ($F_0/m$), but also depends on the driving frequency
$\omega$. For small $\beta$ the maximum will occur at
$\omega=\omega_0$. This is referred to as a resonance. In the limit
$\beta\rightarrow 0$ the amplitude at resonance approaches infinity.
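This resonance behavior is easy to see numerically. The sketch below evaluates the amplitude of Eq. ([12](#eq:Ddrive)) on a grid of driving frequencies (the parameter values are only for illustration) and locates the peak, which for small $\beta$ sits near $\omega_0$ with height close to $F_0/(2m\beta\omega_0)$.

```
import numpy as np

F0, m, omega0, beta = 1.0, 1.0, 1.0, 0.05   # assumed illustrative values

omega = np.linspace(0.1, 2.0, 2000)
# Amplitude D(omega) from Eq. (12)
D = (F0/m)/np.sqrt((omega0**2 - omega**2)**2 + 4*omega**2*beta**2)

omega_peak = omega[np.argmax(D)]
print(omega_peak)                      # close to omega0 for small beta
print(D.max(), F0/(2*m*beta*omega0))   # peak height vs small-beta estimate
```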
## Alternative Derivation for Driven Oscillators
Here, we derive the same expressions as in Equations ([9](#eq:partform)) and ([12](#eq:Ddrive)) but express the driving forces as
$$
\begin{eqnarray}
F(t)&=&F_0e^{i\omega t},
\end{eqnarray}
$$
rather than as $F_0\cos\omega t$. The real part of $F$ is the same as before. For the differential equation,
<!-- Equation labels as ordinary links -->
<div id="eq:compdrive"></div>
$$
\begin{eqnarray}
\label{eq:compdrive} \tag{13}
\ddot{x}+2\beta\dot{x}+\omega_0^2x&=&\frac{F_0}{m}e^{i\omega t},
\end{eqnarray}
$$
one can treat $x(t)$ as a complex function. Because the operations
$d^2/dt^2$ and $d/dt$ are real and thus do not mix the real and
imaginary parts of $x(t)$, Eq. ([13](#eq:compdrive)) is effectively 2
equations. Because $e^{i\omega t}=\cos\omega t+i\sin\omega t$, the real
part of the solution for $x(t)$ gives the solution for a driving force
$F_0\cos\omega t$, and the imaginary part of $x$ corresponds to the
case where the driving force is $F_0\sin\omega t$. It is rather easy
to solve for the complex $x$ in this case, and by taking the real part
of the solution, one finds the answer for the $\cos\omega t$ driving
force.
We assume a simple form for the particular solution
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
x_p=De^{i\omega t},
\label{_auto8} \tag{14}
\end{equation}
$$
where $D$ is a complex constant.
From Eq. ([13](#eq:compdrive)) one inserts the form for $x_p$ above to get
$$
\begin{eqnarray}
D\left\{-\omega^2+2i\beta\omega+\omega_0^2\right\}e^{i\omega t}=(F_0/m)e^{i\omega t},\\
\nonumber
D=\frac{F_0/m}{(\omega_0^2-\omega^2)+2i\beta\omega}.
\end{eqnarray}
$$
The norm and phase for $D=|D|e^{-i\delta}$ can be read by inspection,
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
|D|=\frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}},~~~~\tan\delta=\frac{2\beta\omega}{\omega_0^2-\omega^2}.
\label{_auto9} \tag{15}
\end{equation}
$$
This is the same expression for $\delta$ as before. One then finds $x_p(t)$,
<!-- Equation labels as ordinary links -->
<div id="eq:fastdriven1"></div>
$$
\begin{eqnarray}
\label{eq:fastdriven1} \tag{16}
x_p(t)&=&\Re\frac{(F_0/m)e^{i\omega t-i\delta}}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}\\
\nonumber
&=&\frac{(F_0/m)\cos(\omega t-\delta)}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}.
\end{eqnarray}
$$
This is the same answer as before.
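The equivalence of the complex and real derivations can be checked directly with complex arithmetic. The sketch below compares $|D|$ and $\delta$ obtained from the complex amplitude with the real-algebra expressions (the parameter values are arbitrary).

```
import numpy as np

F0, m, omega0, beta, omega = 1.0, 1.0, 1.0, 0.1, 0.8   # arbitrary values

# Complex amplitude D = (F0/m)/((omega0^2-omega^2) + 2i*beta*omega)
D = (F0/m)/((omega0**2 - omega**2) + 2j*beta*omega)
absD = abs(D)
delta = -np.angle(D)   # from D = |D| e^{-i delta}

# Real-algebra expressions for |D| and delta
absD_real = (F0/m)/np.sqrt((omega0**2 - omega**2)**2 + 4*beta**2*omega**2)
delta_real = np.arctan2(2*beta*omega, omega0**2 - omega**2)
print(absD, absD_real)
print(delta, delta_real)
```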
If one wished to solve for the case where $F(t)= F_0\sin\omega t$, the imaginary part of the solution would work
<!-- Equation labels as ordinary links -->
<div id="eq:fastdriven2"></div>
$$
\begin{eqnarray}
\label{eq:fastdriven2} \tag{17}
x_p(t)&=&\Im\frac{(F_0/m)e^{i\omega t-i\delta}}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}\\
\nonumber
&=&\frac{(F_0/m)\sin(\omega t-\delta)}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}.
\end{eqnarray}
$$
## Damped and Driven Oscillator
Consider the damped and driven harmonic oscillator worked out above. Given $F_0, m,\beta$ and $\omega_0$, solve for the complete solution $x(t)$ for the case where $F=F_0\sin\omega t$ with initial conditions $x(t=0)=0$ and $v(t=0)=0$. Assume the underdamped case.
The general solution including the arbitrary constants includes both the homogenous and particular solutions,
$$
\begin{eqnarray*}
x(t)&=&\frac{F_0}{m}\frac{\sin(\omega t-\delta)}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}
+A\cos\omega't e^{-\beta t}+B\sin\omega't e^{-\beta t}.
\end{eqnarray*}
$$
The quantities $\delta$ and $\omega'$ are given earlier in the
section, $\omega'=\sqrt{\omega_0^2-\beta^2}$,
$\delta=\tan^{-1}\left(2\beta\omega/(\omega_0^2-\omega^2)\right)$. Here, solving
the problem means finding the arbitrary constants $A$ and
$B$. Satisfying the initial conditions for the initial position and
velocity:
$$
\begin{eqnarray*}
x(t=0)=0&=&-\eta\sin\delta+A,\\
v(t=0)=0&=&\omega\eta\cos\delta-\beta A+\omega'B,\\
\eta&\equiv&\frac{F_0}{m}\frac{1}{\sqrt{(\omega_0^2-\omega^2)^2+4\beta^2\omega^2}}.
\end{eqnarray*}
$$
The problem is now reduced to 2 equations and 2 unknowns, $A$ and $B$. The solution is
$$
\begin{eqnarray}
A&=& \eta\sin\delta ,~~~B=\frac{-\omega\eta\cos\delta+\beta\eta\sin\delta}{\omega'}.
\end{eqnarray}
$$
## Resonance Widths; the $Q$ factor
From the previous two sections, the particular solution for a driving force, $F=F_0\cos\omega t$, is
$$
\begin{eqnarray}
x_p(t)&=&\frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}}\cos(\omega t-\delta),\\
\nonumber
\delta&=&\tan^{-1}\left(\frac{2\beta\omega}{\omega_0^2-\omega^2}\right).
\end{eqnarray}
$$
If one fixes the driving frequency $\omega$ and adjusts the
fundamental frequency $\omega_0=\sqrt{k/m}$, the maximum amplitude
occurs when $\omega_0=\omega$ because that is when the term from the
denominator $(\omega_0^2-\omega^2)^2+4\omega^2\beta^2$ is at a
minimum. This is akin to dialing into a radio station. However, if one
fixes $\omega_0$ and instead adjusts the driving frequency, one minimizes with
respect to $\omega$, i.e. sets
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
\frac{d}{d\omega}\left[(\omega_0^2-\omega^2)^2+4\omega^2\beta^2\right]=0,
\label{_auto10} \tag{18}
\end{equation}
$$
and one finds that the maximum amplitude occurs when
$\omega=\sqrt{\omega_0^2-2\beta^2}$. If $\beta$ is small relative to
$\omega_0$, one can simply state that the maximum amplitude is
<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>
$$
\begin{equation}
x_{\rm max}\approx\frac{F_0}{2m\beta \omega_0}.
\label{_auto11} \tag{19}
\end{equation}
$$
The width of the resonance can be characterized by its full width at half maximum (FWHM), i.e. by the two frequencies at which the amplitude squared, $D^2$, falls to half its peak value. For small damping this condition reads
$$
\begin{eqnarray}
\frac{4\omega^2\beta^2}{(\omega_0^2-\omega^2)^2+4\omega^2\beta^2}=\frac{1}{2}.
\end{eqnarray}
$$
For small damping this occurs when $\omega=\omega_0\pm \beta$, so the $FWHM\approx 2\beta$. For the purposes of tuning to a specific frequency, one wants the width to be as small as possible. The ratio of $\omega_0$ to $FWHM$ is known as the *quality* factor, or $Q$ factor,
<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>
$$
\begin{equation}
Q\equiv \frac{\omega_0}{2\beta}.
\label{_auto12} \tag{20}
\end{equation}
$$
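One can verify the FWHM estimate numerically. The sketch below scans $D^2(\omega)$ for a small, illustrative damping and compares the measured width and $Q$ factor with $2\beta$ and $\omega_0/2\beta$.

```
import numpy as np

F0, m, omega0, beta = 1.0, 1.0, 1.0, 0.02   # small damping, assumed values

omega = np.linspace(0.9, 1.1, 40001)
D2 = (F0/m)**2/((omega0**2 - omega**2)**2 + 4*omega**2*beta**2)

# Frequencies where the squared amplitude exceeds half its peak value
inside = omega[D2 >= 0.5*D2.max()]
fwhm = inside[-1] - inside[0]
Q = omega0/fwhm
print(fwhm, 2*beta)          # FWHM compared with 2*beta
print(Q, omega0/(2*beta))    # measured Q compared with omega0/(2*beta)
```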
## Numerical Studies of Driven Oscillations
Solving the problem of driven oscillations numerically gives us much
more flexibility to study different types of driving forces. We can
reuse our earlier code by simply adding a driving force. If we stay in
the $x$-direction only this can be easily done by adding a term
$F_{\mathrm{ext}}(x,t)$. Note that we have kept it rather general
here, allowing for both a spatial and a temporal dependence.
Before we dive into the code, we need to briefly remind ourselves
about the equations we started with for the case with damping, namely
$$
m\frac{d^2x}{dt^2} + b\frac{dx}{dt}+kx(t) =0,
$$
with no external force applied to the system.
Let us now for simplicty assume that our external force is given by
$$
F_{\mathrm{ext}}(t) = F_0\cos{(\omega t)},
$$
where $F_0$ is a constant (what is its dimension?) and $\omega$ is the frequency of the applied external driving force.
**Small question:** would you expect energy to be conserved now?
Introducing the external force into our lovely differential equation
and dividing by $m$ and introducing $\omega_0^2=k/m$ we have
$$
\frac{d^2x}{dt^2} + \frac{b}{m}\frac{dx}{dt}+\omega_0^2x(t) =\frac{F_0}{m}\cos{(\omega t)}.
$$
Thereafter we introduce a dimensionless time $\tau = t\omega_0$
and a dimensionless frequency $\tilde{\omega}=\omega/\omega_0$. We have then
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\frac{F_0}{m\omega_0^2}\cos{(\tilde{\omega}\tau)}.
$$
Introducing a new amplitude $\tilde{F} =F_0/(m\omega_0^2)$ (check dimensionality again) we have
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
Our final step, as we did in the case of various types of damping, is
to define $\gamma = b/(2m\omega_0)$ and rewrite our equations as
$$
\frac{d^2x}{d\tau^2} + 2\gamma\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
This is the equation we will code below using the Euler-Cromer method.
```
# Driven, damped oscillator integrated with the Euler-Cromer method.
# numpy, matplotlib and ceil are needed here; save_fig is a small
# helper used earlier in these notes to store figures.
import numpy as np
import matplotlib.pyplot as plt
from math import ceil

DeltaT = 0.001
#set up arrays
tfinal = 20 # final dimensionless time
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions as one-dimensional arrays of time
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using Euler-Cromer's method
for i in range(n-1):
    # Set up the acceleration
    # Here you could have defined your own function for this
    a = -2*gamma*v[i]-x[i]+Ftilde*np.cos(t[i]*Omegatilde)
    # update velocity, position and time
    v[i+1] = v[i] + DeltaT*a
    x[i+1] = x[i] + DeltaT*v[i+1]
    t[i+1] = t[i] + DeltaT
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x')
ax.set_xlabel(r'$\tau$')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockEulerCromer")
plt.show()
```
In the above example we have focused on the Euler-Cromer method. This
method has a local truncation error which is proportional to $\Delta t^2$
and thereby a global error which is proportional to $\Delta t$.
We can improve on this by using the Runge-Kutta family of
methods. The widely popular fourth-order Runge-Kutta method, or just **RK4**,
has indeed a much better truncation error. The RK4 method has a global
error which is proportional to $\Delta t^4$.
Let us revisit this method and see how we can implement it for the above example.
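The claimed fourth-order behavior can also be verified empirically. The sketch below (an illustration, not part of the original notes) integrates $\ddot{x}=-x$ with RK4 for two step sizes and checks that halving the step reduces the error at a fixed final time by roughly $2^4=16$.

```
import math

def rk4_error(dt, tfinal=10.0):
    """Global error of RK4 for x'' = -x, x(0)=1, v(0)=0 (exact x = cos t)."""
    n = int(round(tfinal/dt))
    x, v = 1.0, 0.0
    f = lambda x, v: (v, -x)    # (dx/dt, dv/dt)
    for _ in range(n):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = f(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = f(x + dt*k3x, v + dt*k3v)
        x += dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
    return abs(x - math.cos(tfinal))

ratio = rk4_error(0.1)/rk4_error(0.05)
print(ratio)   # roughly 16 for a fourth-order method
```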
## Differential Equations, Runge-Kutta methods
Runge-Kutta (RK) methods are based on Taylor expansion formulae, but yield
in general better algorithms for solving an ordinary differential equation.
The basic philosophy is that they provide one or more intermediate steps in the computation of $y_{i+1}$.
To see this, consider first the following definitions
<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>
$$
\begin{equation}
\frac{dy}{dt}=f(t,y),
\label{_auto13} \tag{21}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto14"></div>
$$
\begin{equation}
y(t)=\int f(t,y) dt,
\label{_auto14} \tag{22}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto15"></div>
$$
\begin{equation}
y_{i+1}=y_i+ \int_{t_i}^{t_{i+1}} f(t,y) dt.
\label{_auto15} \tag{23}
\end{equation}
$$
To demonstrate the philosophy behind RK methods, let us consider
the second-order RK method, RK2.
The first approximation consists in Taylor expanding $f(t,y)$
around the center of the integration interval $t_i$ to $t_{i+1}$,
that is, at $t_i+h/2$, $h$ being the step.
Using the midpoint formula for an integral,
defining $y(t_i+h/2) = y_{i+1/2}$ and
$t_i+h/2 = t_{i+1/2}$, we obtain
<!-- Equation labels as ordinary links -->
<div id="_auto16"></div>
$$
\begin{equation}
\int_{t_i}^{t_{i+1}} f(t,y) dt \approx hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto16} \tag{24}
\end{equation}
$$
This means in turn that we have
<!-- Equation labels as ordinary links -->
<div id="_auto17"></div>
$$
\begin{equation}
y_{i+1}=y_i + hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto17} \tag{25}
\end{equation}
$$
However, we do not know the value of $y_{i+1/2}$. This is where the next approximation enters: we use Euler's
method to approximate $y_{i+1/2}$. We have then
<!-- Equation labels as ordinary links -->
<div id="_auto18"></div>
$$
\begin{equation}
y_{(i+1/2)}=y_i + \frac{h}{2}\frac{dy}{dt}=y(t_i) + \frac{h}{2}f(t_i,y_i).
\label{_auto18} \tag{26}
\end{equation}
$$
This means that we can define the following algorithm for
the second-order Runge-Kutta method, RK2.
<!-- Equation labels as ordinary links -->
<div id="_auto19"></div>
$$
\begin{equation}
k_1=hf(t_i,y_i),
\label{_auto19} \tag{27}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto20"></div>
$$
\begin{equation}
k_2=hf(t_{i+1/2},y_i+k_1/2),
\label{_auto20} \tag{28}
\end{equation}
$$
with the final value
<!-- Equation labels as ordinary links -->
<div id="_auto21"></div>
$$
\begin{equation}
y_{i+1}\approx y_i + k_2 +O(h^3).
\label{_auto21} \tag{29}
\end{equation}
$$
The difference from the previous one-step methods
is that we now need an intermediate step in our evaluation,
namely at $t_i+h/2 = t_{(i+1/2)}$, where we evaluate the derivative $f$.
This involves more operations, but the gain is a better stability
in the solution.
The fourth-order Runge-Kutta, RK4, has the following algorithm
$$
k_1=hf(t_i,y_i),\hspace{0.5cm} k_2=hf(t_i+h/2,y_i+k_1/2)
$$
$$
k_3=hf(t_i+h/2,y_i+k_2/2)\hspace{0.5cm} k_4=hf(t_i+h,y_i+k_3)
$$
with the final result
$$
y_{i+1}=y_i +\frac{1}{6}\left( k_1 +2k_2+2k_3+k_4\right).
$$
Thus, the algorithm consists in first calculating $k_1$
with $t_i$, $y_i$ and $f$ as inputs. Thereafter, we advance the time
by $h/2$ and calculate $k_2$, then $k_3$ and finally $k_4$. The global error goes as $O(h^4)$.
However, at this stage, if we keep adding different methods in our
main program, the code will quickly become messy and ugly. Before we
proceed, we will therefore introduce functions that embody the various
methods for solving differential equations. This means that we can
separate out these methods into their own functions and files (and later as classes and more
generic functions) and simply call them when needed. Similarly, we
could easily encapsulate various forces or other quantities of
interest in terms of functions. To see this, let us bring up the code
we developed above for the simple sliding block, but now only with the simple forward Euler method. We introduce
two functions, one for the simple Euler method and one for the
force.
Note that the forward Euler method does not know the specific force function to be called;
it simply receives the name of the force function as an input. We can easily change the force by adding another function.
```
def ForwardEuler(v,x,t,n,Force):
    for i in range(n-1):
        v[i+1] = v[i] + DeltaT*Force(v[i],x[i],t[i])
        x[i+1] = x[i] + DeltaT*v[i]
        t[i+1] = t[i] + DeltaT

def SpringForce(v,x,t):
    # note here that we have divided by mass and we return the acceleration
    return -2*gamma*v-x+Ftilde*np.cos(t*Omegatilde)
```
It is easy to add a new method like the Euler-Cromer
```
def ForwardEulerCromer(v,x,t,n,Force):
    for i in range(n-1):
        a = Force(v[i],x[i],t[i])
        v[i+1] = v[i] + DeltaT*a
        x[i+1] = x[i] + DeltaT*v[i+1]
        t[i+1] = t[i] + DeltaT
```
and the Velocity Verlet method (be careful with the time dependence here; it is not an ideal method for non-conservative forces)
```
def VelocityVerlet(v,x,t,n,Force):
    for i in range(n-1):
        a = Force(v[i],x[i],t[i])
        x[i+1] = x[i] + DeltaT*v[i]+0.5*DeltaT*DeltaT*a
        t[i+1] = t[i] + DeltaT
        anew = Force(v[i],x[i+1],t[i+1])
        v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
```
Finally, we can now add the Runge-Kutta2 method via a new function
```
def RK2(v,x,t,n,Force):
    for i in range(n-1):
        # Setting up k1
        k1x = DeltaT*v[i]
        k1v = DeltaT*Force(v[i],x[i],t[i])
        # Setting up k2
        vv = v[i]+k1v*0.5
        xx = x[i]+k1x*0.5
        k2x = DeltaT*vv
        k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
        # Final result
        x[i+1] = x[i]+k2x
        v[i+1] = v[i]+k2v
        t[i+1] = t[i]+DeltaT
```
Finally, we can add the fourth-order Runge-Kutta (RK4) method via a new function
```
def RK4(v,x,t,n,Force):
    for i in range(n-1):
        # Setting up k1
        k1x = DeltaT*v[i]
        k1v = DeltaT*Force(v[i],x[i],t[i])
        # Setting up k2
        vv = v[i]+k1v*0.5
        xx = x[i]+k1x*0.5
        k2x = DeltaT*vv
        k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
        # Setting up k3
        vv = v[i]+k2v*0.5
        xx = x[i]+k2x*0.5
        k3x = DeltaT*vv
        k3v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
        # Setting up k4
        vv = v[i]+k3v
        xx = x[i]+k3x
        k4x = DeltaT*vv
        k4v = DeltaT*Force(vv,xx,t[i]+DeltaT)
        # Final result
        x[i+1] = x[i]+(k1x+2*k2x+2*k3x+k4x)/6.
        v[i+1] = v[i]+(k1v+2*k2v+2*k3v+k4v)/6.
        t[i+1] = t[i] + DeltaT
```
The Runge-Kutta family of methods is particularly useful when we have a time-dependent acceleration.
If we have forces which depend only on the spatial degrees of freedom (no velocity and/or time dependence), then energy-conserving methods like the Velocity Verlet or the Euler-Cromer method are preferred. As soon as we introduce an explicit time dependence and/or add dissipative forces like friction or air resistance, the Runge-Kutta family of methods is well suited for the task.
The code below uses the RK4 method.
```
# Same driven oscillator, now integrated with RK4.
# Requires numpy, matplotlib, ceil and the RK4/SpringForce functions above;
# save_fig is a helper defined earlier in these notes.
import numpy as np
import matplotlib.pyplot as plt
from math import ceil

DeltaT = 0.001
#set up arrays
tfinal = 20 # final dimensionless time
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions (can change to more than one dim)
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using the RK4 method
# Note that we pass the force function SpringForce as an argument
RK4(v,x,t,n,SpringForce)
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x')
ax.set_xlabel(r'$\tau$')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockRK4")
plt.show()
```
<!-- !split -->
## Principle of Superposition and Periodic Forces (Fourier Transforms)
If one has several driving forces, $F(t)=\sum_n F_n(t)$, one can find
the particular solution to each $F_n$, $x_{pn}(t)$, and the particular
solution for the entire driving force is
<!-- Equation labels as ordinary links -->
<div id="_auto22"></div>
$$
\begin{equation}
x_p(t)=\sum_nx_{pn}(t).
\label{_auto22} \tag{30}
\end{equation}
$$
This is known as the principle of superposition. It only applies when
the homogenous equation is linear. If there were an anharmonic term
such as $x^3$ in the homogenous equation, then when one summed various
solutions, the term $\left(\sum_n x_n\right)^3$ would generate cross
terms. Superposition is especially useful when $F(t)$ can be written
as a sum of sinusoidal terms, because the solutions for each
sinusoidal (sine or cosine) term are analytic, as we saw above.
Driving forces are often periodic, even when they are not
sinusoidal. Periodicity implies that for some time $\tau$
$$
\begin{eqnarray}
F(t+\tau)=F(t).
\end{eqnarray}
$$
One example of a non-sinusoidal periodic force is a square wave. Many
components in electric circuits are non-linear, e.g. diodes, which
makes many wave forms non-sinusoidal even when the circuits are being
driven by purely sinusoidal sources.
The code here shows a typical example of such a square wave generated using the functionality included in the **scipy** Python package. We have used a period of $\tau=0.2$.
```
%matplotlib inline
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period of the square wave
T = 0.2
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*t/T)
plt.plot(t, SqrSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
For the sinusoidal example studied in the previous subsections the
period is $\tau=2\pi/\omega$. However, higher harmonics can also
satisfy the periodicity requirement. In general, any force that
satisfies the periodicity requirement can be expressed as a sum over
harmonics,
<!-- Equation labels as ordinary links -->
<div id="_auto23"></div>
$$
\begin{equation}
F(t)=\frac{f_0}{2}+\sum_{n>0} f_n\cos(2n\pi t/\tau)+g_n\sin(2n\pi t/\tau).
\label{_auto23} \tag{31}
\end{equation}
$$
From the previous subsection, one can write down the answer for
$x_{pn}(t)$, by substituting $f_n/m$ or $g_n/m$ for $F_0/m$ into Eq.s
([16](#eq:fastdriven1)) or ([17](#eq:fastdriven2)) respectively. By
writing each factor $2n\pi t/\tau$ as $n\omega t$, with $\omega\equiv
2\pi/\tau$,
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef1"></div>
$$
\begin{equation}
\label{eq:fourierdef1} \tag{32}
F(t)=\frac{f_0}{2}+\sum_{n>0}f_n\cos(n\omega t)+g_n\sin(n\omega t).
\end{equation}
$$
The solutions for $x(t)$ then come from replacing $\omega$ with
$n\omega$ for each term in the particular solution in Equations
([9](#eq:partform)) and ([12](#eq:Ddrive)),
$$
\begin{eqnarray}
x_p(t)&=&\frac{f_0}{2k}+\sum_{n>0} \alpha_n\cos(n\omega t-\delta_n)+\beta_n\sin(n\omega t-\delta_n),\\
\nonumber
\alpha_n&=&\frac{f_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\beta_n&=&\frac{g_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\delta_n&=&\tan^{-1}\left(\frac{2\beta n\omega}{\omega_0^2-n^2\omega^2}\right).
\end{eqnarray}
$$
Because the forces have been applied for a long time, any non-zero
damping eliminates the homogenous parts of the solution, so one need
only consider the particular solution for each $n$.
The problem will be considered solved if one can find expressions for the
coefficients $f_n$ and $g_n$, even though the solutions are expressed
as an infinite sum. The coefficients can be extracted from the
function $F(t)$ by
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef2"></div>
$$
\begin{eqnarray}
\label{eq:fourierdef2} \tag{33}
f_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\cos(2n\pi t/\tau),\\
\nonumber
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\sin(2n\pi t/\tau).
\end{eqnarray}
$$
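These integrals are straightforward to evaluate numerically. The sketch below (an illustration with an assumed test signal, not part of the original notes) recovers known coefficients of a simple trigonometric signal by a direct Riemann sum over one period.

```
import numpy as np

tau = 1.0
N = 100000
t = np.linspace(-tau/2, tau/2, N, endpoint=False)
dt = tau/N

# Assumed test signal with known coefficients: f_2 = 1.0 and g_3 = 0.5
F = np.cos(2*2*np.pi*t/tau) + 0.5*np.sin(3*2*np.pi*t/tau)

def coeffs(n):
    """Fourier coefficients f_n and g_n of Eq. (33), via a Riemann sum."""
    fn = (2/tau)*np.sum(F*np.cos(2*n*np.pi*t/tau))*dt
    gn = (2/tau)*np.sum(F*np.sin(2*n*np.pi*t/tau))*dt
    return fn, gn

f2, _ = coeffs(2)
_, g3 = coeffs(3)
print(f2, g3)   # close to 1.0 and 0.5
```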
To check the consistency of these expressions and to verify
Eq. ([33](#eq:fourierdef2)), one can insert the expansion of $F(t)$ in
Eq. ([32](#eq:fourierdef1)) into the expression for the coefficients in
Eq. ([33](#eq:fourierdef2)) and see whether
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~\left\{
\frac{f_0}{2}+\sum_{m>0}f_m\cos(m\omega t)+g_m\sin(m\omega t)
\right\}\cos(n\omega t).
\end{eqnarray}
$$
Immediately, one can throw away all the terms with $g_m$ because they
convolute an even and an odd function. The term with $f_0/2$
disappears because $\cos(n\omega t)$ is equally positive and negative
over the interval and will integrate to zero. For all the terms
$f_m\cos(m\omega t)$ appearing in the sum, one can use angle addition
formulas to see that $\cos(m\omega t)\cos(n\omega
t)=(1/2)(\cos[(m+n)\omega t]+\cos[(m-n)\omega t])$. This will integrate
to zero unless $m=n$. In that case the $m=n$ term gives
<!-- Equation labels as ordinary links -->
<div id="_auto24"></div>
$$
\begin{equation}
\int_{-\tau/2}^{\tau/2}dt~\cos^2(m\omega t)=\frac{\tau}{2},
\label{_auto24} \tag{34}
\end{equation}
$$
and
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~f_n/2\\
\nonumber
&=&f_n~\checkmark.
\end{eqnarray}
$$
The same method can be used to check for the consistency of $g_n$.
Consider the driving force:
<!-- Equation labels as ordinary links -->
<div id="_auto25"></div>
$$
\begin{equation}
F(t)=At/\tau,~~-\tau/2<t<\tau/2,~~~F(t+\tau)=F(t).
\label{_auto25} \tag{35}
\end{equation}
$$
Find the Fourier coefficients $f_n$ and $g_n$ for all $n$ using Eq. ([33](#eq:fourierdef2)).
Because $F(t)$ is an odd function of $t$, the cosine coefficients all vanish by symmetry, $f_n=0$. One can find $g_n$ by integrating by parts,
<!-- Equation labels as ordinary links -->
<div id="eq:fouriersolution"></div>
$$
\begin{eqnarray}
\label{eq:fouriersolution} \tag{36}
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2}dt~\sin(n\omega t) \frac{At}{\tau}\\
\nonumber
u&=&t,~dv=\sin(n\omega t)dt,~v=-\cos(n\omega t)/(n\omega),\\
\nonumber
g_n&=&\frac{-2A}{n\omega \tau^2}\int_{-\tau/2}^{\tau/2}dt~\cos(n\omega t)
+\left.2A\frac{-t\cos(n\omega t)}{n\omega\tau^2}\right|_{-\tau/2}^{\tau/2}.
\end{eqnarray}
$$
The first term is zero because $\cos(n\omega t)$ will be equally
positive and negative over the interval. Using the fact that
$\omega\tau=2\pi$,
$$
\begin{eqnarray}
g_n&=&-\frac{2A}{2n\pi}\cos(n\omega\tau/2)\\
\nonumber
&=&-\frac{A}{n\pi}\cos(n\pi)\\
\nonumber
&=&\frac{A}{n\pi}(-1)^{n+1}.
\end{eqnarray}
$$
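As a cross-check, this result can be compared with a direct numerical evaluation of the integral in Eq. ([36](#eq:fouriersolution)), using illustrative values $A=\tau=1$:

```
import numpy as np

A, tau = 1.0, 1.0   # illustrative values
N = 200000
t = np.linspace(-tau/2, tau/2, N, endpoint=False)
dt = tau/N
F = A*t/tau   # the sawtooth driving force over one period

gns = []
for n in range(1, 5):
    gn = (2/tau)*np.sum(F*np.sin(2*n*np.pi*t/tau))*dt
    gns.append(gn)
    print(n, gn, (A/(n*np.pi))*(-1)**(n+1))   # numeric vs analytic
```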
## Fourier Series
More text will come here; chapters 5.7-5.8 of Taylor are discussed
during the lectures. The code here uses the Fourier series discussed
in chapter 5.7 for a square wave signal. The equations for the
coefficients are discussed in Taylor section 5.7, see Example
5.4. The code here visualizes the various approximations given by the
Fourier series compared with a square wave with period $T=0.2$, width
$0.1$ and max value $F=2$. We see that when we increase the number of
components in the Fourier series, the Fourier series approximation gets closer and closer to the square wave signal.
```
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
T = 0.2
# Max value of square signal
Fmax = 2.0
# Width of signal
Width = 0.1
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
FourierSeriesSignal = np.zeros(n)
# square wave with period T (note that 1/T = 5 here)
SqrSignal = 1.0 + signal.square(2*np.pi*t/T + np.pi*Width/T)
a0 = Fmax*Width/T
FourierSeriesSignal = a0
Factor = 2.0*Fmax/np.pi
for i in range(1, 500):
    FourierSeriesSignal += Factor/i*np.sin(np.pi*i*Width/T)*np.cos(i*t*2*np.pi/T)
plt.plot(t, SqrSignal)
plt.plot(t, FourierSeriesSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
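The coefficients used in the loop can be checked independently with only the standard library: the partial sums should approach $F_{\mathrm{max}}=2$ at the centre of the pulse and $0$ half a period away. A stand-alone sketch with the same $T$, width, and $F_{\mathrm{max}}$ as above (here the pulse is taken centred at $t=0$, which the phase shift in the code accounts for):

```python
import math

T, Fmax, Width = 0.2, 2.0, 0.1

def partial_sum(t, N):
    # Fourier partial sum of the square pulse with N cosine terms
    s = Fmax * Width / T
    for n in range(1, N + 1):
        s += (2.0 * Fmax / (n * math.pi)) * math.sin(n * math.pi * Width / T) \
             * math.cos(2 * math.pi * n * t / T)
    return s

# convergence at the pulse centre (t = 0) towards Fmax = 2
for N in (10, 100, 1000):
    print(N, partial_sum(0.0, N))
```

Half a period away, `partial_sum(T/2, N)` converges towards 0 in the same way.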
| github_jupyter |
# Adding AI to Your App
There are multiple approaches one can use to leverage AI and ML in their business idea.
1. Building your own proprietary model
2. Using a pre-trained model within your application
3. Leverage Cloud providers to support AI functions in your application
These different approaches have pros and cons. Approach 1 gives you, as a business, the greatest degree of control in terms of ***taking ownership*** and ***being independent***.
This notebook provides code examples for building your proprietary ML model in part 1. Then, in part 2, we look at an example of how a model can be evaluated. This step is quite important, as model evaluation is critical for a business regardless of whether the model is built in-house or provided by an external vendor.
# Part 1: Building your Proprietary AI model
In this section, we look at how an ML model can be built from scratch for your organisation. As the example shows, this is the more complex approach to leveraging ML, as it entails the costs of sourcing the relevant data and the expertise required to build and maintain your in-house model. From a strategic perspective, however, this approach is preferable, as it adds true value to the business: the ML model becomes the intellectual property of your business and gives it a competitive edge. It also frees your business from having to rely on 3rd-party licensing restrictions and terms-of-service agreements governed by the true owners of AI/ML models that may become critical to your business.
## Installing the Python Libraries
The first step is to install the Python libraries that are needed for training our own ML model. There are many off-the-shelf machine learning libraries, available in the majority of programming languages, that let you use well-tested ML algorithms to train models with your own data. These libraries often come with favourable licenses (e.g. Apache 2, MIT and BSD, to name a few) that give you the freedom to use these tools without compromising the legal ownership of the models you train with them.
In this example, we use Python, a programming language that has a very rich ecosystem for data science and machine learning. For the specific implementation, we need [scikit-learn](https://scikit-learn.org/stable/) and [pandas](https://pandas.pydata.org/). We can use [pip](https://pypi.org/project/pip/) Python package manager to install these two libraries.
```
!pip install -U scikit-learn
!pip install pandas
```
## Loading the Data
We use a popular publicly available labelled [sentiment analysis dataset](https://www.cs.cornell.edu/people/pabo/movie-review-data/) to demonstrate the different approaches. We use a star rating dataset from the famous movie review website, [IMDB](https://www.imdb.com/).
First, we load the data from the local disk using the `load_data` function implemented here. Then we convert that dataset into a `pandas.DataFrame` object, which is highly compatible with the `scikit-learn` machine learning library.
```
# import required functions
from os.path import join
from os import listdir
data_dir = "data"
pos_data_dir = join(data_dir, "pos")
neg_data_dir = join(data_dir, "neg")
def load_data(filepath, label):
    files = [join(filepath, filename) for filename in listdir(filepath) if filename.endswith(".txt")]
    records = []
    for file in files:
        with open(file) as f:
            text = f.read()
        records.append({"text": text, "label": label})
    return records
import pandas as pd
from sklearn.utils import shuffle
pos = load_data(pos_data_dir, 1)
neg = load_data(neg_data_dir, 0)
records_df = shuffle(pd.DataFrame(pos + neg)).reset_index(drop=True)
records_df
```
## Train-Test Split
When training a machine learning model, we need to make sure that there is an ***unseen*** set of examples that we can use to evaluate the true performance of the trained machine learning model. A popular approach is to pre-allocate a percentage of the full dataset for testing the trained model and to avoid using that data during the model training process.
`scikit-learn` already provides functions that can easily create this test-data allocation for us. In the following step, we create the train-test data split.
```
from sklearn.model_selection import train_test_split
X = records_df["text"]
y = records_df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
```
### Data Vectorisation
Machine learning models predominantly work with ***numerical representations***: the input we provide to the machine learning algorithm should contain numbers rather than letters and symbols. In this example, we are working with movie reviews, which are text. The process of taking non-numerical data and transforming it into numerical vectors in a sensible manner is called ***data vectorisation*** in data science.
In order to create numerical representations of the text, there are well-tested methods such as extracting the [TFIDF representation](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) of a text document. We use pre-built functions available in `scikit-learn` to vectorise the text. The `scikit-learn` library provides different vectorisation methods (a.k.a. feature extraction) for different modalities such as [text](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_extraction.text) and [images](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_extraction.image).
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectoriser = TfidfVectorizer(stop_words="english")
x_train = vectoriser.fit_transform(X_train)
```
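To make the transformation concrete, here is a hand-rolled sketch of the TF-IDF computation on a made-up toy corpus. Note that `TfidfVectorizer` itself uses a smoothed idf and L2 normalisation, so its exact numbers differ; the sketch only illustrates the idea that frequent-everywhere terms get low weights:

```python
import math

def tfidf(corpus):
    # corpus: list of token lists; returns one {term: tf * idf} dict per document
    n = len(corpus)
    df = {}
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    out = []
    for doc in corpus:
        weights = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)   # term frequency within the document
            idf = math.log(n / df[term])      # rarer terms get a larger idf
            weights[term] = tf * idf
        out.append(weights)
    return out

docs = [["good", "movie"], ["bad", "movie"], ["good", "good", "plot"]]
weights = tfidf(docs)
# "movie" appears in 2 of 3 documents, so it weighs less than the rarer "bad"
print(weights)
```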
### Model Training
After vectorising the data, we choose an appropriate machine learning model and train it by ***fitting the model*** to the training data.
```
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(x_train, y_train)
y_train_pred = model.predict(x_train)
print(list(y_train_pred))
print(list(y_train))
```
### Model Testing
Once the model is trained, we need to see how robust this model is in making predictions on data that it hasn't seen before. We can use the pre-allocated test data to serve this purpose.
```
x_test = vectoriser.transform(X_test)
y_test_pred = model.predict(x_test)
```
## Example Prediction
```
example_text = list(X_test)[1]
example_label = list(y_test)[1]
print("Text: {}\n Actual Label: {}".format(example_text, example_label))
```
### Predicting on example with our proprietary model
```
x_vect = vectoriser.transform([example_text])
y_pred = model.predict(x_vect)
print("Predicted Label: {}".format(y_pred[0]))
```
# Part 2: Evaluation
## Evaluating Accuracy on the Train and Test Data
Evaluating a trained machine learning model is critical to establishing the value it can bring to your business. In this section we look at how we can evaluate the performance of the trained sentiment classification model.
The [accuracy score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) can be used here to evaluate classification accuracy.
### Exercise
Using the `accuracy_score` function in the `sklearn.metrics` module, calculate the classification accuracy of the trained model on both the training data and the testing data (using the actual and predicted labels).
```
# insert code here
train_accuracy = # insert code here
test_accuracy = # insert code here
print("Accuracy of the model on training data: {}".format(train_accuracy))
print("Accuracy of the model on test data: {}".format(test_accuracy))
```
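One possible solution is `accuracy_score(y_train, y_train_pred)` and `accuracy_score(y_test, y_test_pred)`. The metric itself is just the fraction of matching labels, as this stand-alone sketch on made-up toy labels shows:

```python
def accuracy(y_true, y_pred):
    # fraction of positions where the prediction matches the truth;
    # this is the same quantity sklearn.metrics.accuracy_score computes
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# toy example: 3 of the 4 predictions match
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```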
| github_jupyter |
# **Pseudotimes and cell fates**
---------------------------
**Motivation:**
While clustering is a useful type of analysis for giving structure to the development of cells towards their final stage (spermatozoa), it does not show how the development "stretches" from start to end. For example, a cluster can have many cells and look "big" on the UMAP, but its variability in terms of gene expression could actually be low. Also, a developmental process can branch towards different ends (cell fates) or developmental checkpoints (e.g. stages where damaged cells express specific genes for apoptosis/cell death). Pseudotime and cell fate analysis can be used to highlight exactly those processes.
- **Pseudotime** assigns each cell a value on a timeline, starting from 0 for the cells at the beginning of the development. This value is purely a reference for ordering the cells' development, but pseudotimes at specific stages can be mapped to real times using prior biological knowledge.
- **Cell fate analysis** looks at the PCA projection of the data and the pseudotime of each data point on the PCA. From this, it tries to build a tree connecting the cells, so that the end branches of the tree are different end points or stages of the developmental process.

*Figure: cell fate tree on a 3D PCA plot. Circles represent the middle point of each cluster. From Perraudeau et al. (2017)*
---------------------------
**Learning objectives:**
- Understand and determine the pseudotimes on a single cell dataset
- Infer cell fates and distinguish between differentiation stages or actual final developmental stages
- Compare gene expressions along differentiation
- Cluster genes with similar gene expression
----------------
**Execution time: 45 minutes**
---------------
***Import packages***
```
import scanpy as sc
import pandas as pd
import scvelo as scv
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
import anndata as ad
import rpy2.rinterface_lib.callbacks
import logging
from rpy2.robjects import pandas2ri
import anndata2ri
# Ignore R warning messages
#Note: this can be commented out to get more verbose R output
rpy2.rinterface_lib.callbacks.logger.setLevel(logging.ERROR)
# Automatically convert rpy2 outputs to pandas dataframes
pandas2ri.activate()
anndata2ri.activate()
#import os
#os.environ['R_HOME'] = '../../../scrna-environment/lib/R/' #path to your R installation
%load_ext rpy2.ipython
%%R
.libPaths( c( "../../../sandbox_scRNA_testAndFeedback/scrna-environment/lib/R/library/" ) )
%matplotlib inline
```
***Read data***
```
sample = sc.read('../../Data/notebooks_data/sample_123.filt.norm.red.clst.2.h5ad')
```
## Calculate pseudotimes and cell fates
We want to calculate pseudotimes for the spermatogenic process, so we exclude the somatic cells from the analysis.
```
cellsToKeep = [ i not in ['Somatic'] for i in sample.obs['clusters_spc'] ]
sample = sample[ cellsToKeep, : ].copy()
```
We use the `python` package `palantir`. `palantir` will sometimes print annoying warning messages in this notebook, but they are just warnings related to font characters: nothing to worry about.
```
%%capture --no-stdout
import palantir
palantir.core.random.seed( a=12345 ) #define random_state (here called 'a')
```
We create a table (a `pandas` dataframe) with the logarithm of the corrected UMI matrix, since `palantir` needs logarithmized raw counts as input.
```
palantir_data = pd.DataFrame(np.log1p(sample.layers['umi_sct'].todense()),
index=sample.obs_names,
columns=sample.var_names)
```
Instead of letting the package calculate the PCA (without any form of dataset integration), we use our integrated PCA.
```
pca_projections = pd.DataFrame( sample.obsm['X_pca'][:,0:15].copy(),
index=sample.obs_names )
```
Now we will infer the pseudotimes and related cell fates. We have to specify where the differentiation process starts. In our case, we choose one of the cells in the cluster `SpermatogoniaA`; `palantir` will then assign pseudotime 0 to the most appropriate cell in that cluster.
Note the option `num_waypoints=1000` in the last command. This option uses a subset of cells to build the tree from which pseudotimes and cell fates are calculated. It is suggested to use only a portion of the cells in the dataset: using all cells tends to infer many cell fates that are mostly due to noise. In other words, you would build a tree with tiny branches that get detected as cell fates.
```
ORIGIN_STATE = 'SpermatogoniaA' #where to start
sc.tl.diffmap(sample)
diffusionMap = pd.DataFrame(sample.obsm['X_diffmap'][:,1::],
index=sample.obs_names,
columns = [str(i) for i in range(sample.obsm['X_diffmap'].shape[1]-1)])
#apply palantir
start_cell = str(sample[sample.obs['clusters_spc'] == ORIGIN_STATE].obs_names[0]) #assignment of diferentiation start
pr_res = palantir.core.run_palantir( diffusionMap, early_cell=start_cell, num_waypoints=1000) #fate detection
```
We save pseudotimes in our dataset and plot them on UMAP
```
sample.obs['pseudotime'] = pr_res.pseudotime
sc.pl.umap( sample, color=['clusters_spc','pseudotime'],
legend_loc='on data',
legend_fontsize=16,
ncols=2 )
```
We can look at how pseudotimes are distributed within each cluster. The variability of pseudotimes seems to increase along spermatogenesis, with some oscillations. This can mean more variability in the expression of genes in the later clusters (but does not mean that more genes are expressed). Note that there is considerable overlap in pseudotimes: this is due to the fact that pseudotimes have a spike around the Pachytene-Diplotene stages.
```
cluster_names = [i for i in ['SpermatogoniaA', 'SpermatogoniaB', 'Leptotene', 'Zygotene',
'Pachytene', 'SpermatocitesII', 'Diplotene', 'RoundSpermatids',
'ElongSpermatids'] if i in np.array(sample.obs['clusters_spc']) ]
sc.pl.violin(sample, keys='pseudotime', groupby='clusters_spc', rotation=90,
order=cluster_names)
```
## Analysis of cell fates
We can see how many fates we have. Each fate is identified by the barcode of the cell best representing a differentiation stage. In some cases you can have more than two fates.
```
fates = list(pr_res.branch_probs.columns)
fates
```
We can plot them on the UMAP plot. One fate is clearly the end of spermatogenesis, where cells become elongated spermatids and spermatozoa. There is another fate, probably due to something happening during meiosis.
```
#necessary because palantir somewhat disables plotting on notebooks
%matplotlib inline
f, ax = plt.subplots(1, 1)
sc.pl.umap(sample,
           legend_loc='on data',
           legend_fontsize=16, ax=ax, show=False)
coordinates = sample[fates].obsm['X_umap']
ax.plot(coordinates[:, 0], coordinates[:, 1], 'o', markersize=12)
for i in range(coordinates.shape[0]):
    ax.text(coordinates[i, 0] - 1, coordinates[i, 1] - 2, f'Fate {i}')
ax.set_title("Inferred cell fates")
plt.show()
```
We rename the fates instead of using the cell barcodes plotted above.
```
fates = np.array(pr_res.branch_probs.columns)
for i in range(coordinates.shape[0]):
    fates[i] = f'Fate {i}'
pr_res.branch_probs.columns = fates
```
We save in our data the probability that each cell differentiates into each of the fates.
```
for i in pr_res.branch_probs.columns:
    sample.obs[f'branch_prob_{i}'] = pr_res.branch_probs[i]
```
### Recognizing branchings or developmental stages
A good practice is to look at the probabilities of ending in each fate for each cluster. There are two possible scenarios:
- Only one fate: all cells have probability 1 of ending at that fate.
- More than one fate: some fates are actual branchings of the developmental process, and only some cells will have a high probability of ending up in those branchings. Other fates are just midpoints of the developmental process; these will absorb, with probability 1, entire sections of the dataset.
We plot below the probability of each cell (seen by cluster) to end up in a specific fate. Each violin plot corresponds to a single fate.
```
for i in range(coordinates.shape[0]):
    x = sc.pl.violin(sample, groupby='clusters_spc', keys=f'branch_prob_Fate {i}', rotation=90,
                     order=cluster_names, ylabel=f'Probability of Fate {i}')
```
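The two scenarios can also be read off numerically by averaging each fate's branch probability within each cluster. Here is a stand-alone sketch of that summary on made-up probabilities (in the notebook, the values would come from the `branch_prob_...` columns of `sample.obs` grouped by `clusters_spc`):

```python
from collections import defaultdict

# made-up example: per-cell cluster labels and branch probabilities for one fate
clusters = ['SpermatogoniaA', 'SpermatogoniaA', 'Pachytene', 'Pachytene', 'ElongSpermatids']
branch_prob = [0.9, 1.0, 1.0, 1.0, 0.1]

sums = defaultdict(lambda: [0.0, 0])
for cl, p in zip(clusters, branch_prob):
    sums[cl][0] += p
    sums[cl][1] += 1

means = {cl: s / n for cl, (s, n) in sums.items()}
# a fate that is a mere developmental midpoint absorbs all upstream clusters
# with mean probability close to 1
print(means)
```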
### Exploring gene expression and clusters
Here is a script that plots gene expressions of your choice along pseudotimes. This allows you to see how specific genes behave differently for different fates. Expressions are modeled using the fate probabilities we plotted above.
```
import palantir
GENES = ['PIWIL1','PIWIL2','PIWIL3']
GENES = np.intersect1d(GENES, sample.var_names)
NGENES = len(GENES)
CLUSTERS = sample.obs['clusters_spc']
PSEUDOTIMES = sample.obs['pseudotime']
gene_trends = palantir.presults.compute_gene_trends(pr_res,
pd.DataFrame(sample.layers['norm_sct'],
index=sample.obs_names,
columns=sample.var_names).loc[:, GENES]
)
plt.rcParams['figure.figsize'] = (12, 4*int(NGENES))
fig, ax = plt.subplots(NGENES, 1)
c = CLUSTERS
x = PSEUDOTIMES
if(NGENES==1):
    x2 = []
    t = []
    style = []
    for FATE in list(gene_trends.keys()):
        ARRAY = np.array( gene_trends[FATE]['trends'].loc[GENES[0],:].index )
        for i in ARRAY:
            idx = np.argmin(np.abs(x - i))
            x2.append(c[idx])
            t.append(i)
        if(len(style)==0):
            style = np.tile( FATE, 500 )
            y = np.array(gene_trends[FATE]['trends'].loc[GENES[0],:])
        else:
            style = np.append(arr=style,
                              values=np.tile( FATE, 500 ))
            y = np.append(arr=y,
                          values=np.array(gene_trends[FATE]['trends'].loc[GENES[0],:]))
    sns.lineplot(x=t,
                 y=y, ci=False,
                 hue=x2, ax=ax, style=style,
                 linewidth=5)
    ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
    ax.set(xlabel='Pseudotime', ylabel=GENES[0])
if(NGENES>1):
    for GENE_NR in range(NGENES):
        style = []
        x2 = []
        t = []
        for FATE in list(gene_trends.keys()):
            ARRAY = np.array( gene_trends[FATE]['trends'].loc[GENES[GENE_NR],:].index )
            for i in ARRAY:
                idx = np.argmin(np.abs(x - i))
                x2.append(c[idx])
                t.append(i)
            if(len(style)==0):
                style = np.tile( FATE, 500 )
                y = np.array(gene_trends[FATE]['trends'].loc[GENES[GENE_NR],:])
            else:
                style = np.append(arr=style,
                                  values=np.tile( FATE, 500 ))
                y = np.append(arr=y,
                              values=np.array(gene_trends[FATE]['trends'].loc[GENES[GENE_NR],:]))
        sns.lineplot(x=t,
                     y=y, ci=False,
                     hue=x2, ax=ax[GENE_NR],
                     style=style, linewidth=5, legend=GENE_NR==0)
        ax[GENE_NR].set(ylabel = GENES[GENE_NR])
        if(GENE_NR==0):
            ax[0].legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
            ax[0].set_title(f'Gene expression along the fates:\n{list(gene_trends.keys())}')
        ax[GENE_NR].set(xlabel = 'Pseudotime')
plt.rcParams['figure.figsize'] = (6, 6)
```
**Gene clustering:**
A last thing you can do is to cluster together genes that have similar expression patterns. We can do this separately for each fate; here you can only look at one fate at a time.
To make the clustering faster, we cluster only the differentially expressed genes found in the previous analysis. However, below you can define the variable `genes` as any list of genes: you can, for example, read them from a text file, or use all genes by writing `genes=list(sample.var_names)`.
```
genes = []
for names in sample.uns['DE_clusters_spc']['names']:
    genes.append(list(names))
genes = np.unique(np.ravel(genes))
```
model the gene expression along pseudotime
```
gene_trends = palantir.presults.compute_gene_trends( pr_res,
pd.DataFrame(sample[ :, genes ].layers['norm_sct'],
index=sample[ :, genes ].obs_names,
columns=sample[ :, genes ].var_names) )
```
Cluster the expressions together and plot the clusters. If you think there should be more clusters than the algorithm finds, you can try increasing their number by changing the value of `k=20`. Usually, you should see many genes (in gray) expressed differently from the cluster's averaged expression (in blue).
```
trends = gene_trends['Fate 0']['trends']
gene_clusters = palantir.presults.cluster_gene_trends(trends, k=20)
palantir.plot.plot_gene_trend_clusters(trends, gene_clusters)
```
Here is a script to produce the plot as above, with averaged expression of each gene cluster coloured by cell types, together with confidence bands. It takes some time to do all the plots, so be patient.
```
GENE_CLST = np.array(gene_clusters)
UNIQUE_CLST = np.sort(np.unique(GENE_CLST))
CLST_NR = int(len(UNIQUE_CLST))
CLUSTERS = sample.obs['clusters_spc']
PSEUDOTIMES = sample.obs['pseudotime']
plt.rcParams['figure.figsize'] = (12, 4*CLST_NR)
fig, ax = plt.subplots(CLST_NR, 1)
c = CLUSTERS
x = PSEUDOTIMES
if(CLST_NR==1):
    t = []
    x2 = []
    ARRAY = np.array( trends.columns )
    for i in ARRAY:
        idx = np.argmin(np.abs(x - i))
        x2.append(c[idx])
        t.append(i)
    x = np.tile(ARRAY, trends.loc[GENE_CLST==0,:].shape[0])
    y = np.array(trends.loc[GENE_CLST==0,:]).ravel()
    hue = np.tile(x2, trends.loc[GENE_CLST==0,:].shape[0])
    ax = sns.lineplot(x=x, y=y, hue=hue)
    sns.lineplot(x=x, y=y, hue=hue,
                 ax=ax, linewidth=5)
    ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
if(CLST_NR>1):
    ARRAY = np.array( trends.columns )
    t = []
    x2 = []
    for i in ARRAY:
        idx = np.argmin(np.abs(x - i))
        x2.append(c[idx])
        t.append(i)
    for CLST_NR in UNIQUE_CLST:
        x = np.tile(ARRAY, trends.loc[GENE_CLST==CLST_NR,:].shape[0])
        y = np.array(trends.loc[GENE_CLST==CLST_NR,:]).ravel()
        hue = np.tile(x2, trends.loc[GENE_CLST==CLST_NR,:].shape[0])
        sns.lineplot(x=x, y=y, hue=hue,
                     ax=ax[CLST_NR], linewidth=5, legend=CLST_NR==0)
        ax[CLST_NR].set(ylabel = f'Cluster {CLST_NR}')
        if(CLST_NR==0):
            ax[CLST_NR].legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
            ax[CLST_NR].set_title('Gene expression clustering for cell fate 0')
        ax[CLST_NR].set(xlabel = 'Pseudotime')
plt.rcParams['figure.figsize'] = (6, 6)
```
You can always look at the genes in a specific cluster. In this case, each cluster should match quite closely the differentially expressed genes for a cell type, since we grouped differentially expressed genes together.
```
gene_clusters[gene_clusters==5]
```
We also want to save the dataset (including somatic cells) with pseudotimes. To do this we reopen the whole dataset and assign pseudotime 0 to the somatic cells.
```
whole_sample = sc.read('../../Data/notebooks_data/sample_123.filt.norm.red.clst.2.h5ad')
times = pd.Series(sample.obs['pseudotime'], index=sample.obs_names)
whole_times = pd.Series(index=whole_sample.obs_names)
names = sample.obs_names
whole_names = whole_sample.obs_names
whole_times = [ times[i] if i in names else 0 for i in whole_names ]
whole_sample.obs['pseudotimes'] = whole_times
whole_sample.write('../../Data/notebooks_data/sample_123.filt.norm.red.clst.2.times.h5ad')
```
## Wrapping up
This notebook shows how to do pseudotime analysis and explore cell fates and gene expressions.
We have seen how to distinguish between an actual differentiation branch and a differentiation stage. Basically, all cells before (i.e. earlier in pseudotime than) a differentiation stage will be associated with that stage with high probability, because they must go through that developmental stage. Finding a developmental stage around meiosis in spermatogenic samples is a common result across single-cell datasets of many species (primates, humans, mice).
Using the `palantir` software, we can look at differences between gene expressions for different fates, and cluster together genes of interest for further analysis.
| github_jupyter |
# Data Processing
The project has five steps:
- delete irregular (too large, or too small/empty) and non-image data
- remove duplicate images
- remove irrelevant images
- split the dataset: create classes.txt, train.txt, test.txt
- rename images
### Deleting irregular images
```
import os
import sys
import imghdr
class ImageDelet():
    def __init__(self):
        self.path = '/home/gpu/Project/dataProcess/bun/'
        self.imageTypes = ['.jpg', '.jpeg', '.png', '.gif']

    def delet(self):
        filelist = os.listdir(self.path)
        total_num = len(filelist)
        delet_count = 0
        for item in filelist:
            src = os.path.join(os.path.abspath(self.path), item)
            image_type = os.path.splitext(src)[-1]
            if not imghdr.what(src):
                os.remove(src)  # delete corrupted image
                delet_count += 1
            elif image_type in self.imageTypes:
                # os.path.getsize returns the file size in bytes
                # (sys.getsizeof would measure the path string, not the file);
                # files below ~150 bytes hold no usable image data
                imageSize = os.path.getsize(src)
                if imageSize < 150:
                    os.remove(src)
                    delet_count += 1
            else:
                os.remove(src)  # delete non-image data
                delet_count += 1
        print('Total: %d\nDeleted: %d' % (total_num, delet_count))

deletImage = ImageDelet()
deletImage.delet()
```
### Renaming the images (both freshly downloaded by the web crawler and already-processed ones)
```
class ImageRename():
    def __init__(self):
        self.path = '/home/gpu/Project/dataProcess/bun/'

    def rename(self):
        filelist = os.listdir(self.path)
        # sort first: otherwise a later rename could overwrite a file
        # that has not been renamed yet
        filelist.sort()
        total_num = len(filelist)
        rename_count = 0
        for item in filelist:
            src = os.path.join(os.path.abspath(self.path), item)
            image_type = os.path.splitext(src)[-1]
            dst = os.path.join(os.path.abspath(self.path), str(rename_count).zfill(4) + image_type)
            os.rename(src, dst)
            print('converting %s to %s ...' % (src, dst))
            rename_count += 1
        print('Total: %d\nRenamed: %d' % (total_num, rename_count))

newName = ImageRename()
newName.rename()
```
### Removing the duplicate images
```
# Perceptual Hash Algorithm - dHash
import cv2

def dhash(image):
    # resize to 9x8 so each of the 8 rows yields 8 horizontal comparisons
    image = cv2.resize(image, (9, 8), interpolation=cv2.INTER_CUBIC)
    # convert image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    dhash_str = ''
    for i in range(8):
        for j in range(8):
            if gray[i, j] > gray[i, j + 1]:
                dhash_str = dhash_str + '1'
            else:
                dhash_str = dhash_str + '0'
    result = ''
    for i in range(0, 64, 4):
        result += ''.join('%x' % int(dhash_str[i: i + 4], 2))
    return result

# calculate the difference between hash1 and hash2
def campHash(hash1, hash2):
    n = 0
    # If the hash lengths differ, the comparison cannot be made; return -1.
    if len(hash1) != len(hash2):
        return -1
    # If the hash lengths are the same, traverse hash1 and hash2 for comparison.
    for i in range(len(hash1)):
        if hash1[i] != hash2[i]:
            n = n + 1
    return n

image1 = cv2.imread('/home/gpu/Project/dataProcess/bun/0017.jpg')
image2 = cv2.imread('/home/gpu/Project/dataProcess/bun/0018.jpeg')
hash1 = dhash(image1)
hash2 = dhash(image2)
distance_hash = campHash(hash1, hash2)
# if campHash returns 0, the two images are duplicates
image2_path = '/home/gpu/Project/dataProcess/bun/0012.jpeg'
if distance_hash == 0:
    os.remove(image2_path)
```
### Removing the irrelevant images
```
# dhash and campHash are defined in the cell above and reused here
image1 = cv2.imread('/home/gpu/Project/dataProcess/bun/0017.jpg')
image2 = cv2.imread('/home/gpu/Project/dataProcess/bun/0013.jpeg')
hash1 = dhash(image1)
hash2 = dhash(image2)
distance_hash = campHash(hash1, hash2)
# if campHash returns more than 10, the two images are considered different classes
image2_path = '/home/gpu/Project/dataProcess/bun/0012.jpeg'
if distance_hash > 10:
    os.remove(image2_path)
```
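Hashing each file once and then comparing hash strings scales the pairwise checks above to a whole folder. Here is a stand-alone sketch of that grouping logic on made-up precomputed hex hashes; note it counts differing *bits* (a common dHash convention), whereas `campHash` above counts differing hex characters:

```python
def hamming_hex(h1, h2):
    # number of differing bits between two equal-length hex hash strings
    if len(h1) != len(h2):
        return -1
    return sum(bin(int(a, 16) ^ int(b, 16)).count('1') for a, b in zip(h1, h2))

def group_duplicates(hashes, threshold=0):
    # hashes: {filename: hex dHash}; returns pairs at Hamming distance <= threshold
    names = sorted(hashes)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            d = hamming_hex(hashes[names[i]], hashes[names[j]])
            if 0 <= d <= threshold:
                pairs.append((names[i], names[j]))
    return pairs

# made-up hashes: 'a.jpg' and 'b.jpg' are identical, 'c.jpg' differs everywhere
example = {'a.jpg': 'f0f0f0f0f0f0f0f0',
           'b.jpg': 'f0f0f0f0f0f0f0f0',
           'c.jpg': '0f0f0f0f0f0f0f0f'}
print(group_duplicates(example))  # [('a.jpg', 'b.jpg')]
```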
### Splitting the dataset
#### Generate the train.txt, test.txt, and classes.txt.
```
dataset_path = '/home/gpu/Project/dataProcess/'

def generateClass(dataset_path):
    # skip hidden files such as .DS_Store; build a filtered list instead of
    # removing entries while iterating, which would skip elements
    filelist = [name for name in os.listdir(dataset_path) if not name.startswith('.')]
    filelist.sort()
    class_savePath = '/home/gpu/Project/dataProcess/meta/class.txt'
    # If the file does not exist, it is created automatically.
    # 'w' truncates any existing content before writing.
    with open(class_savePath, 'w') as f:
        for file_name in filelist:
            f.write(file_name)
            f.write('\n')

generateClass(dataset_path)
import math

def splitDataset(dataset_path):
    # build filtered lists instead of removing entries while iterating,
    # which would skip elements
    filelist = [name for name in os.listdir(dataset_path) if not name.startswith('.')]
    filelist.sort()
    train_savePath = '/home/gpu/Project/dataProcess/meta/train.txt'
    test_savePath = '/home/gpu/Project/dataProcess/meta/test.txt'
    for file_name in filelist:
        image_path = dataset_path + file_name
        image_list = [name for name in os.listdir(image_path) if not name.startswith('.')]
        image_size = len(image_list)
        train_size = math.ceil(image_size * 0.75)
        # If the file does not exist, it is created automatically.
        # 'a' appends, so existing content is kept.
        with open(train_savePath, 'a') as train:
            for image_name in image_list[:train_size]:
                train.write(image_name)
                train.write('\n')
        with open(test_savePath, 'a') as test:
            for image_name in image_list[train_size:]:
                test.write(image_name)
                test.write('\n')

splitDataset(dataset_path)
```
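Because the split uses `math.ceil`, the extra image of an uneven division always goes to the training set: a class with 10 images contributes 8 to train and 2 to test. A quick stand-alone sketch of that boundary behaviour:

```python
import math

def split_counts(n, frac=0.75):
    # mirror the notebook's partition: math.ceil sends the remainder to train
    train = math.ceil(n * frac)
    return train, n - train

print(split_counts(10))  # (8, 2)
print(split_counts(7))   # (6, 1)
```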
| github_jupyter |
```
import pymc3 as pm
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import theano.tensor as tt
import theano
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
df = pd.read_csv('../datasets/bikes/hour.csv')
df
feature_cols = ['workingday', 'holiday', 'temp', 'atemp', 'hum', 'windspeed']
out_col = ['cnt']
df[out_col]
X = pm.floatX(df[feature_cols])
Y = pm.floatX(df[out_col].apply(np.log10))
n_hidden = X.shape[1]
with pm.Model() as nn_model:
    w1 = pm.Normal('w1', mu=0, sd=1, shape=(X.shape[1], n_hidden))
    w2 = pm.Normal('w2', mu=0, sd=1, shape=(n_hidden, 1))
    b1 = pm.Normal('b1', mu=0, sd=1, shape=(n_hidden,))
    b2 = pm.Normal('b2', mu=0, sd=1, shape=(1,))
    a1 = pm.Deterministic('a1', tt.nnet.relu(tt.dot(X, w1) + b1))
    a2 = pm.Deterministic('a2', tt.dot(a1, w2) + b2)
    output = pm.Normal('likelihood', mu=a2, observed=Y)
with pm.Model() as three_layer_model:
    w1 = pm.Normal('w1', mu=0, sd=1, shape=(X.shape[1], n_hidden))
    w2 = pm.Normal('w2', mu=0, sd=1, shape=(n_hidden, n_hidden))
    w3 = pm.Normal('w3', mu=0, sd=1, shape=(n_hidden, 1))
    b1 = pm.Normal('b1', mu=0, sd=1, shape=(n_hidden,))
    b2 = pm.Normal('b2', mu=0, sd=1, shape=(n_hidden,))
    b3 = pm.Normal('b3', mu=0, sd=1, shape=(1,))
    a1 = pm.Deterministic('a1', tt.nnet.relu(tt.dot(X, w1) + b1))
    a2 = pm.Deterministic('a2', tt.nnet.relu(tt.dot(a1, w2) + b2))
    a3 = pm.Deterministic('a3', tt.dot(a2, w3) + b3)
    sd = pm.HalfCauchy('sd', beta=1)
    output = pm.Normal('likelihood', mu=a3, sd=sd, observed=Y)
with pm.Model() as linreg_model:
    w1 = pm.Normal('w1', mu=0, sd=1, shape=(X.shape[1], 1))
    b1 = pm.Normal('b1', mu=0, sd=1, shape=(1,))
    a1 = pm.Deterministic('a1', tt.dot(X, w1) + b1)
    sd = pm.HalfCauchy('sd', beta=1)
    output = pm.Normal('likelihood', mu=a1, sd=sd, observed=Y)
with linreg_model:
    s = theano.shared(pm.floatX(1.1))
    inference = pm.ADVI(cost_part_grad_scale=s, learning_rate=.01)
    approx = pm.fit(200000, method=inference)
plt.plot(inference.hist)
with linreg_model:
    trace = approx.sample(2000)
pm.traceplot(trace, varnames=['w1', 'b1'])
with linreg_model:
    samps = pm.sample_ppc(trace)
samps['likelihood'].std(axis=0)
samps['likelihood'].mean(axis=0)
from sklearn.metrics import mean_squared_error as mse
mse(Y, samps['likelihood'].mean(axis=0))
plt.scatter(samps['likelihood'].mean(axis=0).squeeze(), Y.values)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Federated Learning for Text Generation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.15.0/docs/tutorials/federated_learning_for_text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.15.0/docs/tutorials/federated_learning_for_text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
**NOTE**: This colab has been verified to work with the [latest released version](https://github.com/tensorflow/federated#compatibility) of the `tensorflow_federated` pip package, but the Tensorflow Federated project is still in pre-release development and may not work on `master`.
This tutorial builds on the concepts in the [Federated Learning for Image Classification](federated_learning_for_image_classification.ipynb) tutorial, and demonstrates several other useful approaches for federated learning.
In particular, we load a previously trained Keras model, and refine it using federated training on a (simulated) decentralized dataset. This is practically important for several reasons. The ability to use serialized models makes it easy to mix federated learning with other ML approaches. Further, this allows use of an increasing range of pre-trained models --- for example, training language models from scratch is rarely necessary, as numerous pre-trained models are now widely available (see, e.g., [TF Hub](https://www.tensorflow.org/hub)). Instead, it makes more sense to start from a pre-trained model, and refine it using Federated Learning, adapting to the particular characteristics of the decentralized data for a particular application.
For this tutorial, we start with a RNN that generates ASCII characters, and refine it via federated learning. We also show how the final weights can be fed back to the original Keras model, allowing easy evaluation and text generation using standard tools.
```
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated
import collections
import functools
import os
import time
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
np.random.seed(0)
# Test the TFF is working:
tff.federated_computation(lambda: 'Hello, World!')()
```
## Load a pre-trained model
We load a model that was pre-trained following the TensorFlow tutorial
[Text generation using a RNN with eager execution](https://www.tensorflow.org/tutorials/sequences/text_generation). However,
rather than training on [The Complete Works of Shakespeare](http://www.gutenberg.org/files/100/100-0.txt), we pre-trained the model on the text from the Charles Dickens'
[A Tale of Two Cities](http://www.ibiblio.org/pub/docs/books/gutenberg/9/98/98.txt)
and
[A Christmas Carol](http://www.ibiblio.org/pub/docs/books/gutenberg/4/46/46.txt).
Other than expanding the vocabulary, we didn't modify the original tutorial, so this initial model isn't state-of-the-art, but it produces reasonable predictions and is sufficient for our tutorial purposes. The final model was saved with `tf.keras.models.save_model(include_optimizer=False)`.
We will use federated learning to fine-tune this model for Shakespeare in this tutorial, using a federated version of the data provided by TFF.
### Generate the vocab lookup tables
```
# A fixed vocabulary of ASCII chars that occur in the works of Shakespeare and Dickens:
vocab = list('dhlptx@DHLPTX $(,048cgkoswCGKOSW[_#\'/37;?bfjnrvzBFJNRVZ"&*.26:\naeimquyAEIMQUY]!%)-159\r')
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
```
### Load the pre-trained model and generate some text
```
def load_model(batch_size):
    urls = {
        1: 'https://storage.googleapis.com/tff-models-public/dickens_rnn.batch1.kerasmodel',
        8: 'https://storage.googleapis.com/tff-models-public/dickens_rnn.batch8.kerasmodel'}
    assert batch_size in urls, 'batch_size must be in ' + str(urls.keys())
    url = urls[batch_size]
    local_file = tf.keras.utils.get_file(os.path.basename(url), origin=url)
    return tf.keras.models.load_model(local_file, compile=False)

def generate_text(model, start_string):
    # From https://www.tensorflow.org/tutorials/sequences/text_generation
    num_generate = 200
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)
    text_generated = []
    temperature = 1.0

    model.reset_states()
    for i in range(num_generate):
        predictions = model(input_eval)
        predictions = tf.squeeze(predictions, 0)
        predictions = predictions / temperature
        predicted_id = tf.random.categorical(
            predictions, num_samples=1)[-1, 0].numpy()
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])

    return (start_string + ''.join(text_generated))
# Text generation requires a batch_size=1 model.
keras_model_batch1 = load_model(batch_size=1)
print(generate_text(keras_model_batch1, 'What of TensorFlow Federated, you ask? '))
```
## Load and Preprocess the Federated Shakespeare Data
The `tff.simulation.datasets` package provides a variety of datasets that are split into "clients", where each client corresponds to a dataset on a particular device that might participate in federated learning.
These datasets provide realistic non-IID data distributions that replicate in simulation the challenges of training on real decentralized data. Some of the pre-processing of this data was done using tools from the [Leaf project](https://arxiv.org/abs/1812.01097) ([github](https://github.com/TalwalkarLab/leaf)).
```
train_data, test_data = tff.simulation.datasets.shakespeare.load_data()
```
The datasets provided by `shakespeare.load_data()` consist of a sequence of
string `Tensors`, one for each line spoken by a particular character in a
Shakespeare play. The client keys consist of the name of the play joined with
the name of the character, so for example `MUCH_ADO_ABOUT_NOTHING_OTHELLO` corresponds to the lines for the character Othello in the play *Much Ado About Nothing*. Note that in a real federated learning scenario
clients are never identified or tracked by ids, but for simulation it is useful
to work with keyed datasets.
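The keyed-dataset idea can be illustrated with a toy stand-in (plain Python, not the real TFF API): each client id maps to that client's local examples, mirroring what `create_tf_dataset_for_client` does for the real simulation data. The lines below are made up for illustration.

```python
# Toy stand-in for keyed simulation data (NOT the real TFF API):
# each client id maps to that character's spoken lines.
toy_federated_data = {
    'THE_TRAGEDY_OF_KING_LEAR_KING': [
        'Attend the lords of France and Burgundy, Gloucester.'],
    'THE_TRAGEDY_OF_KING_LEAR_FOOL': [
        'Have more than thou showest,', 'Speak less than thou knowest,'],
}

def create_dataset_for_client(keyed_data, client_id):
    """Return one simulated client's local dataset (a list of snippets)."""
    return keyed_data[client_id]
```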
Here, for example, we can look at some data from King Lear:
```
# Here the play is "The Tragedy of King Lear" and the character is "King".
raw_example_dataset = train_data.create_tf_dataset_for_client(
'THE_TRAGEDY_OF_KING_LEAR_KING')
# To allow for future extensions, each entry x
# is an OrderedDict with a single key 'snippets' which contains the text.
for x in raw_example_dataset.take(2):
print(x['snippets'])
```
We now use `tf.data.Dataset` transformations to prepare this data for training the char RNN loaded above.
```
# Input pre-processing parameters
SEQ_LENGTH = 100
BATCH_SIZE = 8
BUFFER_SIZE = 10000 # For dataset shuffling
# Construct a lookup table to map string chars to indexes,
# using the vocab loaded above:
table = tf.lookup.StaticHashTable(
tf.lookup.KeyValueTensorInitializer(
keys=vocab, values=tf.constant(list(range(len(vocab))),
dtype=tf.int64)),
default_value=0)
def to_ids(x):
    s = tf.reshape(x['snippets'], shape=[1])
    chars = tf.strings.bytes_split(s).values
    ids = table.lookup(chars)
    return ids

def split_input_target(chunk):
    input_text = tf.map_fn(lambda x: x[:-1], chunk)
    target_text = tf.map_fn(lambda x: x[1:], chunk)
    return (input_text, target_text)

def preprocess(dataset):
    return (
        # Map ASCII chars to int64 indexes using the vocab
        dataset.map(to_ids)
        # Split into individual chars
        .unbatch()
        # Form example sequences of SEQ_LENGTH + 1
        .batch(SEQ_LENGTH + 1, drop_remainder=True)
        # Shuffle and form minibatches
        .shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
        # And finally split into (input, target) tuples,
        # each of length SEQ_LENGTH.
        .map(split_input_target))
```
Note that in the formation of the original sequences and in the formation of
batches above, we use `drop_remainder=True` for simplicity. This means that any
characters (clients) that don't have at least `(SEQ_LENGTH + 1) * BATCH_SIZE`
chars of text will have empty datasets. A typical approach to address this would
be to pad the batches with a special token, and then mask the loss to not take
the padding tokens into account.
This would complicate the example somewhat, so for this tutorial we only use full batches, as in the
[standard tutorial](https://www.tensorflow.org/tutorials/sequences/text_generation).
However, in the federated setting this issue is more significant, because many
users might have small datasets.
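The pad-and-mask alternative described above can be made concrete with a minimal NumPy sketch (the tutorial does not implement this; the pad id `0` and the helper names are assumptions for illustration): sequences are right-padded to a fixed length, and the loss is averaged only over non-pad positions.

```python
import numpy as np

PAD_ID = 0  # assumed: id 0 is reserved for the padding token

def pad_batch(seqs, length, pad_id=PAD_ID):
    """Right-pad (or truncate) each id sequence to `length`."""
    out = np.full((len(seqs), length), pad_id, dtype=np.int64)
    for i, s in enumerate(seqs):
        n = min(len(s), length)
        out[i, :n] = s[:n]
    return out

def masked_nll(logits, targets, pad_id=PAD_ID):
    """Mean negative log-likelihood over non-pad positions only."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -np.take_along_axis(log_probs, targets[..., None], axis=-1)[..., 0]
    mask = (targets != pad_id).astype(np.float64)    # 0 at pad positions
    return (nll * mask).sum() / mask.sum()
```

With uniform logits the masked loss equals `log(vocab_size)`, regardless of how much padding the batch contains.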
Now we can preprocess our `raw_example_dataset`, and check the types:
```
example_dataset = preprocess(raw_example_dataset)
print(example_dataset.element_spec)
```
## Compile the model and test on the preprocessed data
We loaded an uncompiled keras model, but in order to run `keras_model.evaluate`, we need to compile it with a loss and metrics. We will also compile in an optimizer, which will be used as the on-device optimizer in Federated Learning.
The original tutorial didn't have char-level accuracy (the fraction
of predictions where the highest probability was put on the correct
next char). This is a useful metric, so we add it.
However, we need to define a new metric class for this because
our predictions have rank 3 (a vector of logits for each of the
`BATCH_SIZE * SEQ_LENGTH` predictions), and `SparseCategoricalAccuracy`
expects only rank 2 predictions.
```
class FlattenedCategoricalAccuracy(tf.keras.metrics.SparseCategoricalAccuracy):

    def __init__(self, name='accuracy', dtype=tf.float32):
        super().__init__(name, dtype=dtype)

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.reshape(y_true, [-1, 1])
        y_pred = tf.reshape(y_pred, [-1, len(vocab), 1])
        return super().update_state(y_true, y_pred, sample_weight)
```
Now we can compile a model, and evaluate it on our `example_dataset`.
```
BATCH_SIZE = 8 # The training and eval batch size for the rest of this tutorial.
keras_model = load_model(batch_size=BATCH_SIZE)
keras_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[FlattenedCategoricalAccuracy()])
# Confirm that loss is much lower on Shakespeare than on random data
loss, accuracy = keras_model.evaluate(example_dataset.take(5), verbose=0)
print(
    'Evaluating on an example Shakespeare character: {a:.3f}'.format(a=accuracy))
# As a sanity check, we can construct some completely random data, where we expect
# the accuracy to be essentially random:
random_guessed_accuracy = 1.0 / len(vocab)
print('Expected accuracy for random guessing: {a:.3f}'.format(
a=random_guessed_accuracy))
random_indexes = np.random.randint(
low=0, high=len(vocab), size=1 * BATCH_SIZE * (SEQ_LENGTH + 1))
data = collections.OrderedDict(
snippets=tf.constant(
''.join(np.array(vocab)[random_indexes]), shape=[1, 1]))
random_dataset = preprocess(tf.data.Dataset.from_tensor_slices(data))
loss, accuracy = keras_model.evaluate(random_dataset, steps=10, verbose=0)
print('Evaluating on completely random data: {a:.3f}'.format(a=accuracy))
```
## Fine-tune the model with Federated Learning
TFF serializes all TensorFlow computations so they can potentially be run in a
non-Python environment (even though at the moment, only a simulation runtime implemented in Python is available). Even though we are running in eager mode (TF 2.0), TFF currently serializes TensorFlow computations by constructing the
necessary ops inside the context of a `with tf.Graph().as_default()` statement.
Thus, we need to provide a function that TFF can use to introduce our model into
a graph it controls. We do this as follows:
```
# Clone the keras_model inside `create_tff_model()`, which TFF will
# call to produce a new copy of the model inside the graph that it will
# serialize. Note: we want to construct all the necessary objects we'll need
# _inside_ this method.
def create_tff_model():
    # TFF uses an `input_spec` so it knows the types and shapes
    # that your model expects.
    input_spec = example_dataset.element_spec
    keras_model_clone = tf.keras.models.clone_model(keras_model)
    return tff.learning.from_keras_model(
        keras_model_clone,
        input_spec=input_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[FlattenedCategoricalAccuracy()])
```
Now we are ready to construct a Federated Averaging iterative process, which we will use to improve the model (for details on the Federated Averaging algorithm, see the paper [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629)).
We use a compiled Keras model to perform standard (non-federated) evaluation after each round of federated training. This is useful for research purposes when doing simulated federated learning and there is a standard test dataset.
In a realistic production setting this same technique might be used to take models trained with federated learning and evaluate them on a centralized benchmark dataset for testing or quality assurance purposes.
```
# This command builds all the TensorFlow graphs and serializes them:
fed_avg = tff.learning.build_federated_averaging_process(
model_fn=create_tff_model,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(lr=0.5))
```
Here is the simplest possible loop, where we run federated averaging for one round on a single client on a single batch:
```
state = fed_avg.initialize()
state, metrics = fed_avg.next(state, [example_dataset.take(5)])
print('loss={l:.3f}, accuracy={a:.3f}'.format(
l=metrics.train.loss, a=metrics.train.accuracy))
```
Now let's write a slightly more interesting training and evaluation loop.
So that this simulation still runs relatively quickly, we train on the same three clients each round, only considering two minibatches for each.
```
def data(client, source=train_data):
    return preprocess(
        source.create_tf_dataset_for_client(client)).take(5)
clients = ['ALL_S_WELL_THAT_ENDS_WELL_CELIA',
'MUCH_ADO_ABOUT_NOTHING_OTHELLO',
'THE_TRAGEDY_OF_KING_LEAR_KING']
train_datasets = [data(client) for client in clients]
# We concatenate the test datasets for evaluation with Keras.
test_dataset = functools.reduce(
lambda d1, d2: d1.concatenate(d2),
[data(client, test_data) for client in clients])
```
The initial state of the model produced by `fed_avg.initialize()` is based
on the random initializers for the Keras model, not the weights that were loaded,
since `clone_model()` does not clone the weights. To start training
from a pre-trained model, we set the model weights in the server state
directly from the loaded model.
```
NUM_ROUNDS = 5
# The state of the FL server, containing the model and optimization state.
state = fed_avg.initialize()
state = tff.learning.state_with_new_model_weights(
state,
trainable_weights=[v.numpy() for v in keras_model.trainable_weights],
non_trainable_weights=[
v.numpy() for v in keras_model.non_trainable_weights
])
def keras_evaluate(state, round_num):
    # Take our global model weights and push them back into a Keras model to
    # use its standard `.evaluate()` method.
    keras_model = load_model(batch_size=BATCH_SIZE)
    keras_model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[FlattenedCategoricalAccuracy()])
    tff.learning.assign_weights_to_keras_model(keras_model, state.model)
    loss, accuracy = keras_model.evaluate(example_dataset, steps=2, verbose=0)
    print('\tEval: loss={l:.3f}, accuracy={a:.3f}'.format(l=loss, a=accuracy))

for round_num in range(NUM_ROUNDS):
    print('Round {r}'.format(r=round_num))
    keras_evaluate(state, round_num)
    state, metrics = fed_avg.next(state, train_datasets)
    print('\tTrain: loss={l:.3f}, accuracy={a:.3f}'.format(
        l=metrics.train.loss, a=metrics.train.accuracy))

keras_evaluate(state, NUM_ROUNDS + 1)
```
With the default changes, we haven't done enough training to make a big difference, but if you train longer on more Shakespeare data, you should see a difference in the style of the text generated with the updated model:
```
# Set our newly trained weights back in the originally created model.
keras_model_batch1.set_weights([v.numpy() for v in keras_model.weights])
# Text generation requires batch_size=1
print(generate_text(keras_model_batch1, 'What of TensorFlow Federated, you ask? '))
```
## Suggested extensions
This tutorial is just the first step! Here are some ideas for how you might try extending this notebook:
* Write a more realistic training loop where you sample clients to train on randomly.
* Use "`.repeat(NUM_EPOCHS)`" on the client datasets to try multiple epochs of local training (e.g., as in [McMahan et. al.](https://arxiv.org/abs/1602.05629)). See also [Federated Learning for Image Classification](federated_learning_for_image_classification.ipynb) which does this.
* Change the `compile()` command to experiment with using different optimization algorithms on the client.
* Try the `server_optimizer` argument to `build_federated_averaging_process` to try different algorithms for applying the model updates on the server.
* Try the `client_weight_fn` argument to `build_federated_averaging_process` to try different weightings of the clients. The default weights client updates by the number of examples on the client, but you can do e.g. `client_weight_fn=lambda _: tf.constant(1.0)`.
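As a starting point for the first suggestion, a random cohort of clients can be drawn each round with a small helper like this (a sketch; the commented loop assumes the `fed_avg`, `data`, and `train_data` objects defined earlier in this tutorial are in scope):

```python
import numpy as np

def sample_clients(client_ids, clients_per_round, rng):
    """Uniformly sample a cohort of client ids without replacement."""
    idx = rng.choice(len(client_ids), size=clients_per_round, replace=False)
    return [client_ids[i] for i in idx]

# Hypothetical round loop, assuming `fed_avg`, `data`, `state`, and
# `train_data` from this tutorial are in scope:
# rng = np.random.default_rng(0)
# for round_num in range(NUM_ROUNDS):
#     cohort = sample_clients(train_data.client_ids, 3, rng)
#     state, metrics = fed_avg.next(state, [data(c) for c in cohort])
```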
Parametric and non-parametric inference
===================
Suppose you have a physical model of an output variable, which takes the form of a parametric model. You now want to model the random effects of the data by a non-parametric (better: infinite parametric) model, such as a Gaussian Process as described in [BayesianLinearRegression](../background/BayesianLinearRegression.ipynb). We can do inference in both worlds, the parametric and the infinite parametric one, by extending the features to a mix of the two. Marginalizing over the weights gives
\begin{align}
p(\mathbf{y}|\boldsymbol{\Phi}, \alpha, \sigma) &= \int p(\mathbf{y}|\boldsymbol{\Phi}, \mathbf{w}, \sigma)p(\mathbf{w}|\alpha) \,\mathrm{d}\mathbf{w}\\
&= \langle\mathcal{N}(\mathbf{y}|\boldsymbol{\Phi}\mathbf{w}, \sigma^2\mathbf{I})\rangle_{\mathcal{N}(\mathbf{0}, \alpha\mathbf{I})}\\
&= \mathcal{N}(\mathbf{y}|\mathbf{0}, \alpha\boldsymbol{\Phi}\boldsymbol{\Phi}^\top + \sigma^2\mathbf{I})
\end{align}
Thus, we can maximize this marginal likelihood w.r.t. the hyperparameters $\alpha, \sigma$ by log transforming and maximizing:
\begin{align}
\hat\alpha, \hat\sigma = \mathop{\arg\max}_{\alpha, \sigma}\log p(\mathbf{y}|\boldsymbol{\Phi}, \alpha, \sigma)
\end{align}
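As a sanity check on the notation, the log marginal likelihood above can be evaluated directly with NumPy (a sketch, not part of GPy; `Phi` is the feature matrix, and a Cholesky factorization is used for numerical stability):

```python
import numpy as np

def log_marginal_likelihood(y, Phi, alpha, sigma):
    """log N(y | 0, alpha * Phi Phi^T + sigma^2 I)."""
    n = len(y)
    K = alpha * Phi @ Phi.T + sigma**2 * np.eye(n)
    L = np.linalg.cholesky(K)
    v = np.linalg.solve(L, y)
    # -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2 pi)
    return -0.5 * v @ v - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)
```

Maximizing this function over `alpha` and `sigma` (e.g. with `scipy.optimize.minimize` on the negative value) is exactly the hyperparameter optimization written above.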
So we will define a mixed inference model, mixing parametric and non-parametric models together. One part is described by a parametric feature space mapping $\boldsymbol{\Phi}\mathbf{w}$ and the other part is a non-parametric function $\mathbf{f}_\text{n}$. For this we define the underlying function $\mathbf{f}$ as
$$
\begin{align}
p(\mathbf{f}) &= p\left(
\underbrace{
\begin{bmatrix}
\delta(t-T)\\
\boldsymbol{\Phi}
\end{bmatrix}
}_{=:\mathbf{A}}
\left.
\begin{bmatrix}
\mathbf{f}_{\text{n}}\\
\mathbf{w}
\end{bmatrix}
\right|
\mathbf{0},
\mathbf{A}
\underbrace{
\begin{bmatrix}
\mathbf{K}_{\mathbf{f}} & \\
& \mathbf{K}_{\mathbf{w}}
\end{bmatrix}
}_{=:\boldsymbol{\Sigma}}
\mathbf{A}^\top
\right)\enspace,
\end{align}
$$
where $\mathbf{K}_{\mathbf{f}}$ is the covariance describing the non-parametric part $\mathbf{f}_\text{n}\sim\mathcal{N}(\mathbf{0}, \mathbf{K}_\mathbf{f})$ and $\mathbf{K}_{\mathbf{w}}$ is the covariance of the prior over $\mathbf{w}\sim\mathcal{N}(\mathbf{w}|\mathbf{0}, \mathbf{K}_{\mathbf{w}})$.
Thus we can now predict the different parts, and even the parameters $\mathbf{w}$ themselves, using the following (note: if someone is willing to write down the proper derivation of this, contributions are very welcome; thanks to Philipp Hennig for his ideas on this):
$$
\begin{align}
p(\mathbf{f}|\mathbf{y}) &=
\mathcal{N}(\mathbf{f} |
\boldsymbol{\Sigma}\mathbf{A}^\top
\underbrace{
(\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^\top + \sigma^2\mathbf{I})^{-1}}_{=:\mathbf{K}^{-1}}\mathbf{y}, \boldsymbol{\Sigma}-\boldsymbol{\Sigma}\mathbf{A}^\top\mathbf{K}^{-1}\mathbf{A}\boldsymbol{\Sigma})
\\
p(\mathbf{w}|\mathbf{y}) &= \mathcal{N}(\mathbf{w} | \mathbf{K}_\mathbf{w}\boldsymbol{\Phi}^\top\mathbf{K}^{-1}\mathbf{y},
\mathbf{K}_{\mathbf{w}}-\mathbf{K}_{\mathbf{w}}\boldsymbol{\Phi}^\top\mathbf{K}^{-1}\boldsymbol{\Phi}\mathbf{K}_{\mathbf{w}}))
\\
p(\mathbf{f}_\text{n}|\mathbf{y}) &= \mathcal{N}(\mathbf{f}_\text{n}| \mathbf{K}_\mathbf{f}\mathbf{K}^{-1}\mathbf{y},
\mathbf{K}_{\mathbf{f}}-\mathbf{K}_{\mathbf{f}}\mathbf{K}^{-1}\mathbf{K}_{\mathbf{f}}))
\end{align}
$$
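The posterior over $\mathbf{w}$, for instance, can be computed directly from these formulas (a NumPy sketch under the model above, with $\mathbf{K} = \boldsymbol{\Phi}\mathbf{K}_\mathbf{w}\boldsymbol{\Phi}^\top + \mathbf{K}_\mathbf{f} + \sigma^2\mathbf{I}$; GPy does this internally):

```python
import numpy as np

def posterior_w(y, Phi, Kw, Kf, sigma):
    """Posterior mean and covariance of the parametric weights w."""
    n = len(y)
    K = Phi @ Kw @ Phi.T + Kf + sigma**2 * np.eye(n)
    w_mean = Kw @ Phi.T @ np.linalg.solve(K, y)
    w_cov = Kw - Kw @ Phi.T @ np.linalg.solve(K, Phi @ Kw)
    return w_mean, w_cov
```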
```
import GPy, numpy as np, pandas as pd
from GPy.kern import LinearSlopeBasisFuncKernel, DomainKernel, ChangePointBasisFuncKernel
%matplotlib inline
from matplotlib import pyplot as plt
```
We will create some data with a non-linear function, strongly driven by piecewise linear trends:
```
np.random.seed(12345)
x = np.random.uniform(0, 10, 40)[:,None]
x.sort(0)
starts, stops = np.arange(0, 10, 3), np.arange(1, 11, 3)
k_lin = LinearSlopeBasisFuncKernel(1, starts, stops, variance=1., ARD=1)
Phi = k_lin.phi(x)
_ = plt.plot(x, Phi)
```
We will assume the prior over $w_i\sim\mathcal{N}(0, 3)$ and a Matern32 structure in the non-parametric part. Additionally, we add a half parametric part, which is a periodic effect only active between $x\in[3,8]$:
```
k = GPy.kern.Matern32(1, .3)
Kf = k.K(x)
k_per = GPy.kern.PeriodicMatern32(1, variance=100, period=1)
k_per.period.fix()
k_dom = DomainKernel(1, 1., 5.)
k_perdom = k_per * k_dom
Kpd = k_perdom.K(x)
np.random.seed(1234)
alpha = np.random.gamma(3, 1, Phi.shape[1])
w = np.random.normal(0, alpha)[:,None]
f_SE = np.random.multivariate_normal(np.zeros(x.shape[0]), Kf)[:, None]
f_perdom = np.random.multivariate_normal(np.zeros(x.shape[0]), Kpd)[:, None]
f_w = Phi.dot(w)
f = f_SE + f_w + f_perdom
y = f + np.random.normal(0, .1, f.shape)
plt.plot(x, f_w)
_ = plt.plot(x, y)
# Make sure the function is driven by the linear trend, as there can be a difficulty in identifiability.
```
With this data, we can fit a model using the basis functions as the parametric part. If you want to implement your own basis function kernel, see `GPy.kern._src.basis_funcs.BasisFuncKernel` and implement the necessary parts. Usually it is enough to implement the `phi(X)` method, returning the higher dimensional mapping of inputs `X`.
```
k = (GPy.kern.Bias(1)
+ GPy.kern.Matern52(1)
+ LinearSlopeBasisFuncKernel(1, ARD=1, start=starts, stop=stops, variance=.1, name='linear_slopes')
+ k_perdom.copy()
)
k.randomize()
m = GPy.models.GPRegression(x, y, k)
m.checkgrad()
m.optimize()
m.plot()
x_pred = np.linspace(0, 10, 500)[:,None]
pred_SE, var_SE = m._raw_predict(x_pred, kern=m.kern.Mat52)
pred_per, var_per = m._raw_predict(x_pred, kern=m.kern.mul)
pred_bias, var_bias = m._raw_predict(x_pred, kern=m.kern.bias)
pred_lin, var_lin = m._raw_predict(x_pred, kern=m.kern.linear_slopes)
m.plot_f(resolution=500, predict_kw=dict(kern=m.kern.Mat52), plot_data=False)
plt.plot(x, f_SE)
m.plot_f(resolution=500, predict_kw=dict(kern=m.kern.mul), plot_data=False)
plt.plot(x, f_perdom)
m.plot_f(resolution=500, predict_kw=dict(kern=m.kern.linear_slopes), plot_data=False)
plt.plot(x, f_w)
w_pred, w_var = m.kern.linear_slopes.posterior_inf()
df = pd.DataFrame(w, columns=['truth'], index=np.arange(Phi.shape[1]))
df['mean'] = w_pred
df['std'] = np.sqrt(w_var.diagonal())
np.round(df, 2)
```
```
import numpy as np
import pandas as pd
import pickle
import os
import time
from sklearn.metrics import r2_score, accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from IPython.display import Image
import tensorflow as tf
from keras.layers import Conv2D, MaxPooling2D, Flatten
from keras.models import Sequential, load_model, Model
from keras.layers import Dense, LSTM, SimpleRNN, GRU, Input, Embedding, concatenate
from keras.utils import np_utils, to_categorical, plot_model
from keras.callbacks import ModelCheckpoint, TensorBoard, \
LearningRateScheduler
from keras.layers.wrappers import TimeDistributed
from keras.optimizers import Adam, RMSprop
from keras.regularizers import l2
from keras.backend.tensorflow_backend import set_session
from keras.preprocessing.sequence import pad_sequences
from keras.datasets import mnist
from keras.layers.core import RepeatVector
# import wrapper_data as wd
# import lstm_wrapper as lw
```
## Stacking RNNs
```
# Only have to specify input_shape at the initial layer
# Single scalar output
timesteps = 100
features = 3
model = Sequential()
model.add(LSTM(64, input_shape=(timesteps, features), return_sequences=True))
model.add(LSTM(64))
model.add(Dense(1))
# Scalar output every timestep
model2 = Sequential()
model2.add(LSTM(64, input_shape=(timesteps, features), return_sequences=True))
model2.add(LSTM(64, return_sequences=True),)
model2.add(TimeDistributed(Dense(1)))
model.summary()
model2.summary()
```
## Unequal length sequences
```
# Pad sequences with zeros to have equal sizes
# (`list_of_sequences` is your list of variable-length sequences)
from keras.layers import Masking

padded_sequences = pad_sequences(list_of_sequences)

# Add a masking layer so downstream layers skip the padded timesteps
model = Sequential()
model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
model.add(LSTM(32))
```
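What the `Masking` layer computes can be mimicked in plain NumPy (a sketch of the semantics, not Keras internals): a timestep is skipped when all of its features equal `mask_value`.

```python
import numpy as np

def pad_and_mask(seqs, pad_value=0.0):
    """Right-pad variable-length feature sequences and return the boolean
    mask a Keras Masking(mask_value=0.) layer would compute."""
    timesteps = max(len(s) for s in seqs)
    features = len(seqs[0][0])
    batch = np.full((len(seqs), timesteps, features), pad_value)
    for i, s in enumerate(seqs):
        batch[i, :len(s)] = s
    # A timestep is masked out when every feature equals pad_value.
    # Caveat: a genuine all-zero timestep would be masked too.
    mask = ~np.all(batch == pad_value, axis=-1)
    return batch, mask
```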
## Keras in Tensorflow
* tf.contrib.keras
# One-to-many
```
from IPython.display import Image
Image(filename="/src/keras/Gerome/01_main/95_keras_practice/one_to_many.png", height=200, width=200)
# This model will encode an image into a vector.
vision_model = Sequential()
vision_model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)))
vision_model.add(Conv2D(64, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(128, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Flatten())
vision_model.add(Dense(4096, activation='relu'))
# Now let's get a tensor with the output of our vision model:
image_input = Input(shape=(224, 224, 3))
encoded_image = vision_model(image_input)
repeat_encoded_img = RepeatVector(100)(encoded_image)
# Sentence encoding model
sentence_input = Input(shape=(100,), dtype='int32')
embedded_question = Embedding(input_dim=10000, output_dim=256, input_length=100)(sentence_input)
encoded_question = LSTM(256, return_sequences=True)(embedded_question)
# Merge both image features and sentences
merged = concatenate([encoded_question, repeat_encoded_img])
lstm_layer = LSTM(256, return_sequences=True)(merged)
# And let's train a logistic regression over 1000 words on top:
output = TimeDistributed(Dense(1000, activation='softmax'))(lstm_layer)
# This is our final model:
image_captioning_model = Model(inputs=[image_input, sentence_input], outputs=output)
model_img_path = '/src/keras/Gerome/01_main/95_keras_practice/RNN_image_captioning_model.png'
plot_model(image_captioning_model, to_file=model_img_path, show_shapes=True)
Image(model_img_path)
```
# CST PTM Data Overview
The PTM data from CST has a significant amount of missing data and requires special consideration when normalizing. The starting data is ratio-level-data - where log2 ratios have been calculated from the cancerous cell lines compared to the non-cancerous 'Normal Pool' data from within the 'plex'. This data is under the lung_cellline_3_1_16 directory and each PTM type has its own '_combined_ratios.tsv' file.
This notebook will overview the ratio-level data from the PTM types: phosphorylation, methylation, and acetylation. The figures in this notebook demonstrate that there is a systematic difference in the distributions of PTM measurements in the lung cancer cell lines regardless of whether PTMs with missing data are considered. The normalization procedures used to correct for this systematic bias are discussed in the [CST_PTM_Normalization_Overview](https://github.com/MaayanLab/CST_Lung_Cancer_Viz/blob/master/CST_PTM_Normalization_Overview.ipynb) notebook.
The systematic difference in average PTM ratios in the cell lines could be due to a number of factors:
* it could be biological in nature, e.g. some cell line have uniformly higher PTM levels than others
* some cell lines might have higher/lower metabolism rates which will result in differences in incorporation of heavy isotopes
* some cell lines might reproduce faster/slower during the time period where cells are exposed to heavy isotopes, which would result in differences in the population size of the different cell lines
In any case, it can be useful towards understanding the differences in cell line behavior to remove this systematic difference.
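As a minimal sketch of such a correction (the actual procedure is in the normalization notebook linked above; this version simply removes a per-cell-line offset by subtracting each column's median, ignoring NaNs):

```python
import numpy as np
import pandas as pd

def center_columns(df):
    """Subtract each column's median (NaN-aware), removing the
    per-cell-line systematic offset while preserving missing values."""
    return df.sub(df.median(axis=0, skipna=True), axis=1)
```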
# Phosphorylation Data
I'll start by having a look at the phosphorylation data that can be found in
`lung_cellline_3_1_16/lung_cellline_phospho/lung_cellline_TMT_phospho_combined_ratios.tsv`
This file was made using the `process_latest_cst_data.py` script. First I'll make the necessary imports.
```
# imports and plotting defaults
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
matplotlib.style.use('ggplot')
from copy import deepcopy
# use clustergrammer module to load/process (source code in clustergrammer directory)
from clustergrammer import Network
```
Next, I'll load the phosphorylation ratio data and simplify the column names (to improve readability)
```
# load data and export as pandas dataframe: inst_df
def load_data(filename):
    '''
    load data using clustergrammer and export as pandas dataframe
    '''
    net = deepcopy(Network())
    net.load_file(filename)
    tmp_df = net.dat_to_df()
    inst_df = tmp_df['mat']

    # simplify column names (remove categories)
    col_names = inst_df.columns.tolist()
    # simple_col_names = []
    # for inst_name in col_names:
    #     simple_col_names.append(inst_name[0])
    inst_df.columns = col_names

    print(inst_df.shape)

    ini_rows = inst_df.index.tolist()
    unique_rows = list(set(ini_rows))
    if len(ini_rows) > len(unique_rows):
        print('found duplicate PTMs')
    else:
        print('did not find duplicate PTMs')

    return inst_df

filename = '../lung_cellline_3_1_16/lung_cellline_phospho/' + \
    'lung_cellline_TMT_phospho_combined_ratios.tsv'

inst_df = load_data(filename)
```
I loaded the phosphorylation tsv file using clustergrammer and exported it as a pandas dataframe. We can see that there are 5,798 unique phosphorylation sites measured across the 45 lung cancer cell lines.
### Missing Phosphorylation Data
However, there is also a large amount of missing data: no cell line has all 5,798 phosphorylation sites measured. We can plot the number of measured phosphorylation sites (i.e. non-NaN values in the dataframe) below to get a sense of the amount of missing data.
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
print(type(inst_df))
```
In the above visualization I have ranked the cell lines by increasing number of measurements. We can see that there is a pattern in the missing data. The 45 cell lines appear to be arranged into nine groups of 5 cell lines each. These groups correspond to the 9 'plexes', or 'batches', in which the cell lines were measured. Each plex measured one control, the Normal Pool, and five cancer cell lines (note that some cell lines have been measured in more than one plex, and these have their plex number appended to their name).
### Cell Line Phosphorylation Distributions
Since each cell line has a large number of measured phosphorylations (at least 1,500) we can reasonably expect that the distributions of phosphorylation levels in the cell lines will be similar. This is based on the assumption that biological variation is not systematic and should not result in consistently higher or lower measurements in the cell lines.
Below we plot the mean values (ratios) of all measured phosphorylations in each cell line and order the cell lines by their average phosphorylation levels in ascending order.
```
def plot_cl_boxplot_with_missing_data(inst_df):
'''
Make a box plot of the cell lines where the cell lines are ranked based
on their average PTM levels
'''
# get the order of the cell lines based on their mean
sorter = inst_df.mean().sort_values().index.tolist()
# reorder based on ascending mean values
sort_df = inst_df[sorter]
# box plot of PTM values ordered based on increasing mean
sort_df.plot(kind='box', figsize=(10,3), rot=90, ylim=(-8,8))
plot_cl_boxplot_with_missing_data(inst_df)
```
We can see that there is a significant difference in the mean phosphorylation level across the cell lines. These large differences in the cell line distributions lead us to believe that there is a systematic error in the measurements that needs to be corrected.
However, each cell line has a different subset of phosphorylations measured, so to compare the cell lines more fairly we should only compare commonly measured phosphorylations.
Below we plot the mean values of phosphorylations that were measured in all cell lines.
```
def plot_cl_boxplot_no_missing_data(inst_df):
# get the order of the cell lines based on their mean
sorter = inst_df.mean().sort_values().index.tolist()
# reorder based on ascending mean values
sort_df = inst_df[sorter]
# transpose to get PTMs as columns
tmp_df = sort_df.transpose()
# keep only PTMs that are measured in all cell lines
ptm_num_meas = tmp_df.count()
ptm_all_meas = ptm_num_meas[ptm_num_meas == 45]
ptm_all_meas = ptm_all_meas.index.tolist()
print('There are ' + str(len(ptm_all_meas)) + ' PTMs measured in all cell lines')
# only keep ptms that are measured in all cell lines
# I will call this full_df as in no missing measurements
full_df = tmp_df[ptm_all_meas]
# transpose back to PTMs as rows
full_df = full_df.transpose()
full_df.plot(kind='box', figsize=(10,3), rot=90, ylim=(-8,8))
num_ptm_all_meas = len(ptm_all_meas)
plot_cl_boxplot_no_missing_data(inst_df)
```
From the above box plot we can see that there is a significant difference in the distributions of the cell lines even when we only consider phosphorylations that were measured in all cell lines (note that the cell lines are in the same order as in the previous box plot). This indicates that the systematic difference in average phosphorylation values is not caused by missing values.
Since we do not expect biological variation to cause this type of systematic difference between cell lines we can conclude that the large differences between cell lines are likely the result of systematic experimental error that should be corrected. Normalizing the data will be discussed [here](https://github.com/MaayanLab/CST_Lung_Cancer_Viz)
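As a preview of what such a normalization can look like, here is a minimal, hypothetical pandas sketch (toy column names and values, not the project's actual procedure) that median-centers each cell line so their distributions become comparable:

```python
import pandas as pd

# Toy ratios: rows are PTM sites, columns are cell lines (names made up).
toy_df = pd.DataFrame({
    'CL_A': [1.0, 2.0, 3.0],
    'CL_B': [4.0, 5.0, 6.0],
})

# Subtract each cell line's median so every column is centered at zero,
# removing a uniform per-cell-line offset.
norm_df = toy_df - toy_df.median()
print(norm_df.median().tolist())  # [0.0, 0.0]
```

The linked notebook describes the normalization actually applied to this dataset.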
# Acetylation Data
I will perform the same overview on the acetylation data. There are 1,192 unique acetylations measured in the 45 cell lines.
```
filename = '../lung_cellline_3_1_16/lung_cellline_Ack/' + \
'lung_cellline_TMT_Ack_combined_ratios.tsv'
inst_df = load_data(filename)
```
### Missing Acetylation Data
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
```
### Cell Line Acetylation Distributions
```
plot_cl_boxplot_with_missing_data(inst_df)
```
Distribution of Acetylation data that was measured in all cell lines
```
plot_cl_boxplot_no_missing_data(inst_df)
```
# Methylation Data
The methylation data has been broken up into Arginine and Lysine methylation.
## Arginine Methylation
There are 1,248 Arginine methylations measured in all 42 cell lines
```
filename = '../lung_cellline_3_1_16/lung_cellline_Rme1/' + \
'lung_cellline_TMT_Rme1_combined_ratios.tsv'
inst_df = load_data(filename)
```
### Missing Arginine Methylation Data
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
```
### Cell Line Arginine Methylation Distributions
```
plot_cl_boxplot_with_missing_data(inst_df)
```
Arginine methylation that was measured in all cell lines
```
plot_cl_boxplot_no_missing_data(inst_df)
```
## Lysine Methylation Data
There are 230 lysine methylations measured in the cell lines
```
filename = '../lung_cellline_3_1_16/lung_cellline_Kme1/' + \
'lung_cellline_TMT_Kme1_combined_ratios.tsv'
inst_df = load_data(filename)
```
### Missing Lysine Methylation Data
Some cell lines have as few as 40 lysine methylations measured.
```
inst_df.count().sort_values().plot(kind='bar', figsize=(10,2))
```
### Cell Line Lysine Methylation Distributions
```
plot_cl_boxplot_with_missing_data(inst_df)
```
Lysine methylation that was measured in all cell lines
```
plot_cl_boxplot_no_missing_data(inst_df)
```
There were only 26 lysine methylations that were measured in all cell lines. We still see the bias in the average values across the cell lines.
# Conclusions
We see that the PTM measurements (phosphorylation, acetylation, and methylation) all show large differences in average behavior across the cell lines. Furthermore, the cell lines with the highest and lowest ratios are frequently the same: DMS153 is the cell line with the lowest ratios and H661 is the cell line with the highest ratios in all cases.
In other words, if we were to ask which cell line has the highest or lowest level of a particular PTM site, we would almost always get the same cell line no matter which site we were interested in. Since this type of uniform and systematic difference between cell lines is not what we expect biologically, we can conclude that the ratio data should be normalized in some way. The normalization procedure and its effects on cell line clustering are discussed in the [CST_PTM_Normalization_Overview](https://github.com/MaayanLab/CST_Lung_Cancer_Viz/blob/master/CST_PTM_Normalization_Overview.ipynb) notebook.
```
# Load the TensorBoard notebook extension
%load_ext tensorboard
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from yahoo_fin import stock_info as si
from collections import deque
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
import os
import random
import multiprocessing
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:  # only enable memory growth when a GPU is actually present
    tf.config.experimental.set_memory_growth(gpus[0], True)
os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.keras.mixed_precision.experimental.set_policy(policy)
def create_model(sequence_length, units=256, cell=LSTM, n_layers=2, dropout=0.3,
loss="mean_absolute_error", optimizer="rmsprop", bidirectional=False,layer_activation="linear"):
model = Sequential()
for i in range(n_layers):
if i == 0:
# first layer
if bidirectional:
model.add(Bidirectional(cell(units, return_sequences=True), input_shape=(None, sequence_length)))
else:
model.add(cell(units, return_sequences=True, input_shape=(None, sequence_length)))
elif i == n_layers - 1:
# last layer
if bidirectional:
model.add(Bidirectional(cell(units, return_sequences=False)))
else:
model.add(cell(units, return_sequences=False))
else:
# hidden layers
if bidirectional:
model.add(Bidirectional(cell(units, return_sequences=True)))
else:
model.add(cell(units, return_sequences=True))
# add dropout after each layer
model.add(Dropout(dropout))
model.add(Dense(1, activation=layer_activation))
model.compile(loss=loss, metrics=["mean_absolute_error"], optimizer=optimizer)
return model
# Window size or the sequence length
N_STEPS = 72
# Lookup step, 1 is the next day
#LOOKUP_STEP = int(run_dict[run]["LOOKUP_STEP"])
# test ratio size, 0.2 is 20%
TEST_SIZE = 0.3
# features to use
FEATURE_COLUMNS = ["close_0","ema_0","high_0","low_0","open_0","rsi_0","sma_0","volume_0","close_1","ema_1","high_1","low_1","open_1","rsi_1","sma_1","volume_1",
"close_2","ema_2","high_2","low_2","open_2","rsi_2","sma_2","volume_2","close_3","ema_3","high_3","low_3","open_3","rsi_3","sma_3","volume_3",
"close_4","ema_4","high_4","low_4","open_4","rsi_4","sma_4","volume_4","close_5","ema_5","high_5","low_5","open_5","rsi_5","sma_5","volume_5",
"close_6","ema_6","high_6","low_6","open_6","rsi_6","sma_6","volume_6","close_7","ema_7","high_7","low_7","open_7","rsi_7","sma_7","volume_7",
"close_8","ema_8","high_8","low_8","open_8","rsi_8","sma_8","volume_8"]
TARGET_COLUMNS = ["close_9","high_9","low_9","open_9"]
# date now
date_now = time.strftime("%Y-%m-%d")
### model parameters
N_LAYERS = 3
# LSTM cell
CELL = LSTM
# 1000 LSTM units per layer
UNITS = 1000
# 25% dropout
DROPOUT = 0.25
# whether to use bidirectional RNNs
BIDIRECTIONAL = True
### training parameters
# mean absolute error loss
# LOSS = "mae"
# huber loss
LOSS = "huber_loss"
OPTIMIZER = "adam"
BATCH_SIZE = 64
EPOCHS = 50
LAYER_ACTIVATION = "sigmoid"
# Stock market
ticker = "MIXED"
ticker_data_filename = os.path.join("data", f"{ticker}_{date_now}.csv")
# model name to save, making it as unique as possible based on parameters
model_name = f"{date_now}_{ticker}-{LOSS}-{OPTIMIZER}-{CELL.__name__}-{LAYER_ACTIVATION}-layers-{N_LAYERS}-units-{UNITS}"
if BIDIRECTIONAL:
model_name += "-b"
#----------------------------------------------------------------------------------------------------------#
#----------------------------------------------------------------------------------------------------------#
#----------------------------------------------------------------------------------------------------------#
if not os.path.isdir("results"):
os.mkdir("results")
if not os.path.isdir("logs"):
os.mkdir("logs")
if not os.path.isdir("data"):
os.mkdir("data")
# load the data
data = pd.read_csv('../data/processed/all_processed_10.csv')
# construct the model
model = create_model(N_STEPS, loss=LOSS, units=UNITS, cell=CELL, n_layers=N_LAYERS,
dropout=DROPOUT, optimizer=OPTIMIZER, bidirectional=BIDIRECTIONAL, layer_activation=LAYER_ACTIVATION)
# some tensorflow callbacks
checkpointer = ModelCheckpoint(os.path.join("results", model_name + ".h5"), save_weights_only=True, save_best_only=True, verbose=1)
tensorboard = TensorBoard(log_dir=os.path.join("logs", model_name))
X = data[FEATURE_COLUMNS]
y = data[TARGET_COLUMNS]
# convert to numpy arrays
X = np.array(X)
y = np.array(y)
# reshape X to fit the neural network
X = X.reshape((X.shape[0], 1, X.shape[1]))
# split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE, shuffle=True)
history = model.fit(X_train, y_train,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(X_test, y_test),
callbacks=[checkpointer, tensorboard],
verbose=1)
model.save(os.path.join("results", model_name) + ".h5")
tf.keras.backend.clear_session()
```
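One caveat: `N_STEPS` is described as a window size of 72, but the reshape above produces sequences of length 1 (shape `(samples, 1, features)`), so the LSTM never sees a multi-step history. If true sliding windows were intended, a sketch like the following (pure NumPy, hypothetical helper name) would build them:

```python
import numpy as np

def make_windows(values, n_steps):
    """Stack overlapping windows of length n_steps from a 2-D array
    shaped (time, features) into (samples, n_steps, features)."""
    return np.stack([values[i:i + n_steps]
                     for i in range(len(values) - n_steps + 1)])

# Toy data: 10 time steps, 3 features.
data = np.arange(30).reshape(10, 3)
windows = make_windows(data, n_steps=4)
print(windows.shape)  # (7, 4, 3)
```

An array shaped this way can be fed to the LSTM directly, with `input_shape=(n_steps, num_features)` instead of `(None, sequence_length)`.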
# Sublime Text
## Getting set up
### Laptop install Sublime Text (Done once per laptop)
1. Step one is to download and install [Sublime Text](https://www.sublimetext.com/3). Sidenote: you don't need to purchase a license; you can use it indefinitely with all features in evaluation mode. If you do purchase a license, it follows you and you can install it on all your future laptops.
2. **Install Package Manager**: open Sublime Text, then open command palette (we will use this several times)
- CMD + SHIFT + P (Mac)
- CTRL + SHIFT + P (Windows)
Start typing "install". (Sidenote: as you type, the list auto-filters; select the entry you want with your mouse, or just hit Enter if it's the top highlighted match.)
## SQLBeautifier
```../examples/sqlbeautifier.sql```
## install sublimelinter
### proselint (markdown)
### shellcheck
From: https://github.com/koalaman/shellcheck
> The goals of ShellCheck are
> - To point out and clarify typical beginner's syntax issues that cause a shell to give cryptic error messages.
> - To point out and clarify typical intermediate level semantic problems that cause a shell to behave strangely and counter-intuitively.
> - To point out subtle caveats, corner cases and pitfalls that may cause an advanced user's otherwise working script to fail under future circumstances.
> See the [gallery of bad code](https://github.com/koalaman/shellcheck/blob/master/README.md#user-content-gallery-of-bad-code) for examples of what ShellCheck can help you identify!
## anaconda (not what you think!)
Automatically formats your code to be pep8 (or whatever variant you prefer). Should SVDS have an official style-guide for python?
## Others
- install BracketHighlighter
- install SidebarEnhancements
- install text pastry
- C-option N example
- install wordcount
- sublime-build
- tools > build
- install LaTeXTools (academic papers)
## rsub (subl)
Once you set up everything as below this is how you'll be able to edit files living on a server from the comfort of your laptop.
1. `ssh` into the Mothership by setting up the port forwarding (keeping this open)
2. Sublime Text open on your laptop
3. `subl whatever.py` and enjoy editing your text file on your laptop's Sublime Text (remember to hit save frequently!)
### Setting up remote Sublime Text editing
These instructions tell you how to set up your laptop and a server (mothership) so that you can edit files directly on the server by using Sublime Text on your laptop. You will have to make changes at different places and these instructions vary by what kind of laptop you have (windows/macOS).
Also, for complicated reasons, each laptop that connects to the mothership needs to have its own unique ports assigned to it. This applies to you if you have 2 laptops. So we'll start out assigning the following ports to people. For the rest of the instructions, you should replace {YOUR_PORT} with the following numbers (and choose the one assigned to you):
52687 # Free to assign
52688 # Free to assign
52689 # Free to assign
52690 # Free to assign
52691 # Free to assign
52692 # Free to assign
52693 # Free to assign
52694 # Free to assign
52695 # Free to assign
52696 # Free to assign
52698 # Default port
Again, we just arbitrarily assigned these (see the advanced notes section if you need to change this).
And where you see {MOTHERSHIP_IP_ADDRESS} replace with the correct edgenode IP address: at the time of writing this, the SVDS Node's IP was: 10.178.134.62
And where you see {USER} replace with your username `jbwhit` for example.
### Installing `rsub`
1. **Install `rsub`**: open command palette; type `install` (select option "Package Control: Install Package"); type `rsub` and select it. If you don't see it listed it's likely already installed. You can check by opening preferences and seeing if you have an rsub option.
2. Create a file on your laptop called `rsub.sublime-settings` in folder (find by clicking in Sublime Text): `Preferences>Browse Packages>User>`
The contents of the file -- remember to replace {YOUR_PORT} with your port:
```
/*
rsub default settings
*/
{
/*
rsub listen port
IMPORTANT: Use a different port for each machine.
*/
"port": {YOUR_PORT},
/*
rsub listen host
WARNING: it's NOT recommended to change this option,
use SSH tunneling instead.
*/
"host": "localhost"
}
```
### Laptop ssh port forwarding [Windows]
We recommend installing [Git Bash](https://git-scm.com/download/win) -- install and accept the default options.
Create a shortcut script to connect to the edgenode with the Sublime connection.
1. Start GitBash
2. Create a file called `sublime_port_forward` (or whatever you want it to be)
a. Navigate to your home directory on your Windows machine and create a new file (with Sublime Text if you want)!
3. Paste the following one line as the entire content of that file (replacing as required):
ssh -R {YOUR_PORT}:localhost:{YOUR_PORT} {USER}@{MOTHERSHIP_IP_ADDRESS}
Example: `ssh -R 52697:localhost:52697 jbwhit@IP.ADDRESS`
4. Save the file
### Setting up ssh port forwarding [MacOS]
1. Edit `~/.ssh/config` and fill in the relevant IP address -- replace {MOTHERSHIP_IP_ADDRESS} with the actual address (e.g. 10.178.134.62):
```bash
Host rsub-svdsnode
HostName {MOTHERSHIP_IP_ADDRESS}
RemoteForward {YOUR_PORT} 127.0.0.1:{YOUR_PORT}
```
Setting up this config lets you type `ssh rsub-svdsnode` and you will SSH into the mothership. You can shorten this name to simply `rsub` or anything else in the Host section of the config. If you connect to multiple motherships (or edgenodes) simply create new rule by copy/pasting the three lines and filling in the relevant details.
### Set up Mothership (Done once)
These steps set up your account on the mothership.
1. Edit (or create) your `~/.bashrc`. Open with `vim ~/.bashrc` and add the following and **uncomment your port**:
```bash
export RMATE_HOST=localhost
# export RMATE_PORT=52694 #
# export RMATE_PORT=52695 #
# export RMATE_PORT=52696 #
```
### Running (what you do on a daily basis)
#### Windows
Since you've set up the script, you will be able to connect to the edgenode with the Sublime connection by simply running the following command after opening GitBash (remember the `.`):
```bash
. sublime_port_forward
```
And you (after entering your password) are logged into the Mothership. You will use this prompt to open text files.
#### MacOS
Set up the port forwarding (you have to keep this open). You can do it the hard way:
```bash
ssh -R {YOUR_PORT}:localhost:{YOUR_PORT} {USER}@{MOTHERSHIP_IP_ADDRESS}
```
or the easier way (if you set up your ssh config file as above):
```bash
ssh rsub-svdsnode
```
Have Sublime Text running on your laptop -- this is where the file will appear when you run the `rsub` command on the Mothership.
### On the Mothership
Open an existing file (for example framework.cfg) that you'd like to edit in Sublime Text (or create a new one by naming it):
```bash
subl framework.cfg
```
And enjoy editing your text file on Sublime Text! It will sync the contents of the file when you save.
### FAQ and initial installation notes
You keep calling it `rsub` or `subl` but I keep seeing `rmate` everywhere -- what gives? The `rsub` command is using the utility originally created for TextMate, which was called using `rmate`. Since this is an update and uses Sublime Text, it's updated to `rsub`.
#### Ports
You shouldn't have to worry about this unless you are an admin or something has gone wrong. If you need to choose or reassign ports, check that nothing on the mothership is already using the port you want by running something like:
```bash
sudo netstat -plant | grep {YOUR_PORT}
```
and verifying that nothing's returned.
#### Installing rsub on mothership
Install rsub (this requires root -- can install locally if not on the edgenode)
```
sudo wget -O /usr/local/bin/subl https://raw.github.com/aurora/rmate/master/rmate
sudo chmod +x /usr/local/bin/subl
```
<a href="https://colab.research.google.com/github/leehanchung/cs224w/blob/main/notebooks/XCS224W_Colab3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **CS224W - Colab 3**
In Colab 2 we constructed GNN models by using PyTorch Geometric's built-in GCN layer, `GCNConv`. In this Colab we will go a step deeper and implement the **GraphSAGE** ([Hamilton et al. (2017)](https://arxiv.org/abs/1706.02216)) and **GAT** ([Veličković et al. (2018)](https://arxiv.org/abs/1710.10903)) layers directly. Then we will run and test our models on the CORA dataset, a standard citation network benchmark dataset.
Next, we will use [DeepSNAP](https://snap.stanford.edu/deepsnap/), a Python library assisting efficient deep learning on graphs, to split the graphs in different settings and apply dataset transformations.
Lastly, using DeepSNAP's transductive link prediction dataset splitting functionality, we will construct a simple GNN model for the task of edge property prediction (link prediction).
**Note**: Make sure to **sequentially run all the cells in each section** so that the intermediate variables / packages will carry over to the next cell
Have fun and good luck on Colab 3 :)
# Device
We recommend using a GPU for this Colab.
Please click `Runtime` and then `Change runtime type`. Then set the `hardware accelerator` to **GPU**.
## Installation
```
# Install torch geometric
import os
if 'IS_GRADESCOPE_ENV' not in os.environ:
!pip uninstall torch-scatter --y
!pip uninstall torch-sparse --y
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
!pip install torch-geometric
!pip install -q git+https://github.com/snap-stanford/deepsnap.git
import torch_geometric
torch_geometric.__version__
```
# 1) GNN Layers
## Implementing Layer Modules
In Colab 2, we implemented a GCN model for node and graph classification tasks. However, for that notebook we took advantage of PyG's built-in GCN module. For Colab 3, we build upon a general Graph Neural Network stack, into which we will be able to plug in our own module implementations: GraphSAGE and GAT.
We will then use our layer implementations to complete node classification on the CORA dataset, a standard citation network benchmark. In this dataset, nodes correspond to documents and edges correspond to undirected citations. Each node or document in the graph is assigned a class label and features based on the document's binarized bag-of-words representation. Specifically, the Cora graph has 2708 nodes, 5429 edges, 7 prediction classes, and 1433 features per node.
## GNN Stack Module
Below is the implementation of a general GNN stack, where we can plug in any GNN layer, such as **GraphSage**, **GAT**, etc. This module is provided for you. Your implementations of the **GraphSage** and **GAT** layers will function as components in the GNNStack module.
```
import torch
import torch_scatter
import torch.nn as nn
import torch.nn.functional as F
import torch_geometric.nn as pyg_nn
import torch_geometric.utils as pyg_utils
from torch import Tensor
from typing import Union, Tuple, Optional
from torch_geometric.typing import (OptPairTensor, Adj, Size, NoneType,
OptTensor)
from torch.nn import Parameter, Linear
from torch_sparse import SparseTensor, set_diag
from torch_geometric.nn.conv import MessagePassing
from torch_geometric.utils import remove_self_loops, add_self_loops, softmax
class GNNStack(torch.nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim, args, emb=False):
super(GNNStack, self).__init__()
conv_model = self.build_conv_model(args.model_type)
self.convs = nn.ModuleList()
self.convs.append(conv_model(input_dim, hidden_dim))
assert (args.num_layers >= 1), 'Number of layers is not >=1'
for l in range(args.num_layers-1):
self.convs.append(conv_model(args.heads * hidden_dim, hidden_dim))
# post-message-passing
self.post_mp = nn.Sequential(
nn.Linear(args.heads * hidden_dim, hidden_dim), nn.Dropout(args.dropout),
nn.Linear(hidden_dim, output_dim))
self.dropout = args.dropout
self.num_layers = args.num_layers
self.emb = emb
def build_conv_model(self, model_type):
if model_type == 'GraphSage':
return GraphSage
elif model_type == 'GAT':
# When applying GAT with num heads > 1, you need to modify the
# input and output dimension of the conv layers (self.convs),
# to ensure that the input dim of the next layer is num heads
# multiplied by the output dim of the previous layer.
# HINT: In case you want to play with multiheads, you need to change the for-loop that builds up self.convs to be
# self.convs.append(conv_model(hidden_dim * num_heads, hidden_dim)),
# and also the first nn.Linear(hidden_dim * num_heads, hidden_dim) in post-message-passing.
return GAT
def forward(self, data):
x, edge_index, batch = data.x, data.edge_index, data.batch
for i in range(self.num_layers):
x = self.convs[i](x, edge_index)
x = F.relu(x)
x = F.dropout(x, p=self.dropout,training=self.training)
x = self.post_mp(x)
if self.emb == True:
return x
return F.log_softmax(x, dim=1)
def loss(self, pred, label):
return F.nll_loss(pred, label)
```
## Creating Our Own Message Passing Layer
Now let's start implementing our own message passing layers! Working through this part will help us become intimately familiar with the behind-the-scenes work of implementing PyTorch message passing layers, allowing us to build our own GNN models. To do so, we will work with and implement 3 critical functions needed to define a PyG message passing layer: `forward`, `message`, and `aggregate`.
Before diving head first into the coding details, let us quickly review the key components of the message passing process. To do so, we will focus on a single round of message passing with respect to a single central node $x$. Before message passing, $x$ is associated with a feature vector $x^{l-1}$, and the goal of message passing is to update this feature vector as $x^l$. To do so, we implement the following steps: 1) each neighboring node $v$ passes its current message $v^{l-1}$ across the edge $(x, v)$; 2) for the node $x$, we aggregate all of the messages of the neighboring nodes (for example through a sum or mean); and 3) we transform the aggregated information, for example by applying linear and non-linear transformations. Altogether, the message passing process is applied such that every node $u$ in our graph updates its embedding by acting as the central node $x$ in steps 1-3 described above.
Now, extending this process to a full message passing layer: the job of a message passing layer is to update the current feature representation or embedding of each node in a graph by propagating and transforming information within the graph. Overall, the general paradigm of a message passing layer is: 1) pre-processing -> 2) **message passing** / propagation -> 3) post-processing.
The `forward` function that we will implement for our message passing layer captures this execution logic. Namely, the `forward` function handles the pre- and post-processing of node features / embeddings, and initiates message passing by calling the `propagate` function.
The `propagate` function encapsulates the message passing process! It does so by calling three important functions: 1) `message`, 2) `aggregate`, and 3) `update`. Our implementation will vary slightly from this, as we will not explicitly implement `update`, but will instead place the logic for updating node embeddings after message passing within the `forward` function. To be more specific, after information is propagated (message passing), we can further transform the node embeddings output by `propagate`. Therefore, the output of `forward` is exactly the node embeddings after one GNN layer.
Lastly, before starting to implement our own layer, let us dig a bit deeper into each of the functions described above:
1.
```
def propagate(edge_index, x=(x_i, x_j), extra=(extra_i, extra_j), size=size):
```
Calling `propagate` initiates the message passing process. Looking at the function parameters, we highlight a couple of key parameters.
- `edge_index` is passed to the forward function and captures the edge structure of the graph.
- `x=(x_i, x_j)` represents the node features that will be used in message passing. In order to explain why we pass the tuple `(x_i, x_j)`, we first look at how our edges are represented. For every edge $(i, j) \in \mathcal{E}$, we can differentiate $i$ as the source or central node ($x_{central}$) and j as the neighboring node ($x_{neighbor}$).
Taking the example of message passing above, for a central node $u$ we will aggregate and transform all of the messages associated with the nodes $v$ s.t. $(u, v) \in \mathcal{E}$ (i.e. $v \in \mathcal{N}_{u}$). Thus we see that the subscripts `_i` and `_j` allow us to specifically differentiate features associated with central nodes (i.e. nodes receiving message information) and neighboring nodes (i.e. nodes passing messages).
This is definitely a somewhat confusing concept; however, one key thing to remember / wrap your head around is that depending on the perspective, a node $x$ acts as a central node or a neighboring node. In fact, in undirected graphs we store both edge directions (i.e. $(i, j)$ and $(j, i)$). From the central node perspective, `x_i`, x is collecting neighboring information to update its embedding. From a neighboring node perspective, `x_j`, x is passing its message information along the edge connecting it to a different central node.
- `extra=(extra_i, extra_j)` represents additional information that we can associate with each node beyond its current feature embedding. In fact, we can include as many additional parameters of the form `param=(param_i, param_j)` as we would like. Again, we highlight that indexing with `_i` and `_j` allows us to differentiate central and neighboring nodes.
The output of the `propagate` function is a matrix of node embeddings after the message passing process and has shape $[N, d]$.
2.
```
def message(x_j, ...):
```
The `message` function is called by propagate and constructs the messages from
neighboring nodes $j$ to central nodes $i$ for each edge $(i, j)$ in *edge_index*. This function can take any argument that was initially passed to `propagate`. Furthermore, we can again differentiate central nodes and neighboring nodes by appending `_i` or `_j` to the variable name, e.g. `x_i` and `x_j`. Looking more specifically at the variables, we have:
- `x_j` represents a matrix of feature embeddings for all neighboring nodes passing their messages along their respective edge (i.e. all nodes $j$ for edges $(i, j) \in \mathcal{E}$). Thus, its shape is $[|\mathcal{E}|, d]$!
- In implementing GAT we will see how to access additional variables passed to propagate
Critically, we see that the output of the `message` function is a matrix of neighboring node embeddings ready to be aggregated, having shape $[|\mathcal{E}|, d]$.
3.
```
def aggregate(self, inputs, index, dim_size = None):
```
Lastly, the `aggregate` function is used to aggregate the messages from neighboring nodes. Looking at the parameters we highlight:
- `inputs` represents a matrix of the messages passed from neighboring nodes (i.e. the output of the `message` function).
- `index` has the same shape as `inputs` and tells us the central node that corresponding to each of the rows / messages $j$ in the `inputs` matrix. Thus, `index` tells us which rows / messages to aggregate for each central node.
The output of `aggregate` is of shape $[N, d]$.
For additional resources refer to the PyG documentation for implementing custom message passing layers: https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html
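To make the `aggregate` step concrete, here is a small NumPy sketch (illustrative only, not PyG code) of mean aggregation: given the per-edge `message` outputs of shape $[|\mathcal{E}|, d]$ and the `index` of each message's central node, it scatter-averages the messages into node embeddings of shape $[N, d]$:

```python
import numpy as np

def scatter_mean(messages, index, num_nodes):
    """Average the rows of `messages` (shape [E, d]) into an output of
    shape [num_nodes, d], grouping rows by their central node in `index`."""
    d = messages.shape[1]
    out = np.zeros((num_nodes, d))
    counts = np.zeros(num_nodes)
    np.add.at(out, index, messages)  # out[index[k]] += messages[k]
    np.add.at(counts, index, 1)
    counts[counts == 0] = 1  # avoid division by zero for isolated nodes
    return out / counts[:, None]

# Two messages into node 0, one into node 1, with d = 2.
msgs = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
idx = np.array([0, 0, 1])
print(scatter_mean(msgs, idx, num_nodes=2))
# [[2. 3.]
#  [5. 6.]]
```

PyG's actual `aggregate` uses `torch_scatter.scatter` for the same grouping operation, but the indexing logic is the same.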
## GraphSage Implementation
For our first GNN layer, we will implement the well known GraphSage ([Hamilton et al. (2017)](https://arxiv.org/abs/1706.02216)) layer!
For a given *central* node $v$ with current embedding $h_v^{(l-1)}$, the message passing update rule to transform $h_v^{(l-1)} \rightarrow h_v^{(l)}$ is as follows:
\begin{equation}
h_v^{(l)} = W_l\cdot h_v^{(l-1)} + W_r \cdot AGG(\{h_u^{(l-1)}, \forall u \in N(v) \})
\end{equation}
where $W_l$ and $W_r$ are learnable weight matrices and the nodes $u$ are *neighboring* nodes. Additionally, we use mean aggregation for simplicity:
\begin{equation}
AGG(\{h_u^{(l-1)}, \forall u \in N(v) \}) = \frac{1}{|N(v)|} \sum_{u\in N(v)} h_u^{(l-1)}
\end{equation}
One thing to note is that we're adding a **skip connection** to our GraphSage implementation through the term $W_l\cdot h_v^{(l-1)}$.
Before implementing this update rule, we encourage you to think about how different parts of the formulas above correspond with the functions outlined earlier: 1) `forward`, 2) `message`, and 3) `aggregate`. As a hint, we are given what the aggregation function is (i.e. mean aggregation)! Now the question remains: what are the messages passed by each neighboring node, and when do we call the `propagate` function?
Note: in this case the message function or messages are actually quite simple. Additionally, remember that the `propagate` function encapsulates the operations of / the outputs of the combined `message` and `aggregate` functions.
Lastly, $\ell$-2 normalization of the node embeddings is applied after each iteration.
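Putting the update rule together, here is a tiny numeric sketch of one GraphSage update for a single node. The weights $W_l$, $W_r$, the embeddings, and the neighborhood are all made-up values for illustration, with NumPy standing in for the PyTorch ops:

```python
import numpy as np

d = 2
W_l = np.eye(d)                      # skip-connection weights (illustrative)
W_r = 0.5 * np.eye(d)                # neighbor-message weights (illustrative)
h_v = np.array([1., 0.])             # central node embedding h_v^{(l-1)}
h_neighbors = np.array([[0., 2.],    # h_u^{(l-1)} for u in N(v)
                        [2., 2.]])

agg = h_neighbors.mean(axis=0)           # mean aggregation -> [1., 2.]
h_new = W_l @ h_v + W_r @ agg            # skip connection + aggregated message
h_new = h_new / np.linalg.norm(h_new)    # l2 normalization, as described above

print(h_new)
```

The skip-connection term and the aggregated-neighbor term correspond to `self.lin_l` and `self.lin_r` in the implementation below.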
<font color='red'>For the following questions, DON'T refer to any existing implementations online.</font>
```
class GraphSage(MessagePassing):
def __init__(self, in_channels, out_channels, normalize = True,
bias = False, **kwargs):
super(GraphSage, self).__init__(**kwargs)
self.in_channels = in_channels
self.out_channels = out_channels
self.normalize = normalize
self.lin_l = None
self.lin_r = None
############################################################################
# TODO: Your code here!
# Define the layers needed for the message and update functions below.
# self.lin_l is the linear transformation that you apply to embedding
# for central node.
# self.lin_r is the linear transformation that you apply to aggregated
# message from neighbors.
# Our implementation is ~2 lines, but don't worry if you deviate from this.
self.lin_l = nn.Linear(self.in_channels, self.out_channels)
self.lin_r = nn.Linear(self.in_channels, self.out_channels)
############################################################################
self.reset_parameters()
def reset_parameters(self):
self.lin_l.reset_parameters()
self.lin_r.reset_parameters()
def forward(self, x, edge_index, size = None):
""""""
out = None
############################################################################
# TODO: Your code here!
# Implement message passing, as well as any post-processing (our update rule).
# 1. Call propagate function to conduct the message passing.
# 1.1 See the description of propagate above or the following link for more information:
# https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html
# 1.2 We will only use the representation for neighbor nodes (x_j), so by default
# we pass the same representation for central and neighbor nodes as x=(x, x).
# 2. Update our node embedding with skip connection.
# 3. If normalize is set, do L-2 normalization (defined in
# torch.nn.functional)
#
# Our implementation is ~5 lines, but don't worry if you deviate from this.
x_propagate = self.propagate(edge_index, x=(x, x), size=size)
x = self.lin_l(x) + x_propagate
if self.normalize:
x = F.normalize(x)
out = x
############################################################################
return out
def message(self, x_j):
out = None
############################################################################
# TODO: Your code here!
# Implement your message function here.
# Hint: Look at the formulation of the mean aggregation function, focusing on
# what message each neighboring node passes.
#
# Our implementation is ~1 lines, but don't worry if you deviate from this.
out = self.lin_r(x_j)
############################################################################
return out
def aggregate(self, inputs, index, dim_size = None):
out = None
# The axis along which to index number of nodes.
node_dim = self.node_dim
############################################################################
# TODO: Your code here!
# Implement your aggregate function here.
# See here as how to use torch_scatter.scatter:
# https://pytorch-scatter.readthedocs.io/en/latest/functions/scatter.html#torch_scatter.scatter
#
# Our implementation is ~1 lines, but don't worry if you deviate from this.
out = torch_scatter.scatter(inputs, index, dim=node_dim, reduce='mean')
############################################################################
return out
```
## GAT Implementation
Attention mechanisms have become the state-of-the-art in many sequence-based tasks such as machine translation and learning sentence representations. One of the major benefits of attention-based mechanisms is their ability to focus on the most relevant parts of the input to make decisions. In this problem, we will see how attention mechanisms can be used to perform node classification over graph-structured data through the usage of Graph Attention Networks (GATs) ([Veličković et al. (2018)](https://arxiv.org/abs/1710.10903)).
The building block of the Graph Attention Network is the graph attention layer, which is a variant of the aggregation function. Let $N$ be the number of nodes and $F$ be the dimension of the feature vector for each node. The input to each graph attentional layer is a set of node features: $\mathbf{h} = \{\overrightarrow{h_1}, \overrightarrow{h_2}, \dots, \overrightarrow{h_N}\}$, $\overrightarrow{h_i} \in \mathbb{R}^F$. The output of each graph attentional layer is a new set of node features, which may have a new dimension $F'$: $\mathbf{h'} = \{\overrightarrow{h_1'}, \overrightarrow{h_2'}, \dots, \overrightarrow{h_N'}\}$, with $\overrightarrow{h_i'} \in \mathbb{R}^{F'}$.
We will now describe how this transformation is performed for each graph attention layer. First, a shared linear transformation parametrized by the weight matrix $\mathbf{W} \in \mathbb{R}^{F' \times F}$ is applied to every node.
Next, we perform self-attention on the nodes. We use a shared attention function $a$:
\begin{equation}
a : \mathbb{R}^{F'} \times \mathbb{R}^{F'} \rightarrow \mathbb{R}.
\end{equation}
that computes the attention coefficients capturing the importance of node $j$'s features to node $i$:
\begin{equation}
e_{ij} = a(\mathbf{W_l}\overrightarrow{h_i}, \mathbf{W_r} \overrightarrow{h_j})
\end{equation}
The most general formulation of self-attention allows every node to attend to all other nodes which drops all structural information. However, to utilize graph structure in the attention mechanisms, we use **masked attention**. In masked attention, we only compute attention coefficients $e_{ij}$ for nodes $j \in \mathcal{N}_i$ where $\mathcal{N}_i$ is some neighborhood of node $i$ in the graph.
To easily compare coefficients across different nodes, we normalize the coefficients across $j$ using a softmax function:
\begin{equation}
\alpha_{ij} = \text{softmax}_j(e_{ij}) = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})}
\end{equation}
For this problem, our attention mechanism $a$ will be a single-layer feedforward neural network parametrized by weight vectors $\overrightarrow{a_l} \in \mathbb{R}^{F'}$ and $\overrightarrow{a_r} \in \mathbb{R}^{F'}$, followed by a LeakyReLU nonlinearity (with negative input slope 0.2). Let $\cdot^T$ represent transposition and $||$ represent concatenation. The coefficients computed by our attention mechanism may be expressed as:
\begin{equation}
\alpha_{ij} = \frac{\exp\Big(\text{LeakyReLU}\Big(\overrightarrow{a_l}^T \mathbf{W_l} \overrightarrow{h_i} + \overrightarrow{a_r}^T\mathbf{W_r}\overrightarrow{h_j}\Big)\Big)}{\sum_{k\in \mathcal{N}_i} \exp\Big(\text{LeakyReLU}\Big(\overrightarrow{a_l}^T \mathbf{W_l} \overrightarrow{h_i} + \overrightarrow{a_r}^T\mathbf{W_r}\overrightarrow{h_k}\Big)\Big)}
\end{equation}
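As a quick numeric check of the formula above, the sketch below computes the coefficients for one central node $i$ with two neighbors. The scalar products $\overrightarrow{a_l}^T \mathbf{W_l}\overrightarrow{h_i}$ and $\overrightarrow{a_r}^T \mathbf{W_r}\overrightarrow{h_j}$ are replaced by made-up numbers:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # LeakyReLU with negative input slope 0.2, as in the text
    return np.where(x > 0, x, slope * x)

s_i = 1.0                             # a_l^T W_l h_i (the "alpha_l" term)
s_j = np.array([0.5, -1.5])           # a_r^T W_r h_j, one entry per j in N(i)

e_ij = leaky_relu(s_i + s_j)          # raw attention coefficients
alpha_ij = np.exp(e_ij) / np.exp(e_ij).sum()  # softmax over the neighborhood

print(alpha_ij)  # normalized weights that sum to 1 over j in N(i)
```

Because the softmax runs only over $j \in \mathcal{N}_i$, the structural (masked-attention) constraint is enforced automatically.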
For the following questions, we denote `alpha_l` = $\alpha_l = [...,\overrightarrow{a_l}^T \mathbf{W_l} \overrightarrow{h_i},...] \in \mathbb{R}^N$ and `alpha_r` = $\alpha_r = [..., \overrightarrow{a_r}^T \mathbf{W_r} \overrightarrow{h_j}, ...] \in \mathbb{R}^N$.
At every layer of GAT, after the attention coefficients are computed for that layer, the aggregation function can be computed by a weighted sum of neighborhood messages, where weights are specified by $\alpha_{ij}$.
Now, we use the normalized attention coefficients to compute a linear combination of the features corresponding to them. These aggregated features will serve as the final output features for every node.
\begin{equation}
h_i' = \sum_{j \in \mathcal{N}_i} \alpha_{ij} \mathbf{W_r} \overrightarrow{h_j}.
\end{equation}
At this point, we have covered a lot of information! Before reading further about multi-head attention, we encourage you to go again through the exercise of thinking about what components of the attention mechanism correspond with the different functions: 1) `forward`, 2) `message`, and 3) `aggregate`.
- Hint 1: Our aggregation is very similar to that of GraphSage, except now we are using sum aggregation.
- Hint 2: The terms we aggregate over again represent the individual message that each neighbor node j sends. Thus, we see that $\alpha_{ij}$ is part of the message each node sends and is thus computed during the message step. This makes sense since an attention weight is associated with each edge in the graph.
- Hint 3: Look at the terms in the definition of $\alpha_{ij}$. What values do we want to pre-process and pass as parameters to the `propagate` function? The parameters of `message(..., x_j, alpha_j, alpha_i, ...)` should give a good hint.
### Multi-Head Attention
To stabilize the learning process of self-attention, we use multi-head attention. To do this we use $K$ independent attention mechanisms, or "heads", to compute output features as in the above equations. Then, we concatenate these output feature representations:
\begin{equation}
\overrightarrow{h_i}' = ||_{k=1}^K \Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}^{(k)} \mathbf{W_r}^{(k)} \overrightarrow{h_j}\Big)
\end{equation}
where $||$ is concatenation, $\alpha_{ij}^{(k)}$ are the normalized attention coefficients computed by the $k$-th attention mechanism $(a^k)$, and $\mathbf{W_r}^{(k)}$ is the corresponding input linear transformation's weight matrix. Note that for this setting, $\mathbf{h'} \in \mathbb{R}^{KF'}$.
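The shape bookkeeping is the only subtle part here: with $K$ heads each producing $F'$-dimensional outputs, concatenation yields an $N \times KF'$ embedding matrix. A minimal sketch with arbitrary numbers:

```python
import numpy as np

# Multi-head concatenation shape check: K heads, each of output dim F',
# concatenated along the feature axis. All values are arbitrary.
N, K, F_out = 4, 2, 3
per_head = [np.random.randn(N, F_out) for _ in range(K)]  # one output per head
h_prime = np.concatenate(per_head, axis=1)                # shape [N, K * F']

print(h_prime.shape)  # (4, 6)
```

This mirrors the `out.view(-1, H * C)` reshape in the `forward` function below, which flattens the per-head outputs back into one feature vector per node.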
```
class GAT(MessagePassing):
def __init__(self, in_channels, out_channels, heads = 2,
negative_slope = 0.2, dropout = 0., **kwargs):
super(GAT, self).__init__(node_dim=0, **kwargs)
self.in_channels = in_channels
self.out_channels = out_channels
self.heads = heads
self.negative_slope = negative_slope
self.dropout = dropout
self.lin_l = None
self.lin_r = None
self.att_l = None
self.att_r = None
############################################################################
# TODO: Your code here!
# Define the layers needed for the message functions below.
# self.lin_l is the linear transformation that you apply to embeddings
# BEFORE message passing.
#
# Pay attention to dimensions of the linear layers, since we're using
# multi-head attention.
# Our implementation is ~1 lines, but don't worry if you deviate from this.
self.lin_l = nn.Linear(self.in_channels, self.heads * self.out_channels)
############################################################################
self.lin_r = self.lin_l
############################################################################
# TODO: Your code here!
# Define the attention parameters \overrightarrow{a_l/r}^T in the above intro.
# You have to deal with multi-head scenarios.
# Use nn.Parameter instead of nn.Linear
# Our implementation is ~2 lines, but don't worry if you deviate from this.
self.att_l = nn.Parameter(torch.randn(heads, self.out_channels))
self.att_r = nn.Parameter(torch.randn(heads, self.out_channels))
############################################################################
self.reset_parameters()
def reset_parameters(self):
nn.init.xavier_uniform_(self.lin_l.weight)
nn.init.xavier_uniform_(self.lin_r.weight)
nn.init.xavier_uniform_(self.att_l)
nn.init.xavier_uniform_(self.att_r)
def forward(self, x, edge_index, size = None):
H, C = self.heads, self.out_channels
############################################################################
# TODO: Your code here!
# Implement message passing, as well as any pre- and post-processing (our update rule).
# 1. First apply linear transformation to node embeddings, and split that
# into multiple heads. We use the same representations for source and
# target nodes, but apply different linear weights (W_l and W_r)
# 2. Calculate alpha vectors for central nodes (alpha_l) and neighbor nodes (alpha_r).
# 3. Call propagate function to conduct the message passing.
# 3.1 Remember to pass alpha = (alpha_l, alpha_r) as a parameter.
# 3.2 See there for more information: https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html
# 4. Transform the output back to the shape of N * d.
# Our implementation is ~5 lines, but don't worry if you deviate from this.
# x_l dims: N x H x C
x_l = self.lin_l(x).view(-1, H, C)
# x_r dims: N x H x C
x_r = self.lin_r(x).view(-1, H, C)
# alpha_l dims: # 1 x H x C * N x H x C
alpha_l = self.att_l.unsqueeze(0) * x_l
# alpha_r dims: # 1 x H x C * N x H x C
alpha_r = self.att_r.unsqueeze(0) * x_r
out = self.propagate(edge_index, x = (x_l, x_r), alpha=(alpha_l, alpha_r))
out = out.view(-1, H*C)
############################################################################
return out
def message(self, x_j, alpha_j, alpha_i, index, ptr, size_i):
############################################################################
# TODO: Your code here!
# Implement your message function. Putting the attention in message
# instead of in update is a little tricky.
# 1. Calculate the final attention weights using alpha_i and alpha_j,
# and apply leaky Relu.
# 2. Calculate softmax over the neighbor nodes for all the nodes. Use
# torch_geometric.utils.softmax instead of the one in Pytorch.
# 3. Apply dropout to attention weights (alpha).
# 4. Multiply embeddings and attention weights. As a sanity check, the output
# should be of shape E * H * d.
# 5. ptr (LongTensor, optional): If given, computes the softmax based on
# sorted inputs in CSR representation. You can simply pass it to softmax.
# Our implementation is ~5 lines, but don't worry if you deviate from this.
alpha_ij = F.leaky_relu(alpha_i + alpha_j, negative_slope=self.negative_slope)
if ptr is None:
alpha_ij = softmax(alpha_ij, index)
else:
alpha_ij = softmax(alpha_ij, ptr)
alpha_ij = F.dropout(alpha_ij, p=self.dropout, training=self.training)
out = x_j * alpha_ij
############################################################################
return out
def aggregate(self, inputs, index, dim_size = None):
############################################################################
# TODO: Your code here!
# Implement your aggregate function here.
# See here as how to use torch_scatter.scatter: https://pytorch-scatter.readthedocs.io/en/latest/_modules/torch_scatter/scatter.html
# Pay attention to "reduce" parameter is different from that in GraphSage.
# Our implementation is ~1 lines, but don't worry if you deviate from this.
out = torch_scatter.scatter(inputs, index, dim=self.node_dim, reduce='sum')
############################################################################
return out
```
## Building Optimizers
This function has been implemented for you. **For grading purposes please use the default Adam optimizer**, but feel free to play with other types of optimizers on your own.
```
import torch.optim as optim
def build_optimizer(args, params):
weight_decay = args.weight_decay
filter_fn = filter(lambda p : p.requires_grad, params)
if args.opt == 'adam':
optimizer = optim.Adam(filter_fn, lr=args.lr, weight_decay=weight_decay)
elif args.opt == 'sgd':
optimizer = optim.SGD(filter_fn, lr=args.lr, momentum=0.95, weight_decay=weight_decay)
elif args.opt == 'rmsprop':
optimizer = optim.RMSprop(filter_fn, lr=args.lr, weight_decay=weight_decay)
elif args.opt == 'adagrad':
optimizer = optim.Adagrad(filter_fn, lr=args.lr, weight_decay=weight_decay)
if args.opt_scheduler == 'none':
return None, optimizer
elif args.opt_scheduler == 'step':
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=args.opt_decay_step, gamma=args.opt_decay_rate)
elif args.opt_scheduler == 'cos':
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=args.opt_restart)
return scheduler, optimizer
```
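The returned `(scheduler, optimizer)` pair is consumed in the usual PyTorch pattern. The following standalone sketch shows how Adam interacts with a `StepLR` scheduler; the hyperparameter values are illustrative (mirroring the `args` fields read by `build_optimizer`), and for brevity it does not call `build_optimizer` itself:

```python
import torch
import torch.optim as optim

# Illustrative hyperparameters, mirroring args.lr / args.weight_decay /
# args.opt_decay_step / args.opt_decay_rate above.
lr, weight_decay, decay_step, decay_rate = 0.01, 5e-3, 50, 0.5

params = [torch.nn.Parameter(torch.zeros(3))]
optimizer = optim.Adam(params, lr=lr, weight_decay=weight_decay)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=decay_step,
                                      gamma=decay_rate)

for epoch in range(100):
    # in a real loop: forward pass, loss.backward(), then:
    optimizer.step()
    scheduler.step()  # multiplies lr by decay_rate every decay_step epochs

print(optimizer.param_groups[0]['lr'])  # 0.01 -> 0.005 -> 0.0025
```

With the default `opt_scheduler='none'` used for grading, the scheduler is simply `None` and only `optimizer.step()` is called.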
## Training and Testing
Here we provide you with the functions to train and test. **Please do not modify this part for grading purposes.**
```
import time
import networkx as nx
import numpy as np
import torch
import torch.optim as optim
from tqdm import trange
import pandas as pd
import copy
from torch_geometric.datasets import TUDataset
from torch_geometric.datasets import Planetoid
from torch_geometric.data import DataLoader
import torch_geometric.nn as pyg_nn
import matplotlib.pyplot as plt
def train(dataset, args):
print("Node task. test set size:", np.sum(dataset[0]['test_mask'].numpy()))
print()
test_loader = loader = DataLoader(dataset, batch_size=args.batch_size, shuffle=False)
# build model
model = GNNStack(dataset.num_node_features, args.hidden_dim, dataset.num_classes,
args)
scheduler, opt = build_optimizer(args, model.parameters())
# train
losses = []
test_accs = []
best_acc = 0
best_model = None
for epoch in trange(args.epochs, desc="Training", unit="Epochs"):
total_loss = 0
model.train()
for batch in loader:
opt.zero_grad()
pred = model(batch)
label = batch.y
pred = pred[batch.train_mask]
label = label[batch.train_mask]
loss = model.loss(pred, label)
loss.backward()
opt.step()
total_loss += loss.item() * batch.num_graphs
total_loss /= len(loader.dataset)
losses.append(total_loss)
if epoch % 10 == 0:
test_acc = test(test_loader, model)
test_accs.append(test_acc)
if test_acc > best_acc:
best_acc = test_acc
best_model = copy.deepcopy(model)
else:
test_accs.append(test_accs[-1])
return test_accs, losses, best_model, best_acc, test_loader
def test(loader, test_model, is_validation=False, save_model_preds=False, model_type=None):
test_model.eval()
correct = 0
# Note that Cora is only one graph!
for data in loader:
with torch.no_grad():
# max(dim=1) returns values, indices tuple; only need indices
pred = test_model(data).max(dim=1)[1]
label = data.y
mask = data.val_mask if is_validation else data.test_mask
# node classification: only evaluate on nodes in test set
pred = pred[mask]
label = label[mask]
if save_model_preds:
print ("Saving Model Predictions for Model Type", model_type)
data = {}
data['pred'] = pred.view(-1).cpu().detach().numpy()
data['label'] = label.view(-1).cpu().detach().numpy()
df = pd.DataFrame(data=data)
# Save locally as csv
df.to_csv('CORA-Node-' + model_type + '.csv', sep=',', index=False)
correct += pred.eq(label).sum().item()
total = 0
for data in loader.dataset:
total += torch.sum(data.val_mask if is_validation else data.test_mask).item()
return correct / total
class objectview(object):
def __init__(self, d):
self.__dict__ = d
```
## Let's Start the Training!
We will be working on the CORA dataset on node-level classification.
This part is implemented for you. **For grading purposes, please do not modify the default parameters.** However, feel free to play with different configurations just for fun!
**Submit your best accuracy and loss on Gradescope.**
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
for args in [
{'model_type': 'GraphSage', 'dataset': 'cora', 'num_layers': 2, 'heads': 1, 'batch_size': 32, 'hidden_dim': 32, 'dropout': 0.5, 'epochs': 500, 'opt': 'adam', 'opt_scheduler': 'none', 'opt_restart': 0, 'weight_decay': 5e-3, 'lr': 0.01},
]:
args = objectview(args)
for model in ['GraphSage', 'GAT']:
args.model_type = model
# Match the dimension.
if model == 'GAT':
args.heads = 2
else:
args.heads = 1
if args.dataset == 'cora':
dataset = Planetoid(root='/tmp/cora', name='Cora')
else:
raise NotImplementedError("Unknown dataset")
test_accs, losses, best_model, best_acc, test_loader = train(dataset, args)
print("Maximum test set accuracy: {0}".format(max(test_accs)))
print("Minimum loss: {0}".format(min(losses)))
# Run test for our best model to save the predictions!
test(test_loader, best_model, is_validation=False, save_model_preds=True, model_type=model)
print()
plt.title(dataset.name)
plt.plot(losses, label="training loss" + " - " + args.model_type)
plt.plot(test_accs, label="test accuracy" + " - " + args.model_type)
plt.legend()
plt.show()
```
## Question 1.1: What is the maximum accuracy obtained on the test set for GraphSage? (10 points)
Running the cell above will show the results of your best model and save your best model's predictions to a file named *CORA-Node-GraphSage.csv*.
As we have seen before, you can view this file by clicking on the *Folder* icon on the left side panel. When you submit your assignment, you will have to download this file and attach it to your submission.
## Question 1.2: What is the maximum accuracy obtained on test set for GAT? (10 points)
Running the training cell above will also save your best GAT model predictions as *CORA-Node-GAT.csv*.
When you submit your assignment, you will have to download this file and attach it to your submission.
# 2) DeepSNAP Basics
In previous Colabs, we have seen graph class (NetworkX) and tensor (PyG) representations of graphs. The graph class `nx.Graph` provides rich analysis and manipulation functionalities, such as computing the clustering coefficient and PageRank vector for a graph. When working with PyG we were then introduced to tensor based representation of graphs (i.e. edge tensor `edge_index` and node attributes tensors `x` and `y`).
In this section, we present DeepSNAP, a package that combines the benefits of both graph representations and offers a full pipeline for GNN training / validation / and testing. Namely, DeepSNAP includes a graph class representation to allow for more efficient graph manipulation and analysis in addition to a tensor based representation for efficient message passing computation.
In general, [DeepSNAP](https://github.com/snap-stanford/deepsnap) is a Python library to assist efficient deep learning on graphs. DeepSNAP enables flexible graph manipulation, standard graph learning pipelines, heterogeneous graphs, and overall offers a simple graph learning API. In more detail:
1. DeepSNAP allows for sophisticated graph manipulations, such as feature computation, pretraining, subgraph extraction etc. during/before training.
2. DeepSNAP standardizes the pipelines for node, edge, and graph-level prediction tasks under inductive or transductive settings. Specifically, DeepSNAP removes previously non-trivial and repetitive design choices left to the user, such as how to split datasets. DeepSNAP thus saves significant repetitive, often non-trivial, coding effort and enables fair model comparison.
3. Many real-world graphs are heterogeneous in nature (i.e. include different node types or edge types). However, most packages lack complete support for heterogeneous graphs, including data storage and flexible message passing. DeepSNAP provides an efficient and flexible heterogeneous graph that supports both node and edge heterogeneity.
In this next section, we will focus on working with DeepSNAP for graph manipulation and dataset splitting.
[DeepSNAP](https://github.com/snap-stanford/deepsnap) is a newly released project and it is still under development. If you find any bugs or have any improvement ideas, feel free to raise issues or create pull requests on the GitHub directly :)
## Setup
```
import torch
import networkx as nx
import matplotlib.pyplot as plt
from deepsnap.graph import Graph
from deepsnap.batch import Batch
from deepsnap.dataset import GraphDataset
from torch_geometric.datasets import Planetoid, TUDataset
from torch.utils.data import DataLoader
def visualize(G, color_map=None, seed=123):
if color_map is None:
color_map = '#c92506'
plt.figure(figsize=(8, 8))
nodes = nx.draw_networkx_nodes(G, pos=nx.spring_layout(G, seed=seed), \
label=None, node_color=color_map, node_shape='o', node_size=150)
edges = nx.draw_networkx_edges(G, pos=nx.spring_layout(G, seed=seed), alpha=0.5)
if color_map is not None:
plt.scatter([],[], c='#c92506', label='Nodes with label 0', edgecolors="black", s=140)
plt.scatter([],[], c='#fcec00', label='Nodes with label 1', edgecolors="black", s=140)
plt.legend(prop={'size': 13}, handletextpad=0)
nodes.set_edgecolor('black')
plt.show()
```
## DeepSNAP Graph
The `deepsnap.graph.Graph` class is the core class of DeepSNAP. It not only represents a graph in tensor format but also includes a graph object from a graph manipulation package.
Currently DeepSNAP supports [NetworkX](https://networkx.org/) and [Snap.py](https://snap.stanford.edu/snappy/doc/index.html) as back end graph manipulation packages.
In this Colab, we will focus on using NetworkX as the back end graph manipulation package.
### NetworkX to DeepSNAP
To begin, let us first work through converting a simple random NetworkX graph to a DeepSNAP graph.
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
num_nodes = 100
p = 0.05
seed = 100
# Generate a networkx random graph
G = nx.gnp_random_graph(num_nodes, p, seed=seed)
# Generate some random node features and labels
node_feature = {node : torch.rand([5, ]) for node in G.nodes()}
node_label = {node : torch.randint(0, 2, ()) for node in G.nodes()}
# Set the random features and labels to G
nx.set_node_attributes(G, node_feature, name='node_feature')
nx.set_node_attributes(G, node_label, name='node_label')
# Print one node example
for node in G.nodes(data=True):
print(node)
break
color_map = ['#c92506' if node[1]['node_label'].item() == 0 else '#fcec00' for node in G.nodes(data=True)]
# Visualize the graph
visualize(G, color_map=color_map)
# Transform the networkx graph into the deepsnap graph
graph = Graph(G)
# Print out the general deepsnap graph information
print(graph)
# DeepSNAP will convert node attributes to tensors
# Notice the type of tensors
print("Node feature (node_feature) has shape {} and type {}".format(graph.node_feature.shape, graph.node_feature.dtype))
print("Node label (node_label) has shape {} and type {}".format(graph.node_label.shape, graph.node_label.dtype))
# DeepSNAP will also generate the edge_index tensor
print("Edge index (edge_index) has shape {} and type {}".format(graph.edge_index.shape, graph.edge_index.dtype))
# Different from only storing tensors, deepsnap graph also references to the networkx graph
# We will discuss why the reference will be helpful later
print("The DeepSNAP graph has {} as the internal manupulation graph".format(type(graph.G)))
```
### Tensor graph attributes
Similar to the native PyG tensor based representation, DeepSNAP includes a graph tensor based representation with three levels of graph attributes. In this example, we primarily have **node level** attributes including `node_feature` and `node_label`. The other two levels of attributes are **edge** and **graph** attributes. Similar to node level attributes, these attributes are prefixed by their respective type. For example, the features become `edge_feature` or `graph_feature` and labels become `edge_label` or `graph_label`, etc.
### Graph Object
DeepSNAP additionally allows us to easily access graph information through the backend graph object and graph manipulation package.
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
# Number of nodes
print("The random graph has {} nodes".format(graph.num_nodes))
# Number of edges
print("The random graph has {} edges".format(graph.num_edges))
```
### PyG to DeepSNAP
Lastly, DeepSNAP provides functionality to automatically transform a PyG dataset into a list of DeepSNAP graphs.
Here we transform the CORA dataset into a list with one DeepSNAP graph (i.e. the singular CORA graph).
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
root = './tmp/cora'
name = 'Cora'
# The Cora dataset
pyg_dataset= Planetoid(root, name)
# PyG dataset to a list of deepsnap graphs
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
# Get the first deepsnap graph (CORA only has one graph)
graph = graphs[0]
print(graph)
```
## Question 2.1: How many classes are in the CORA graph? How many features does each node have? (5 points)
```
def get_num_node_classes(graph):
# TODO: Implement a function that takes a deepsnap graph object
# and return the number of node classes of that graph.
num_node_classes = 0
############# Your code here #############
## (~1 line of code)
## Note
## 1. Colab autocomplete functionality might be useful
## 2. DeepSNAP documentation might be useful https://snap.stanford.edu/deepsnap/modules/graph.html
num_node_classes = graph.num_node_labels
##########################################
return num_node_classes
def get_num_node_features(graph):
# TODO: Implement a function that takes a deepsnap graph object
# and return the number of node features of that graph.
num_node_features = 0
############# Your code here #############
## (~1 line of code)
## Note
## 1. Colab autocomplete functionality might be useful
## 2. DeepSNAP documentation might be useful https://snap.stanford.edu/deepsnap/modules/graph.html
num_node_features = graph.num_node_features
##########################################
return num_node_features
if 'IS_GRADESCOPE_ENV' not in os.environ:
num_node_classes = get_num_node_classes(graph)
num_node_features = get_num_node_features(graph)
print("{} has {} classes".format(name, num_node_classes))
print("{} has {} features".format(name, num_node_features))
```
## DeepSNAP Dataset
Now, we will learn how to create DeepSNAP datasets. A `deepsnap.dataset.GraphDataset` contains a list of `deepsnap.graph.Graph` objects. In addition to the list of graphs, we specify what task the dataset will be used on, such as node level task (`task=node`), edge level task (`task=link_pred`) and graph level task (`task=graph`).
The GraphDataset class contains many other useful parameters that can be specified during initialization. If you are interested, you can take a look at the [documentation](https://snap.stanford.edu/deepsnap/modules/dataset.html#deepsnap-graphdataset).
As an example, we will first look at the COX2 dataset, which contains 467 graphs. In initializing our dataset, we convert the PyG dataset into its corresponding DeepSNAP dataset and set the task to `graph`.
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
root = './tmp/cox2'
name = 'COX2'
# Load the dataset through PyG
pyg_dataset = TUDataset(root, name)
# Convert to a list of deepsnap graphs
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
# Convert list of deepsnap graphs to deepsnap dataset with specified task=graph
dataset = GraphDataset(graphs, task='graph')
print(dataset)
```
## Question 2.2: What is the label of the graph with index 100? (5 points)
```
def get_graph_class(dataset, idx):
# TODO: Implement a function that takes a deepsnap dataset object,
# the index of a graph in the dataset, and returns the class/label
# of the graph (in integer).
label = -1
############# Your code here ############
## (~1 line of code)
## Notice
## 1. The graph label refers to a graph-level attribute
label = dataset[idx].graph_label
#########################################
return label
if 'IS_GRADESCOPE_ENV' not in os.environ:
graph_0 = dataset[0]
print(graph_0)
idx = 100
label = get_graph_class(dataset, idx)
print('Graph with index {} has label {}'.format(idx, label))
```
## Question 2.3: How many edges are in the graph with index 200? (5 points)
```
def get_graph_num_edges(dataset, idx):
# TODO: Implement a function that takes a deepsnap dataset object,
# the index of a graph in dataset, and returns the number of
# edges in the graph (in integer).
num_edges = 0
############# Your code here ############
## (~1 lines of code)
## Note
## 1. You can use the class property directly
num_edges = dataset[idx].num_edges
#########################################
return num_edges
if 'IS_GRADESCOPE_ENV' not in os.environ:
idx = 200
num_edges = get_graph_num_edges(dataset, idx)
print('Graph with index {} has {} edges'.format(idx, num_edges))
```
# 3) DeepSNAP Advanced
Now that we have learned the basics of DeepSNAP, let's move on to some more advanced functionalities.
In this section we will use DeepSNAP for graph feature computation and transductive/inductive dataset splitting.
## Setup
```
import torch
import networkx as nx
import matplotlib.pyplot as plt
from deepsnap.graph import Graph
from deepsnap.batch import Batch
from deepsnap.dataset import GraphDataset
from torch_geometric.datasets import Planetoid, TUDataset
from torch.utils.data import DataLoader
```
## Data Split in Graphs
As discussed in (LECTURE REFERENCE), data splitting for graphs can be much harder than for CV or NLP.
In general, data splitting is divided into two settings, **inductive** and **transductive**.
## Inductive Split
In an inductive setting, we split a list of multiple graphs into disjoint training, validation, and test sets.
Here is an example of using DeepSNAP to inductively split a list of graphs for a graph level task (graph classification etc.):
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
root = './tmp/cox2'
name = 'COX2'
pyg_dataset = TUDataset(root, name)
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
# Here we specify the task as graph-level task such as graph classification
task = 'graph'
dataset = GraphDataset(graphs, task=task)
# Specify transductive=False (inductive)
dataset_train, dataset_val, dataset_test = dataset.split(transductive=False, split_ratio=[0.8, 0.1, 0.1])
print("COX2 train dataset: {}".format(dataset_train))
print("COX2 validation dataset: {}".format(dataset_val))
print("COX2 test dataset: {}".format(dataset_test))
```
## Transductive Split
In the transductive setting, the training / validation / test sets all come from the same graph. As discussed in (LECTURE REF), we use a transductive setting when we do not need to generalize to new unseen graphs.
As an example, here we transductively split the CORA graph for a node level task, such as node classification.
(Notice that in DeepSNAP the default split setting is random (i.e., DeepSNAP randomly splits, e.g., the nodes into train / val / test); however, you can also use a fixed split by specifying `fixed_split=True` when loading the dataset from PyG, or by changing the `node_label_index` directly.)
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
root = './tmp/cora'
name = 'Cora'
pyg_dataset = Planetoid(root, name)
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
# Here we specify the task as node-level task such as node classification
task = 'node'
dataset = GraphDataset(graphs, task=task)
# Specify we want the transductive splitting
dataset_train, dataset_val, dataset_test = dataset.split(transductive=True, split_ratio=[0.8, 0.1, 0.1])
print("Cora train dataset: {}".format(dataset_train))
print("Cora validation dataset: {}".format(dataset_val))
print("Cora test dataset: {}".format(dataset_test))
print("Original Cora has {} nodes".format(dataset.num_nodes[0]))
# The nodes in each set can be found in node_label_index
print("After the split, Cora has {} training nodes".format(dataset_train[0].node_label_index.shape[0]))
print("After the split, Cora has {} validation nodes".format(dataset_val[0].node_label_index.shape[0]))
print("After the split, Cora has {} test nodes".format(dataset_test[0].node_label_index.shape[0]))
```
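Independently of DeepSNAP, the idea behind a random transductive node split can be sketched with plain NumPy: shuffle the node indices once and slice them 80/10/10. This is only an illustration of the mechanism DeepSNAP applies internally, not its actual code:

```python
import numpy as np

def random_node_split(num_nodes, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle node indices once and slice into train / val / test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(ratios[0] * num_nodes)
    n_val = int(ratios[1] * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# Cora has 2708 nodes
train_idx, val_idx, test_idx = random_node_split(2708)
print(len(train_idx), len(val_idx), len(test_idx))  # 2166 270 272
```

Because the three slices come from one permutation, every node lands in exactly one split.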
## Edge Level Split
Compared to node and graph level splitting, edge level splitting is a little bit tricky ;)
For edge level splitting we need to consider several different tasks:
1. Splitting positive edges into train / val / test datasets.
2. Sampling / re-sampling negative edges (i.e. edges not present in the graph).
3. Splitting edges into message passing and supervision edges.
With regard to point 3, for edge level data splitting we classify edges into two types. The first type is `message passing` edges, which are used for message passing by our GNN. The second type is `supervision` edges, which are used in the loss function for backpropagation. DeepSNAP allows for two different modes, where the `message passing` and `supervision` edges are either the same or disjoint.
### All Edge Splitting Mode
First, we explore the `edge_train_mode="all"` mode for edge level splitting, where the `message passing` and `supervision` edges are shared during training.
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
root = './tmp/cora'
name = 'Cora'
pyg_dataset = Planetoid(root, name)
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
# Specify task as link_pred for edge-level task
task = 'link_pred'
# Specify the train mode, "all" mode is default for deepsnap dataset
edge_train_mode = "all"
dataset = GraphDataset(graphs, task=task, edge_train_mode=edge_train_mode)
# Transductive link prediction split
dataset_train, dataset_val, dataset_test = dataset.split(transductive=True, split_ratio=[0.8, 0.1, 0.1])
print("Cora train dataset: {}".format(dataset_train))
print("Cora validation dataset: {}".format(dataset_val))
print("Cora test dataset: {}".format(dataset_test))
```
In DeepSNAP, the indices of supervision edges are stored in the `edge_label_index` tensor and the corresponding edge labels are stored in `edge_label` tensor.
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
print("Original Cora graph has {} edges".format(dataset[0].num_edges))
print()
print("Train set has {} message passing edges".format(dataset_train[0].edge_index.shape[1] // 2))
print("Train set has {} supervision (positive) edges".format(dataset_train[0].edge_label_index.shape[1] // 4))
print()
print("Validation set has {} message passing edges".format(dataset_val[0].edge_index.shape[1] // 2))
print("Validation set has {} supervision (positive) edges".format(dataset_val[0].edge_label_index.shape[1] // 4))
print()
print("Test set has {} message passing edges".format(dataset_test[0].edge_index.shape[1] // 2))
print("Test set has {} supervision (positive) edges".format(dataset_test[0].edge_label_index.shape[1] // 4))
```
**Specific things to note in `all` mode**:
* At training time: the supervision edges are the same as the training message passing edges.
* At validation time: the message passing edges are the training message passing edges and training supervision edges (still the training message passing edges in this case). However, we now include a set of unseen validation supervision edges that are disjoint from the training supervision edges.
* At test time: the message passing edges are the union of training message passing edges, training supervision edges, and validation supervision edges. The test supervision edges are then disjoint from the training supervision edges and validation supervision edges.
* We exclude negative edges in this illustration. However, the attributes `edge_label` and `edge_label_index` naturally also include the negative supervision edges (by default the number of negative edges is the same as the number of positive edges, hence the divide by 4 above).
Now that we have seen the basics of the `all` mode for edge splitting, we will implement a function that checks whether two edge index tensors are disjoint, and use it to explore more edge splitting properties.
## Question 3: Implement a function that checks whether two edge_index tensors are disjoint (i.e. do not share any common edges). Then answer the True/False questions below. (5 points)
```
def edge_indices_disjoint(edge_index_1, edge_index_2):
# TODO: Implement this function that takes two edge index tensors,
# and returns whether these two edge index tensors are disjoint.
disjoint = None
############# Your code here ############
## (~5 lines of code)
## Note
## 1. Here disjoint means that there is no single edge belongs to both edge index tensors
## 2. You do not need to consider the undirected case. For example, if edge_index_1 contains
## edge (a, b) and edge_index_2 contains edge (b, a). We will treat them as disjoint in this
## function.
edge_index_1_np = edge_index_1.T.detach().cpu().numpy()
edge_index_2_np = edge_index_2.T.detach().cpu().numpy()
intersection = set(map(tuple, edge_index_1_np)) & set(map(tuple, edge_index_2_np))
disjoint = len(intersection) == 0
#########################################
return disjoint
if 'IS_GRADESCOPE_ENV' not in os.environ:
num_train_edges = dataset_train[0].edge_label_index.shape[1] // 2
train_pos_edge_index = dataset_train[0].edge_label_index[:, :num_train_edges]
train_neg_edge_index = dataset_train[0].edge_label_index[:, num_train_edges:]
print("3.1 Training (supervision) positive and negative edges are disjoint = {}"\
.format(edge_indices_disjoint(train_pos_edge_index, train_neg_edge_index)))
num_val_edges = dataset_val[0].edge_label_index.shape[1] // 2
val_pos_edge_index = dataset_val[0].edge_label_index[:, :num_val_edges]
val_neg_edge_index = dataset_val[0].edge_label_index[:, num_val_edges:]
print("3.2 Validation (supervision) positive and negative edges are disjoint = {}"\
.format(edge_indices_disjoint(val_pos_edge_index, val_neg_edge_index)))
num_test_edges = dataset_test[0].edge_label_index.shape[1] // 2
test_pos_edge_index = dataset_test[0].edge_label_index[:, :num_test_edges]
test_neg_edge_index = dataset_test[0].edge_label_index[:, num_test_edges:]
print("3.3 Test (supervision) positive and negative edges are disjoint = {}"\
.format(edge_indices_disjoint(test_pos_edge_index, test_neg_edge_index)))
print("3.4 Test (supervision) positive and validation (supervision) positive edges are disjoint = {}"\
.format(edge_indices_disjoint(test_pos_edge_index, val_pos_edge_index)))
print("3.5 Validation (supervision) positive and training (supervision) positive edges are disjoint = {}"\
.format(edge_indices_disjoint(val_pos_edge_index, train_pos_edge_index)))
```
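The set-based disjointness check above can be illustrated with plain NumPy arrays, outside of torch and DeepSNAP. This standalone sketch converts each 2 x E edge index into a set of (src, dst) tuples and intersects the sets:

```python
import numpy as np

def np_edge_indices_disjoint(edge_index_1, edge_index_2):
    """Return True if no (src, dst) pair appears in both 2 x E arrays."""
    set_1 = set(map(tuple, edge_index_1.T))  # rows of the transpose are (src, dst)
    set_2 = set(map(tuple, edge_index_2.T))
    return len(set_1 & set_2) == 0

a = np.array([[0, 1, 2], [1, 2, 3]])  # edges (0,1), (1,2), (2,3)
b = np.array([[3, 4], [4, 5]])        # edges (3,4), (4,5)
c = np.array([[1, 5], [2, 6]])        # edge (1,2) also appears in a
print(np_edge_indices_disjoint(a, b))  # True
print(np_edge_indices_disjoint(a, c))  # False
```

As in the graded function, (a, b) and (b, a) are treated as different edges.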
### Disjoint Edge Splitting Mode
Now we will look at a relatively more complex transductive edge split setting, the `edge_train_mode="disjoint"` mode in DeepSNAP. In this setting, the `message passing` and `supervision` edges are completely disjoint.
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
edge_train_mode = "disjoint"
dataset = GraphDataset(graphs, task='link_pred', edge_train_mode=edge_train_mode)
orig_edge_index = dataset[0].edge_index
dataset_train, dataset_val, dataset_test = dataset.split(
transductive=True, split_ratio=[0.8, 0.1, 0.1])
train_message_edge_index = dataset_train[0].edge_index
train_sup_edge_index = dataset_train[0].edge_label_index
val_message_edge_index = dataset_val[0].edge_index
val_sup_edge_index = dataset_val[0].edge_label_index
test_message_edge_index = dataset_test[0].edge_index
test_sup_edge_index = dataset_test[0].edge_label_index
print("Original Cora graph has {} edges".format(dataset[0].num_edges))
print()
print("Train set has {} message passing edges".format(train_message_edge_index.shape[1] // 2))
print("Train set has {} supervision (positive) edges".format(train_sup_edge_index.shape[1] // 4))
print()
print("Validation set has {} message passing edges".format(val_message_edge_index.shape[1] // 2))
print("Validation set has {} supervision (positive) edges".format(val_sup_edge_index.shape[1] // 4))
print()
print("Test set has {} message passing edges".format(test_message_edge_index.shape[1] // 2))
print("Test set has {} supervision (positive) edges".format(test_sup_edge_index.shape[1] // 4))
```
**Specific things to note in `disjoint` mode**:
* At training time: the training supervision edges are disjoint from the training message passing edges.
* At validation time: the message passing edges are the union of training message passing edges and training supervision edges. The validation supervision edges are disjoint from both the training message passing and supervision edges.
* At test time: the message passing edges are the training message passing edges, training supervision edges, and validation supervision edges. The test supervision edges are disjoint from all the training and validation edges.
## Negative Edges
For edge level tasks, sampling negative edges is critical. Moreover, during each training iteration, we want to resample the negative edges.
Below we print the training and validation sets' negative edges across two iterations.
This demonstrates that the negative edges are resampled only during training.
```
if 'IS_GRADESCOPE_ENV' not in os.environ:
dataset = GraphDataset(graphs, task='link_pred', edge_train_mode="disjoint")
datasets = {}
follow_batch = []
datasets['train'], datasets['val'], datasets['test'] = dataset.split(
transductive=True, split_ratio=[0.8, 0.1, 0.1])
dataloaders = {
split: DataLoader(
ds, collate_fn=Batch.collate(follow_batch),
batch_size=1, shuffle=(split=='train')
)
for split, ds in datasets.items()
}
neg_edges_1 = None
for batch in dataloaders['train']:
num_edges = batch.edge_label_index.shape[1] // 2
neg_edges_1 = batch.edge_label_index[:, num_edges:]
print("First iteration training negative edges:")
print(neg_edges_1)
break
neg_edges_2 = None
for batch in dataloaders['train']:
num_edges = batch.edge_label_index.shape[1] // 2
neg_edges_2 = batch.edge_label_index[:, num_edges:]
print("Second iteration training negative edges:")
print(neg_edges_2)
break
neg_edges_1 = None
for batch in dataloaders['val']:
num_edges = batch.edge_label_index.shape[1] // 2
neg_edges_1 = batch.edge_label_index[:, num_edges:]
print("First iteration validation negative edges:")
print(neg_edges_1)
break
neg_edges_2 = None
for batch in dataloaders['val']:
num_edges = batch.edge_label_index.shape[1] // 2
neg_edges_2 = batch.edge_label_index[:, num_edges:]
print("Second iteration validation negative edges:")
print(neg_edges_2)
break
```
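The core of negative edge sampling can be sketched without DeepSNAP: repeatedly draw random node pairs and keep those that are not existing (positive) edges. This is a simplified illustration; DeepSNAP additionally handles per-iteration resampling and undirected edges:

```python
import random

def sample_negative_edges(num_nodes, pos_edges, num_samples, seed=0):
    """Sample node pairs (u, v), u != v, that are not in pos_edges."""
    rng = random.Random(seed)
    pos = set(pos_edges)
    neg = set()
    while len(neg) < num_samples:
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and (u, v) not in pos and (u, v) not in neg:
            neg.add((u, v))
    return list(neg)

pos = [(0, 1), (1, 2), (2, 3)]
neg = sample_negative_edges(num_nodes=5, pos_edges=pos, num_samples=3)
print(neg)  # three pairs, none of which are positive edges
```

Re-calling the function with a different seed models the resampling that happens on each training iteration.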
If you are interested in more graph splitting settings, please refer to the DeepSNAP dataset [documentation](https://snap.stanford.edu/deepsnap/modules/dataset.html).
## Graph Transformation and Feature Computation
The other core functionality of DeepSNAP is graph transformation / feature computation.
In DeepSNAP, we divide graph transformation / feature computation into two different types. The first includes transformations before training (e.g. transform the whole dataset before training directly), and the second includes transformations during training (transform batches of graphs).
Below is an example that uses the NetworkX backend to calculate the PageRank value for each node and then transforms the node features by concatenating each node's PageRank score (transforming the dataset before training).
```
def pagerank_transform_fn(graph):
# Get the referenced networkx graph
G = graph.G
# Calculate the pagerank by using networkx
pr = nx.pagerank(G)
# Transform the pagerank values to tensor
pr_feature = torch.tensor([pr[node] for node in range(graph.num_nodes)], dtype=torch.float32)
pr_feature = pr_feature.view(graph.num_nodes, 1)
# Concat the pagerank values to the node feature
graph.node_feature = torch.cat([graph.node_feature, pr_feature], dim=-1)
if 'IS_GRADESCOPE_ENV' not in os.environ:
root = './tmp/cox2'
name = 'COX2'
pyg_dataset = TUDataset(root, name)
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
dataset = GraphDataset(graphs, task='graph')
print("Number of features before transformation: {}".format(dataset.num_node_features))
dataset.apply_transform(pagerank_transform_fn, update_tensor=False)
print("Number of features after transformation: {}".format(dataset.num_node_features))
```
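To see what `nx.pagerank` is computing in the transform above, here is a minimal power-iteration sketch of PageRank on a toy adjacency matrix. It is illustrative only and assumes no dangling nodes; `nx.pagerank` also handles dangling nodes, personalization, and convergence tolerance:

```python
import numpy as np

def pagerank(adj, damping=0.85, num_iters=100):
    """Power iteration on the column-stochastic transition matrix."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)  # assumes every node has out-degree > 0
    trans = (adj / out_deg).T                 # column j = distribution out of node j
    pr = np.full(n, 1.0 / n)
    for _ in range(num_iters):
        pr = (1 - damping) / n + damping * trans @ pr
    return pr

# 3-node cycle: by symmetry every node should score 1/3
adj = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
pr = pagerank(adj)
print(pr)                  # ~[0.333, 0.333, 0.333]
print(round(pr.sum(), 6))  # 1.0
```

The scores form a probability distribution over nodes, which is why they make a sensible bounded node feature.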
## Question 4: Implement a transformation that adds the clustering coefficient of each node to its feature vector and then report the clustering coefficient of the node with index 3 in the graph with index 406 (5 points).
```
def cluster_transform_fn(graph):
# TODO: Implement a function that takes an deepsnap graph object and
# transform the graph by adding each node's clustering coefficient to its
# graph.node_feature representation
############# Your code here ############
## (~5 lines of code)
## Note
## 1. Compute the clustering coefficient value for each node and
## concat this value to the last dimension of graph.node_feature
# Get the referenced networkx graph
G = graph.G
# Compute the clustering coefficient of each node using networkx
cluster = nx.algorithms.cluster.clustering(G)
# Convert the clustering coefficients to a tensor
cluster_feature = torch.tensor([cluster[node] for node in range(graph.num_nodes)], dtype=torch.float32)
cluster_feature = cluster_feature.view(graph.num_nodes, 1)
# Concat the clustering coefficients to the node features
graph.node_feature = torch.cat([graph.node_feature, cluster_feature], dim=-1)
#########################################
if 'IS_GRADESCOPE_ENV' not in os.environ:
root = './cox2'
name = 'COX2'
pyg_dataset = TUDataset(root, name)
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
dataset = GraphDataset(graphs, task='graph')
# Transform the dataset
dataset.apply_transform(cluster_transform_fn, update_tensor=False)
node_idx = 3
graph_idx = 406
node_feature = dataset[graph_idx].node_feature
print("The node has clustering coefficient: {}".format(round(node_feature[node_idx][-1].item(), 2)))
```
### Final Thoughts
Apart from transforming the whole dataset before training, DeepSNAP can also transform the graph (usually sampled batches of graphs, `deepsnap.batch.Batch`) during each training iteration.
Also, DeepSNAP supports the synchronization of the transformation between the referenced graph objects and tensor representations. For example, you can just update the NetworkX graph object in the transform function and by specifying `update_tensor=True` the internal tensor representations will be automatically updated!
For more information, please refer to the DeepSNAP [documentation](https://snap.stanford.edu/deepsnap/).
# 4) Edge Level Prediction
From the last section, we learned how DeepSNAP transductively splits edges for edge level tasks. For the last part of the notebook, we will use DeepSNAP and PyG together to implement a simple edge level prediction (link prediction) model!
Specifically, we will use a 2 layer GraphSAGE embedding model to generate node embeddings, and then compute link predictions through a dot product link prediction head. Namely, given an edge (u, v) with GNN feature embeddings $f_u$ and $f_v$, our link prediction head generates its link prediction as $f_u \cdot f_v$.
To give a brief intuition for this dot product link prediction model, we are learning a GNN that embeds nodes such that nodes that share an edge in the graph are closer in the embedding space than nodes that do not. The dot product provides a proxy for closeness in the embedding space: a large positive dot product indicates that two vectors are closely aligned (the angle between them is small), whereas a negative dot product indicates that the vectors are unaligned (the angle between them is greater than 90 degrees).
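The dot-product head on its own is tiny: given node embeddings, the score of a candidate edge (u, v) is the dot product of the two embedding rows, squashed by a sigmoid into a link probability. A NumPy sketch of the idea (with made-up embeddings, not the model's actual code):

```python
import numpy as np

def link_scores(emb, edge_label_index):
    """Dot product of source and destination embeddings per candidate edge."""
    src = emb[edge_label_index[0]]
    dst = emb[edge_label_index[1]]
    return (src * dst).sum(axis=-1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

emb = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])  # nodes 0 and 1 aligned; node 2 opposite
edges = np.array([[0, 0], [1, 2]])                     # candidate edges (0,1) and (0,2)
probs = sigmoid(link_scores(emb, edges))
print(probs)  # ~[0.711, 0.269]: high for (0,1), low for (0,2)
```

Training with `BCEWithLogitsLoss` pushes the raw (pre-sigmoid) scores of positive edges up and of negative edges down.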
```
import os
import copy
import torch
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
from deepsnap.graph import Graph
from deepsnap.batch import Batch
from deepsnap.dataset import GraphDataset
from torch_geometric.datasets import Planetoid, TUDataset
from torch.utils.data import DataLoader
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv
class LinkPredModel(torch.nn.Module):
def __init__(self, input_dim, hidden_dim, num_classes, dropout=0.2):
super(LinkPredModel, self).__init__()
self.conv1 = SAGEConv(input_dim, hidden_dim)
self.conv2 = SAGEConv(hidden_dim, num_classes)
self.loss_fn = None
############# Your code here #############
## (~1 line of code)
## Note
## 1. Initialize the loss function to BCEWithLogitsLoss
self.loss_fn = torch.nn.BCEWithLogitsLoss()
##########################################
self.dropout = dropout
def reset_parameters(self):
self.conv1.reset_parameters()
self.conv2.reset_parameters()
def forward(self, batch):
node_feature, edge_index, edge_label_index = batch.node_feature, batch.edge_index, batch.edge_label_index
############# Your code here #############
## (~6 line of code)
## Note
## 1. Feed the node feature into the first conv layer
## 2. Add a ReLU after the first conv layer
## 3. Add dropout after the ReLU (with probability self.dropout)
## 4. Feed the output to the second conv layer
## 5. Select the embeddings of the source nodes and destination nodes
## by using the edge_label_index and compute the similarity of each pair
## by dot product
x = self.conv1(node_feature, edge_index)
x = F.relu(x)
x = F.dropout(x, p=self.dropout, training=self.training)
x = self.conv2(x, edge_index)
x_src = x[edge_label_index[0]]
x_dst = x[edge_label_index[1]]
x_similarity = x_src * x_dst
pred = torch.sum(x_similarity, dim=-1)
##########################################
return pred
def loss(self, pred, link_label):
return self.loss_fn(pred, link_label)
from sklearn.metrics import *
def train(model, dataloaders, optimizer, args):
val_max = 0
best_model = model
for epoch in range(1, args["epochs"]):
for i, batch in enumerate(dataloaders['train']):
batch.to(args["device"])
############# Your code here #############
## (~6 lines of code)
## Note
## 1. Zero grad the optimizer
## 2. Compute loss and backpropagate
## 3. Update the model parameters
optimizer.zero_grad()
pred = model(batch)
loss = model.loss(pred, batch.edge_label.type_as(pred))
loss.backward()
optimizer.step()
##########################################
log = 'Epoch: {:03d}, Train: {:.4f}, Val: {:.4f}, Test: {:.4f}, Loss: {}'
score_train = test(model, dataloaders['train'], args)
score_val = test(model, dataloaders['val'], args)
score_test = test(model, dataloaders['test'], args)
print(log.format(epoch, score_train, score_val, score_test, loss.item()))
if val_max < score_val:
val_max = score_val
best_model = copy.deepcopy(model)
return best_model
def test(model, dataloader, args, save_model_preds=False):
model.eval()
score = 0
preds = None
labels = None
############# Your code here #############
## (~7 lines of code)
## Note
## 1. Loop through batches in the dataloader (Note for us there is only one batch!)
## 2. Feed the batch to the model
## 3. Feed the model output to sigmoid
## 4. Compute the ROC-AUC score by using sklearn roc_auc_score function
## Note: Look into flattening and converting torch tensors into numpy arrays
## 5. Edge labels are stored in batch.edge_label
## 6. Make sure to save your **numpy** model predictions as 'preds'
## and the **numpy** edge labels as 'labels'
for batch in dataloader:
batch.to(args['device'])
preds = model(batch)
preds = torch.sigmoid(preds).cpu().detach().numpy()
labels = batch.edge_label.cpu().detach().numpy()
score += roc_auc_score(labels, preds)
score /= len(dataloader)
##########################################
if save_model_preds:
print ("Saving Link Classification Model Predictions")
print()
data = {}
data['pred'] = preds
data['label'] = labels
df = pd.DataFrame(data=data)
# Save locally as csv
df.to_csv('CORA-Link-Prediction.csv', sep=',', index=False)
return score
# Please don't change any parameters
args = {
"device" : 'cuda' if torch.cuda.is_available() else 'cpu',
"hidden_dim" : 128,
"epochs" : 200,
}
if 'IS_GRADESCOPE_ENV' not in os.environ:
pyg_dataset = Planetoid('./tmp/cora', 'Cora')
graphs = GraphDataset.pyg_to_graphs(pyg_dataset)
dataset = GraphDataset(
graphs,
task='link_pred',
edge_train_mode="disjoint"
)
datasets = {}
datasets['train'], datasets['val'], datasets['test']= dataset.split(
transductive=True, split_ratio=[0.85, 0.05, 0.1])
input_dim = datasets['train'].num_node_features
num_classes = datasets['train'].num_edge_labels
model = LinkPredModel(input_dim, args["hidden_dim"], num_classes).to(args["device"])
model.reset_parameters()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
dataloaders = {split: DataLoader(
ds, collate_fn=Batch.collate([]),
batch_size=1, shuffle=(split=='train'))
for split, ds in datasets.items()}
best_model = train(model, dataloaders, optimizer, args)
log = "Best Model Accuracies Train: {:.4f}, Val: {:.4f}, Test: {:.4f}"
best_train_roc = test(best_model, dataloaders['train'], args)
best_val_roc = test(best_model, dataloaders['val'], args)
best_test_roc = test(best_model, dataloaders['test'], args, save_model_preds=True)
print(log.format(best_train_roc, best_val_roc, best_test_roc))
```
## Question 4: What is the maximum ROC-AUC score you get for your best_model on test set? (13 points)
After training your model, download and submit your best model prediction file: *CORA-Link-Prediction.csv*.
As we have seen before, you can view this file by clicking on the *Folder* icon on the left side panel.
# Submission
You will need to submit four files on Gradescope to complete this notebook.
1. Your completed *XCS224W_Colab3.ipynb*. From the "File" menu select "Download .ipynb" to save a local copy of your completed Colab.
2. *CORA-Node-GraphSage.csv*
3. *CORA-Node-GAT.csv*
4. *CORA-Link-Prediction.csv*
Download the csv files by selecting the *Folder* icon on the left panel.
To submit your work, zip the files downloaded in steps 1-4 above and submit to Gradescope. **NOTE:** DO NOT rename any of the downloaded files.
```
!python3 -m pip freeze | grep xlrd
!python3 -m pip freeze | grep openpy
```
# Using the pandas library to analyze vulnerability descriptions from the FSTEC data bank
This article demonstrates how the pandas library can be used to work with information from the FSTEC data bank (bdu.fstec.ru) on threats (thrlist.xlsx) and vulnerabilities (vullist.xlsx).
You can work in the free Google Colab environment, which comes with the required software preinstalled:
pandas
xlrd
openpyxl
## Contents <a name='toc'></a>
<ul>
<a href='#load'>Downloading the files from the site</a>
</ul>
<ul>
<a href='#thrlist'>Analyzing the threats file thrlist.xlsx</a>
</ul>
<ul>
<a href='#vullist'>Analyzing the vulnerabilities file vullist.xlsx</a>
</ul>
<ul>
<a href='#refs'>References</a>
</ul>
## Downloading the files from the site <a name='load'></a>
The official FSTEC site publishes the lists of threats (thrlist.xlsx) and vulnerabilities (vullist.xlsx) as Excel files.
Let's download them.
```
!wget https://bdu.fstec.ru/files/documents/thrlist.xlsx
!wget https://bdu.fstec.ru/files/documents/vullist.xlsx
```
Let's make sure the files appeared in the local directory. We can call the standard Linux command `ls` to list the directory contents; to run a shell command from a cell, prefix it with "!".
```
!ls
```
Import the pandas library for working with tables.
```
import pandas as pd
```
Create a DataFrame from the xlsx file (skipping one row so that the headers are parsed correctly).
```
df = pd.read_excel("./thrlist.xlsx", skiprows=1, engine="openpyxl")
```
Display the first three rows of the DataFrame.
```
df.head(3)
```
Check the dimensions.
```
df.shape
```
List the column names.
```
for i,col in enumerate(df.columns):
print(i+1,":",col)
```
So, we have downloaded the threat and vulnerability descriptions from the official FSTEC site as xlsx files and built pandas.DataFrame tables from them. Next we will analyze the contents of these tables with pandas methods.
<a href='#toc'>Back to contents</a>
## Analyzing the threats file thrlist.xlsx <a name='thrlist'></a>
Let's look at information about the table's columns.
```
df.info()
```
From this summary we can see that the table has 10 columns: four are integer, two have type "datetime", and the rest are strings.
We can also see that the "Источник угрозы" (threat source) column has two missing values (NaN).
Let's select only the rows where the threat involves an integrity violation.
```
df[df['Нарушение целостности']==1]
```
Select only the rows related to BIOS.
```
df[df['Наименование УБИ'].apply(lambda x: x.find("BIOS"))!=-1]
```
Select the threats added in 2019.
```
df[(df['Дата включения угрозы в БнД УБИ']>'2019-01-01')&(df['Дата включения угрозы в БнД УБИ']<'2020-01-01')]
```
An attempt to filter the table, keeping only the rows whose "Источник угрозы (характеристика и потенциал нарушителя)" column contains the word "высоким" ("high"), raises an error.
The reason is that two cells in that column held missing values (NaN, "Not a Number", the special marker for an empty value).
Below we detect these and replace the missing values with the string "не задано" ("not specified").
```
df['Источник угрозы (характеристика и потенциал нарушителя)'].isna().sum()
df['Источник угрозы (характеристика и потенциал нарушителя)'].fillna("не задано", inplace=True)
df['Источник угрозы (характеристика и потенциал нарушителя)'].isna().sum()
```
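The failure mode and the fix can be reproduced on a toy frame with made-up data: `str.contains` keeps NaN for missing values, which breaks boolean indexing, while `fillna` removes the problem up front (a minimal sketch, not the FSTEC data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"source": ["high potential", np.nan, "low potential"]})

# str.contains propagates NaN into the mask, so df[mask] would fail
mask = df["source"].str.contains("high")
print(int(mask.isna().sum()))  # 1 missing value in the mask

# Fill missing values first, then filter safely
df["source"] = df["source"].fillna("not specified")
filtered = df[df["source"].str.contains("high")]
print(len(filtered))  # 1
```

An alternative is passing `na=False` to `str.contains`, which treats missing values as non-matches without modifying the data.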
Now everything works; keep only the rows with a high attacker potential.
```
df[df['Источник угрозы (характеристика и потенциал нарушителя)'].str.contains("высоким")]
```
A compound condition: 1) "BIOS" appears in the threat name and 2) the threat violates confidentiality.
```
df_result = df[(df['Наименование УБИ'].apply(lambda x: x.find("BIOS"))!=-1)&(df['Нарушение конфиденциальности']==1)]
df_result
```
Write the result to an xls file.
```
df_result.to_excel("name.xls")
!ls
```
Download the file from the Google Colab virtual disk to the local disk.
```
from google.colab import files
files.download("name.xls")
!ls
```
<a href="#toc">Back to contents</a>
## Analyzing the vulnerabilities file vullist.xlsx <a name='vullist'></a>
```
df2 = pd.read_excel("vullist.xlsx", skiprows=2)
df2
df2.shape
df2.columns
df2[df2['Вендор ПО'].apply(lambda x:x.find("D-Link"))!=-1]
df2['Вендор ПО'].unique().shape
df2['Вендор ПО'].value_counts()[:10].plot(kind='bar')
import matplotlib.pyplot as plt
plt.bar(x=range(10),height=df2['Вендор ПО'].value_counts()[:10])
plt.show()
df2[df2['Наименование уязвимости'].apply(lambda x:x.find("облач"))!=-1].shape
```
## References <a name='refs'></a>
- https://bdu.fstec.ru
- https://pandas.pydata.org
- https://github.com/yurichernyshov/Data-Science-Course-USURT/blob/master/lessons/100%20questions%20Pandas.ipynb
<a href='#toc'>Back to contents</a>
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# LightGBM: A Highly Efficient Gradient Boosting Decision Tree
This notebook will give you an example of how to train a LightGBM model to estimate click-through rates on an e-commerce advertisement. We will train a LightGBM based model on the Criteo dataset.
[LightGBM](https://github.com/Microsoft/LightGBM) is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient with the following advantages:
* Fast training speed and high efficiency.
* Low memory usage.
* Great accuracy.
* Support of parallel and GPU learning.
* Capable of handling large-scale data.
## Global Settings and Imports
```
import sys, os
sys.path.append("../../")
import numpy as np
import lightgbm as lgb
import papermill as pm
import pandas as pd
import category_encoders as ce
from tempfile import TemporaryDirectory
from sklearn.metrics import roc_auc_score, log_loss
import reco_utils.recommender.lightgbm.lightgbm_utils as lgb_utils
import reco_utils.dataset.criteo as criteo
print("System version: {}".format(sys.version))
print("LightGBM version: {}".format(lgb.__version__))
```
### Parameter Setting
Let's set the main related parameters for LightGBM now. The task is a binary classification (predicting click or no click), so the objective function is set to binary log loss, and AUC is used as the evaluation metric, since it is less affected by class imbalance in the dataset.
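AUC can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one, which is why it tolerates class imbalance. A small NumPy sketch of that pairwise definition (illustrative; sklearn's `roc_auc_score` computes the same quantity more efficiently):

```python
import numpy as np

def pairwise_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # heavily imbalanced: one positive
scores = np.array([0.9, 0.8, 0.1, 0.2, 0.3, 0.1, 0.2, 0.1, 0.3, 0.2])
print(pairwise_auc(labels, scores))  # 1.0: the lone positive outranks every negative
```

Note that a classifier predicting "no click" for everything would get 90% accuracy here but tells us nothing about ranking, which is what AUC measures.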
Generally, we can adjust the number of leaves (MAX_LEAF), the minimum number of data in each leaf (MIN_DATA), maximum number of trees (NUM_OF_TREES), the learning rate of trees (TREE_LEARNING_RATE) and EARLY_STOPPING_ROUNDS (to avoid overfitting) in the model to get better performance.
Beyond these, other parameters can also be adjusted to optimize the results. A list of all parameters is available [in this link](https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst), and advice on tuning them can be found [in this url](https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters-Tuning.rst).
```
MAX_LEAF = 64
MIN_DATA = 20
NUM_OF_TREES = 100
TREE_LEARNING_RATE = 0.15
EARLY_STOPPING_ROUNDS = 20
METRIC = "auc"
SIZE = "sample"
params = {
'task': 'train',
'boosting_type': 'gbdt',
'num_class': 1,
'objective': "binary",
'metric': METRIC,
'num_leaves': MAX_LEAF,
'min_data': MIN_DATA,
'boost_from_average': True,
#set it according to your cpu cores.
'num_threads': 20,
'feature_fraction': 0.8,
'learning_rate': TREE_LEARNING_RATE,
}
```
## Data Preparation
Here we use CSV format as the example data input. Our example data is a sample (about 100 thousand rows) from the [Criteo dataset](https://www.kaggle.com/c/criteo-display-ad-challenge). Criteo is a well-known industry benchmark for developing CTR prediction models and is frequently adopted as an evaluation dataset in research papers. The original dataset is too large for a lightweight demo, so we sample a small portion of it as the demo dataset.
Specifically, there are 39 columns of features in Criteo, where 13 columns are numerical features (I1-I13) and the other 26 columns are categorical features (C1-C26).
```
nume_cols = ["I" + str(i) for i in range(1, 14)]
cate_cols = ["C" + str(i) for i in range(1, 27)]
label_col = "Label"
header = [label_col] + nume_cols + cate_cols
with TemporaryDirectory() as tmp:
all_data = criteo.load_pandas_df(size=SIZE, local_cache_path=tmp, header=header)
display(all_data.head())
```
First, we split the full dataset into three parts: train_data (the first 80%), valid_data (the middle 10%) and test_data (the last 10%). <br>
Notably, since Criteo is time-series streaming data, which is also very common in recommendation scenarios, we split the data in its original order.
```
# split data to 3 sets
length = len(all_data)
train_data = all_data.loc[:int(0.8*length)-1]
valid_data = all_data.loc[int(0.8*length):int(0.9*length)-1]
test_data = all_data.loc[int(0.9*length):]
```
## Basic Usage
### Ordinal Encoding
Since LightGBM can handle low-frequency features and missing values by itself, for basic usage we only encode the string-like categorical features with an ordinal encoder.
```
ord_encoder = ce.ordinal.OrdinalEncoder(cols=cate_cols)
def encode_csv(df, encoder, label_col, typ='fit'):
if typ == 'fit':
df = encoder.fit_transform(df)
else:
df = encoder.transform(df)
y = df[label_col].values
del df[label_col]
return df, y
train_x, train_y = encode_csv(train_data, ord_encoder, label_col)
valid_x, valid_y = encode_csv(valid_data, ord_encoder, label_col, 'transform')
test_x, test_y = encode_csv(test_data, ord_encoder, label_col, 'transform')
print('Train Data Shape: X: {trn_x_shape}; Y: {trn_y_shape}.\nValid Data Shape: X: {vld_x_shape}; Y: {vld_y_shape}.\nTest Data Shape: X: {tst_x_shape}; Y: {tst_y_shape}.\n'
.format(trn_x_shape=train_x.shape,
trn_y_shape=train_y.shape,
vld_x_shape=valid_x.shape,
vld_y_shape=valid_y.shape,
tst_x_shape=test_x.shape,
tst_y_shape=test_y.shape,))
train_x.head()
```
### Create model
When both hyper-parameters and data are ready, we can create a model:
```
lgb_train = lgb.Dataset(train_x, train_y.reshape(-1), params=params, categorical_feature=cate_cols)
lgb_valid = lgb.Dataset(valid_x, valid_y.reshape(-1), reference=lgb_train, categorical_feature=cate_cols)
lgb_test = lgb.Dataset(test_x, test_y.reshape(-1), reference=lgb_train, categorical_feature=cate_cols)
lgb_model = lgb.train(params,
lgb_train,
num_boost_round=NUM_OF_TREES,
early_stopping_rounds=EARLY_STOPPING_ROUNDS,
valid_sets=lgb_valid,
categorical_feature=cate_cols)
```
Now let's see what is the model's performance:
```
test_preds = lgb_model.predict(test_x)
auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))
logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)
res_basic = {"auc": auc, "logloss": logloss}
print(res_basic)
pm.record("res_basic", res_basic)
```
## Optimized Usage
### Label-encoding and Binary-encoding
Next, since LightGBM handles dense numerical features very effectively, we try to convert all the categorical features in the original data into numerical ones, by label-encoding [3] and binary-encoding [4]. Due to the sequential nature of Criteo, the label-encoding we adopt is executed one-by-one: we encode the samples in order, using only the information from the samples that precede each sample (sequential label-encoding and sequential count-encoding). Besides, we also filter out the low-frequency categorical features and fill the missing numerical features with the mean of the corresponding column (see `lgb_utils.NumEncoder`).
Specifically, in `lgb_utils.NumEncoder`, the main steps are as follows.
* Firstly, we convert the low-frequency categorical features to `"LESS"` and the missing categorical features to `"UNK"`.
* Secondly, we convert the missing numerical features into the mean of corresponding columns.
* Thirdly, the string-like categorical features are ordinal encoded like the example shown in basic usage.
* Then, we target-encode the categorical features one-by-one in sample order. For each sample, we add the label and count information of its preceding samples as new features. Formally, for $i=1,2,...,n$, we add $\frac{\sum\nolimits_{j=1}^{i-1} I(x_j=c) \cdot y_j}{\sum\nolimits_{j=1}^{i-1} I(x_j=c)}$ as a new label feature for the current sample $x_i$, where $c$ is a category to encode in the current sample, $(i-1)$ is the number of preceding samples, and $I(\cdot)$ is the indicator function checking whether $x_j=c$. Meanwhile, we also add the count frequency of $c$, which is $\frac{\sum\nolimits_{j=1}^{i-1} I(x_j=c)}{i-1}$, as a new count feature.
* Finally, based on the results of ordinal encoding, we add the binary encoding results as new columns into the data.
Note that the statistics used in the above process are only updated when fitting the training set, and stay fixed when transforming the test set, because the labels of the test data must be treated as unknown.
```
label_col = 'Label'
num_encoder = lgb_utils.NumEncoder(cate_cols, nume_cols, label_col)
train_x, train_y = num_encoder.fit_transform(train_data)
valid_x, valid_y = num_encoder.transform(valid_data)
test_x, test_y = num_encoder.transform(test_data)
del num_encoder
print('Train Data Shape: X: {trn_x_shape}; Y: {trn_y_shape}.\nValid Data Shape: X: {vld_x_shape}; Y: {vld_y_shape}.\nTest Data Shape: X: {tst_x_shape}; Y: {tst_y_shape}.\n'
.format(trn_x_shape=train_x.shape,
trn_y_shape=train_y.shape,
vld_x_shape=valid_x.shape,
vld_y_shape=valid_y.shape,
tst_x_shape=test_x.shape,
tst_y_shape=test_y.shape,))
```
### Training and Evaluation
```
lgb_train = lgb.Dataset(train_x, train_y.reshape(-1), params=params)
lgb_valid = lgb.Dataset(valid_x, valid_y.reshape(-1), reference=lgb_train)
lgb_model = lgb.train(params,
lgb_train,
num_boost_round=NUM_OF_TREES,
early_stopping_rounds=EARLY_STOPPING_ROUNDS,
valid_sets=lgb_valid)
test_preds = lgb_model.predict(test_x)
auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))
logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)
res_optim = {"auc": auc, "logloss": logloss}
print(res_optim)
pm.record("res_optim", res_optim)
```
## Model saving and loading
Now that we have finished basic training and testing with LightGBM, let's save the model, reload it, and then evaluate it again.
```
with TemporaryDirectory() as tmp:
save_file = os.path.join(tmp, r'finished.model')
lgb_model.save_model(save_file)
loaded_model = lgb.Booster(model_file=save_file)
# eval the performance again
test_preds = loaded_model.predict(test_x)
auc = roc_auc_score(np.asarray(test_y.reshape(-1)), np.asarray(test_preds))
logloss = log_loss(np.asarray(test_y.reshape(-1)), np.asarray(test_preds), eps=1e-12)
print({"auc": auc, "logloss": logloss})
```
## Additional Reading
\[1\] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems. 3146–3154.<br>
\[2\] The parameters of LightGBM: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst <br>
\[3\] Anna Veronika Dorogush, Vasily Ershov, and Andrey Gulin. 2018. CatBoost: gradient boosting with categorical features support. arXiv preprint arXiv:1810.11363 (2018).<br>
\[4\] Scikit-learn. 2018. categorical_encoding. https://github.com/scikit-learn-contrib/categorical-encoding<br>
| github_jupyter |
```
import os
datadir = "/Users/michielk/oxdata/P01/EM/Myrf_01/SET-B/B-NT-S10-2f_ROI_00/zws"
# pred_file = os.path.join(datadir, 'B-NT-S10-2f_ROI_00ds7_probs1_eed2_main.h5') # dataset: 'main'
pred_file = os.path.join(datadir, 'B-NT-S10-2f_ROI_00ds7_probs_main_vol00.h5') # dataset: 'main'
aff_file = os.path.join(datadir, 'B-NT-S10-2f_ROI_00ds7_probs_main_vol00_grad.h5') # dataset: 'main'
out_folder = os.path.join(datadir, 'zws_vol00_')
outname = os.path.join(datadir, 'B-NT-S10-2f_ROI_00ds7_probs_zws.h5') # dataset: 'main'
max_len = 300
# in conda root env
import h5py
import numpy as np
from wmem import utils
h5path_in = os.path.join(pred_file, 'main')
h5file_in, ds_in, elsize, axlab = utils.h5_load(h5path_in)
grad = np.array(np.absolute(np.gradient(ds_in[0,:,:,:], 1)))
h5file_in.close()
h5path_out = os.path.join(datadir, 'B-NT-S10-2f_ROI_00ds7_probs_main_vol00_absgrad.h5', 'main')
h5file_out, ds_out = utils.h5_write(None, grad.shape, grad.dtype,
h5path_out,
element_size_um=elsize,
axislabels=axlab)
ds_out[:] = grad
h5file_out.close()
from zwatershed import (partition_subvols,
eval_with_par_map,
eval_with_spark,
stitch_and_save,
merge_by_thresh)
partition_data = partition_subvols(aff_file, out_folder, max_len)
eval_with_spark(partition_data[0])
# NUM_WORKERS=4
# eval_with_par_map(partition_data[0], NUM_WORKERS)
stitch_and_save(partition_data, outname)
%matplotlib nbagg
import numpy as np
import sys
import time
import os
import h5py
import os.path as op
import matplotlib
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from multiprocessing import Pool
from itertools import product
from zwatershed import (partition_subvols,
eval_with_par_map,
stitch_and_save,
merge_by_thresh)
# from par_funcs import *
# sys.path.append('..')
cmap = matplotlib.colors.ListedColormap(np.vstack(((0, 0, 0), np.random.rand(int(1e6), 3))))  # rand needs int dims
V = 20
datadir = "/Users/michielk/oxdata/P01/EM/Myrf_01/SET-B/B-NT-S10-2f_ROI_00/zws"
# zwsbase = os.path.join(datadir, "zws")
outname = os.path.join(datadir, 'B-NT-S10-2f_ROI_00ds7_probs_zws.h5') # dataset: 'main'
orig_file = h5py.File(aff_file,'r')
start = np.array([0, 0, 0])
stop = np.array([191, 301, 244])
fig, axs = plt.subplots(1, 3)
for i in range(0, 3):
orig = orig_file['main'][i, start[0]:stop[0], start[1]:stop[1], start[2]:stop[2]]
ax = axs[i]
cax = ax.imshow(orig[V,:,:], cmap=plt.get_cmap('Greys'))
cbar = fig.colorbar(cax, ax=ax, orientation='horizontal')
ax.set_title('orig{}'.format(i))
orig_file.close()
plt.show()
num, thresh = 0, 1000.0
fig, axs = plt.subplots(1, 2)
basic_file = h5py.File(os.path.join(datadir, 'zws_vol01_0_0_0_vol', 'basic.h5'),'r')
seg_init = np.array(basic_file['seg'])
rg_init = np.array(basic_file['rg'])
keeps = rg_init[:,0]<rg_init[:,1]
rg_init = rg_init[keeps,:]
seg_sizes_init = np.array(basic_file['counts'])
basic_file.close()
ax = axs[0]
ax.imshow(seg_init[V,:,:], cmap=cmap)
ax.set_title('seg_init')
f = h5py.File(outname, 'a')
s,e = f['starts'][num],f['ends'][num]
seg = f['seg'][s[0]:e[0]-3,s[1]:e[1]-3,s[2]:e[2]-3]
seg_sizes = np.array(f['seg_sizes'])
rg = np.array(f['rg_'+str(num)])
f.close()
ax = axs[1]
ax.imshow(seg[V,:,:], cmap=cmap)
ax.set_title('seg_after_stitching')
plt.show()
print "num_segs",len(np.unique(seg_init)),len(np.unique(seg))
print "rg lens",len(rg_init),len(rg)
num, thresh = 0, 0.5
seg_init_merged = merge_by_thresh(seg_init, seg_sizes_init, rg_init, thresh)
seg_merged = merge_by_thresh(seg, seg_sizes, rg, thresh)
fig, axs = plt.subplots(1, 2)
ax = axs[0]
ax.imshow(seg_init_merged[V,:,:], cmap=cmap)
ax.set_title('merged init')
ax = axs[1]
ax.imshow(seg_merged[V,:,:], cmap=cmap)
ax.set_title('merged')
plt.show()
print "num_segs",len(np.unique(seg_init)),len(np.unique(seg))
print "rg lens",len(rg_init),len(rg)
```
| github_jupyter |
## sigMF STFT on GPU and CPU
```
import os
import itertools
from sklearn.utils import shuffle
import torch, torchvision
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.nn.modules as mod
import torch.utils.data
import torch.utils.data as data
from torch.nn.utils.rnn import pack_padded_sequence
from torch.nn.utils.rnn import pad_packed_sequence
from torch.autograd import Variable
import numpy as np
import sys
import importlib
import time
import matplotlib.pyplot as plt
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torchvision.utils import save_image
import librosa
from scipy import signal
from scipy import stats
from scipy.special import comb
import glob
import json
import pickle
from random import randint, choice
import random
from timeit import default_timer as timer
from torchaudio.functional import istft
from sklearn.decomposition import NMF
plt.style.use('default')
device = torch.device('cuda:0')
print('Torch version =', torch.__version__, 'CUDA version =', torch.version.cuda)
print('CUDA Device:', device)
print('Is cuda available? =',torch.cuda.is_available())
# %matplotlib notebook
# %matplotlib inline
```
#### Machine paths
```
# path = "/home/db/sigMF_ML/class3/data_voice/" # G7
# path_val = "/home/db/sigMF_ML/class3/val_data2/" # G7
# path_save = "/home/db/sigMF_ML/class3/" # G7
# path_doc = "/home/db/sigMF_ML/class3/data_dump2/" # G7
# path_temp = "/home/db/sigMF_ML/class3/temp2/" # G7
# path = "/home/david/sigMF_ML/class2/data2/" # GTX1080Ti
# path_save = "/home/david/sigMF_ML/class2/" # GTX1080Ti
# path_temp = "/home/david/sigMF_ML/class2/temp2/" # GTX1080Ti
# path_val = "/home/david/sigMF_ML/class2/val_data2/" # GTX1080Ti
# path_doc = "/home/david/sigMF_ML/class2/data_dump2/" # GTX1080Ti
# path = "/home/david/sigMF_ML/class2/data_voice/" # GTX1080Ti
# path = "/home/david/sigMF_ML/class2/E503 project/" # ace
# path_val = "/home/david/sigMF_ML/class2/val_data2/" # ace
path_save = "/home/david/sigMF_ML/class2/UDV_matrix/" # ace
# path_doc = "/home/david/sigMF_ML/class2/data_dump2/" # ace
# path_temp = "/home/david/sigMF_ML/class2/temp2/" # ace
path = "/home/david/sigMF_ML/class2/clean_speech/IQ_files/" # ace
# path = "/home/david/sigMF_ML2/class3/data/" # tensorbook
# path_val = "/home/david/sigMF_ML2/class3/val_data/" # tensorbook
# path_save = "/home/david/sigMF_ML2/class3/" # tensorbook
# path_doc = "/home/david/sigMF_ML2/class3/data_dump/" # tensorbook
# path_temp = "/home/david/sigMF_ML2/class3/temp/" # tensorbook
# path = "/home/david/sigMF_ML2/class3/data2/" # tensorbook
# path_val = "/home/david/sigMF_ML2/class3/val_data/" # tensorbook
# path_save = "/home/david/sigMF_ML2/class3/" # tensorbook
# path_doc = "/home/david/sigMF_ML2/class3/data_dump/" # tensorbook
# path_temp = "/home/david/sigMF_ML2/class3/temp/" # tensorbook
# path = "/home/david/sigMF_ML2/class3/data_voice/" # tensorbook
#path = "/content/" # colab
print(path)
```
#### reading sigmf meta data and encoder function
```
# START OF FUNCTIONS ****************************************************
def meta_encoder(meta_list, num_classes):
a = np.asarray(meta_list, dtype=int)
# print('a = ', a)
return a
def read_meta(meta_files):
meta_list = []
for meta in meta_files:
all_meta_data = json.load(open(meta))
meta_list.append(all_meta_data['global']["core:class"])
meta_list = list(map(int, meta_list))
return meta_list
def read_num_val(x):
x = len(meta_list_val)
return x
print(path)
os.chdir(path)
data_files = sorted(glob.glob('*.sigmf-data'))
meta_files = sorted(glob.glob('*.sigmf-meta'))
for meta in meta_files:
all_meta_data = json.load(open(meta))
print("file name = ", meta)
```
#### torch GPU Cuda stft
```
def gpu(db, n_fft):
I = db[0::2]
Q = db[1::2]
start = timer()
w = 512
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False)
I_stft = torch.stft(torch.tensor(I).cuda(), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
Q_stft = torch.stft(torch.tensor(Q).cuda(), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
X_stft = I_stft[...,0] + Q_stft[...,0] + I_stft[...,1] + -1*Q_stft[...,1]
X_stft = torch.cat((X_stft[n_fft//2:],X_stft[:n_fft//2]))
end = timer()
print(end - start)
torch.cuda.empty_cache()
return X_stft, I_stft, Q_stft
```
#### scipy CPU stft function
```
# def cpu(db, n_fft):
# t = len(db)
# db2 = db[0::]
# start = timer()
# db = db.astype(np.float32).view(np.complex64)
# Fs = 1e6
# I_t, I_f, Z = signal.stft(db, fs=Fs, nperseg=n_fft, return_onesided=False)
# Z = np.vstack([Z[n_fft//2:], Z[:n_fft//2]])
# end = timer()
# print(end - start)
# return Z
```
### GPU Timing
```
n_fft = 1000
for file in data_files:
db = np.fromfile(file, dtype="float32")
stft_gpu, I_stft, Q_stft = gpu(db, n_fft)
I_stft.shape, Q_stft.shape, stft_gpu.shape
Q_stft[497:504, 1, 0],Q_stft[497:504, 1, 1]
I_stft[497:504, 1, 0],I_stft[497:504, 1, 1]
stft_gpu[497:504, 1]
stft_gpu.shape
plt.figure(figsize=(9, 6))
fig3 = plt.figure()
plt.imshow(20*np.log10(np.abs(stft_gpu.cpu()+1e-8)), aspect='auto', origin='lower')
title = "Vodeson Original spectrum"
plt.title(title)
plt.xlabel('Time in bins')
plt.ylabel('Frequency bins(1Khz resolution)')
plt.minorticks_on()
# plt.yticks(np.arange(0,60, 6))
fig3.savefig('vod_full_spectrum.pdf', format="pdf")
plt.show()
```
### CPU Timing
```
# for file in data_files:
# db = np.fromfile(file, dtype="float32")
# stft_cpu = cpu(db, 1000)
# stft_cpu.shape
np.abs(stft_gpu.detach().cpu().numpy()[497:504, 1])
# np.abs(stft_cpu[497:504, 1])
```
### CPU load stft to Cuda Time
```
# start = timer()
# IQ_tensor = torch.tensor(np.abs(stft_cpu)).cuda()
# end = timer()
# print(end - start)
# torch.cuda.empty_cache()
# plt.imshow(20*np.log10(np.abs(stft_cpu)+1e-8), aspect='auto', origin='lower')
# plt.show()
```
#### GPU SVD
```
def udv_stft(I_stft,Q_stft):
start = timer()
U_I0, D_I0, V_I0 = torch.svd(I_stft[...,0])
U_I1, D_I1, V_I1 = torch.svd(I_stft[...,1])
U_Q0, D_Q0, V_Q0 = torch.svd(Q_stft[...,0])
U_Q1, D_Q1, V_Q1 = torch.svd(Q_stft[...,1])
end = timer()
print('SVD time: ',end - start)
return U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1
```
#### Inverse stft
```
def ISTFT(db, n_fft):# We are matching scipy.signal behavior (setting noverlap=frame_length - hop)
w = 512
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda()
start = timer()
Z = istft(db, n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
end = timer()
print('ISTFT time = ',end - start)
torch.cuda.empty_cache()
return Z
```
#### Re-combine UDV to approximate original signal
```
def udv(u, d, v, k): # like ----> np.matrix(U[:, :k]) * np.diag(D[:k]) * V[:k, :]
start = timer()
UD = torch.mul(u[:, :k], d[:k])
v = torch.transpose(v,1,0)
UDV = torch.mm(UD, v[:k, :])
end = timer()
print('UDV time: ',end - start)
return UDV
print(path_save)
os.chdir(path_save)
np.save('I_stft', I_stft.detach().cpu().numpy())
np.save('Q_stft', Q_stft.detach().cpu().numpy())
```
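As a quick sanity check on the rank-k reconstruction performed by `udv` above, here is a hedged NumPy analogue (not part of the notebook): keeping all singular values reproduces the matrix exactly, and the Frobenius error of a rank-k truncation equals the energy of the dropped singular values. Note that `np.linalg.svd` returns $V^T$ directly, while `torch.svd` returns $V$, which is why `udv` transposes `v` first.
```
import numpy as np

# Rank-k reconstruction, mirroring udv(): U[:, :k] @ diag(D[:k]) @ Vt[:k, :]
rng = np.random.default_rng(42)
A = rng.standard_normal((6, 4))
U, D, Vt = np.linalg.svd(A, full_matrices=False)  # NumPy gives V^T, not V

full = U @ np.diag(D) @ Vt                  # all singular values: exact
print(np.allclose(full, A))                 # True

k = 2
Ak = U[:, :k] @ np.diag(D[:k]) @ Vt[:k, :]  # truncated, rank-k approximation
err = np.linalg.norm(A - Ak)                # Frobenius norm of the residual
print(np.isclose(err, np.sqrt(np.sum(D[k:] ** 2))))  # True
```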
### Main function to run all sub function calls
```
def complete(I_stft,Q_stft, num, n_fft):
U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1 = udv_stft(I_stft,Q_stft)
torch.cuda.empty_cache()
print('UDV I0 shapes = ',U_I0.shape, D_I0.shape, V_I0.shape)
print('UDV I1 shapes = ',U_I1.shape, D_I1.shape, V_I1.shape)
print('UDV Q0 shapes = ', U_Q0.shape, D_Q0.shape, V_Q0.shape)
print('UDV Q1 shapes = ', U_Q1.shape, D_Q1.shape, V_Q1.shape)
# ------------ I0 ------------------------------------------------------
np.save('U_I0', U_I0[:, :num].detach().cpu().numpy())
np.save('D_I0', D_I0[:num].detach().cpu().numpy())
np.save('V_I0', V_I0[:num, :].detach().cpu().numpy())
# ------------ I1 ------------------------------------------------------
np.save('U_I1', U_I1[:, :num].detach().cpu().numpy())
np.save('D_I1', D_I1[:num].detach().cpu().numpy())
np.save('V_I1', V_I1[:num, :].detach().cpu().numpy())
# ------------ Q0 ------------------------------------------------------
np.save('U_Q0', U_Q0[:, :num].detach().cpu().numpy())
np.save('D_Q0', D_Q0[:num].detach().cpu().numpy())
np.save('V_Q0', V_Q0[:num, :].detach().cpu().numpy())
# ------------ Q1 ------------------------------------------------------
np.save('U_Q1', U_Q1[:, :num].detach().cpu().numpy())
np.save('D_Q1', D_Q1[:num].detach().cpu().numpy())
np.save('V_Q1', V_Q1[:num, :].detach().cpu().numpy())
# -----------------------------------------------------------------------
udv_I0 = udv(U_I0, D_I0, V_I0,num)
udv_I1 = udv(U_I1, D_I1, V_I1,num)
udv_Q0 = udv(U_Q0, D_Q0, V_Q0,num)
udv_Q1 = udv(U_Q1, D_Q1, V_Q1,num)
torch.cuda.empty_cache()
print('udv I shapes = ',udv_I0.shape,udv_I1.shape)
print('udv Q shapes = ',udv_Q0.shape,udv_Q1.shape)
# -------------stack and transpose----------------------------------------
UDV_I = torch.stack([udv_I0,udv_I1])
UDV_I = torch.transpose(UDV_I,2,0)
UDV_I = torch.transpose(UDV_I,1,0)
UDV_Q = torch.stack([udv_Q0,udv_Q1])
UDV_Q = torch.transpose(UDV_Q,2,0)
UDV_Q = torch.transpose(UDV_Q,1,0)
torch.cuda.empty_cache()
#--------------------------------------------------------------------------
I = ISTFT(UDV_I, n_fft)
Q = ISTFT(UDV_Q, n_fft)
torch.cuda.empty_cache()
I = I.detach().cpu().numpy()
Q = Q.detach().cpu().numpy()
end = len(I)*2
IQ_SVD = np.zeros(len(I)*2)
IQ_SVD[0:end:2] = I
IQ_SVD[1:end:2] = Q
IQ_SVD = IQ_SVD.astype(np.float32).view(np.complex64)
return IQ_SVD
```
### Perform SVD on IQ stft data
```
num = 2 # number to reconstruct SVD matrix from
IQ_SVD = complete(I_stft,Q_stft, num, n_fft)
```
### Write reconstructed IQ file to file
```
from array import array
IQ_file = open("vod_clean_svd2", 'wb')
IQ_SVD.tofile(IQ_file)
IQ_file.close()
```
| github_jupyter |
```
import re
from typing import List
import pandas as pd
import numpy as np
from tqdm.notebook import tqdm
import optuna
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import OneHotEncoder, StandardScaler, MinMaxScaler, PolynomialFeatures
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
tqdm.pandas()
# XGB
# fillna: numeric columns filled with the mean of the corresponding column
# TENURE & REGION one-hot encoded
# StandardScaler on the whole dataset
# target encoding by region and tenure
#import data
train = pd.read_csv('./data/Train_folds.zip')
test= pd.read_csv('./data/Test.zip')
submission = pd.read_csv('./data/SampleSubmission.csv')
cat_cols = [
'REGION',
'TENURE',
'TOP_PACK'
]
num_cols = [
'MONTANT',
'FREQUENCE_RECH',
'REVENUE',
'ARPU_SEGMENT',
'FREQUENCE',
'DATA_VOLUME',
'ON_NET',
'ORANGE',
'TIGO',
'ZONE1',
'ZONE2',
'REGULARITY',
'FREQ_TOP_PACK',
]
target = 'CHURN'
mapping = {
'D 3-6 month': 1,
'E 6-9 month': 2,
'F 9-12 month': 3,
'G 12-15 month': 4,
'H 15-18 month': 5,
'I 18-21 month': 6,
'J 21-24 month': 7,
'K > 24 month': 8,
'OTHER': 9
}
train['TOP_PACK'] = train['TOP_PACK'].fillna('OTHER')
test['TOP_PACK'] = test['TOP_PACK'].fillna('OTHER')
train['TENURE'] = train['TENURE'].fillna('OTHER')
test['TENURE'] = test['TENURE'].fillna('OTHER')
train['TENURE'] = train['TENURE'].map(mapping)
test['TENURE'] = test['TENURE'].map(mapping)
train['REGION'] = train['REGION'].fillna('OTHER')
test['REGION'] = test['REGION'].fillna('OTHER')
for nc in tqdm(num_cols):
mean = train[nc].mean()
train[nc] = train[nc].fillna(mean)
test[nc] = test[nc].fillna(mean)
train.shape, test.shape
churn_by_tenure = pd.read_csv('./data/agg_by_tenure_churn.csv')
churn_by_tenure = pd.concat([churn_by_tenure, pd.DataFrame({'TENURE': [9], 'CHURN_mean': [0], 'CHURN_median': [0]})], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
train = pd.merge(train, churn_by_tenure[['TENURE', 'CHURN_mean']], left_on='TENURE', right_on='TENURE', how='left')
train = train.rename({'CHURN_mean': 'MEAN_CHURN_BY_TENURE'}, axis='columns')
test = pd.merge(test, churn_by_tenure[['TENURE', 'CHURN_mean']], left_on='TENURE', right_on='TENURE', how='left')
test = test.rename({'CHURN_mean': 'MEAN_CHURN_BY_TENURE'}, axis='columns')
train.shape, test.shape
churn_by_region = pd.read_csv('./data/agg_by_region_churn.csv')
vc = train[train['REGION'] == 'OTHER']['CHURN'].value_counts()
churn_by_region_mean = vc[1]/(vc[0]+vc[1])
churn_by_region = pd.concat([churn_by_region, pd.DataFrame({'REGION': ['OTHER'], 'CHURN_mean': [churn_by_region_mean], 'CHURN_median': [0]})], ignore_index=True)
train = pd.merge(train, churn_by_region[['REGION', 'CHURN_mean']], left_on='REGION', right_on='REGION', how='left')
train = train.rename({'CHURN_mean': 'MEAN_CHURN_BY_REGION'}, axis='columns')
test = pd.merge(test, churn_by_region[['REGION', 'CHURN_mean']], left_on='REGION', right_on='REGION', how='left')
test = test.rename({'CHURN_mean': 'MEAN_CHURN_BY_REGION'}, axis='columns')
train.shape, test.shape
# churn_by_top_pack = train[['TOP_PACK', 'CHURN']].groupby('TOP_PACK').agg({'CHURN': ['mean', 'median']})
# churn_by_top_pack.columns = ['_'.join(col).strip() for col in churn_by_top_pack.columns.values]
# churn_by_top_pack_mean = np.mean(train[train['TOP_PACK'] == 'OTHER']['CHURN'])
# churn_by_top_pack = churn_by_top_pack.reset_index()
# d = {
# 'TOP_PACK': ['OTHER'],
# 'CHURN_mean': [churn_by_top_pack_mean],
# 'CHURN_median': [0]
# }
# for tp in test['TOP_PACK'].unique():
# if tp not in churn_by_top_pack.index:
# d['TOP_PACK'].append(tp)
# d['CHURN_mean'].append(churn_by_top_pack_mean)
# d['CHURN_median'].append(0)
# churn_by_top_pack = churn_by_top_pack.append(pd.DataFrame(d))
# churn_by_top_pack.index = range(len(churn_by_top_pack))
# train = pd.merge(train, churn_by_top_pack[['TOP_PACK', 'CHURN_mean']], left_on='TOP_PACK', right_on='TOP_PACK', how='left')
# train = train.rename({'CHURN_mean': 'MEAN_CHURN_BY_TOP_PACK'}, axis='columns')
# test = pd.merge(test, churn_by_top_pack[['TOP_PACK', 'CHURN_mean']], left_on='TOP_PACK', right_on='TOP_PACK', how='left')
# test = test.rename({'CHURN_mean': 'MEAN_CHURN_BY_TOP_PACK'}, axis='columns')
# train.shape, test.shape
# train['TOP_PACK'] = train['TOP_PACK'].fillna('OTHER')
# test['TOP_PACK'] = test['TOP_PACK'].fillna('OTHER')
churn_by_top_pack = train[['TOP_PACK', 'CHURN']].groupby('TOP_PACK').agg({'CHURN': ['mean', 'median']})
churn_by_top_pack.columns = ['_'.join(col).strip() for col in churn_by_top_pack.columns.values]
churn_by_top_pack_mean = np.mean(train[train['TOP_PACK'] == 'OTHER']['CHURN'])
churn_by_top_pack = churn_by_top_pack.reset_index()
d = {
'TOP_PACK': [],
'CHURN_mean': [],
'CHURN_median': []
}
for tp in test['TOP_PACK'].unique():
if tp not in churn_by_top_pack['TOP_PACK'].unique():
d['TOP_PACK'].append(tp)
d['CHURN_mean'].append(churn_by_top_pack_mean)
d['CHURN_median'].append(0)
churn_by_top_pack = pd.concat([churn_by_top_pack, pd.DataFrame(d)], ignore_index=True)
train = pd.merge(train, churn_by_top_pack[['TOP_PACK', 'CHURN_mean']], left_on='TOP_PACK', right_on='TOP_PACK', how='left')
train = train.rename({'CHURN_mean': 'MEAN_CHURN_BY_TOP_PACK'}, axis='columns')
test = pd.merge(test, churn_by_top_pack[['TOP_PACK', 'CHURN_mean']], left_on='TOP_PACK', right_on='TOP_PACK', how='left')
test = test.rename({'CHURN_mean': 'MEAN_CHURN_BY_TOP_PACK'}, axis='columns')
train.shape, test.shape
train.head()
useful_cols = [
'REGION',
'TENURE',
# 'MRG', # constant
'TOP_PACK', # noisy free-text pack name
'MONTANT',
'FREQUENCE_RECH',
'REVENUE',
'ARPU_SEGMENT',
'FREQUENCE',
'DATA_VOLUME',
'ON_NET',
'ORANGE',
'TIGO',
'ZONE1',
'ZONE2',
'REGULARITY',
'FREQ_TOP_PACK',
'MEAN_CHURN_BY_TENURE',
'MEAN_CHURN_BY_REGION',
'MEAN_CHURN_BY_TOP_PACK'
]
for cat_col in cat_cols:
encoder = OneHotEncoder(handle_unknown='ignore')
unique_values = train[cat_col].unique()
one_hot_encoded_cols = [f'{cat_col}_{i}' for i in range(len(unique_values))]
ohe_df = pd.DataFrame(encoder.fit_transform(train[[cat_col]]).toarray(), columns=one_hot_encoded_cols)
ohe_df.index = train.index
train = train.drop(cat_col, axis=1)
train = pd.concat([train, ohe_df], axis=1)
print(f'[{cat_col}] xtrain transformed')
ohe_df = pd.DataFrame(encoder.transform(test[[cat_col]]).toarray(), columns=one_hot_encoded_cols)
ohe_df.index = test.index
test = test.drop(cat_col, axis=1)
test = pd.concat([test, ohe_df], axis=1)
print(f'[{cat_col}] xtest transformed')
useful_cols += one_hot_encoded_cols
useful_cols.remove(cat_col)
scaler = StandardScaler()
train[num_cols] = scaler.fit_transform(train[num_cols])
test[num_cols] = scaler.transform(test[num_cols])
poly = PolynomialFeatures(degree=3, interaction_only=True, include_bias=False)
train_poly = poly.fit_transform(train[num_cols])
test_poly = poly.transform(test[num_cols])  # transform only; the encoder was fit on train
poly_columns = [f'poly_{x.replace(" ", "__")}' for x in poly.get_feature_names(num_cols)] # [f"poly_{i}" for i in range(train_poly.shape[1])]
df_poly = pd.DataFrame(train_poly, columns=poly_columns, dtype=np.float32)
df_test_poly = pd.DataFrame(test_poly, columns=poly_columns, dtype=np.float32)
train = pd.concat([train, df_poly], axis=1)
test = pd.concat([test, df_test_poly], axis=1)
useful_cols += poly_columns
train.head()
sum(train.memory_usage())/1024/1024
def optimize_floats(df: pd.DataFrame) -> pd.DataFrame:
floats = df.select_dtypes(include=['float64']).columns.tolist()
df[floats] = df[floats].apply(pd.to_numeric, downcast='float')
return df
def optimize_ints(df: pd.DataFrame) -> pd.DataFrame:
ints = df.select_dtypes(include=['int64']).columns.tolist()
df[ints] = df[ints].apply(pd.to_numeric, downcast='integer')
return df
def optimize_objects(df: pd.DataFrame, datetime_features: List[str]) -> pd.DataFrame:
for col in df.select_dtypes(include=['object']):
if col not in datetime_features:
num_unique_values = len(df[col].unique())
num_total_values = len(df[col])
if float(num_unique_values) / num_total_values < 0.5:
df[col] = df[col].astype('category')
else:
df[col] = pd.to_datetime(df[col])
return df
def optimize(df: pd.DataFrame, datetime_features: List[str] = []):
return optimize_floats(optimize_ints(optimize_objects(df, datetime_features)))
train = optimize(train, [])
sum(train.memory_usage())/1024/1024
train.to_csv('./data/train.full.csv', index=None)
final_test_predictions = []
final_valid_predictions = {}
scores = []
for fold in tqdm(range(5), 'folds'):
xtrain = train[train['kfold'] != fold][useful_cols]
ytrain = train[train['kfold'] != fold][target]
xvalid = train[train['kfold'] == fold][useful_cols]
yvalid = train[train['kfold'] == fold][target]
valid_ids = train[train['kfold'] == fold]['user_id'].values.tolist()
xtest = test[useful_cols]
model = XGBClassifier(
n_estimators=7000,
n_jobs=-1,
random_state=42,
tree_method='gpu_hist',
gpu_id=0,
predictor="gpu_predictor",
# **{
# 'learning_rate': 0.021655316351235455,
# 'reg_lambda': 1.0883078718317323e-07,
# 'reg_alpha': 0.00015120241798978777,
# 'subsample': 0.7179552032665535,
# 'colsample_bytree': 0.7408152702492675,
# 'max_depth': 7
# }
**{
'learning_rate': 0.014461849398074727,
'reg_lambda': 0.08185850904776007,
'reg_alpha': 0.0001173486815850512,
'subsample': 0.7675905290878289,
'colsample_bytree': 0.2708299922996371,
'max_depth': 7
}
)
model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=1000)
preds_valid = model.predict_proba(xvalid)[:, 1]
test_preds = model.predict_proba(xtest)[:, 1]
final_test_predictions.append(test_preds)
final_valid_predictions.update(dict(zip(valid_ids, preds_valid)))
score = roc_auc_score(yvalid, preds_valid)
scores.append(score)
print(fold, score)
print(np.mean(scores), np.std(scores))
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", "pred_1"]
final_valid_predictions.to_csv("./data/train_pred_1.csv", index=False)
sample_submission = pd.read_csv('./data/SampleSubmission.csv')
sample_submission['CHURN'] = np.mean(np.column_stack(final_test_predictions), axis=1)
sample_submission.columns = ["id", "pred_1"]
sample_submission.to_csv("./data/test_pred_1.csv", index=False)
# final_predictions = []
# scores = []
# for fold in tqdm(range(5), 'folds'):
# xtrain = train[train['kfold'] != fold][useful_cols]
# ytrain = train[train['kfold'] != fold][target]
# xvalid = train[train['kfold'] == fold][useful_cols]
# yvalid = train[train['kfold'] == fold][target]
# xtest = test[useful_cols]
# model = XGBClassifier(
# n_estimators=7000,
# n_jobs=-1,
# random_state=42,
# tree_method='gpu_hist',
# gpu_id=0,
# predictor="gpu_predictor",
# # **{'learning_rate': 0.02981286840846979,
# # 'reg_lambda': 2.1119486166373553e-06,
# # 'reg_alpha': 0.09652271602187434,
# # 'subsample': 0.2972622031653025,
# # 'colsample_bytree': 0.3291720075373176,
# # 'max_depth': 2}
# # **{'learning_rate': 0.03359830446697092,
# # 'reg_lambda': 0.0013493600461741606,
# # 'reg_alpha': 0.0002728448162129134,
# # 'subsample': 0.13373120583933554,
# # 'colsample_bytree': 0.1386996438938067,
# # 'max_depth': 7},
# **{
# 'learning_rate': 0.021655316351235455,
# 'reg_lambda': 1.0883078718317323e-07,
# 'reg_alpha': 0.00015120241798978777,
# 'subsample': 0.7179552032665535,
# 'colsample_bytree': 0.7408152702492675,
# 'max_depth': 7
# }
# )
# model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=1000)
# preds_valid = model.predict_proba(xvalid)[:, 1]
# test_preds = model.predict_proba(xtest)[:, 1]
# final_predictions.append(test_preds)
# score = roc_auc_score(yvalid, preds_valid)
# scores.append(score)
# print(fold, score)
# print(np.mean(scores), np.std(scores))
# 0.9314604358446612 0.000506497423655064
# xtrain = train[train['kfold'] != 1][useful_cols]
# print(len(xtrain.columns), len(set(xtrain.columns)))
# xtrain.columns.to_series()[np.isinf(xtrain).any()]
# xtrain[np.isinf(xtrain['poly_MONTANT__FREQUENCE_RECH__ZONE1'])][['MONTANT', 'FREQUENCE_RECH', 'ZONE1', 'poly_MONTANT__FREQUENCE_RECH__ZONE1']]
xtrain[np.isinf(xtrain['poly_MONTANT__REVENUE__ARPU_SEGMENT'])][['MONTANT', 'REVENUE', 'ARPU_SEGMENT', 'poly_MONTANT__REVENUE__ARPU_SEGMENT']]
# train[train['MEAN_CHURN_BY_TOP_PACK'].isna()][['MEAN_CHURN_BY_TOP_PACK', 'CHURN']]
train[[col for col in train.columns if not col.startswith('poly') and not col.startswith('TOP_PACK_') and not col.startswith('REGION_') and not col.startswith('TENURE_')]]
sample_submission.sample(7)
preds = np.mean(np.column_stack(final_test_predictions), axis=1)
submission = pd.read_csv('./data/SampleSubmission.csv')
submission.CHURN = preds
submission.to_csv('./data/submission-xgb-proba-poly-features.csv', index=False)
final_test_predictions = []
final_valid_predictions = {}
scores = []
for fold in tqdm(range(5), 'folds'):
    xtrain = train[train['kfold'] != fold][useful_cols]
    ytrain = train[train['kfold'] != fold][target]
    xvalid = train[train['kfold'] == fold][useful_cols]
    yvalid = train[train['kfold'] == fold][target]
    valid_ids = train[train['kfold'] == fold]['user_id'].values.tolist()
    xtest = test[useful_cols]
    lgb_model = LGBMClassifier(
        n_estimators=7000,
        n_jobs=-1,
        random_state=42,
        # **{
        #     'learning_rate': 0.03881855209002591,
        #     'reg_lambda': 0.009591673857338072,
        #     'reg_alpha': 0.5065599259874649,
        #     'subsample': 0.4016863186957058,
        #     'colsample_bytree': 0.9360889506340332,
        #     'max_depth': 4
        # }
        **{
            'learning_rate': 0.029253877255476443,
            'reg_lambda': 16.09426889606859,
            'reg_alpha': 0.014354120473120952,
            'subsample': 0.43289663848783977,
            'colsample_bytree': 0.5268279718406376,
            'max_depth': 6
        }
    )
    lgb_model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=1000)
    preds_valid = lgb_model.predict_proba(xvalid)[:, 1]
    test_preds = lgb_model.predict_proba(xtest)[:, 1]
    final_test_predictions.append(test_preds)
    final_valid_predictions.update(dict(zip(valid_ids, preds_valid)))
    score = roc_auc_score(yvalid, preds_valid)
    scores.append(score)
    print(fold, score)
print(np.mean(scores), np.std(scores))
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", "pred_2"]
final_valid_predictions.to_csv("./data/train_pred_2.csv", index=False)
sample_submission = pd.read_csv('./data/SampleSubmission.csv')
sample_submission['CHURN'] = np.mean(np.column_stack(final_test_predictions), axis=1)
sample_submission.columns = ["id", "pred_2"]
sample_submission.to_csv("./data/test_pred_2.csv", index=False)
sample_submission.sample(7)
final_test_predictions = []
final_valid_predictions = {}
scores = []
for fold in tqdm(range(5), 'folds'):
    xtrain = train[train['kfold'] != fold][useful_cols]
    ytrain = train[train['kfold'] != fold][target]
    xvalid = train[train['kfold'] == fold][useful_cols]
    yvalid = train[train['kfold'] == fold][target]
    valid_ids = train[train['kfold'] == fold]['user_id'].values.tolist()
    xtest = test[useful_cols]
    cb_model = CatBoostClassifier(
        n_estimators=1000,
        random_state=42,
        **{
            'objective': 'CrossEntropy',
            'colsample_bylevel': 0.054208119366927966,
            'depth': 12,
            'boosting_type': 'Ordered',
            'bootstrap_type': 'Bernoulli',
            'subsample': 0.9494580379034286
        }
    )
    cb_model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=1000)
    preds_valid = cb_model.predict_proba(xvalid)[:, 1]
    test_preds = cb_model.predict_proba(xtest)[:, 1]
    final_test_predictions.append(test_preds)
    final_valid_predictions.update(dict(zip(valid_ids, preds_valid)))
    score = roc_auc_score(yvalid, preds_valid)
    scores.append(score)
    print(fold, score)
print(np.mean(scores), np.std(scores))
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", "pred_3"]
final_valid_predictions.to_csv("./data/train_pred_3.csv", index=False)
sample_submission = pd.read_csv('./data/SampleSubmission.csv')
sample_submission['CHURN'] = np.mean(np.column_stack(final_test_predictions), axis=1)
sample_submission.columns = ["id", "pred_3"]
sample_submission.to_csv("./data/test_pred_3.csv", index=False)
sample_submission.sample(7)
final_test_predictions = []
final_valid_predictions = {}
scores = []
# del scgb_model
for fold in tqdm(range(5), 'folds'):
    xtrain = train[train['kfold'] != fold][useful_cols]
    ytrain = train[train['kfold'] != fold][target]
    xvalid = train[train['kfold'] == fold][useful_cols]
    yvalid = train[train['kfold'] == fold][target]
    valid_ids = train[train['kfold'] == fold]['user_id'].values.tolist()
    xtest = test[useful_cols]
    scgb_model = GradientBoostingClassifier(
        n_estimators=100,
        random_state=42,
        verbose=1,
        max_features=0.1
        # **{
        #     'objective': 'CrossEntropy',
        #     'colsample_bylevel': 0.054208119366927966,
        #     'depth': 12,
        #     'boosting_type': 'Ordered',
        #     'bootstrap_type': 'Bernoulli',
        #     'subsample': 0.9494580379034286
        # }
    )
    scgb_model.fit(xtrain, ytrain)
    preds_valid = scgb_model.predict_proba(xvalid)[:, 1]
    test_preds = scgb_model.predict_proba(xtest)[:, 1]
    final_test_predictions.append(test_preds)
    final_valid_predictions.update(dict(zip(valid_ids, preds_valid)))
    score = roc_auc_score(yvalid, preds_valid)
    scores.append(score)
    print(fold, score)
print(np.mean(scores), np.std(scores))
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", "pred_4"]
final_valid_predictions.to_csv("./data/train_pred_4.csv", index=False)
sample_submission = pd.read_csv('./data/SampleSubmission.csv')
sample_submission['CHURN'] = np.mean(np.column_stack(final_test_predictions), axis=1)
sample_submission.columns = ["id", "pred_4"]
sample_submission.to_csv("./data/test_pred_4.csv", index=False)
sample_submission.sample(7)
final_test_predictions = []
final_valid_predictions = {}
scores = []
for fold in tqdm(range(5), 'folds'):
    xtrain = train[train['kfold'] != fold][useful_cols]
    ytrain = train[train['kfold'] != fold][target]
    xvalid = train[train['kfold'] == fold][useful_cols]
    yvalid = train[train['kfold'] == fold][target]
    valid_ids = train[train['kfold'] == fold]['user_id'].values.tolist()
    xtest = test[useful_cols]
    rf_model = RandomForestClassifier(
        random_state=42,
        n_jobs=-1,
        verbose=1,
        **{
            'max_depth': 15,
            'max_features': 'auto',
            'class_weight': 'balanced_subsample'
        }
    )
    rf_model.fit(xtrain, ytrain)
    preds_valid = rf_model.predict_proba(xvalid)[:, 1]
    test_preds = rf_model.predict_proba(xtest)[:, 1]
    final_test_predictions.append(test_preds)
    final_valid_predictions.update(dict(zip(valid_ids, preds_valid)))
    score = roc_auc_score(yvalid, preds_valid)
    scores.append(score)
    print(fold, score)
print(np.mean(scores), np.std(scores))
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", "pred_5"]
final_valid_predictions.to_csv("./data/train_pred_5.csv", index=False)
sample_submission = pd.read_csv('./data/SampleSubmission.csv')
sample_submission['CHURN'] = np.mean(np.column_stack(final_test_predictions), axis=1)
sample_submission.columns = ["id", "pred_5"]
sample_submission.to_csv("./data/test_pred_5.csv", index=False)
sample_submission.sample(7)
df = train.copy() # pd.read_csv('./data/Train_folds.zip')
df_test = test.copy() # pd.read_csv('./data/Test.zip')
sample_submission = pd.read_csv('./data/SampleSubmission.csv')
df1 = pd.read_csv("./data/train_pred_1.csv")
df2 = pd.read_csv("./data/train_pred_2.csv")
df3 = pd.read_csv("./data/train_pred_3.csv")
df4 = pd.read_csv("./data/train_pred_4.csv")
df5 = pd.read_csv("./data/train_pred_5.csv")
df6 = pd.read_csv("./data/train_pred_6.csv")
df_test1 = pd.read_csv("./data/test_pred_1.csv")
df_test2 = pd.read_csv("./data/test_pred_2.csv")
df_test3 = pd.read_csv("./data/test_pred_3.csv")
df_test4 = pd.read_csv("./data/test_pred_4.csv")
df_test5 = pd.read_csv("./data/test_pred_5.csv")
df_test6 = pd.read_csv("./data/test_pred_6.csv")
df = df.merge(df1, left_on='user_id', right_on="id", how="left")
df = df.merge(df2, left_on='user_id', right_on="id", how="left")
df = df.merge(df3, left_on='user_id', right_on="id", how="left")
df = df.merge(df4, left_on='user_id', right_on="id", how="left")
df = df.merge(df5, left_on='user_id', right_on="id", how="left")
df = df.merge(df6, left_on='user_id', right_on="id", how="left")
df_test = df_test.merge(df_test1, left_on='user_id', right_on="id", how="left")
df_test = df_test.merge(df_test2, left_on='user_id', right_on="id", how="left")
df_test = df_test.merge(df_test3, left_on='user_id', right_on="id", how="left")
df_test = df_test.merge(df_test4, left_on='user_id', right_on="id", how="left")
df_test = df_test.merge(df_test5, left_on='user_id', right_on="id", how="left")
df_test = df_test.merge(df_test6, left_on='user_id', right_on="id", how="left")
df.head()
df_test.head()
df[["pred_1", "pred_2", "pred_3", "pred_4", "pred_5", "pred_6", 'CHURN']]
sorted(dict(zip(lgb_model.feature_name_, lgb_model.feature_importances_)).items(), key=lambda x: -x[1])
useful_features = ['pred_1', 'pred_2', 'pred_3', 'pred_4', 'pred_5', 'pred_6']
df_test = df_test[useful_features]
final_predictions = []
scores = []
for fold in range(5):
    xtrain = df[df.kfold != fold].reset_index(drop=True)
    xvalid = df[df.kfold == fold].reset_index(drop=True)
    xtest = df_test.copy()
    ytrain = xtrain['CHURN']
    yvalid = xvalid['CHURN']
    xtrain = xtrain[useful_features]
    xvalid = xvalid[useful_features]
    model = LogisticRegression()
    # model = SGDClassifier(random_state=42, loss='modified_huber')
    model.fit(xtrain, ytrain)
    preds_valid = model.predict_proba(xvalid)[:, 1]
    test_preds = model.predict_proba(xtest)[:, 1]
    final_predictions.append(test_preds)
    score = roc_auc_score(yvalid, preds_valid)
    print(fold, score)
    scores.append(score)
print(np.mean(scores), np.std(scores))
# 0 0.9315283221655729
# 1 0.9322252323181413
# 2 0.9313247129395837
# 3 0.9318919085786139
# 4 0.9307662596698618
# 0.9315472871343549 0.0004976497673210968
# 0.9303098065651516 0.0005268336328890778
# 0.9301957664933731 0.0004690483101817313
sample_submission = pd.read_csv('./data/SampleSubmission.csv')
sample_submission['CHURN'] = np.mean(np.column_stack(final_predictions), axis=1)
sample_submission.to_csv("./data/submission-blending-7-predict-proba-logreg-poly-with-randforest-balanced-and-scgbd-and-nn.csv", index=False)
sample_submission.sample(7)
df_test.to_csv('./data/test_stack.csv', index=None)
df.to_csv('./data/train_stack.csv', index=None)
df[useful_features].corrwith(df['CHURN'])
import optuna
def run(trial):
    fold = 0
    learning_rate = trial.suggest_float("learning_rate", 1e-2, 0.25, log=True)
    reg_lambda = trial.suggest_loguniform("reg_lambda", 1e-8, 100.0)
    reg_alpha = trial.suggest_loguniform("reg_alpha", 1e-8, 100.0)
    subsample = trial.suggest_float("subsample", 0.1, 1.0)
    colsample_bytree = trial.suggest_float("colsample_bytree", 0.1, 1.0)
    max_depth = trial.suggest_int("max_depth", 1, 7)
    xtrain = train[train.kfold != fold].reset_index(drop=True)
    xvalid = train[train.kfold == fold].reset_index(drop=True)
    ytrain = xtrain['CHURN']
    yvalid = xvalid['CHURN']
    xtrain = xtrain[useful_cols]
    xvalid = xvalid[useful_cols]
    model = XGBClassifier(
        random_state=42,
        n_estimators=7000,
        tree_method='gpu_hist',
        gpu_id=0,
        predictor="gpu_predictor",
        learning_rate=learning_rate,
        reg_lambda=reg_lambda,
        reg_alpha=reg_alpha,
        subsample=subsample,
        colsample_bytree=colsample_bytree,
        max_depth=max_depth,
    )
    model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=1000)
    preds_valid = model.predict_proba(xvalid)[:, 1]
    score = roc_auc_score(yvalid, preds_valid)
    return score
max_study = optuna.create_study(direction="maximize")
max_study.optimize(run, n_trials=10)
max_study.best_params
import optuna
def run(trial):
    fold = 0
    learning_rate = trial.suggest_float("learning_rate", 1e-2, 0.25, log=True)
    reg_lambda = trial.suggest_loguniform("reg_lambda", 1e-8, 100.0)
    reg_alpha = trial.suggest_loguniform("reg_alpha", 1e-8, 100.0)
    subsample = trial.suggest_float("subsample", 0.1, 1.0)
    colsample_bytree = trial.suggest_float("colsample_bytree", 0.1, 1.0)
    max_depth = trial.suggest_int("max_depth", 1, 7)
    xtrain = train[train.kfold != fold].reset_index(drop=True)
    xvalid = train[train.kfold == fold].reset_index(drop=True)
    ytrain = xtrain['CHURN']
    yvalid = xvalid['CHURN']
    xtrain = xtrain[useful_cols]
    xvalid = xvalid[useful_cols]
    model = LGBMClassifier(
        random_state=42,
        n_estimators=7000,
        learning_rate=learning_rate,
        reg_lambda=reg_lambda,
        reg_alpha=reg_alpha,
        subsample=subsample,
        colsample_bytree=colsample_bytree,
        max_depth=max_depth,
    )
    model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=1000)
    preds_valid = model.predict_proba(xvalid)[:, 1]
    score = roc_auc_score(yvalid, preds_valid)
    return score
lgb_study = optuna.create_study(direction="maximize")
lgb_study.optimize(run, n_trials=10)
lgb_study.best_params
import optuna
def run_cb(trial):
    fold = 0
    param = {
        "objective": trial.suggest_categorical("objective", ["Logloss", "CrossEntropy"]),
        "colsample_bylevel": trial.suggest_float("colsample_bylevel", 0.01, 0.1),
        "depth": trial.suggest_int("depth", 1, 12),
        "boosting_type": trial.suggest_categorical("boosting_type", ["Ordered", "Plain"]),
        "bootstrap_type": trial.suggest_categorical(
            "bootstrap_type", ["Bayesian", "Bernoulli", "MVS"]
        ),
        # "used_ram_limit": "3gb",
    }
    if param["bootstrap_type"] == "Bayesian":
        param["bagging_temperature"] = trial.suggest_float("bagging_temperature", 0, 10)
    elif param["bootstrap_type"] == "Bernoulli":
        param["subsample"] = trial.suggest_float("subsample", 0.1, 1)
    xtrain = train[train.kfold != fold].reset_index(drop=True)
    xvalid = train[train.kfold == fold].reset_index(drop=True)
    ytrain = xtrain['CHURN']
    yvalid = xvalid['CHURN']
    xtrain = xtrain[useful_cols]
    xvalid = xvalid[useful_cols]
    cb_model = CatBoostClassifier(**param)
    cb_model.fit(xtrain, ytrain, early_stopping_rounds=100, eval_set=[(xvalid, yvalid)], verbose=1000)
    preds_valid = cb_model.predict_proba(xvalid)[:, 1]
    score = roc_auc_score(yvalid, preds_valid)
    return score
cb_study = optuna.create_study(direction="maximize")
cb_study.optimize(run_cb, n_trials=100, timeout=600)
print("Number of finished trials: {}".format(len(cb_study.trials)))
print("Best trial:")
trial = cb_study.best_trial
print(" Value: {}".format(trial.value))
print(" Params: ")
for key, value in trial.params.items():
    print(" {}: {}".format(key, value))
cb_study.best_params
import optuna
def run_rf(trial: optuna.Trial):
    fold = 0
    params = {
        'max_depth': trial.suggest_int('rf_max_depth', 2, 32, log=True),
        'max_features': trial.suggest_categorical('rf_max_features', ["auto", "sqrt", "log2"]),
        'class_weight': trial.suggest_categorical('rf_class_weight', ['balanced', 'balanced_subsample', None])
    }
    xtrain = train[train.kfold != fold].reset_index(drop=True)
    xvalid = train[train.kfold == fold].reset_index(drop=True)
    ytrain = xtrain['CHURN']
    yvalid = xvalid['CHURN']
    xtrain = xtrain[useful_cols]
    xvalid = xvalid[useful_cols]
    rf_model = RandomForestClassifier(
        n_estimators=100,
        n_jobs=-1,
        random_state=42,
        verbose=1,
        **params
    )
    rf_model.fit(xtrain, ytrain)
    preds_valid = rf_model.predict_proba(xvalid)[:, 1]
    score = roc_auc_score(yvalid, preds_valid)
    return score
rf_study = optuna.create_study(direction="maximize")
rf_study.optimize(run_rf, n_trials=100, timeout=600)
print("Number of finished trials: {}".format(len(rf_study.trials)))
print("Best trial:")
trial = rf_study.best_trial
print(" Value: {}".format(trial.value))
print(" Params: ")
for key, value in trial.params.items():
    print(" {}: {}".format(key, value))
rf_study.best_params
# vc = train['CHURN'].value_counts()
# vc[0], vc[1]*4.33
```
# Heat transfer for pipes
```
"""
importing the necessary libraries, do not modify
"""
%matplotlib inline
from IPython.display import clear_output
import schemdraw as schem
import schemdraw.elements as e
import matplotlib.pyplot as plt
import numpy as np
import math
import scipy.constants as sc
import sympy as sym
```
<img src="figures/fig_08_08.jpg" alt="my awesome sketch" width=75% >
<i>Fig. 1: Illustration of internal convection.</i>
The above sketch illustrates the focus of this notebook: how to quantify the heat transfer between a pipe, in which a fluid flows, and its surroundings. The heat transfer from the outer surface of the pipe to the outer flow was covered in the previous chapter on external convection. In the following, this notebook establishes the tools necessary to solve the internal convection problem.
## Entry flow and fully developed internal flow
<img src="figures/fig_08_01.jpg" alt="my awesome sketch" width=100% >
<i>Fig. 2: Pipe flow nomenclature.</i>
### Python module
For internal flow, the module is loaded as:
```
from Libraries import HT_internal_convection as intconv
```
As an example, consider the flow of water in a pipe of diameter $D=10$ cm, length $L=10$m. The water thermodynamic properties are estimated at $T_f=50^\circ$C. The bulk velocity is $U_m=2$m/s.
```
from Libraries import thermodynamics as thermo
T_f = 50 #C
waterflow = thermo.Fluid('water',T_f,"C")
L_pipe = 10. #m
D_pipe = 0.1 #m
Um_pipe = 2 #m/s
?intconv.PipeFlow
pipe = intconv.PipeFlow(D= D_pipe, L=L_pipe,
rho=waterflow.rho, nu=waterflow.nu, Um=Um_pipe)
```
<img src="figures/fig_08_03.jpg" alt="my awesome sketch" width=100% >
<i> Fig. 3. Friction factor in pipe flow as a function of Re and relative surface roughness.</i>
A uniform flow entering a pipe (Fig. 2) first experiences streamwise variation of velocity to accommodate the wall boundary conditions. A boundary layer, of thickness $\delta$, forms on the wall and grows until its edge reaches the pipe centerline. This region is the hydrodynamic entrance region. Beyond that point, the flow becomes fully developed, which means that
<ul>
<li> In the laminar regime, the velocity profile is only a function of $r$,</li>
<li> In the turbulent regime, the <b>mean</b> velocity profile is only a function of $r$.</li>
</ul>
Friction drag, the force exerted by the flow onto the pipe wall, governs the pressure gradient necessary to generate a desired flowrate. Calculation of the friction drag leads to the design of the mechanical force creating the pressure gradient. In fully developed (laminar or turbulent) regimes, the pressure gradient may be determined by
<p class='alert alert-danger'>
$$
-\frac{\Delta\overline{P}}{L}=f\,\frac{1}{D}\,\frac{\rho U_m^2}{2}
$$
</p>
where $D=2R$ and $L$ are the diameter and length of the pipe, respectively, and $f$ is the <b>friction factor</b>. The bulk velocity or average velocity is
<p class='alert alert-info'>
$$
U_m=\frac{\dot{m}}{\rho A_c}
$$
</p>
where $\dot{m}$ is the mass flux
$$
\dot{m}=\int_0^{2\pi}\int_0^R\rho \overline{u}(r)\,r\,dr d\theta=2\pi\int_0^R\rho \overline{u}(r)\,r\,dr
$$
and $A_c=\pi R^2$
The Reynolds number of the flow is based on the bulk velocity and pipe diameter:
<p class='alert alert-danger'>
$$
Re_D=\frac{\rho U_mD}{\mu}=\frac{4\dot{m}}{\pi D\mu}
$$
</p>
The friction factor in the laminar regime is rigorously derived:
$$
f = \frac{64}{Re_D}
$$
and is valid up to the critical Reynolds number $Re_{D,c}$, which in most pipes is around 2,000. Be aware that in certain research facilities the flow can remain laminar for Reynolds numbers up to 10,000. The value 2,000 is not an absolute, universal property, but the best estimate for most engineering applications.
Beyond the critical Reynolds number, $f$ is a function of the roughness-to-diameter ratio $\varepsilon=e/D$ ($e$ is typically the standard deviation of the roughness height) and the Reynolds number. A trustworthy empirical correlation is the Colebrook formula:
<p class='alert alert-danger'>
$$
\frac{1}{\sqrt{f}}=-2\log_{10}\left[\frac{\varepsilon}{3.7}+\frac{2.51}{Re_D\sqrt{f}}\right]
$$
</p>
which is solved below for a range of relative roughness $\varepsilon$.
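As a self-contained illustration, the Colebrook equation can be solved by fixed-point iteration on $1/\sqrt{f}$. The sketch below is independent of the course library; the explicit Swamee–Jain approximation is used only as a starting guess.

```python
import math

def colebrook(Re, eps=0.0, tol=1e-10, max_iter=100):
    """Iteratively solve the Colebrook equation for the Darcy friction factor f.

    Re  : Reynolds number (turbulent regime)
    eps : relative roughness e/D
    """
    # explicit Swamee-Jain approximation as the starting guess
    f = 0.25 / math.log10(eps / 3.7 + 5.74 / Re**0.9) ** 2
    for _ in range(max_iter):
        # fixed-point iteration on 1/sqrt(f)
        inv_sqrt_f = -2.0 * math.log10(eps / 3.7 + 2.51 / (Re * math.sqrt(f)))
        f_new = 1.0 / inv_sqrt_f**2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

# smooth pipe at Re = 1e5; the Blasius estimate 0.316 Re^{-1/4} ~ 0.0178 is a sanity check
print("f = %1.4f" % colebrook(1e5))
```

The iteration converges in a handful of steps because the right-hand side varies slowly with $f$.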
Often there is a need to determine the pump or blower power $P$ necessary to move the flow at a prescribed pressure drop:
<p class='alert alert-danger'>
$$
P=\frac{\dot{m}}{\rho}\Delta p= \underbrace{(\Delta p)A_c}_\text{force}\cdot U_m
$$
</p>
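Chaining the pressure-gradient and power relations above, a quick sketch for a pipe like the example case ($D=0.1$ m, $L=10$ m, $U_m=2$ m/s); the density and friction factor here are assumed illustrative values, not outputs of the notebook:

```python
import math

# assumed values: water at ~50 C in the example pipe
rho = 988.0                  # kg/m^3 (assumed density)
D, L, Um = 0.1, 10.0, 2.0    # m, m, m/s
f = 0.018                    # friction factor (assumed)

dP = f * (L / D) * rho * Um**2 / 2.0   # pressure drop over the pipe length, Pa
A_c = math.pi * D**2 / 4.0             # cross-sectional area, m^2
mdot = rho * Um * A_c                  # mass flow rate, kg/s
P_pump = mdot / rho * dP               # pump power = (dP * A_c) * Um, W

print("dP = %1.0f Pa, mdot = %1.2f kg/s, P = %1.1f W" % (dP, mdot, P_pump))
```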
### Example of functions
Going back to our library, let's explore how to determine some of the properties defined above:
Reynolds number:
```
print("Re= %1.2e" %pipe.Re)
```
Mass flow rate:
```
print("mass flowrate= %1.1f kg/s" %pipe.mdot)
```
Compute the friction factor:
```
# pipe.f_turbulent()
pipe.f_laminar()
print("f= %1.5f" %pipe.f)
```
The mean pressure gradient is:
```
print("-dP/dx= %1.0f Pa/m" %pipe.dPdx)
```
## Heat transfer by internal convection
The temperature is expected to vary both in the streamwise direction and in the radial direction. To reduce the complexity of the problem, we define the mean temperature as:
$$
T_m=\frac{1}{\dot{m}C_p}\int_{A_c}\rho\,u\,C_p\, T\,dA_c
$$
where $\dot{m}$ is the mass flow rate, $\rho$ and $C_p$ are the density and specific heat of the fluid, and $A_c$ is the cross-sectional area of the pipe.
The local heat flux may be now expressed as:
$$
q_s''=h(T_s-T_m)
$$
where $h$ is the <b>local</b> convection heat transfer coefficient and $T_s$ is the surface temperature on the inner wall of the pipe. The variation of temperature in the <b>fully developed</b> flow can be shown to be governed by the following ODE:
<p class='alert alert-info'>
$$
\frac{dT_m}{dx}=\frac{P}{\dot{m}C_p}h(T_s-T_m)
$$
</p>
where $P$ is the perimeter of the pipe.
If the local heat flux is maintained constant over the length of the pipe $L$, the total heat rate is
<p class='alert alert-danger'>
$$
q_\text{conv}=(PL)q_s'',\, \text{$q_s''=$constant}
$$
</p>
and the streamwise distribution of the mean temperature is linear:
$$
T_m(x)=T_{m,i}+\frac{q_s''P}{\dot{m}C_p}x,\, \text{$q_s''=$constant}
$$
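A sketch of this linear profile; the heat flux, flow rate, and inlet temperature below are illustrative assumptions, not values from the notebook:

```python
import math

# assumed illustrative values
qs = 2.0e4           # W/m^2, constant wall heat flux (assumed)
D = 0.1              # m
P = math.pi * D      # wetted perimeter, m
mdot = 1.0           # kg/s (assumed)
Cp = 4181.0          # J/(kg K), water
T_mi = 20.0          # C, inlet mean temperature (assumed)

def T_m(x):
    """Mean temperature at streamwise position x for constant wall heat flux."""
    return T_mi + qs * P / (mdot * Cp) * x

print("T_m(10 m) = %1.1f C" % T_m(10.0))
```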
For the case of constant wall temperature $T_s$, the temperature distribution is the solution of the above ODE, thus of exponential nature. For practical applications, you almost always need to compute the overall heat transfer and the outlet mean temperature $T_{m,o}$. The integration of the above ODE from $x=0$ to $x=L$ yields
<p class='alert alert-danger'>
$$
\frac{T_s-T_{m,o}}{T_s-T_{m,i}}=\exp\left(-\frac{PL}{\dot{m}C_p}\overline{h}\right),\, \text{$T_s=$constant}
$$
</p>
where
$$
\overline{h}=\frac{1}{L}\int_0^L h(x)dx
$$
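With assumed values for $\overline{h}$ and the flow, the outlet temperature follows directly from the exponential relation above (a sketch; all numbers are illustrative assumptions):

```python
import math

T_s = 75.0       # C, constant wall temperature (assumed)
T_mi = 20.0      # C, inlet mean temperature (assumed)
D, L = 0.1, 10.0
P = math.pi * D  # perimeter, m
mdot = 1.0       # kg/s (assumed)
Cp = 4181.0      # J/(kg K), water
hbar = 500.0     # W/(m^2 K), average convection coefficient (assumed)

# T_m approaches T_s exponentially along the pipe
T_mo = T_s - (T_s - T_mi) * math.exp(-P * L * hbar / (mdot * Cp))
print("T_mo = %1.1f C" % T_mo)
```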
If you must compute the mean temperature at a position $x$, an integration from $0$ to $x$ yields
<FONT FACE="courier" style="color:blue">T_mx_Ts_constant(T_s,T_mi,P,L,mdot,Cp,hbar,x)</FONT>
<p class='alert alert-danger'>
$$
\frac{T_s-T_{m}(x)}{T_s-T_{m,i}}=\exp\left(-\frac{PL}{\dot{m}C_p}\overline{h}_x\right),\, \text{$T_s=$constant}
$$
</p>
where
$$
\overline{h}_x=\frac{1}{L}\int_0^x h(x')dx'
$$
The total heat transfer rate can be shown to be:
<p class='alert alert-danger'>
$$
q_\text{conv}=\overline{h}(PL)\Delta T_\text{lm},\, \text{$T_s=$constant}
$$
</p>
with the log mean temperature
<FONT FACE="courier" style="color:blue">log_mean_temperature(T_s,T_o,T_i)</FONT>
<p class='alert alert-danger'>
$$
\Delta T_\text{lm}=\cfrac{T_{m,i}-T_{m,o}}{\ln\left(\cfrac{T_s-T_{m,o}}{T_s-T_{m,i}}\right)}
$$
</p>
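The library exposes this as <FONT FACE="courier" style="color:blue">log_mean_temperature(T_s,T_o,T_i)</FONT>; a plausible standalone implementation (a sketch, not the actual library source) is:

```python
import math

def log_mean_temperature(T_s, T_o, T_i):
    """Log-mean temperature difference for a constant wall temperature T_s."""
    dT_o = T_s - T_o
    dT_i = T_s - T_i
    if dT_o == dT_i:
        # degenerate case: the temperature difference is uniform along the pipe
        return dT_i
    return (T_i - T_o) / math.log(dT_o / dT_i)

# the log mean always lies between the inlet and outlet differences
print(log_mean_temperature(75.0, 37.2, 20.0))
```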
In many problems, $T_s$ is not known; instead, the outside ambient temperature $T_\infty$ and the thermal conductivity of the pipe are given. One then needs to determine the total resistance of the system $R_\text{tot}$, which requires calculating the heat transfer coefficient of the forced or natural convection occurring on the outside of the pipe, the radiation coefficient if needed, the thermal resistance due to conduction within the pipe (which may include multiple components, in the presence of insulation for example), and the internal convection heat transfer coefficient (to be defined below). In such cases, the variation of temperature between inlet and outlet becomes:
<FONT FACE="courier" style="color:blue">T_mo_T_infty(T_infty,T_mi,P,L,mdot,Cp,R_tot)</FONT>
<p class='alert alert-danger'>
$$
\frac{T_\infty-T_{m,o}}{T_\infty-T_{m,i}}=\exp\left(-\frac{1}{\dot{m}C_pR_\text{tot}}\right)
$$
</p>
and the total heat transfer rate is
<p class='alert alert-danger'>
$$
q=\frac{\Delta T_\text{lm}}{R_\text{tot}}
$$
</p>
The equations derived in this cell enable:
<ul>
<li> The computation of the internal convection heat transfer coefficient if $T_{m,i}$ and $T_{m,o}$ are known.</li>
<li> The computation of $T_{m,i}$ or $T_{m,o}$ if one is known and $\overline{h}$ is known </li>
<li> The computation of the required mass flux to achieve given $T_{m,i}$ and $T_{m,o}$, albeit through an iterative process</li>
</ul>
## Correlations for convection heat transfer coefficients in internal pipe flows
Here we detail only the correlations for fully developed flows. For laminar flows, the Nusselt numbers are constant, so the library <FONT FACE="courier" style="color:blue">HT_internal_convection</FONT> directly provides $\overline{h}$:
<FONT FACE="courier" style="color:blue">laminar_isoflux() </FONT>
<p class='alert alert-danger'>
$$
Nu=\frac{hD}{k}=4.36,\, \text{$q_s''=$constant}
$$
</p>
<FONT FACE="courier" style="color:blue">laminar_isothermal() </FONT>
<p class='alert alert-danger'>
$$
Nu=\frac{hD}{k}=3.66,\, \text{$T_s=$constant}
$$
</p>
```
pipe.laminar_isoflux()
print("Nu= %1.2f for laminar isoflux" %pipe.Nu)
pipe.laminar_isothermal()
print("Nu= %1.2f for laminar isothermal" %pipe.Nu)
```
In turbulent flows, there is a choice of correlations:
<FONT FACE="courier" style="color:blue">Dittus_Boelter(Re,Pr,mode) </FONT>
<p class='alert alert-danger'>
$$
Nu=\frac{hD}{k}=0.023Re^{4/5}Pr^n
$$
</p>
with mode being either <FONT FACE="courier" style="color:blue">'cooling'</FONT> or <FONT FACE="courier" style="color:blue">'heating'</FONT>
```
pipe.Dittus_Boelter(mode='cooling',Pr=waterflow.Pr)
print("Nu= %1.0f for cooling" %pipe.Nu)
pipe.Dittus_Boelter(mode='heating',Pr=waterflow.Pr)
print("Nu= %1.0f for heating" %pipe.Nu)
```
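The correlation itself is simple enough to evaluate directly. A standalone sketch, with $n=0.4$ for heating and $n=0.3$ for cooling (the standard exponents):

```python
def dittus_boelter(Re, Pr, mode='heating'):
    """Dittus-Boelter: Nu = 0.023 Re^{4/5} Pr^n, n = 0.4 (heating) or 0.3 (cooling).

    Valid for fully developed turbulent flow, roughly Re > 1e4 and 0.6 < Pr < 160.
    """
    n = 0.4 if mode == 'heating' else 0.3
    return 0.023 * Re**0.8 * Pr**n

print(dittus_boelter(1e5, 4.0, 'heating'))
```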
<FONT FACE="courier" style="color:blue">Sieder_Tate(Re,Pr,mu,mu_s) </FONT>
<p class='alert alert-danger'>
$$
Nu=\frac{hD}{k}=0.027Re^{4/5}Pr^{1/3}\left(\cfrac{\mu}{\mu_s}\right)^{0.14}
$$
</p>
```
T_s = 75 #C
watersurface = thermo.Fluid('water',thermo.C2K(T_s))
pipe.Sieder_Tate(mu=waterflow.mu,mu_s=watersurface.mu,Pr=waterflow.Pr)
print("Nu= %1.0f" %pipe.Nu)
```
<FONT FACE="courier" style="color:blue">Gnielinski(Re,Pr,f) </FONT>
<p class='alert alert-danger'>
$$
Nu=\frac{hD}{k}=\frac{(f/8)(Re-1000)Pr}{1+12.7(f/8)^{1/2}(Pr^{2/3}-1)}
$$
</p>
```
pipe.Gnielinski(f=pipe.f, Pr=waterflow.Pr)
print("Nu= %1.0f" %pipe.Nu)
```
<FONT FACE="courier" style="color:blue">Skupinski(Re,Pr) </FONT>
<p class='alert alert-danger'>
$$
Nu=\frac{hD}{k}=4.82+0.0185\left(Re\,Pr\right)^{0.827},\, \text{$q_s''=$constant}
$$
</p>
```
pipe.Skupinski(Pr=waterflow.Pr)
print("Nu= %1.0f" %pipe.Nu)
```
<FONT FACE="courier" style="color:blue">Seban(Re,Pr) </FONT>
<p class='alert alert-danger'>
$$
Nu=\frac{hD}{k}=5.0+0.025\left(Re\,Pr\right)^{0.8},\, \text{$T_s=$constant}
$$
</p>
```
pipe.Seban(Pr=waterflow.Pr)
print("Nu= %1.0f" %pipe.Nu)
```
## Natural convection around a cylinder
<img src="figures/fig_09_08.jpg" alt="my awesome sketch" width=75% >
<i>Fig. 4: Illustration of the flow induced by natural convection around a cylinder. Insert shows the angular distribution of the local Nu.</i>
In a fluid entirely at rest, a heated surface transfers its heat via pure conduction. Natural convection is the enhanced heat transfer between a body of fluid at rest (at infinity) and a heated surface through the creation of a convective flow driven by buoyancy forces. Fig. 4 illustrates a natural convection flow occurring around a cylinder. The fluid at the bottom of the cylinder ($\theta=0$) becomes buoyant through heat transfer between the cylinder and the fluid and rises along the surface of the cylinder. This process creates two boundary layers that merge at $\theta = \pi$ to create a vertical jet-like flow, also called a plume. Plumes are characteristic flows of natural convection, i.e. they are found irrespective of the geometry of the heated object.
The library is called in the following way:
```
from Libraries import HT_natural_convection as natconv
```
The non-dimensional numbers relevant to natural convection are:
the Grashof number
<FONT FACE="courier" style="color:blue">Grashof(g,beta,DT,D,nu) </FONT>
<p class='alert alert-danger'>
$$
Gr = \frac{g\beta(\Delta T)D^3}{\nu^2}
$$
</p>
and the Rayleigh number
<FONT FACE="courier" style="color:blue">Rayleigh(g,beta,DT,D,nu,alpha) </FONT>
<p class='alert alert-danger'>
$$
Ra = Gr\,Pr = \frac{g\beta(\Delta T)D^3}{\nu\alpha}
$$
</p>
where $g$ is the gravity magnitude, $\beta$ is the volumetric thermal expansion coefficient at a given pressure $p$
$$
\beta = -\frac{1}{\rho}\left(\frac{\partial\rho}{\partial T}\right)_p
$$
$\Delta T$ is the absolute temperature difference between the heated surface temperature $T_s$ and the fluid temperature at infinity $T_\infty$, $\Delta T= \vert T_s-T_\infty\vert$, $D$ is the characteristic length of the system (here the diameter), and $\nu$ and $\alpha$ are the kinematic viscosity and the thermal diffusivity, both of dimensions $\mathrm{m^2/s}$.
Note that for the ideal gas law
$$
p =\rho \frac{R}{M}T\text{ or } \rho = \frac{p}{\frac{R}{M}T}
$$
thus the expansion coefficient is
<p class='alert alert-info'>
$$
\beta = \frac{1}{T}\text{ for an ideal gas, $T$ in K}
$$
</p>
For a liquid, $\beta$ must be interpolated from a table. All thermodynamic quantities involved are to be evaluated at the film temperature, which is the arithmetic mean
<p class='alert alert-info'>
$$
T_f=\frac{T_s+T_\infty}{2}
$$
</p>
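Putting these definitions together, the two numbers can be evaluated from scratch, independently of the course library. The air property values at the $30^\circ$C film temperature below are assumed round numbers, not tabulated values:

```python
g = 9.81                               # m/s^2
T_s, T_infty = 50.0, 10.0              # C
T_f = 0.5 * (T_s + T_infty) + 273.15   # film temperature, K
beta = 1.0 / T_f                       # ideal gas, 1/K
D = 0.1                                # m
nu = 1.6e-5                            # m^2/s, kinematic viscosity of air (assumed)
alpha = 2.3e-5                         # m^2/s, thermal diffusivity of air (assumed)

Gr = g * beta * abs(T_s - T_infty) * D**3 / nu**2
Ra = g * beta * abs(T_s - T_infty) * D**3 / (nu * alpha)
print("Gr = %1.2e, Ra = %1.2e" % (Gr, Ra))
```

Note that $Ra = Gr\,Pr$ with $Pr=\nu/\alpha$, so only one of the two needs the thermal diffusivity.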
```
#air
T_infty = 10#C
T_s = 50#C
D = 0.1#m
T_f = (T_s+T_infty)/2
airflow = thermo.Fluid('air',T_f,"C")
Gr= natconv.Gr(beta=airflow.beta,D=D,DT=T_s-T_infty,nu=airflow.nu)
print('Natural convection Gr= %1.2e'%Gr)
Ra= natconv.Ra(alpha=airflow.alpha,beta=airflow.beta,D=D,DT=T_s-T_infty,nu=airflow.nu)
print('Natural convection Ra= %1.2e'%Ra)
```
The Grashof and Rayleigh numbers quantify the ratio of buoyancy to viscous forces. When they are large enough, a convective flow sets in and the heat transfer increases in comparison to pure conduction. The Nusselt number, the ratio of convective to conductive heat transfer (i.e. $>1$ in the presence of a convective flow), is typically a power law of the Rayleigh number. In the case of the flow around a cylinder with isothermal surface temperature, there are two correlations:
<FONT FACE="courier" style="color:blue">Morgan(Ra) </FONT>
<p class='alert alert-danger'>
$$
\overline{Nu}=\frac{\overline{h}D}{k}=C\,Ra^n
$$
</p>
<FONT FACE="courier" style="color:blue">Churchill-Chu(Ra,Pr) </FONT>
<p class='alert alert-danger'>
$$
\overline{Nu}=\frac{\overline{h}D}{k}=\left[0.60+\frac{0.387Ra^{1/6}}{\left[1+\left(\frac{0.559}
{Pr}\right)^{9/16}\right]^{8/27}}
\right]^2
$$
</p>
Both are valid for $Ra\leq10^{12}$. The Nusselt number is averaged over the perimeter of the cylinder to account for the angular variation of heat transfer discussed earlier. The heat rate from natural convection around a heated cylinder of diameter $D$ and length $L$ is
<p class='alert alert-info'>
$$
q=\overline{h}(\pi DL)(T_s-T_\infty)=\frac{1}{R_\text{th,conv}}(T_s-T_\infty)
$$
</p>
where $R_\text{th,conv}$ may be computed with <FONT FACE="courier" style="color:blue">R_th_convection(h,A)</FONT>
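For concreteness, the heat rate and convective resistance can be evaluated in plain Python (the average coefficient $\overline{h}=5\text{ W/m$^2$.K}$ is an assumed value for this sketch; the other numbers come from the air example above):

```python
import math

h_bar = 5.0                      # W/m^2.K -- assumed value for this sketch
D, L = 0.1, 1.0                  # m, cylinder diameter and length
T_s, T_infty = 50.0, 10.0        # C, surface and ambient temperatures from above

A = math.pi * D * L              # lateral surface area, m^2
R_th_conv = 1.0 / (h_bar * A)    # convective thermal resistance, K/W
q = (T_s - T_infty) / R_th_conv  # heat rate, W (equals h_bar*A*(T_s - T_infty))
print(f"R_th,conv = {R_th_conv:.3f} K/W, q = {q:.1f} W")
```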
```
airnatconv = natconv.HorizontalCylinder(correlation='Morgan',Ra=Ra)
print("Morgan correlation: Nu= %1.2f" %airnatconv.Nu)
airnatconv = natconv.HorizontalCylinder(correlation='Churchill-Chu',Ra=Ra,Pr=airflow.Pr)
print("Churchill-Chu correlation: Nu= %1.2f" %airnatconv.Nu)
font = {'family' : 'serif',
#'color' : 'black',
'weight' : 'normal',
'size' : 14,
}
from matplotlib.ticker import FormatStrFormatter
plt.rc('font', **font)
N = 100
Ra = np.logspace(5,12,N)
Nu_Morgan = np.zeros(N)
Nu_ChurchillChu = np.zeros(N)
Pr = 1.0
for i in range(N):
flow = natconv.HorizontalCylinder(correlation='Morgan',Ra=Ra[i])
Nu_Morgan[i] = flow.Nu
flow = natconv.HorizontalCylinder(correlation='Churchill-Chu',Ra=Ra[i],Pr=Pr)
Nu_ChurchillChu[i] = flow.Nu
plt.loglog(Ra,Nu_Morgan, label = r"Morgan",lw = 2)
plt.loglog(Ra,Nu_ChurchillChu, label = r"Churchill-Chu", lw= 2)
plt.xlabel(r"$Ra$")
plt.ylabel(r"$Nu$")
plt.legend(loc=3, bbox_to_anchor=[0., 1.01], ncol=2, shadow=False, fancybox=True)
plt.show()
plt.plot(Ra,np.abs(Nu_Morgan-Nu_ChurchillChu)/Nu_ChurchillChu,lw = 2)
plt.xlabel(r"$Ra$")
plt.ylabel(r"$\vert Nu_{M}-Nu_{CC}\vert/Nu_{CC}$")
plt.show()
```
## Assignment
<ol>
<li> Read this entire notebook. Using the textbook, add restrictions and range of validity for the above correlations when applicable. Add the entry length Nu correlation for laminar flow</li>
<li> Add a section on entrance flow</li>
<li> How should the entrance flow region be treated in turbulent flows?</li>
<li>Solve 8.31, 8.36, 8.42</li>
</ol>
### 8.31
<img src="figures/probun_08_07.jpg" alt="my awesome sketch" width=50% >
To cool a summer home without using a vapor-compression refrigeration cycle, air is routed through a plastic pipe ($k=0.15\text{ W/m.K}$, $D_i=0.15\text{ m}$, $D_o=0.17\text{ m}$) that is submerged in an adjoining body of water. The water temperature is nominally at $T_\infty= 17^\circ\text{C}$, and a convection coefficient of $h_o\approx 1500\text{ W/m$^2$.K}$ is maintained at the outer surface of the pipe.
If air from the home enters the pipe at a temperature of $T_{m,i}= 29^\circ\text{C}$ and a volumetric flow rate of $\dot{\forall}_i= 0.025\text{ m$^3$/s}$, what pipe length $L$ is needed to provide a discharge temperature of $T_{m,o}=21^\circ\text{C}$? What is the fan power required to move the air through this length of pipe if its inner surface is smooth?
#### Solution
The length of the pipe is given by solving
$$
\frac{T_\infty-T_{m,o}}{T_\infty-T_{m,i}}=\exp\left(-\frac{1}{\dot{m}C_pR_\text{tot}}\right)
$$
for the target outlet temperature $T_{m,o}$. First, assuming 1D, steady convection on the outside of the pipe, we must solve for $R'_{tot}$. Since
$$
R_{tot}=\frac{R'_{tot}}{L}
$$
the pipe length is
$$
L=-\dot{m}C_pR'_\text{tot}\ln\frac{T_\infty-T_{m,o}}{T_\infty-T_{m,i}}
$$
```
from Libraries import HT_thermal_resistance as res
Rp = []
Rp.append(res.Resistance("$R'_{conv,i}$","W/m"))
Rp.append(res.Resistance("$R'_{cond,pipe}$","W/m"))
Rp.append(res.Resistance("$R'_{conv,o}$","W/m"))
d = schem.Drawing()
d.add(e.DOT, label = r"$T_{m,i}$")
d.add(e.RES, d = 'right', label = Rp[0].name)
d.add(e.DOT, label = r"$T_{s,i}$")
R1 = d.add(e.RES, d = 'right', label = Rp[1].name)
d.add(e.DOT, label = r"$T_{s,o}$")
d.add(e.RES, d='right', label = Rp[2].name)
d.add(e.DOT, label="$T_\infty$")
L1 = d.add(e.LINE, toplabel = "$q'$", endpts = [[-2.25, 0], [-0.25, 0]])
d.labelI(L1, arrowofst = 0)
d.draw()
from Libraries import thermodynamics as thermo
from Libraries import HT_internal_convection as intconv
k_pipe = 0.15 #W/m.K
Di = 0.15 #m
Do = 0.17 #m
T_infty = 17. #C
h_o = 1500 #W/m^2.K
T_mi = 29 #C
T_mo = 21 #C
Qdot = 0.025 #m^3/s
T_m = (T_mi + T_mo)/2
airi = thermo.Fluid('air',T_mi,"C")
airm = thermo.Fluid('air', T_m,"C")
airflow = intconv.PipeFlow(D=Di, L = 1., mdot = airi.rho*Qdot, nu = airm.nu, rho = airi.rho)
airflow.Dittus_Boelter(mode='cooling',Pr=airm.Pr)
print("Re=%.0f" %airflow.Re)
print("Nu=%.0f" %airflow.Nu)
hbar_i = airflow.Nu*airm.k/Di
print("hbar,i=%.2f W/m^2.K" %hbar_i)
Rp[0].convection(hbar_i,np.pi*Di)
Rp[1].cond_cylinder(k = k_pipe,ra=Di,rb=Do,L=1)
Rp[2].convection(h_o,A=np.pi*Do)
Rptot = 0
for i in range(3):
Rptot += Rp[i].R
# def L_given_other_params(T_infty,T_mo,T_mi,mdot,Cp,Rptot):
# return -mdot*Cp*Rptot*np.log((T_infty -T_mo)/(T_infty - T_mi))
L = intconv.L_given_other_params(T_infty,T_mo,T_mi,airi.rho*Qdot,airm.Cp,Rptot)
print("Length needed to achieve T_mo=%.0f C is %.1f m" %(T_mo,L))
from Libraries import HT_natural_convection as natconv
T_f = (T_infty + T_m)/2
water = thermo.Fluid("water",T_f,"C")
Ra = natconv.Ra(beta=water.beta,DT=T_m - T_infty, D=Do,nu=water.nu,alpha = water.alpha)
print("Ra=%.2e" %(Ra))
waterconv = natconv.HorizontalCylinder("Churchill-Chu",Ra,water.Pr)
print("Nu=%.0f" %waterconv.Nu)
print("For natural convection, h_o=%.0f W/m^2.K" %(waterconv.Nu*water.k/Do))
# waterforced = extconv.CircularCylinder()
```
This little exercise demonstrates that natural convection alone does not achieve the cooling capacity assumed in the problem statement ($h_o=1500\text{ W/m$^2$.K}$).
```
from Libraries import HT_natural_convection as natconv
?natconv.HorizontalCylinder
```
### 8.36
Hot water at mean temperature $T_m=50\text{$^\circ$C}$ is routed from one building in which it is generated to an adjoining building in which it is used for space heating. Transfer between the buildings occurs in a steel pipe ($k=60\text{ W/m.K}$) of $100 \text{ mm}$ outside diameter and 8-mm wall thickness. During the winter, representative environmental conditions involve air at $T_\infty= -5^\circ \mathrm{C}$ and $V_\infty=3\text{ m/s}$ in cross flow over the pipe.
Using the Churchill-Bernstein and Dittus-Boelter correlations, calculate the total heat transfer rate <b>per unit length</b> $q'$, the daily energy loss per meter $Q'=q'\times 24\text{ h/d}$, and the cost per day and per meter assuming an electricity cost of $\text{\$}0.05\text{/kW.h}$.
**FYI:** This is the Churchill-Bernstein correlation, which you can call with `from Libraries import HT_external_convection as extconv` followed by `airflow = extconv.CircularCylinder('Churchill-Bernstein', Re, Pr)`:
$$
Nu_D = \frac{hD}{k_f}=0.3+\frac{0.62Re_D^{1/2}Pr^{1/3}}{\left[1+\left(\frac{0.4}{Pr}\right)^{2/3}\right]^{1/4}}\left[1+\left(\frac{Re_D}{282,000}\right)^{5/8}\right]^{4/5}
$$
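The correlation can also be evaluated directly with NumPy (a sketch independent of the course library; the `extconv` class above remains the intended interface):

```python
import numpy as np

def nu_churchill_bernstein(Re, Pr):
    """Average Nusselt number for cross flow over a circular cylinder
    (Churchill-Bernstein), recommended for Re*Pr > 0.2."""
    return (0.3
            + 0.62 * np.sqrt(Re) * Pr**(1/3)
            / (1 + (0.4 / Pr)**(2/3))**0.25
            * (1 + (Re / 282000.0)**(5/8))**(4/5))

print(f"Nu = {nu_churchill_bernstein(1.0e4, 0.7):.1f}")
```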
<img src="figures/PB8.36-sketch.png" alt="my awesome sketch" width=100% >
The heat transfer problem in any cross sectional area of the pipe is
$$
q' = \frac{T_m - T _\infty}{R'_{tot}}
$$
with
$$
R'_{tot}= R'_{conv,int} + R'_{cond,p}+R'_{conv,ext}
$$
We must find the convection coefficients $h_{int}$ and $h_{ext}$, using the appropriate correlations.
```
Tm = 50 #C
Um = 0.5 #m/s
Di = 0.084 #m
Do = 0.1 #m
kp = 60 #W/m.K
T_infty = -5 #C
U_infty = 3 #m/s
from Libraries import HT_thermal_resistance as res
Rp = []
Rp.append(res.Resistance("$R'_{conv,int}$","W/m"))
Rp.append(res.Resistance("$R'_{cond,p}$","W/m"))
Rp.append(res.Resistance("$R'_{conv,ext}$","W/m"))
# internal convection
from Libraries import thermodynamics as thermo
from Libraries import HT_internal_convection as intconv
water = thermo.Fluid('water',Tm,"C")
pipeflow = intconv.PipeFlow(D=Di,L=1,Um=Um,nu=water.nu)
print("Re_D_pipe= %.0f" %pipeflow.Re)
pipeflow.Dittus_Boelter(mode='cooling',Pr=water.Pr)
hint = pipeflow.Nu*water.k/Di
print("hint=%.1f W/m^2.K" %hint)
Rp[0].convection(h=hint,A=np.pi*Di)
# conduction
Rp[1].cond_cylinder(k=kp,ra=Di,rb=Do,L=1.)
# external convection
#guess for surface temperature at D=Do
T_so = 49.21 #C
T_f = (T_infty + T_so)/2
air = thermo.Fluid('air',T_f,"C")
Re_air = U_infty * Do/air.nu
# print(Re_air)
from Libraries import HT_external_convection as extconv
airflow = extconv.CircularCylinder('Churchill-Bernstein',Re_air,air.Pr)
hext = airflow.Nu*air.k/Do
print("hext=%.1f W/m^2.K" %hext)
Rp[2].convection(h=hext,A=np.pi*Do)
# total thermal resistance
Rptot = 0.
for i in range(3):
Rptot += Rp[i].R
qp = (Tm - T_infty)/Rptot
print("Heat rate per unit length: %.0f W/m" %qp)
#New estimate of T_so
T_so = T_infty + qp*Rp[2].R
print("New T_so = %.2f C" %T_so)
Tm = 50 #C
Um = 0.5 #m/s
Di = 0.084 #m
Do = 0.1 #m
kp = 60 #W/m.K
T_infty = -5 #C
U_infty = 3 #m/s
from Libraries import HT_thermal_resistance as res
Rp = []
Rp.append(res.Resistance("$R'_{conv,int}$","W/m"))
Rp.append(res.Resistance("$R'_{cond,p}$","W/m"))
Rp.append(res.Resistance("$R'_{conv,ext}$","W/m"))
# internal convection
from Libraries import thermodynamics as thermo
from Libraries import HT_internal_convection as intconv
water = thermo.Fluid('water',Tm,"C")
pipeflow = intconv.PipeFlow(D=Di,L=1,Um=Um,nu=water.nu)
print("Re_D_pipe= %.0f" %pipeflow.Re)
pipeflow.Dittus_Boelter(mode='cooling',Pr=water.Pr)
hint = pipeflow.Nu*water.k/Di
print("hint=%.1f W/m^2.K" %hint)
Rp[0].convection(h=hint,A=np.pi*Di)
# conduction
Rp[1].cond_cylinder(k=kp,ra=Di,rb=Do,L=1.)
# external convection
# initial guess for surface temperature at D=Do
T_so = 0. #C
errT = np.inf
iteration = 0
while (errT > 1.0) and (iteration < 10):
iteration += 1
T_so_old = T_so
T_f = (T_infty + T_so)/2
air = thermo.Fluid('air',T_f,"C")
Re_air = U_infty * Do/air.nu
# print(Re_air)
from Libraries import HT_external_convection as extconv
airflow = extconv.CircularCylinder('Churchill-Bernstein',Re_air,air.Pr)
hext = airflow.Nu*air.k/Do
print("hext=%.1f W/m^2.K" %hext)
Rp[2].convection(h=hext,A=np.pi*Do)
# total thermal resistance
Rptot = 0.
for i in range(3):
Rptot += Rp[i].R
qp = (Tm - T_infty)/Rptot
print("Heat rate per unit length: %.0f W/m" %qp)
#New estimate of T_so
T_so = T_infty + qp*Rp[2].R
print("New T_so = %.2f C" %T_so)
errT = abs(T_so - T_so_old)
print("errT=%.3e" %errT)
Qp = qp*1e-3*24
print("Daily energy loss: %.3f kW.h/d/m" %Qp)
Cp = Qp * 0.05
print("Cost: $%.3f /m.d " %Cp)
```
### 8.42
Atmospheric air enters a $10\text{ m}$-long, $150\text{ mm}$-diameter uninsulated heating duct at $60\text{$^\circ$C}$ and $0.04\text{ kg/s}$. The duct surface temperature is approximately constant at $T_s=15\text{$^\circ$C}$.
(a) What are the outlet air temperature, the heat rate $q$, and the pressure drop $\Delta p$ for these conditions?
(b) To illustrate the tradeoff between heat transfer rate and pressure drop considerations, calculate $q$ and $\Delta p$ for diameters in the range from $0.1$ to $0.2\text{ m}$. In your analysis, maintain the total surface area,
$A_s=\pi DL$, at the value computed for part (a). Plot $q$, $\Delta p$, and $L$ as a function of the duct diameter.
```
Tm = 50 #C
Um = 0.5 #m/s
Di = 0.084 #m
Do = 0.1 #m
kp = 60 #W/m.K
T_infty = -5 #C
U_infty = 3 #m/s
from Libraries import HT_thermal_resistance as res
Rp = []
Rp.append(res.Resistance("$R'_{conv,int}$","W/m"))
Rp.append(res.Resistance("$R'_{cond,p}$","W/m"))
Rp.append(res.Resistance("$R'_{conv,ext}$","W/m"))
# internal convection
from Libraries import HT_internal_convection as intconv
water = thermo.Fluid('water',Tm,"C")
pipeflow = intconv.PipeFlow(D=Di,L=1,Um=Um,nu=water.nu)
print(pipeflow.Re,water.Pr)
pipeflow.Dittus_Boelter(mode='cooling',Pr=water.Pr)
print(pipeflow.Nu*water.k/Di)
Rp[0].convection(h=pipeflow.Nu*water.k/Di,A=np.pi*Di)
#conduction
Rp[1].cond_cylinder(k=kp,ra=Di,rb=Do,L=1.)
# external convection
from Libraries import HT_external_convection as extconv
T_so = 49.2
T_fo = (T_infty + T_so)/2
air = thermo.Fluid('air',T_fo,"C")
Re_air = U_infty*Do/air.nu
airflow = extconv.CircularCylinder('Churchill-Bernstein',Re_air,air.Pr)
Rp[2].convection(airflow.Nu*air.k/Do,np.pi*Do)
print(airflow.Nu*air.k/Do)
Rptot = 0
for i in range(3):
Rptot += Rp[i].R
print(Rp[i].R)
qp = (Tm - T_infty)/Rptot
print(qp)
T_so_1 = T_infty + qp*Rp[2].R
print(T_so_1)
```
### Dataset
Let's load the dataset. We shall use the following datasets:
Features are in: "sido0_train.mat"
Labels are in: "sido0_train.targets"
```
from scipy.io import loadmat
import numpy as np
X = loadmat(r"/Users/rkiyer/Desktop/teaching/CS6301/jupyter/data/sido0_matlab/sido0_train.mat")
y = np.loadtxt(r"/Users/rkiyer/Desktop/teaching/CS6301/jupyter/data/sido0_matlab/sido0_train.targets")
# Statistics of the Dense Format of X
X = X['X'].todense()
print(X.shape)
```
### Logistic Regression Definition
Let's use the Logistic Regression definition we previously used
```
def LogisticLoss(w, X, y, lam):
# Computes the cost function for all the training samples
m = X.shape[0]
Xw = np.dot(X,w)
yT = y.reshape(-1,1)
yXw = np.multiply(yT,Xw)
f = np.sum(np.logaddexp(0,-yXw)) + 0.5*lam*np.sum(np.multiply(w,w))
gMul = 1/(1 + np.exp(yXw))
ymul = -1*np.multiply(yT, gMul)
g = np.dot(ymul.reshape(1,-1),X) + lam*w.reshape(1,-1)
g = g.reshape(-1,1)
return [f, g]
```
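Before using this gradient in a solver, it is worth verifying it against a central finite difference. The cell below repeats the definition so it is self-contained, and uses small random data (synthetic values, not the SIDO set):

```python
import numpy as np

def LogisticLoss(w, X, y, lam):
    # Same definition as above, repeated so this check is self-contained
    Xw = np.dot(X, w)
    yT = y.reshape(-1, 1)
    yXw = np.multiply(yT, Xw)
    f = np.sum(np.logaddexp(0, -yXw)) + 0.5 * lam * np.sum(np.multiply(w, w))
    gMul = 1 / (1 + np.exp(yXw))
    ymul = -1 * np.multiply(yT, gMul)
    g = np.dot(ymul.reshape(1, -1), X) + lam * w.reshape(1, -1)
    return [f, g.reshape(-1, 1)]

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = rng.choice([-1.0, 1.0], size=20)   # labels in {-1, +1}, as assumed above
w = rng.standard_normal((5, 1))
f, g = LogisticLoss(w, X, y, lam=1.0)

# Central finite-difference approximation of the gradient
eps = 1e-6
g_fd = np.zeros_like(g)
for j in range(w.size):
    e = np.zeros_like(w); e[j] = eps
    g_fd[j] = (LogisticLoss(w + e, X, y, 1.0)[0]
               - LogisticLoss(w - e, X, y, 1.0)[0]) / (2 * eps)
print("max |g - g_fd| =", np.abs(g - g_fd).max())
```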
### Barzilai-Borwein step length
Let's invoke gradient descent with the Barzilai-Borwein (BB) step length
```
from numpy import linalg as LA
def gdBB(funObj,w,maxEvals,alpha,gamma,X,y,lam, verbosity, freq):
[f,g] = funObj(w,X,y,lam)
funEvals = 1
funVals = []
f_old = f
g_old = g
funVals.append(f)
numBackTrack = 0
while(1):
wp = w - alpha*g
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
backtrack = 0
if funEvals > 2:
g_diff = g - g_old
alpha = -alpha*np.dot(g_old.T, g_diff)[0,0]/np.dot(g_diff.T, g_diff)[0,0]
while fp > f - gamma*alpha*np.dot(g.T, g):
alpha = alpha*alpha*np.dot(g.T, g)[0,0]/(2*(fp + np.dot(g.T, g)[0,0]*alpha - f))
wp = w - alpha*g
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
numBackTrack = numBackTrack + 1
f_old = f
g_old = g
w = wp
f = fp
g = gp
optCond = LA.norm(g, np.inf)
if ((verbosity > 0) and (funEvals % freq == 0)):
print(funEvals,alpha,f,optCond)
if (optCond < 1e-2):
break
if (funEvals >= maxEvals):
break
return (funVals,numBackTrack)
[nSamples,nVars] = X.shape
w = np.zeros((nVars,1))
(funV1,numBackTrack) = gdBB(LogisticLoss,w,250,1,1e-4,X,y,1,1,10)
print(len(funV1))
print("Number of Backtrackings = " + str(numBackTrack))
```
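The BB update used above, $\alpha_{k+1} = -\alpha_k \frac{g_k^\top(g_{k+1}-g_k)}{\Vert g_{k+1}-g_k\Vert^2}$, is equivalent to $\alpha_{k+1} = \frac{s_k^\top y_k}{y_k^\top y_k}$ with $s_k = w_{k+1}-w_k$ and $y_k = g_{k+1}-g_k$ (since $s_k=-\alpha_k g_k$). A minimal sanity check on a synthetic strongly convex quadratic (not the SIDO problem):

```python
import numpy as np

# Quadratic f(w) = 0.5 w^T A w with SPD Hessian A; grad f(w) = A w
rng = np.random.default_rng(1)
Q = rng.standard_normal((10, 10))
A = Q.T @ Q + 10 * np.eye(10)
grad = lambda w: A @ w

w = rng.standard_normal(10)
g = grad(w)
alpha = 1e-3                      # small fixed first step
for k in range(100):
    w_new = w - alpha * g
    g_new = grad(w_new)
    if np.linalg.norm(g_new) < 1e-10:
        w, g = w_new, g_new
        break
    s, ydiff = w_new - w, g_new - g
    alpha = (s @ ydiff) / (ydiff @ ydiff)   # Barzilai-Borwein (BB2) step
    w, g = w_new, g_new
print("iterations:", k + 1, "||g|| =", np.linalg.norm(g))
```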
### Conjugate Gradient Descent
Nonlinear Conjugate Gradient Descent
```
from numpy import linalg as LA
def gdCG(funObj,w,maxEvals,alpha,gamma,X,y,lam, verbosity, freq):
[f,g] = funObj(w,X,y,lam)
funEvals = 1
funVals = []
f_old = f
g_old = g
funVals.append(f)
numBackTrack = 0
d = g
while(1):
wp = w - alpha*d
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
backtrack = 0
if funEvals > 2:
alpha = min(1,2*(f_old - f)/np.dot(g.T, g)[0,0])
beta = np.dot(g.T, g)[0,0]/np.dot(g_old.T, g_old)[0,0]
d = g + beta*d
else:
d = g
while fp > f - gamma*alpha*np.dot(g.T, d)[0,0]:
alpha = alpha*alpha*np.dot(g.T, d)[0,0]/(2*(fp + np.dot(g.T, d)[0,0]*alpha - f))
wp = w - alpha*d
[fp,gp] = funObj(wp,X,y,lam)
funVals.append(f)
funEvals = funEvals+1
numBackTrack = numBackTrack + 1
f_old = f
g_old = g
w = wp
f = fp
g = gp
optCond = LA.norm(g, np.inf)
if ((verbosity > 0) and (funEvals % freq == 0)):
print(funEvals,alpha,f,optCond)
if (optCond < 1e-2):
break
if (funEvals >= maxEvals):
break
return (funVals,numBackTrack)
[nSamples,nVars] = X.shape
w = np.zeros((nVars,1))
(funV1,numBackTrack) = gdCG(LogisticLoss,w,250,1,1e-4,X,y,1,1,10)
print(len(funV1))
print("Number of Backtrackings = " + str(numBackTrack))
```
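The Fletcher-Reeves update used above, $\beta_k = \frac{g_{k+1}^\top g_{k+1}}{g_k^\top g_k}$, can likewise be exercised on a small quadratic, where CG with exact line search terminates in at most $n$ steps (synthetic example; the exact step $\alpha = \frac{g^\top d}{d^\top A d}$ replaces the backtracking used above):

```python
import numpy as np

rng = np.random.default_rng(2)
Q = rng.standard_normal((8, 8))
A = Q.T @ Q + np.eye(8)            # SPD, so f(w) = 0.5 w^T A w is strongly convex
grad = lambda w: A @ w

w = rng.standard_normal(8)
g = grad(w)
d = g                              # initial direction (update is w - alpha*d)
for k in range(50):
    alpha = (g @ d) / (d @ (A @ d))    # exact line search for a quadratic
    w = w - alpha * d
    g_new = grad(w)
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves
    d = g_new + beta * d
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break
print("iterations:", k + 1)        # at most n = 8 in exact arithmetic
```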
# Singleton Networks
```
import qualreas as qr
import os
import copy
qr_path = os.path.join(os.getenv('PYPROJ'), 'qualreas')
alg_dir = os.path.join(qr_path, "Algebras")
```
## Make a Test Network
```
test1_net_dict = {
'name': 'Network Copy Test #1',
'algebra': 'Extended_Linear_Interval_Algebra',
'description': 'Testing/Developing network copy functionality',
'nodes': [
['U', ['ProperInterval', 'Point']],
['V', ['ProperInterval', 'Point']],
['W', ['ProperInterval']],
['X', ['Point']]
],
'edges': [
['U', 'V', 'B'],
['U', 'W', 'M'],
['W', 'V', 'O'],
['X', 'W', 'D']
]
}
test2_net_dict = {
'name': 'Network Copy Test #2',
'algebra': 'Extended_Linear_Interval_Algebra',
'description': 'Testing/Developing network copy functionality',
'nodes': [
['X', ['ProperInterval']],
['Y', ['ProperInterval']],
['Z', ['ProperInterval']]
],
'edges': [
['X', 'Y', 'B'],
['Y', 'Z', 'B']
]
}
test1_net = qr.Network(algebra_path=alg_dir, network_dict=test1_net_dict)
test2_net = qr.Network(algebra_path=alg_dir, network_dict=test2_net_dict)
test1_net.propagate()
test1_net.summary(show_all=False)
test2_net.propagate()
test2_net.summary(show_all=False)
```
## Test Changing Constraint on an Edge
Look at all the edge constraints
```
for eg in test1_net.edges:
print(test1_net.edges[eg[0], eg[1]]['constraint'])
```
Grab the Head (src) and Tail (tgt) of the 3rd edge, above.
```
src, tgt = list(test1_net.edges)[2]
test1_net.edges[src,tgt]['constraint']
```
Change the constraint and look at the result on the edge & its converse.
```
test1_net.set_constraint(src, tgt, test1_net.algebra.relset('D|M|FI'))
test1_net.edges[src,tgt]['constraint']
test1_net.edges[tgt,src]['constraint']
```
## Test Copy Network
```
test1_net_copy = test1_net.copy()
#test1_net_copy = qr.copy(test1_net)
test1_net_copy.summary()
test1_net_copy.propagate()
test1_net_copy.summary(show_all=False)
done = []
result = []
for eg in test1_net_copy.edges:
src = eg[0]; tgt = eg[1]
srcID = src.name; tgtID = tgt.name
if not (src, tgt) in done:
cons = test1_net_copy.edges[src, tgt]['constraint']
print(srcID, tgtID, cons)
if len(cons) > 1:
result.append((srcID, tgtID, cons))
done.append((tgt, src))
rels = []
for rel in result[0][2]:
rels.append(rel)
rels
foo = [1, 2, 3]
a = foo.pop()
a
foo
def _all_realizations_aux(in_work, result):
if len(in_work) == 0:
print("DONE")
return result
else:
print("Get next net in work")
next_net = in_work.pop()
if finished(next_net):
print(" This one's finished")
            result.append(next_net)
            return _all_realizations_aux(in_work, result)
        else:
            print("    Expanding net")
            return _all_realizations_aux(in_work + expand(next_net), result)
def expand(net):
expansion = []
for src, tgt in net.edges:
edge_constraint = net.edges[src, tgt]['constraint']
if len(edge_constraint) > 1:
print("--------")
print(f"Edge Constraint: {edge_constraint}")
for rel in edge_constraint:
print(f" Relation: {rel}")
net_copy = net.copy()
src_node, tgt_node, _ = net_copy.get_edge(src.name, tgt.name, return_names=False)
net_copy.set_constraint(src_node, tgt_node, net_copy.algebra.relset(rel))
expansion.append(net_copy)
print(f" Expansion: {expansion}")
break
return expansion
def finished(net):
"""Returns True if all constraints are singletons."""
answer = True
for src, tgt in net.edges:
edge_constraint = net.edges[src, tgt]['constraint']
if len(edge_constraint) > 1:
answer = False
break
return answer
x = _all_realizations_aux([test1_net_copy], list())
len(x)
foo = expand(test1_net)
foo
foo[0].summary(show_all=False)
foo[1].summary(show_all=False)
foo[2].summary(show_all=False)
finished(test1_net)
finished(test2_net)
```
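Stripped of the `qualreas` specifics, the `expand`/`finished` recursion above is a standard backtracking enumeration. A toy sketch with plain dicts (hypothetical edge and relation names; no constraint propagation is performed, unlike `propagate()`):

```python
def all_singleton_networks(edges):
    """edges: dict mapping an edge (src, tgt) to a set of candidate relations.
    Yields every assignment in which each edge keeps exactly one relation."""
    # 'finished' test: find an edge that still has more than one candidate
    branch = next((e for e, rels in edges.items() if len(rels) > 1), None)
    if branch is None:
        yield {e: next(iter(r)) for e, r in edges.items()}
        return
    # 'expand': one copy of the network per candidate relation on that edge
    for rel in sorted(edges[branch]):
        child = dict(edges)
        child[branch] = {rel}
        yield from all_singleton_networks(child)

nets = list(all_singleton_networks({('U', 'V'): {'B'},
                                    ('W', 'V'): {'D', 'M', 'FI'}}))
print(len(nets))   # 3 singleton realizations
```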
```
import pandas
import numpy as np
import sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
import glob
```
# San Francisco State University
## Software Engineering Team Assessment and Prediction (SETAP) Project Machine Learning Training Data File Version 0.7
# Copyright 2000-2017 by San Francisco State University, Dragutin Petkovic, and Marc Sosnick-Perez.
# CONTACT
## Professor Dragutin Petkovic: petkovic@sfsu.edu
# LICENSE
This data is released under the Creative Commons Attribution-
NonCommercial 4.0 International license. For more information,
please see
http://creativecommons.org/licenses/by-nc/4.0/legalcode.
The research that has made this data possible has been funded in
part by NSF grant NSF-TUES1140172.
YOUR FEEDBACK IS WELCOME
------------------------
We are interested in how this data is being used. If you use it in
a research project, we would like to know how you are using the
data. Please contact us at petkovic@sfsu.edu.
# FILES INCLUDED IN DISTRIBUTION PACKAGE
More data about the SETAP project, data collection, and description
and use of machine learning to analyze the data can be found in the
following paper:
D. Petkovic, M. Sosnick-Perez, K. Okada, R. Todtenhoefer, S. Huang,
N. Miglani, A. Vigil: "Using the Random Forest Classifier to Assess
and Predict Student Learning of Software Engineering Teamwork".
Frontiers in Education FIE 2016, Erie, PA, 2016
See DATA DESCRIPTION below for more information about the data. The
README file (which you are reading) contains project information
such as data collection techniques, data organization and field
naming convention. In addition to the README file, the archive
contains a number of .csv files. Each of these CSV files contains
data aggregated by team from the project (see below), paired with
that team's outcome for either the process or product component of
the team's evaluation. The files are named using the following
convention:
setap[Process|Product]T[1-11].csv
For example, the file setapProcessT5.csv contains the data for all
teams for time interval 5, paired with the outcome data for the
Process component of the team's evaluation.
Detailed information about the exact format of the .csv file may be
found in the csv files themselves.
# DATA DESCRIPTION
The following is a detailed description of the data contained in the
accompanying files.
### INTRODUCTION
The data contained in these files were collected over a period of
several semesters from students engaged in software engineering
classes at San Francisco State University (class sections of CSC
640, CSC 648 and CSC 848). All students consented to this data
being shared for research purposes provided no uniquely identifiable
information was contained in the distributed files. The information
was collected through various means, with emphasis being placed on
the collection of objective, quantifiable information. For more
information on the data collection procedures, please see the paper
referenced above.
### PRIVACY
The data contained in this file does not contain any information
which may be individually traced to a particular student who
participated in the study.
# BRIEF DESCRIPTION OF DATA SOURCES AND DERIVATIONS
SAMs (Student Activity Measure) are collected for each student team
member during their participation in a software engineering class.
Student teams work together on a final class project, and comprise
5-6 students. Teams that are made up of students from only one
school are labeled local teams. Teams made up of students from more
than one school are labeled global teams. SAMs are collected from:
weekly timecards, instructor observations, and software engineering
tool usage logs. SAMs are then aggregated by team and time interval
(see next section) into TAMs (Team Activity Measure). Outcomes are
determined at the end of the semester through evaluation of student
team work in two categories: software engineering process (how well
the team applied best software engineering practices), and software
engineering product (the quality of the finished product the team
produced). Thus for each team, two outcomes are determined, process
and product, respectively. Outcomes are classified into two class
grades, A or F. A represents teams that are at or above
expectations, F represents teams that are below expectations or need
attention. For more information, please see the paper referenced
above.
The SE process and SE product outcomes represent ML training classes and are to be considered separately; e.g., one should train ML for SE process separately from training for SE product.
```
path ='data/SETAP PRODUCT DATA'
allFiles = glob.glob(path + "/*.csv")
frame = pandas.DataFrame()
list_ = []
for file_ in allFiles:
df = pandas.read_csv(file_,index_col=None, header=0)
list_.append(df)
frame = pandas.concat(list_)
data = pandas.read_csv("data/SETAP PRODUCT DATA/setapProductT1.csv", index_col=0)
# full_data=True will let explore the whole dataset (T1-T11)
full_data = True
if (full_data):
data = frame
labels = data['productLetterGrade']
features = data.drop('productLetterGrade', axis=1)
#Drop certain features
if (full_data):
features = features.drop([col for col in features.columns if 'Total' in col], axis=1)
features = features.drop([col for col in features.columns if 'Count' in col], axis=1)
features = features.drop([col for col in features.columns if 'Student' in col], axis=1)
#features = features.drop('femaleTeamMembersPercent', axis=1)
# Rename strings in data to appropriate integers, labels to booleans
mapping = {'F': False, 'A': True}
features_mapping = {'M': 0, 'F' : 1, 'Global': 0, 'Local': 1}
features = pandas.DataFrame(features)
labels = pandas.DataFrame(labels)
labels = labels.applymap(lambda s: mapping.get(s) if s in mapping else s)
#features.dropna(axis='columns', how='any', inplace=True)
features.fillna(1, inplace=True)
features = features.applymap(lambda s: features_mapping.get(s) if s in features_mapping else s)
X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=1, train_size=0.4)
rfc = RandomForestClassifier(n_estimators= 1000, max_features=0.25, max_depth=50, oob_score=True, n_jobs=-1)
rfc.fit(X_train, y_train.values.ravel())
print ('Accuracy score: ' + str(round(rfc.score(X_test, y_test.values.ravel()),3)*100) + '%')
import matplotlib.pyplot as plt
n_features = len(features.columns)
plt.figure(figsize=(5,n_features/5))
plt.barh(range(n_features), rfc.feature_importances_, align='center')
plt.yticks(np.arange(n_features), features.columns)
plt.xlabel('Full Feature Importance of ' + str(n_features) + ' features')
plt.ylabel('Feature')
plt.show()
features.columns[np.argmax(rfc.feature_importances_)]
print ( "Top important features:")
count = 1
for string in features.columns[rfc.feature_importances_.argsort()[-6:][::-1]] :
print(str(count) + '. ' + string )
count+=1
print("Full dataset test set accuracy")
pandas.crosstab(y_test['productLetterGrade'], rfc.predict(X_test), rownames=['Actual'], colnames=['Predicted'])
#pandas.crosstab(labels['productLetterGrade'], rfc.predict(features), rownames=['Actual'], colnames=['Predicted'])
#Drop certain features
if (full_data):
    data = pandas.read_csv("data/SETAP PRODUCT DATA/setapProductT1.csv", index_col=0)
labels = data['productLetterGrade']
features = data.drop('productLetterGrade', axis=1)
features = features.drop([col for col in features.columns if 'Total' in col], axis=1)
features = features.drop([col for col in features.columns if 'Count' in col], axis=1)
features = features.drop([col for col in features.columns if 'Student' in col], axis=1)
#features = features.drop('femaleTeamMembersPercent', axis=1)
# Rename strings in data to appropriate integers, labels to booleans
mapping = {'F': False, 'A': True}
features_mapping = {'M': 0, 'F' : 1, 'Global': 0, 'Local': 1}
features = pandas.DataFrame(features)
labels = pandas.DataFrame(labels)
labels = labels.applymap(lambda s: mapping.get(s) if s in mapping else s)
#features.dropna(axis='columns', how='any', inplace=True)
features.fillna(1, inplace=True)
features = features.applymap(lambda s: features_mapping.get(s) if s in features_mapping else s)
print ('T1 Accuracy score: ' + str(round(rfc.score(features, labels.values.ravel()),3)*100) + '%')
pandas.crosstab(labels['productLetterGrade'], rfc.predict(features), rownames=['Actual'], colnames=['Predicted'])
data = pandas.read_csv("data/SETAP PRODUCT DATA/setapProductT2.csv", index_col=0)
labels = data['productLetterGrade']
features = data.drop('productLetterGrade', axis=1)
#Drop certain features
if (full_data):
features = features.drop([col for col in features.columns if 'Total' in col], axis=1)
features = features.drop([col for col in features.columns if 'Count' in col], axis=1)
features = features.drop([col for col in features.columns if 'Student' in col], axis=1)
#features = features.drop('femaleTeamMembersPercent', axis=1)
# Rename strings in data to appropriate integers, labels to booleans
mapping = {'F': False, 'A': True}
features_mapping = {'M': 0, 'F' : 1, 'Global': 0, 'Local': 1}
features = pandas.DataFrame(features)
labels = pandas.DataFrame(labels)
labels = labels.applymap(lambda s: mapping.get(s) if s in mapping else s)
#features.dropna(axis='columns', how='any', inplace=True)
features.fillna(1, inplace=True)
features = features.applymap(lambda s: features_mapping.get(s) if s in features_mapping else s)
print ('T2 Accuracy score: ' + str(round(rfc.score(features, labels.values.ravel()),3)*100) + '%')
pandas.crosstab(labels['productLetterGrade'], rfc.predict(features), rownames=['Actual'], colnames=['Predicted'])
data = pandas.read_csv("data/SETAP PRODUCT DATA/setapProductT3.csv", index_col=0)
labels = data['productLetterGrade']
features = data.drop('productLetterGrade', axis=1)
#Drop certain features
if (full_data):
features = features.drop([col for col in features.columns if 'Total' in col], axis=1)
features = features.drop([col for col in features.columns if 'Count' in col], axis=1)
features = features.drop([col for col in features.columns if 'Student' in col], axis=1)
#features = features.drop('femaleTeamMembersPercent', axis=1)
# Rename strings in data to appropriate integers, labels to booleans
mapping = {'F': False, 'A': True}
features_mapping = {'M': 0, 'F' : 1, 'Global': 0, 'Local': 1}
features = pandas.DataFrame(features)
labels = pandas.DataFrame(labels)
labels = labels.applymap(lambda s: mapping.get(s) if s in mapping else s)
#features.dropna(axis='columns', how='any', inplace=True)
features.fillna(1, inplace=True)
features = features.applymap(lambda s: features_mapping.get(s) if s in features_mapping else s)
print ('T3 Accuracy score: ' + str(round(rfc.score(features, labels.values.ravel()),3)*100) + '%')
pandas.crosstab(labels['productLetterGrade'], rfc.predict(features), rownames=['Actual'], colnames=['Predicted'])
data = pandas.read_csv("data/SETAP PRODUCT DATA/setapProductT6.csv", index_col=0)
labels = data['productLetterGrade']
features = data.drop('productLetterGrade', axis=1)
#Drop certain features
if (full_data):
features = features.drop([col for col in features.columns if 'Total' in col], axis=1)
features = features.drop([col for col in features.columns if 'Count' in col], axis=1)
features = features.drop([col for col in features.columns if 'Student' in col], axis=1)
#features = features.drop('femaleTeamMembersPercent', axis=1)
# Rename strings in data to appropriate integers, labels to booleans
mapping = {'F': False, 'A': True}
features_mapping = {'M': 0, 'F' : 1, 'Global': 0, 'Local': 1}
features = pandas.DataFrame(features)
labels = pandas.DataFrame(labels)
labels = labels.applymap(lambda s: mapping.get(s) if s in mapping else s)
#features.dropna(axis='columns', how='any', inplace=True)
features.fillna(1, inplace=True)
features = features.applymap(lambda s: features_mapping.get(s) if s in features_mapping else s)
print ('T6 Accuracy score: ' + str(round(rfc.score(features, labels.values.ravel()),3)*100) + '%')
pandas.crosstab(labels['productLetterGrade'], rfc.predict(features), rownames=['Actual'], colnames=['Predicted'])
```
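The per-timepoint cells above repeat the same preprocessing verbatim. A sketch consolidating it into one helper (column names and mappings are taken from the cells above; `read_csv(..., index_col=0)` stands in for the older `DataFrame.from_csv`, which recent pandas versions have removed):

```python
import pandas

MAPPING = {'F': False, 'A': True}
FEATURES_MAPPING = {'M': 0, 'F': 1, 'Global': 0, 'Local': 1}

def load_timepoint(path, drop_substrings=('Total', 'Count', 'Student')):
    """Load one setapProductT*.csv and apply the preprocessing used above."""
    data = pandas.read_csv(path, index_col=0)
    labels = data['productLetterGrade'].map(MAPPING)
    features = data.drop('productLetterGrade', axis=1)
    for sub in drop_substrings:
        features = features.drop([c for c in features.columns if sub in c], axis=1)
    features = features.fillna(1)
    features = features.applymap(lambda s: FEATURES_MAPPING.get(s, s))
    return features, labels

# e.g., score a fitted classifier on every timepoint:
# for t in range(1, 12):
#     X_t, y_t = load_timepoint(f"data/SETAP PRODUCT DATA/setapProductT{t}.csv")
#     print(f"T{t}: {rfc.score(X_t, y_t):.3f}")
```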
# Key statistics
- Expectation, variance and the weak law of large numbers
- Special random variables
## Expectation
The expectation (or expected value) of a r.v. $X$ is denoted $E[X]$ and is computed as:
$\begin{array}{ll}
E[X] =
\left\{\begin{array}{ll} \sum_i x_i P(X=x_i) & \text{if } X \text{ discrete}\\
\int x f_X(x)dx & \text{if } X \text{ continuous}\\
\end{array} \right .\\
\end{array}$
Let $g$ be a real-valued function; then:
$\begin{array}{lll}
E[g(X)] & = &
\left\{\begin{array}{ll} \sum_i g(x_i) P(X=x_i) & \text{if } X \text{ discrete}\\
\int g(x) f_X(x)dx & \text{if } X \text{ continuous}\\
\end{array}\right .\\
\end{array}$
For the special case $g(x) = x^n$, the $n$-th moment of $X$ is defined as:
$\begin{array}{lll}
E[X^n] & = &
\left\{\begin{array}{ll} \sum_i x_i^n P(X=x_i) & \text{if } X \text{ discrete}\\
\int x^n f_X(x)dx & \text{if } X \text{ continuous}\\
\end{array}\right .\\
\end{array}$
The expectation is the first moment and is denoted $\mu$.
**Properties**
Let $a,b \in \mathbb{R}$; then:
$\begin{array}{lll}
E[aX+b] & = & aE[X] + b \\
E[X + Y] & = & E[X] + E[Y]\\
\end{array}$
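As a quick numerical sketch of these linearity properties, we can compute expectations of a small discrete distribution with NumPy (the values and probabilities below are chosen arbitrarily for illustration):

```python
import numpy as np

# Hypothetical discrete distribution: values x_i with probabilities p_i
xs = np.array([1.0, 2.0, 5.0])
ps = np.array([0.2, 0.5, 0.3])

def E(g=lambda x: x):
    """E[g(X)] = sum_i g(x_i) P(X = x_i) for a discrete X."""
    return float(np.sum(g(xs) * ps))

a, b = 3.0, -2.0
lhs = E(lambda x: a * x + b)   # E[aX + b]
rhs = a * E() + b              # a E[X] + b
print(lhs, rhs)                # identical up to floating-point rounding
```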
## Variance and covariance
The variance measures the spread of the r.v. around the expectation (mean) $\mu$, and is defined as
$\begin{equation}
\begin{array}{ll}
Var(X) = E[(X-\mu)^2] = E[X^2] - \mu^2
\end{array}
\end{equation}$
It holds that:
$\begin{equation}
\begin{array}{ll}
Var(aX+b) = a^2 Var(X)
\end{array}
\end{equation}$
The standard deviation is also defined: $\sigma = \sqrt{Var(X)}$
The covariance measures the (linear) relationship between two r.v. $X$ and $Y$. If we write $\mu_X = E[X]$ and $\mu_Y = E[Y]$, then:
$\begin{equation}
\begin{array}{lll}
Cov(X,Y) & = & E[(X-\mu_X)(Y-\mu_Y)]
\end{array}
\end{equation}$
The correlation is a normalized measure:
$\begin{equation}
\begin{array}{lll}
Corr(X,Y) & = & \frac{Cov(X,Y)}{\sqrt{Var(X)\,Var(Y)}}
\end{array}
\end{equation}$
**Properties**
$\begin{array}{lll}
Cov(X,Y) & = & Cov(Y,X) \\
Cov(X,X) & = & Var(X)\\
Cov(X+Z,Y) & = & Cov(X,Y) + Cov(Z,Y)\\
Cov(\sum\limits_i X_i,Y) & = & \sum\limits_i Cov(X_i,Y)\\
Var(X+Y) & = & Var(X) + Var(Y) + 2Cov(X,Y)\\
Var(\sum\limits_i X_i) & = &
\sum\limits_i Var(X_i) + \sum\limits_i \sum\limits_{j\neq i} Cov(X_i,X_j)
\end{array}$
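These identities can also be checked numerically on empirical moments (with `ddof=0`, sample variance and covariance satisfy exactly the same algebra); the data vectors below are arbitrary:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
y = np.array([2.0, 1.0, 5.0, 3.0])

var = lambda v: np.var(v, ddof=0)
cov = lambda u, v: np.cov(u, v, ddof=0)[0, 1]

assert np.isclose(cov(x, y), cov(y, x))                         # symmetry
assert np.isclose(cov(x, x), var(x))                            # Cov(X,X) = Var(X)
assert np.isclose(var(x + y), var(x) + var(y) + 2 * cov(x, y))  # Var(X+Y) identity
print("covariance identities verified")
```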
## Other statistics
$\begin{array}{lll}
\text{ Skewness } & = & \frac{E[(X-\mu)^3]}{\sigma^3} = \frac{E[X^3]-3\mu\sigma^2 - \mu^3}{\sigma^3}\\
&&\\
\text{ Kurtosis }& = &\frac{E[(X-\mu)^4]}{\sigma^4} = \frac{E[X^4] - 4\mu E[X^3] + 6\mu^2\sigma^2 + 3\mu^4}{\sigma^4}\\
\end{array}$
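A small sketch verifying these moment expansions on the empirical moments of an arbitrary sample (taking $E[\cdot]$ as the sample mean, so $\sigma^2$ uses `ddof=0`):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 7.0])
mu = x.mean()
sigma2 = np.var(x, ddof=0)

# E[(X-mu)^3] = E[X^3] - 3*mu*sigma^2 - mu^3
central3 = np.mean((x - mu) ** 3)
raw3 = np.mean(x ** 3) - 3 * mu * sigma2 - mu ** 3

# E[(X-mu)^4] = E[X^4] - 4*mu*E[X^3] + 6*mu^2*sigma^2 + 3*mu^4
central4 = np.mean((x - mu) ** 4)
raw4 = np.mean(x ** 4) - 4 * mu * np.mean(x ** 3) + 6 * mu ** 2 * sigma2 + 3 * mu ** 4

assert np.isclose(central3, raw3) and np.isclose(central4, raw4)
print("moment expansions verified")
```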
```
import os
import pickle
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
import seaborn as sns
import numpy as np
import pandas as pd
plt.style.use("dark_background")
%matplotlib inline
DATA_PATH = "/nethome/san37/Workspace/semit/data"
UMAP_PATH = "/localscratch/san37/semit/SUIT/UMAP"
```
# Loading Data
```
with open(os.path.join(DATA_PATH, 'mnist_full.pkl'), 'rb') as f:
mnist_full = pickle.load(f)
mnist_x_train = mnist_full["x_train"]
mnist_y_train = mnist_full["y_train"]
with open(os.path.join(DATA_PATH, 'kannada_semi_1pct.pkl'), 'rb') as f:
kannada_semi = pickle.load(f)
kannada_x_train_labeled = kannada_semi["x_train_labeled"]
kannada_y_train_labeled = kannada_semi["y_train_labeled"]
kannada_x_train_unlabeled = kannada_semi["x_train_unlabeled"]
kannada_y_train_unlabeled = kannada_semi["y_train_unlabeled"]
kannada_x_train = np.concatenate((kannada_x_train_labeled, kannada_x_train_unlabeled), axis=0)
kannada_y_train = np.concatenate((kannada_y_train_labeled, kannada_y_train_unlabeled), axis=0)
```
# Loading pre-generated UMAP/tSNE embedding
```
mnist_umap = np.load(os.path.join(UMAP_PATH, 'mnist_tsne.npy'))
kannada_umap = np.load(os.path.join(UMAP_PATH, 'kannada_tsne.npy'))
```
# Visualize Embedding
```
fig, ax = plt.subplots(figsize=(12, 12))
cm = plt.cm.get_cmap('tab20')
for i in range(10):
indexing = (mnist_y_train == i)
vis_points = mnist_umap[indexing]
ax.scatter(vis_points[:, 0], vis_points[:, 1], s=5, alpha=0.3, c=[cm.colors[i]], label='M{}'.format(i))
mean_loc = np.mean(vis_points, axis=0)
mean_class_img = np.squeeze(np.mean(mnist_x_train[indexing], axis=0))
offset_img = OffsetImage(mean_class_img, zoom=1, cmap='gray')
ab = AnnotationBbox(offset_img, mean_loc, xycoords='data', frameon=True, pad=0.0)
ax.add_artist(ab)
for i in range(10):
indexing = (kannada_y_train == i)
vis_points = kannada_umap[indexing]
ax.scatter(vis_points[:, 0], vis_points[:, 1], s=5, alpha=0.3, c=[cm.colors[10 + i]], label='K{}'.format(i))
mean_class_img = np.squeeze(np.mean(kannada_x_train[indexing], axis=0))
mean_loc = np.mean(vis_points, axis=0)
offset_img = OffsetImage(mean_class_img, zoom=1, cmap='gray')
ab = AnnotationBbox(offset_img, mean_loc, xycoords='data', frameon=True, pad=0.0)
ax.add_artist(ab)
lgnd = ax.legend(loc="upper left", ncol=2, scatterpoints=1, fontsize=12, title="MNIST (M) / Kannada (K)")
for handle in lgnd.legendHandles:
#handle.set_title('ABD')
handle.set_sizes([40])
handle.set_alpha(1)
ax.set_xlim(left=-47)
ax.axis('off')
plt.tight_layout()
fig.savefig(os.path.join(UMAP_PATH, "tsne_plot.png"), format='png', bbox_inches='tight')
```
Date: 2/09/2018
Version: 1.0
Environment: Python 3.6.1 and Jupyter notebook
Main libraries used for the assignment:
* re (for regular expression, included in Anaconda Python 3.6)
* sys (to display system version, included in Anaconda Python 3.6)
* nltk (for text processing, included in Anaconda Python 3.6)
* pathlib (to set the document directory in order to read files, included in Anaconda Python 3.6)
* nltk.tokenize (for tokenize and mwetokenize process, included in Anaconda Python 3.6)
* os (for changing file directory, included in Anaconda Python 3.6)
* nltk.util (for bigrams, included in Anaconda Python 3.6)
* nltk.probability (for calculating term frequency of tokens, included in Anaconda Python 3.6)
* warnings (to ignore any warnings thrown during execution, included in Anaconda Python 3.6)
* nltk.stem (for stemming of tokens, included in Anaconda Python 3.6)
* pandas (for creating dataframes, included in Anaconda Python 3.6)
* matplotlib (for plotting dataframes, included in Anaconda Python 3.6)
## Introduction:
This analysis parses 219 of the 250 assigned resumes and cleans them by removing stop words, tokens shorter than 3 characters, and the most and least frequent words.
Once the tokens are generated, bigrams and unigrams are merged to form the final vocab list.
Then the count of each token present in each resume is printed to files to form the vector matrix, which further helps with text processing.
## Import libraries
```
# Importing libraries for assessment 1 - Task 2
import re
import sys
import nltk
from pathlib import Path
from nltk.tokenize import RegexpTokenizer
import os
from nltk.util import ngrams
from nltk.probability import *
import warnings
warnings.filterwarnings('ignore')
from nltk.tokenize import MWETokenizer
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
#Printing system version
print (sys.version_info)
```
### 1) Identify the 250 resumes file numbers assigned to me.
```
#Function used to clean the data while reading the files
def clean_data_fun(data):
data=re.sub('[^\s!-~]', '', data) #removes extra unwanted characters
data = re.sub('[%s]' % re.escape("\n\|\/"), '', data) # removes punctuations
data = re.sub('\s+', ' ', data)# removes extra whitespace
return data #returns cleaned data
#This step extracts the required 250 resumes' names assigned to me from the folder.
#From the directory fetch the resumes numbers assigned to me
os.chdir('C:/Users/vikra/Desktop/Python/wrangling/Assignment/A-1/Task2')
with open('student_dataset.txt','r') as input_file:
resume_student_dataset=input_file.read()
input_file.close()
# Once the file is opened, read the file in order to find the resume file numbers
resume_student_dataset=clean_data_fun(resume_student_dataset) #clean the data set if any unwanted characters present
resume_student_dataset=resume_student_dataset.split(" ") #extract individual file string using str.split()
resume_student_dataset=list(set(resume_student_dataset)) #fetch unique file numbers by applying set.
print("Result :Total resumes to be extracted-",len(resume_student_dataset),"out of 250 resumes") #print result
resume_student_dataset.sort() #The final list which contains the 219 file numbers to be extracted.
print("\n",resume_student_dataset)
#Next step: from previous step I got the numbers, now mapping these numbers with the resume file number inside directory
path = "C:/Users/vikra/Desktop/Python/wrangling/Assignment/A-1/Task2/resumeTxt"
Files=list(sorted(os.listdir(path)))
print("Total files in the directory:",len(Files))
required_text_files=[] #To store only the required files. This will store 219 files' names out of 866
pattern =re.compile('(?<=resume_\()[0-9]+(?=\).txt)') #using regex, matching the files
for each in Files:
for j in (pattern.findall(each)):
if j in resume_student_dataset:
required_text_files.append(each)
print("\nThe required file names are:",(required_text_files))
```
--------------
### 2) Read 219 resumes' data from the directory. Additionally, sentence segmentation and case normalization is done.
```
# Case normalization function, this function will lower case the first word of every sentence.
def normalize_fun(data):
new_list=''
new_list=''.join(re.sub(r'(^\s?\w+)',lambda m: m.group().lower(),data)) # lower case the first word of sentence
return new_list
# Read all 219 files and store in a dictionary format.
os.chdir('C:/Users/vikra/Desktop/Python/wrangling/Assignment/A-1/Task2/resumeTxt')
#This method opens and reads the file
def read_file_contents(file_name):
    with open(file_name,'r',encoding='UTF-8') as f:
        data = f.read()
    return data  # the 'with' block closes the file automatically
# This step performs sentence segmentation and case normalization
def perform_seg_norm(data):
new_str=''
sent_tokenize_list = sent_tokenize(data)
sent_tokenize_list=[normalize_fun(each) for each in sent_tokenize_list]
new_str = ''.join(str(e) for e in sent_tokenize_list)
return new_str
#Read all 219 texts, clean and store them
all_text=[read_file_contents(name) for name in required_text_files] #Read all 219 data into one list
all_text=[clean_data_fun(each) for each in all_text] #remove unwanted characters for each file
# Applying sentence segmentation + case normalization, the result is final 219 data stored in a list
final_data=[]
final_data+=[perform_seg_norm(each) for each in all_text]
final_all_text=''.join(each for each in final_data) #Convert whole 219 data into one string. This is required for further steps
#Storing all resumes and respective data into a dictionary format
resume_dict = dict(zip(resume_student_dataset, final_data))
#resume_dict
print("'final_data' contains normalised 219 resumes' data in a list format")
print("'final_all_text' is a string, which contains all 219 data in one file")
```
-----------
### 3) Perform tokenisation.
```
#Below function tokenises based on the given regular expression.
def f_tokenise(data):
tokenizer = RegexpTokenizer(r"\w+(?:[-']\w+)?") # Regular expression used for tokenization
tokens = tokenizer.tokenize(data)
return tokens
tokenised_data=f_tokenise(final_all_text)
print("Total tokens present after reading all 219 resumes:",len(tokenised_data))
```
------------
### 4) Removal of stop words from tokens
```
#Fetching stop_words from the directory
os.chdir('C://Users//vikra//Desktop//Python//wrangling//Assignment//A-1//Task2')
with open('stopwords_en.txt','r') as input_file:
stop_words=input_file.read()
input_file.close()
stop_words=stop_words.split()
stop_words=set(stop_words) #this list contains the stop words given for the assessment.
#Filter stop words from tokens
stopped_stopwords=[w for w in tokenised_data if w not in stop_words]
print("Tokens filtered from stop words:",len(stopped_stopwords))
```
-----------------
### 5) Filtering tokens with less than 3 characters.
```
#Filter out short tokens
#A function which keeps only tokens longer than 3 characters
def filter_word(data):
    filter_list=[]
    for each in data:
        if len(each)>3:
            filter_list.append(each)
    return filter_list
#A function which collects tokens shorter than 3 characters
def find_3charword(data):
    filter_list=[]
    for each in data:
        if len(each)<3:
            filter_list.append(each)
    return filter_list
filtered_tokens=filter_word(stopped_stopwords) # calling the function to remove less than 3 characters
tokens_3char=find_3charword(stopped_stopwords)
print("Tokens less than 3 character removed and new filtered tokens=",len(filtered_tokens))
```
-----------------------------
### 6) Filtering context-dependent (with the threshold set to %98) and rare tokens (< than 2%)
#### This section finds the 98% and 2% tokens and removes them. Process is divided into 3 steps.
---------------------------------------------------------
#### 6-1: Function(count_tokens_sen) accepts the tokens and all the resume files data and returns the count of each tokens presence against each file.
```
def count_tokens_sen(word_set,phrase_set):
word_set=list(set(word_set))
matches=[]
for sen in phrase_set:
words=sen.split()
words=list(set(words))
matches+=[x for x in word_set if x in words]
counts={}
for each in matches:
if each in counts:counts[each] += 1
else:counts[each] = 1
return counts
tokens_count=count_tokens_sen(filtered_tokens,final_data) # stores the count of presence of each token in all files
# filtered_tokens contains the previous filtered tokens
# final_data here is the list which contains 219 resumes' data
```
#### Output of 6-1: for each token, the number of the 219 resumes in which it appears is calculated
---------------------------------------------------------------------------------
#### 6-2: This section finds the 98% and 2% tokens
```
# Defining a function which calculates the percentage of resumes in which each token is present
def filter_contextdep(data):
tokens_to_be_filtered=[]
for key,value in data.items():
value=round((value/219)*100,2)
if value >= 98 or value <= 2:
#print(key,value,"%")
tokens_to_be_filtered.append(key)
return tokens_to_be_filtered
tokens_to_be_filtered_contextdep=filter_contextdep(tokens_count)
```
#### Output of 6-2: calculates the tokens which are context dependent
--------------------------------------------------------------------
#### 6-3: Filter tokens which are context dependent
```
filt_tokens=[w for w in filtered_tokens if w not in tokens_to_be_filtered_contextdep]
#-------------------------------------------------------------------------------
print("Output of step-3: tokens occurring in more than 98% or fewer than 2% of resumes are removed.")
#-------------------------------------------------------------------------------
print("\nTotal words to be removed which occur in more than 98% or less than 2% of resumes =",len(tokens_to_be_filtered_contextdep))
print("\nFiltered tokens after removing context-dependent tokens =",len(filt_tokens))
```
--------------------------------------
### 7) Stemming. Since the stemmer is applied only to lower-case tokens, the process is to filter out the lower-case tokens and stem them
#### 7-1: Finding all Uppercase and lowercase tokens
```
uppercase_pattern=re.compile(r'^[A-Z].*\b') # A Regex pattern to identify uppercase tokens
lowercase_pattern=re.compile(r'^[a-z].*\b') # A Regex pattern to identify lowercase tokens
uppercase_list=[]
lowercase_list=[]
for each in filt_tokens:
uppercase_list+=uppercase_pattern.findall(each)
for each in filt_tokens:
lowercase_list+=lowercase_pattern.findall(each)
print("Total upper case tokens:",len(uppercase_list))
print("Total lower case tokens:",len(lowercase_list))
```
#### 7-2: Stemming using the Porter stemmer for lower case tokens
```
stemmer = PorterStemmer()
#print(['{0} -> {1}'.format(w, stemmer.stem(w)) for w in lowercase_list])
lowercase_tokens = [stemmer.stem(word) for word in lowercase_list]
print("Stemmed lower-case tokens:",len(lowercase_tokens))
```
#### 7-3: Combine both uppercase and stemmed lowercase to form the unigram
```
#combining lowercase and uppercase
final_unigram_tokens=list(lowercase_tokens+uppercase_list)
print("Total unigrams:",len(final_unigram_tokens))
#final_unigram_tokens
```
----------------------------------
### 8) Finding Bigrams
#### From previous step(7-3), top 200 bigrams are calculated from final uni-grams
```
bigram_measures = nltk.collocations.BigramAssocMeasures()
bigram_finder = nltk.collocations.BigramCollocationFinder.from_words(final_unigram_tokens)
bigram_finder.apply_freq_filter(1)
bigram_finder.apply_word_filter(lambda w: len(w) < 3)# or w.lower() in ignored_words)
top_200_bigrams = bigram_finder.nbest(bigram_measures.pmi, 200) # Top-200 bigrams
#top_200_bigrams
```
------------------------
### 9) Re-tokenization using MWETokenizer.
#### 9-1) Combine unigrams and bi-grams and make one vocab
```
mwe_tokenizer = MWETokenizer(top_200_bigrams)
mwe_tokens = mwe_tokenizer.tokenize(final_unigram_tokens)
mwe_tokens.sort()
mwe_tokens=set(mwe_tokens)
len(mwe_tokens)
#mwe_tokens contains the final tokens(unigrams and bi-grams)
```
#### 9-2) Convert mwe tokens to a formatted list
```
#This step is required to search for the bi-grams in the resume data set.
#Convert mwe tokens to a formatted list
new_list=[]
for each in mwe_tokens:
if re.match(".*\w+_\w+",each):
new_list.append(re.sub(r'_',' ',each))
else:
new_list.append(each)
```
#### 9-3) From the filtered tokens, checking for any context-dependent words and filtering them.
```
#From the filtered tokens, checking for any context-dependent words and filtering them.
tokens_tobefilt=count_tokens_sen(new_list,final_data) #from the tokens finding the count of presence in each resume file.
tokens_tobefilt=filter_contextdep(tokens_tobefilt) #finding the 98% and 2% tokens.
final_vocab=[w for w in new_list if w not in tokens_tobefilt] # cleaned tokens
final_vocab=[w for w in final_vocab if w not in stop_words]#removing any stopwords if present
final_vocab=filter_word(final_vocab) # removing any remaining short tokens
final_vocab.sort()
print("Final vocab:",len(final_vocab))
```
#### 9-4) Creating a final vocab index dictionary, which is output to file
```
# storing final vocab with dictionary with index
final_vocab_dict=dict(enumerate(final_vocab))
#print(final_vocab_dict)
```
--------------------------------------
### 10) Clean individual resume files and find the count of each token in each resume file
#### 10-1) This step is required to clean individual resume files, in order to capture the count of tokens in each file
```
#The purpose of this step is to clean individual resume from stopwords, context dependent, context independent
#and less than 3 characters length
#Update the stop words with tokens with context dependent and tokens with less than 3 characters.
stop_words.update(tokens_to_be_filtered_contextdep+tokens_tobefilt+tokens_3char)
#Function to remove stop words
def final_clean(data):
new_list=[]
new_list+=[w for w in data if w not in stop_words]
return new_list
#Access each resume and clean them.
final_cleaned_data=[]
for each in final_data:
new_str=''
token=f_tokenise(each)
new_str=final_clean(token)
final_cleaned_data.append(' '.join(new_str))
print("'final_cleaned_data' contains a list of cleaned individual resumes")
```
#### 10-2) Find the count of each tokens in each resume files
```
#Next find the count of each tokens in each resume files
#This function tokenises and calculates the count of each token appearing in the resume file
def word_count(data):
counts = dict()
words = f_tokenise(data)
for word in words:
if word in counts:
counts[word] += 1
else:
counts[word] = 1
return counts
count_data_tokens=[]
for each_sen in final_cleaned_data:
count_data_tokens.append(word_count(each_sen))
print("'count_data_tokens' contains count of each token for each resume file")
```
#### 10-3) Now map the count of each token with the index of the final_vocab, which is the desired output
Here, 'resume_final_dict' contains the output in the format token_index:count
```
final_dict=[]
for each_tokenised_file in count_data_tokens:
new_dict = dict((k, each_tokenised_file.get(v)) for k, v in final_vocab_dict.items())
new_dict={k:v for k,v in new_dict.items() if v is not None}
final_dict.append(new_dict)
resume_final_dict = dict(zip(resume_student_dataset, final_dict))
```
-----------------
### 11) Writing final vocab to a file
#### 11-1) Printing final vocab to a text file
```
import json
with open('29389690_vocab.txt','w') as output_file:
output_file.write("Vocab of Unigrams & Bigrams:\n")
output_file.write(json.dumps(final_vocab_dict))
output_file.close()
```
#### 11-2) Printing count vector to a text file
```
with open('29389690_countVec.txt','w') as output_file:
output_file.write("Count Vector:\n\n")
for k, v in resume_final_dict.items():
output_file.write('resume_'+str(k) + ','+ str(v).replace("{","").replace("}", "") + '\n\n')
output_file.close()
```
### 12) References
10-2) word_count referred from this link https://www.w3resource.com/python-exercises/string/python-data-type-string-exercise-12.php
### 13) Summary
#### Logic used for assessment:
Section 1) Identify the 250 resumes file numbers assigned to me.
##### Result: Out of 250, 219 were unique file numbers.
--------------
Section 2) Then out of total resumes(867) available, read only the 219 resumes' data from the directory. Additionally, sentence segmentation and case normalization is done.
##### Result: Out of 250, 219 were read, all sentences were segmented and case normalization was performed.
----------
Section 3) Perform tokenisation.
##### Result: Total tokens present after reading all 219 resumes: 137702
--------
Section 4) Tokens filtering from stop words.
##### Result: Tokens filtered from stop words: 105116
-------------
Section 5) Filtering tokens with less than 3 characters.
##### Result: Tokens less than 3 character removed and new filtered tokens= 91722
-----------
Section 6) Filtering context-dependent (with the threshold set to %98) and rare tokens (less than 2%).
##### Result: Filtered tokens after removing context dependent= 71962
-------------
Section 7) Stemming process only for lower-case tokens.
##### Result: After stemming, combining of uppercase and lowercase tokens, we get total tokens: 69002.
-------------
Section 8) Finding Bi-grams.
##### Result: Top 200 bi-grams is found.
-----------
Section 9) Re-tokenization using MWETokenizer.
##### Result: Using mwe tokeniser, bi-grams and unigrams are mixed and final vocab: 3296 is found.
--------------
Section 10) Calculating the term frequency and creating a vector for desired output.
##### Result: For each resume, count of each token is calculated.
---------------
Section 11) Writing output to files.
##### Result: Result is printed in two files
--------
#### The wrangling process illustrated above shows how the text, starting with the statistics below, was reduced to a much sparser representation while preserving its main features.
######################### Text Statistics Before Wrangling ##################################
Total number of vocabs: 137702
######################### Final Text Statistics After Wrangling ##################################
Total number of vocabs: 3296
### Creating Dataframe and plotting the values
```
df= pd.DataFrame({"count":[137702,105116,91722,71962,69002,3296]},
index=['Total tokens present after reading all 219 resumes','Tokens filtered from stop words','Tokens less than 3 character','Tokens removed context dependent','Count of tokens post stemming','Final vocab count post mwe tokenise'])
df.plot.bar()
plt.xlabel('Wrangling stage')
plt.ylabel("Frequency")
plt.title("Token reduction post each wrangling stage")
plt.show()
```
# Linear Algebra with Python and NumPy
```
# First, we need to import the package NumPy, which is the library enabling all the fun with algebraic structures.
from numpy import *
```
## Complex Numbers
A complex number is a number of the form $z = x + jy$, where $x$ and $y$ are real numbers and $j$ is the **_imaginary unit_**, satisfying $j^2 = −1$. Note that the imaginary unit, often denoted as $i$, is denoted as $j$ in Python.
The set $\mathbb{C}$ of all complex numbers can be actually defined as the set of ordered pairs of real numbers $\{(x,y) \mid x,y\in\mathbb{R} \}$ that satisfies the following operations
<img src="https://betterexplained.com/wp-content/uploads/complex/complex_conjugates.png" style="float:right"/>
- *addition:* $(a,b)+(c,d) = (a+c,b+d)$
- *multiplication:* $(a,b)\cdot(c,d) = (ac-bd,ad+bc)$
Then, it is just a matter of notation to express a complex number as $(x, y)$ or as $x + jy$.
When we have a complex number $z\in\mathbb{C}$, we can denote its real and imaginary part as
$$ x = \Re(z), \quad y = \Im(z). $$
The **_complex conjugate_** of the complex number $z = x + jy$ is denoted by either $\bar{z}$ or $z^*$ and defined as
$$\bar{z} = x − jy .$$
The **_absolute value_** (or modulus or magnitude) of a complex number $z = x + jy$ is
$$ | z | = \sqrt{x^2+y^2} = \sqrt{z \bar{z}} .$$
```
z = 3 + 4j # Define complex number z
print('z =', z)
print('Re(z) =', real(z)) # Get real part of z
print('Im(z) =', imag(z)) # Get imaginary part of z
print('|z| =', abs(z)) # Get absolute value of z
```
Note that to obtain $j=\sqrt{-1}$ we must pass the argument of the `sqrt` function as a complex number (even if it has zero imaginary part); otherwise NumPy computes the square root over the reals and returns `nan` (with a warning) instead of a complex result.
```
z = sqrt(-1+0j)
print('sqrt(-1) =', z)
```
## Vectors and Matrices
Using NumPy we can define vectors and matrices with either real or complex elements. In contrast to Matlab, where matrix is the default type, in Python we need to define vectors and matrices as `array` or `matrix` types from the NumPy package.
<img src="http://www.math.cornell.edu/~mec/Winter2009/RalucaRemus/Lecture1/Images/matrix.gif"/>
```
a = array([10,20,30]) # Define a vector of size 3 using type 'array'
print(a)
print(a.shape) # Size/shape of vector
b = matrix('10 20 30') # Define a vector of size 3 using type 'matrix'
print(b)
print(b.shape) # Size/shape of vector
c = linspace(10,20,6) # Define vector as 6 values evenly spaced from 10 to 20
print(c)
```
Note that matrix and array elements in Python are indexed from 0, in contrast to Matlab where indexing starts from 1.
```
print(c[:]) # Get all elements
print(c[0]) # The first element
print(c[-1]) # The last element
print(c[:3]) # The first 3 elements
print(c[-3:]) # The last 3 elements
print(c[2:4]) # 2:4 selects elements of indexes 2 and 3
```
**_Euclidean norm_** of vector is returned by method `numpy.linalg.norm`
```
norm = linalg.norm(a) # Euclidean norm of vector a
print('a =', a)
print('norm(a) =', norm)
x = a/linalg.norm(a) # Make normalized/unit vector from a
print('x =', x)
print('norm(x) =', linalg.norm(x))
```
**_Transposition_** of vectors is not so intuitive as in Matlab, especially if a vector is defined as 1D `array` and you cannot distinguish between row and column vector. However, using the keyword `newaxis` it's possible to shape the vector into 2D array (as matrix of size $1 \times n$ or $n \times 1$), where transposition makes sense and can be obtained by attribute `.T`.
```
x = a[:,newaxis] # Make column vector from vector a (defined as array)
print(x)
print(x.shape) # Now size of column vector is 3x1
print(x.T) # Make row vector by transposition of column vector
```
If a vector was defined as a 2D array of type `matrix`, transposition is not a problem.
```
x = b.T # Make column vector from vector b (defined as matrix)
print(x)
print(x.shape) # Now size of column vector is 3x1
print(x.T) # Make row vector by transposition of column vector
```
**_Matrices_** can be defined as 2D arrays of type `array` or `matrix` (there is no problem with transposition with any type).
```
A = array([[11,12,13], [21,22,23], [31,32,33]]) # Define matrix of size 3x3 as 2D 'array-type'
print(A)
print(A.shape)
B = matrix('11 12 13; 21 22 23; 31 32 33') # Define matrix of size 3x3 as 'matrix-type'
print(B)
print(B.shape)
print(B[0,1]) # Get matrix element at row 0, column 1
print(B[0,:]) # Get 1st row of matrix (A[0] returns also 1st row)
print(B[:,0]) # Get 1st column of matrix
print(A[:,0]) # Note that column from 'array-type' matrix is returned as 1D array
print(B[:,0]) # Column from 'matrix-type' matrix is returned as true column as expected
```
NumPy can generate some essential matrices exactly like Matlab.
```
print('3x3 Matrix full of zeros:')
print(zeros([3,3]))
print('\n3x3 Matrix full of ones:')
print(ones([3,3]))
print('\n3x3 identity matrix:')
print(eye(3))
print('\n3x3 diagonal matrix:')
x = array([1.,2.,3.])
print(diag(x))
print('\n3x3 random matrix:')
print(random.rand(3,3))
```
For merging matrices or vectors methods `numpy.hstack` and `numpy.vstack` can be used.
```
print(vstack([ A, ones([1,3]) ])) # Add row vector to matrix
print(hstack([ A, ones([3,1]) ])) # Add column vector to matrix
print(hstack([ A, eye(3) ])) # Merge two matrices horizontally
```
## Operations with Matrices
**_Matrix transposition_** is obtained by attribute `.T`
```
X = ones([2,5]) # Generate 2x5 matrix full of ones
Y = X.T # Obtain transpose of matrix X
print('Matrix X of size', X.shape, ':\n', X)
print('\nMatrix Y=X.T of size', Y.shape, ':\n', Y)
```
**_Hermitian transpose_** (or conjugate transpose) of complex matrix $\mathbf{A}\in\mathbb{C}^{m\times n}$ is obtained by taking the transpose of $\mathbf{A}$ and then taking the complex conjugate of each element. Note that for real matrices Hermitian transpose and plain transpose does not differ. In NumPy this kind of transposition is obtained by attribute `.H` (exists only for matrix type).
```
X = matrix((3+4j)*ones([2,5])) # Generate matrix full of complex elements 3+4j
Y = X.H # Obtain Hermitian transpose of matrix X
print('Matrix X of size', X.shape, ':\n', X)
print('\nMatrix Y=X.H of size', Y.shape, ':\n', Y)
```
**_Matrix multiplication_** must be performed with the dot-product method `numpy.dot` (or, since Python 3.5, the `@` operator). The `*` operator produces only element-wise multiplication in Python.
```
print('Matrix A:')
print(A)
print('\nMatrix B:')
B = ones([3,3])
print(B)
print('\nElement-wise multiplication A*B:')
print(A*B)
print('\nMatrix multiplication A by B:')
print(dot(A,B))
print('\nMatrix multiplication B by A:')
print(dot(B,A))
```
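As a side note, since Python 3.5 (PEP 465) the `@` operator offers a more readable spelling of the same matrix product; a minimal sketch, restating the matrices from the cell above:

```python
import numpy as np

A = np.array([[11, 12, 13], [21, 22, 23], [31, 32, 33]])
B = np.ones([3, 3])

# For 2D arrays, A @ B is equivalent to numpy.dot(A, B)
assert np.array_equal(A @ B, np.dot(A, B))
print(A @ B)
```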
There are also methods for essential matrix features like **_Frobenius norm_**, **_rank_** or **_determinant_**.
```
print('Matrix A of size', A.shape, ':\n', A)
# Frobenius norm of matrix
print('\nFrobenius norm: ||A|| =', linalg.norm(A))
# Rank of matrix
print('rank(A) =', linalg.matrix_rank(A))
# Determinant of matrix
print('det(A) =', linalg.det(A))
```
In the example above, note that the matrix $\mathbf{A}$ is singular: its rank is lower than the number of its rows, and thus its determinant is zero.
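For contrast, a quick sketch (the perturbation is chosen only for illustration) showing that changing a single entry of $\mathbf{A}$ restores full rank and a nonzero determinant:

```python
import numpy as np

A = np.array([[11., 12., 13.], [21., 22., 23.], [31., 32., 33.]])
assert np.linalg.matrix_rank(A) == 2       # singular: rank < number of rows

A2 = A.copy()
A2[2, 2] += 1.0                            # perturb one entry
print('rank(A2) =', np.linalg.matrix_rank(A2))   # full rank: 3
print('det(A2) =', np.linalg.det(A2))            # nonzero (close to -10)
```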
## Conclusion
As we can see from this article, Python and the NumPy package can be used to perform all the usual matrix manipulations. There are only a few annoying things one needs to keep in mind when writing Python code. For example, the operator `*` applied to matrices doesn't produce the matrix product, but only element-wise multiplication. As for vectors, many methods return them as 1D `array`s, so we need to convert them into a 2D `array` or `matrix` type first to be able to distinguish between row and column vectors.
### References:
- [Complex numbers](https://en.wikipedia.org/wiki/Complex_number)
- [Vectors](https://en.wikipedia.org/wiki/Coordinate_vector)
- [Matrix][1]
- [Hermitian transpose](https://en.wikipedia.org/wiki/Conjugate_transpose)
- [Linear algebra](https://en.wikipedia.org/wiki/Linear_algebra)
- [Vector space](https://en.wikipedia.org/wiki/Vector_space)
- [NumPy documentation](http://docs.scipy.org/doc/numpy/)
- [NumPy for Matlab users](https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html)
- [Matplotlib documentation](http://matplotlib.org/)
[1]:https://en.wikipedia.org/wiki/Matrix_(mathematics)
# Two Market Makers - via Pontryagin
This notebook corresponds to section 4 (**Agent based models**) of "Market Based Mechanisms for Incentivising Exchange Liquidity Provision" available [here](https://vega.xyz/papers/liquidity.pdf). It models two market makers and solves the resulting game by an iterative scheme based on the Pontryagin optimality principle.
```
import math, sys
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from os import path
count = 0
from matplotlib.backends.backend_pdf import PdfPages
T = 0.4;
sigma0 = 3
sigma1 = 0.5
lambd = 0.1
r = 0.0
rRisk0 = 0.3
rRisk1 = 0.1
delta_a = 1e-4
fee_scaling = 0.1
# This is key: how does instantaneous trading volume react
# to market-making stake
# and to fees. You could specify different beliefs for the two different agents.
def fee_volume_response(f):
f = np.maximum(f, np.zeros(np.size(f)))
f = np.minimum(f, np.ones(np.size(f)))
return 1.0/(f+0.01) - f
def stake_volume_response(S):
return 1.0 / (1+np.exp(-0.05*S+2)) - 1.0 / (1+np.exp(2))
# Check that the shape below is concave (i.e. there is a single maximum); we need
# this if we want the optimization procedure to converge
x_span = np.linspace(0,1, 1000)
y = fee_scaling * fee_volume_response(x_span) * x_span
print('Max %f' % max(y))
max_idx=np.argmax(y)
plt.xlabel('fee in %')
plt.ylabel('volume in %')
plt.title('Fee response times fee')
plt.plot(x_span,y)
# Check that the shape below is concave (i.e. there is a single maximum); we need
# this if we want the optimization procedure to converge.
# Of course you may be lucky and things will work even in the case when it's not exactly concave...
x_span = np.linspace(0,200, 200)
y = stake_volume_response(x_span)
plt.xlabel('stake')
plt.ylabel('volume in %')
plt.title('Stake response')
plt.plot(x_span,y)
# As things are set up at the moment, the agents only differ in their belief about
# the maximum trading volume they'd expect to see
def trading_volume0(f,S):
N_max = 10000
return N_max * fee_volume_response(f) * stake_volume_response(S)
def trading_volume1(f,S):
N_max = 50000
return N_max * fee_volume_response(f) * stake_volume_response(S)
def running_gain0(t,f,S0,S1,a0):
frac = S0/(S0+S1)
stake = S0+S1
return np.exp(-r*t) * (frac * fee_scaling * f * trading_volume0(f,stake) - max(lambd * sigma0 * S0,0)) - max(np.exp(rRisk0*t)*S0, 0) \
- delta_a * a0*a0
def running_gain1(t,f,S0,S1,a1):
frac = S1/(S0+S1)
stake = S0+S1
return np.exp(-r*t) * (frac * fee_scaling * f * trading_volume1(f,stake) - max(lambd * sigma1 * S1,0)) - max(np.exp(rRisk1*t)*S1, 0) \
- delta_a * a1*a1
def running_gain_x_0(t,x,S_1, a0):
f = x[0]
S_0 = x[1]
return running_gain0(t,f,S_0,S_1, a0)
def running_gain_x_1(t,x,S_0, a1):
f = x[0]
S_1 = x[1]
return running_gain1(t,f,S_0,S_1, a1)
# Below we define the gradients (using finite difference)
# of the running gain specified above - this is just a technicality
# used in the subsequent optimization.
def grad_x_of_running_gain_0(t,x,S1,a):
delta = 1e-8
grad = np.zeros(2)
#print(x)
x_plus = x + np.array([delta, 0])
x_minus = x - np.array([delta, 0])
rg_plus = running_gain_x_0(t,x_plus,S1,a)
rg_minus = running_gain_x_0(t,x_minus,S1,a)
#print(x_plus)
grad[0] = (rg_plus - rg_minus)/(2*delta)
x_plus = x + np.array([0, delta])
x_minus = x - np.array([0, delta])
rg_plus = running_gain_x_0(t,x_plus,S1,a)
rg_minus = running_gain_x_0(t,x_minus,S1,a)
grad[1] = (rg_plus - rg_minus)/(2*delta)
return grad
def grad_x_of_running_gain_1(t,x,S0,a):
delta = 1e-8
grad = np.zeros(2)
x_plus = x + np.array([delta, 0])
x_minus = x - np.array([delta, 0])
rg_plus = running_gain_x_1(t,x_plus,S0,a)
rg_minus = running_gain_x_1(t,x_minus,S0,a)
grad[0] = (rg_plus - rg_minus)/(2*delta)
x_plus = x + np.array([0, delta])
x_minus = x - np.array([0, delta])
rg_plus = running_gain_x_1(t,x_plus,S0,a)
rg_minus = running_gain_x_1(t,x_minus,S0,a)
grad[1] = (rg_plus - rg_minus)/(2*delta)
return grad
# Initialization
L_S = 150;
L_f = 1;
N_T = 200; delta_t = T / (N_T-1);
N_S = 45;
N_f = 45;
t_span = np.linspace(0, T, N_T)
f_span = np.linspace(0, L_f, N_f)
S_span = np.linspace(0, L_S, N_S)
def grid_idx_from(S,S_span):
min_S = S_span[0]
N_S = np.size(S_span)
max_S = S_span[N_S-1]
delta_S = (max_S-min_S)/(N_S-1)
return max(min(int(round(S/delta_S)), N_S-1),0)
F_vals = np.zeros([np.size(f_span), np.size(S_span)])
f_times_V_vals = np.zeros([np.size(f_span), np.size(S_span)])
grad_F_vals = np.zeros([np.size(f_span), np.size(S_span), 2])
for f_idx in range(0, np.size(f_span)):
for S_idx in range(0, np.size(S_span)):
f = f_span[f_idx]
S = S_span[S_idx]
F_vals[f_idx,S_idx] = running_gain0(T, f, S, 10, 0)
f_times_V_vals[f_idx,S_idx] = f*trading_volume0(f,S)
grad_F_vals[f_idx,S_idx,:] = grad_x_of_running_gain_0(T, np.array([f, S]), 10, 0)
max_idx = np.unravel_index(np.argmax(F_vals, axis=None),F_vals.shape)
print(f_span[max_idx[0]])
print(S_span[max_idx[1]])
plotGridX, plotGridY = np.meshgrid(S_span, f_span)
fig = plt.figure()
ax1 = fig.add_subplot(111, projection='3d')  # fig.gca(projection=...) is deprecated in newer Matplotlib
surf = ax1.plot_surface(plotGridX, plotGridY, f_times_V_vals[:,:], cmap=cm.autumn, antialiased=True)
ax1.set_xlabel('stake')
ax1.set_ylabel('fee')
ax1.set_zlabel('V')
ax1.set_zlim(0, 40000)
ax1.view_init(30, 20)
ax1.set_title('Agent 1')
plt.savefig('response1.pdf')
gamma_f = -0.02
gamma_S = 5
m = 1
def drift_0(a0,a1):
b = np.zeros(2)
b[0] = gamma_f*(a0+a1)
b[1] = gamma_S*a0
return b
def drift_1(a0,a1):
b = np.zeros(2)
b[0] = gamma_f*(a0+a1)
b[1] = gamma_S*a1
return b
def grad_a0_H0(y,a0,a1):
val = gamma_f*y[0] + gamma_S*y[1] - 2*delta_a*a0
return val
def grad_a1_H1(y,a0,a1):
val = gamma_f*y[0] + gamma_S*y[1] - 2*delta_a*a1
return val
# Fix the initial fee and stakes of the two players
fee_init = 0.5 # has to be between 0 and 1
player0_stake = 250
player1_stake = 10
# Learning params:
# A higher value means faster convergence but less stability, i.e.
# if you see nonsense output (explosions, negative fees, etc.) set this lower.
rho = 0.05
# Learning takes a long time; if it says "failed" at the end, it might just mean that it's still updating a bit.
max_iter = 6000
# Stopping criterion: once the updates are smaller than this in the l-infinity norm, stop
max_error = 0.1
# fees are the 0th component, stake is the 1st component
# first player, index 0
actions0 = np.zeros([1,N_T+1])
x_vals0 = np.zeros([2,N_T+1])
x_vals0[:,0] = np.array([fee_init, player0_stake])
y_vals0 = np.zeros([2,N_T+1])
# second player, index 1
actions1 = np.zeros([1,N_T+1])
x_vals1 = np.zeros([2,N_T+1])
x_vals1[:,0] = np.array([fee_init, player1_stake])
y_vals1 = np.zeros([2,N_T+1])
def run_iterative_system(max_iter,max_error):
actions_old0 = np.zeros([1,N_T+1])
actions_old1 = np.zeros([1,N_T+1])
diff = 0; failed_to_converge=True
for iter_idx in range(0,max_iter):
# Run x0, x1 forwards
for i in range(0,N_T):
x_vals0[:,i+1] = x_vals0[:,i] + drift_0(actions0[0,i], actions1[0,i]) * delta_t
# second guy only updates the stake
# but the fee evolution is copied from first
x_vals1[0,i+1] = x_vals0[0,i+1]
x_vals1[1,i+1] = x_vals1[1,i] + drift_1(actions0[0,i], actions1[0,i])[1] * delta_t
# Run y0, y1 backwards
y_vals0[:,N_T] = np.zeros(2)
y_vals1[:,N_T] = np.zeros(2)
for i in reversed(range(0,N_T)):
S0 = x_vals0[1,i]
S1 = x_vals1[1,i]
grad_x_F_0 = grad_x_of_running_gain_0(t_span[i], x_vals0[:,i], S1, actions0[0,i])
grad_x_F_1 = grad_x_of_running_gain_1(t_span[i], x_vals1[:,i], S0, actions1[0,i])
y_vals0[:,i] = y_vals0[:,i+1] + grad_x_F_0 * delta_t
y_vals1[:,i] = y_vals1[:,i+1] + grad_x_F_1 * delta_t
for i in range(0,N_T):
# Do one gradient ascent step (we are maximizing)
actions0[0,i] = actions0[0,i] + rho*grad_a0_H0(y_vals0[:,i],actions0[0,i],actions1[0,i])
actions1[0,i] = actions1[0,i] + rho*grad_a1_H1(y_vals1[:,i],actions0[0,i],actions1[0,i])
diff0 = np.max(np.abs(actions0 - actions_old0))
diff1 = np.max(np.abs(actions1 - actions_old1))
if (diff0 < max_error) and (diff1 < max_error) :
print('Converged; iteration %d, diff0 is %f, diff1 is %f' % (iter_idx, diff0, diff1))
failed_to_converge = False
break
actions_old0 = np.copy(actions0)
actions_old1 = np.copy(actions1)
if failed_to_converge:
print('Failed after %d iterations, diff0 is %f, diff1 is %f' % (max_iter, diff0, diff1))
%timeit -n1 -r1 run_iterative_system(max_iter, max_error)
plt.plot(t_span, 1000 * fee_scaling * x_vals0[0,0:N_T].T,label='f0 in 10 x %')
plt.plot(t_span, 1000 * fee_scaling * x_vals1[0,0:N_T].T,color='green',label='f1 in 10 x %')
plt.xlabel('time')
plt.plot(t_span, x_vals0[1,0:N_T].T,color='red',label='stake 0')
plt.plot(t_span, x_vals1[1,0:N_T].T,color='pink',label='stake 1')
plt.title('State evolution - fees and stake')
plt.xlabel('time')
plt.ylabel('level')
plt.legend()
plt.savefig('state.pdf')
fig = plt.figure()
plt.plot(t_span, actions0[0,0:N_T].T,label='a - 0')
plt.plot(t_span, actions1[0,0:N_T].T, color='green',label='a - 1')
plt.title('Actions evolution')
plt.xlabel('time')
plt.ylabel('actions fees')
plt.xlabel('time')
plt.ylabel('level')
plt.legend()
plt.savefig('actions.pdf')
print('Minimum fee %.2f%%. Final fee %.2f%%.' % (fee_scaling * 100*min(x_vals1[0,0:N_T]),fee_scaling * 100*x_vals1[0,N_T-1]))
print('Minimum stake %.0f. Maximum stake %.0f. Final stake %.0f.' % (min(x_vals0[1,0:N_T]+x_vals1[1,0:N_T]),max(x_vals0[1,0:N_T]+x_vals1[1,0:N_T]),x_vals0[1,N_T-1]+x_vals1[1,N_T-1]))
# Adjoint process plot: this is a 'dummy' process used in the optimization
# and you can ignore it if all goes well
fig = plt.figure()
plt.plot(t_span, 0.1*y_vals0[0,0:N_T].T, label='adj. fees 0')
plt.plot(t_span, 0.1*y_vals1[0,0:N_T].T, color='green', label='adj. fees 1')
plt.xlabel('time')
plt.plot(t_span, y_vals0[1,0:N_T].T, color = 'red', label='adj. stake 0')
plt.plot(t_span, y_vals1[1,0:N_T].T, color = 'pink', label='adj. stake 1')
plt.title('Adjoint evolution - fees and stake')
plt.xlabel('time')
plt.legend()
```
```
import sys
import os
sys.path.insert(0, os.path.abspath('../src/'))
```
# Plotting
```
from pathlib import Path
import SimplePreprocessor as sp
DATASETPATH = Path("../dataset/")
pr = sp.SimplePreprocessor(deltas=True, discretize=False, flevel="MAGIK")
netdata = pr.load_path(DATASETPATH)
netdata["_date"] = netdata.index.get_level_values("_time").strftime('%a %d %b %y')
import numpy as np
import ipywidgets as widgets
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from ipywidgets import HBox, VBox, interactive, Layout
devices_idxs = netdata.index.droplevel(2).unique()
devices = [f"{host} ({cat})" for cat, host in devices_idxs]
devices.sort()
available_channels = [c for c in netdata.columns if (("time" not in c) and (c[0] != "_"))]
available_channels.sort()
available_days = np.unique(netdata["_date"])
# ----- ----- WIDGETS ----- ----- #
# ----- ----- ------- ----- ----- #
device_w_list = widgets.Dropdown(options=devices)
days_w_list = widgets.Dropdown(options=available_days)
selectedc_w_list = widgets.SelectMultiple(options=available_channels,
description='Channel',
layout=Layout(width='400px'))
timerange_slider = widgets.FloatSlider(min=.005, max=1., step=.005)
smoothing_slider = widgets.FloatSlider(min=0, max=79, step=4,
description="Smoothing (aggregate x minutes)")
offset_slider = widgets.FloatSlider(min=.0, max=1., step=.01)
ts_selector = HBox([device_w_list, days_w_list])
col_selector = HBox([selectedc_w_list])
ts_shifting = HBox([timerange_slider, offset_slider])
wlist = VBox([ts_selector, col_selector, ts_shifting, smoothing_slider])
# ----- ----- PLOTTER ----- ----- #
# ----- ----- ------- ----- ----- #
def mprint(s):
display(Markdown(s))
def randcolors(n):
hexl = list('0123456789ABCDEF')
hexc = np.random.choice(hexl, size=(n, 6))
return ['#' + ''.join(x) for x in hexc]
def remove_empty(data):
empty_cols = [ c for c in data.columns if (data[c]==0).all() ]
for c in empty_cols:
mprint(f"**<span style='color: red'>Empty series:</span> {c}**")
return data.drop(empty_cols, axis=1)
def datetime2xaxis(dtseries, smoothing):
if len(dtseries) <= 50:
return "%a - %H:%M:%S"
elif len(dtseries) <= 100:
return "%a - %H:%M"
else:
return "%a - %H"
def describe_mtimeseries(plotname, data, smoothing=1):
# Data description ..... #
mprint(f"### {plotname}")
start = min(data.index)
end = max(data.index)
mprint(f"**Time range**: {start} **/** {end}")
mprint(f"**Total data range:** {end-start}")
mprint(f"**Samples shown**: {len(data)}")
mprint(f"**Smoothing**: {int(smoothing / 4)} minutes")
if len(data) <= 50:
xaxis_format = "%a - %H:%M:%S"
elif len(data) <= 100:
xaxis_format = "%a - %H:%M"
else:
xaxis_format = "%a - %H"
# Plotting clean data ..... #
empty_cols = []
legend = []
data = remove_empty(data)
# Smoothing ..... #
channels = data.drop(["_isanomaly"], axis=1).columns
data[channels] = data[channels].rolling(smoothing, center=True).sum() / smoothing
data = data.dropna()
anomaly_mask = (data["_isanomaly"] != "none")
for idx, c in enumerate(channels):
legend.append(c)
fig, ax = plt.subplots(figsize=(12, 6))
ax.format_xdata = mdates.DateFormatter(xaxis_format)
ax.plot(data.index, data[c])
fig.autofmt_xdate()
if anomaly_mask.any():
attack_data = data[anomaly_mask]
for anomalyname, anomalydata in attack_data.groupby("_isanomaly"):
legend.append(anomalyname)
anomalydata = anomalydata.drop("_isanomaly", axis=1)
ax.plot(anomalydata.index, anomalydata.values)
fig.autofmt_xdate()
fig.suptitle(f"{c}", fontweight="bold")
plt.legend(legend)
plt.show()
# ----- ----- INTERACTOR ----- ----- #
# ----- ----- ---------- ----- ----- #
def whandler(device, day, channel, timerange, offset, smoothing):
split = device.split(" ")
host = split[0].strip()
category = " ".join(split[1:]).replace("(", "").replace(")", "").strip()
data = netdata[netdata["_date"]==day]
chs = set(channel)
chs.add("_isanomaly")
chs = list(chs)
data = data.loc[category, host][chs]
# Filtering time range
full_length = len(data)
start_idx = int(full_length * offset)
end_idx = min(start_idx + int(full_length * timerange), full_length)
data = data.iloc[start_idx:end_idx]
describe_mtimeseries(device, data, int(smoothing+1))
%matplotlib inline
output = widgets.interactive(whandler,
device=device_w_list, day=days_w_list,
channel=selectedc_w_list,
timerange=timerange_slider,
offset=offset_slider,
smoothing=smoothing_slider).children[-1]
display(wlist)
display(output)
```
<a href="https://colab.research.google.com/github/cedeerwe/brutalna-akademia/blob/master/notebooks/zaverecny_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Instructions
The test consists of 7 problems, worth 50 points in total. You have 3 hours for the test, which you must keep track of yourself. Start your timer when you begin reading the statement of the first problem.
Each problem lists in its title the number of points you can earn for a complete solution. Partial solutions will also be scored. Where the problem requires it, the solution should include an explanation of why your solution is indeed a solution.
Solve the problems directly in a copy of this colab. When finished, send us your colab by email. Please do not communicate about the test with anyone in any way until you are invited to (once everyone has submitted it).
Good luck!
# Problems
## Darts (6 points)
### Problem statement
You are about to take part in a darts competition ([reference](https://en.wikipedia.org/wiki/Darts#/media/File:Dartboard_diagram.svg)). You no longer have time to improve, but you can at least think about your strategy, or throw a few practice darts.
Given your abilities, where should you aim in order to maximize your score?
### Solution
## Poker (5 points)
### Problem statement
You have signed up for a competition in mathematical poker. The rules are as follows:
1. the game is played by two players,
1. both players put €1 into the pot at the start,
1. each player draws a uniformly random number between 0 and 1, representing the strength of their cards,
1. the starting player is chosen at random,
1. the starting player may: a) *fold* and lose, b) *raise* and increase the bet by €1, c) *check* and let the second player act,
1. the second player may: a) *fold* and lose, b) *call* and match the first player if they raised, c) *check* and continue the game if the first player did not raise,
1. if neither player folded, the numbers are compared and the winner takes all the bets
What should your optimal strategy be in this game?
How would your strategy change if the first player could raise by an arbitrary amount, not just €1?
### Solution
## Random playlist (10 points)
### Problem statement
You own a music streaming service and have many customers who have many favorite songs. They have requested a feature that lets them play their songs as a "random shuffle", i.e. in random order.
Each song in your catalog has several properties, namely:
- artist - represented as "a", "b", "c", ...
- genre - represented as "A", "B", "C", ...
- song number - represented as 1, 2, 3, ...
All of this is represented as a triple:
```
priklad = ("a", "F", 9)
```
You have received a list of a certain user's 100 favorite songs. Generate from it a sequence of 10,000 songs drawn from these 100 songs, in the order in which you would play them one after another if the user had their "random shuffle" running for a really long time.
You will be graded on the customer's satisfaction after listening to all 10,000 songs. The customer expects that when they turn on "random shuffle", the songs will come in a reasonably random order and they will not hear too many similar songs in a row.
```
zoznam_pesniciek = [
('f', 'D', 0),
('j', 'C', 1),
('h', 'B', 2),
('e', 'D', 3),
('c', 'A', 4),
('a', 'C', 5),
('j', 'B', 6),
('i', 'D', 7),
('a', 'C', 8),
('d', 'B', 9),
('i', 'C', 10),
('i', 'D', 11),
('g', 'D', 12),
('f', 'B', 13),
('b', 'C', 14),
('b', 'D', 15),
('g', 'A', 16),
('c', 'A', 17),
('j', 'C', 18),
('h', 'A', 19),
('f', 'B', 20),
('e', 'C', 21),
('c', 'E', 22),
('i', 'B', 23),
('b', 'A', 24),
('g', 'D', 25),
('b', 'D', 26),
('b', 'A', 27),
('i', 'C', 28),
('g', 'E', 29),
('c', 'C', 30),
('a', 'D', 31),
('g', 'B', 32),
('d', 'B', 33),
('g', 'B', 34),
('f', 'A', 35),
('g', 'C', 36),
('a', 'B', 37),
('f', 'D', 38),
('i', 'A', 39),
('g', 'C', 40),
('d', 'D', 41),
('d', 'A', 42),
('e', 'A', 43),
('g', 'E', 44),
('d', 'D', 45),
('b', 'A', 46),
('e', 'E', 47),
('f', 'B', 48),
('i', 'A', 49),
('e', 'D', 50),
('c', 'A', 51),
('i', 'E', 52),
('j', 'E', 53),
('d', 'A', 54),
('d', 'C', 55),
('e', 'C', 56),
('a', 'C', 57),
('h', 'C', 58),
('i', 'E', 59),
('h', 'B', 60),
('e', 'C', 61),
('a', 'A', 62),
('f', 'A', 63),
('d', 'A', 64),
('f', 'D', 65),
('d', 'A', 66),
('a', 'E', 67),
('e', 'E', 68),
('d', 'E', 69),
('b', 'B', 70),
('i', 'A', 71),
('j', 'D', 72),
('h', 'B', 73),
('c', 'E', 74),
('i', 'D', 75),
('j', 'B', 76),
('e', 'C', 77),
('e', 'B', 78),
('g', 'A', 79),
('d', 'E', 80),
('i', 'E', 81),
('b', 'A', 82),
('d', 'E', 83),
('b', 'C', 84),
('c', 'B', 85),
('j', 'D', 86),
('a', 'E', 87),
('h', 'E', 88),
('i', 'C', 89),
('c', 'A', 90),
('i', 'C', 91),
('e', 'D', 92),
('a', 'E', 93),
('g', 'A', 94),
('b', 'B', 95),
('h', 'D', 96),
('a', 'A', 97),
('d', 'E', 98),
('i', 'B', 99)
]
```
### Solution
## Tetrahedron (6 points)
### Problem statement
You are about to take part in a real tournament in designing playing tetrahedra (four-sided dice), entered by all participants who submit a solution to this problem. The number of points for this problem will depend on your actual placement.
Playing tetrahedra differ from playing dice in that they have only 4 faces and the pips across all faces sum to only 6. Moreover, these pips may be distributed arbitrarily over the faces.
Playing tetrahedra are compared by computing who has the higher chance of rolling the higher number - that player wins and gets 2 points. In case of equality, it is a draw worth 1 point for each.
As an example, if we compared the tetrahedron [6, 0, 0, 0] with the tetrahedron [3, 2, 1, 0], the first tetrahedron wins with probability $\tfrac{1}{4}$, the second tetrahedron wins with probability $\tfrac{9}{16}$, and in the remaining $\tfrac{3}{16}$ it is a draw. The second player would thus earn 2 points and the first 0 points.
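The probabilities in this example can be checked by a brute-force enumeration of all 4 × 4 equally likely face pairs (a short sketch assuming fair tetrahedra):

```
from itertools import product

die_a = [6, 0, 0, 0]
die_b = [3, 2, 1, 0]

wins_a = wins_b = ties = 0
for a, b in product(die_a, die_b):  # 16 equally likely outcomes
    if a > b:
        wins_a += 1
    elif b > a:
        wins_b += 1
    else:
        ties += 1

print(wins_a, wins_b, ties)  # 4, 9, 3 -> probabilities 4/16, 9/16, 3/16
```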
The tournament has two different disciplines:
1. Design a single tetrahedron that will enter the tournament and be compared against the others. *Example solution: [3, 2, 1, 0].*
2. Design a probability distribution over all tetrahedra so that, on average, you earn as many points as possible when compared against the other players' probability distributions. *Example solution: 50% [3,2,1,0], 30% [3,3,0,0], 20% [6, 0, 0, 0].*
### Solution
## Internet (5 points)
### Problem statement
A certain person has asked you the question "How does the internet work?". Your task is to explain it to this person in one sentence, plus one link to a page from which they can learn more if they want to.
The person in question is one of the following:
1. a first-grader at school
1. your peer from the academy
1. a computer science teacher at a university
1. Bill Gates
1. a grandmother on the street
Answer this problem for **each** of the options above.
### Solution
## Weather forecaster (9 points)
### Problem statement
In this problem your goal is to forecast certain values for a specific day - May 1st. For each of these values we ask for
- a point estimate,
- an 80% confidence interval.
The values we are interested in:
- the number of coronavirus tests performed in Slovakia
- the number of Facebook posts by Prime Minister Igor Matovič https://www.facebook.com/igor.matovic.7
- the maximum daily temperature in the village of Bukovina according to [SHMÚ](http://www.shmu.sk/sk/?page=1)
- the minimum price per barrel of oil according to https://markets.businessinsider.com/commodities/oil-price?type=wti
- the number of visits to the page https://en.wikipedia.org/wiki/Education
- the number of 250 g packs of butter in the freezer of your lecturer - Dominik Csiba - counting only those with 82% fat content
### Solution
## But really? (9 points)
### Problem statement
Answer the following questions:
1. at most how many times per day could we celebrate the new year?
1. which island did Christopher Columbus name *holy glory*?
1. what is the fastest time in which a human has pushed an orange one kilometer with their nose?
1. what is the longest bone in the body of the author of the book *Life without limits*?
1. which kitchen project on Kickstarter exceeded its funding goal more than 5000-fold?
### Solution
# Data types & Structures
### A great advantage of `Python` is the types of data it can handle & combine
Python has been widely used to handle internet-related operations, which means lots and lots of text and numbers, combined!
***
## Let's start with the basic types!
### Like other programming languages, `Python` data types include integers, floats, complex numbers, strings & booleans
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>In the next cell, assign a float value to <b>lat</b> and execute the cell
</div>
```
lat = 20
print(lat)
```
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>Assign an integer value to <b>y</b> and execute the cell
</div>
```
y =
print(y, lat*y)
```
### Complex values are identified by a `j` at the end
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>Assign a complex value to <b>z</b> and execute the cell
</div>
```
z =
print(z, type(z))
```
### Variables can be reassigned to other types anytime
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>Execute the next cell
</div>
```
lat = 'Latitude'
print(lat)
step = True
print(step)
```
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>Define your own string and boolean variables and print their type in the next cell
</div>
## One of the best types: datetime!
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Execute the code in the next cell to define two datetime variables: <b>today1</b> & <b>today2</b>
</div>
```
from datetime import date # import the date class from the datetime module
today1 = date.today()
from datetime import datetime # import the datetime class from the datetime module
today2 = datetime.now()
```
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br> - Now print both variables
<br>
- Try printing <b>today1.month</b>
<br>
- Try printing the following <b>today2.strftime('%c')</b>
<br>
- Now print the type of one of your date variables
</div>
<br>Note the use of <b>.</b> after a variable. This accesses an attribute or a method of the variable or, as Python refers to it, the object.
We will use other functions of the datetime package later, and you could find more details about the attributes of the datetime object (variable): https://www.guru99.com/date-time-and-datetime-classes-in-python.html
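For example (a small sketch using a fixed date so that the output is reproducible):

```
from datetime import datetime

# Build a fixed datetime so the output doesn't depend on when you run it
dt = datetime(2024, 5, 1, 14, 30, 0)

print(dt.year, dt.month, dt.day)   # attribute access: 2024 5 1
print(dt.strftime('%Y-%m-%d'))     # 2024-05-01
print(dt.strftime('%H:%M'))        # 14:30
print(dt.strftime('%a %d %b %y'))  # e.g. Wed 01 May 24 (depends on locale)
```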
***
## Python has a some basic data collections, we will talk about three of them:
List - ordered, changeable, allows duplicates
Tuple - ordered, unchangeable, allows duplicates
Dictionary - unordered, changeable, no duplicates allowed
***
## Lists: ordered, changeable, allows duplicates
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Execute the code below and print the list
<br>
- Print the type of <b>mylist</b>
</div>
```
mylist=['temperature', 'wind', 'salinity'] # note the use of [ ]
```
### To access an element of a list we use the indices, that start at `0`
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Try printing: <b>mylist[0]</b>
<br>
- Now try reassigning the value of the second value to <b>'current velocity'</b>
</div>
### To add an element to the list use the method append
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Try <b>mylist.append('wind speed')</b>
<br>
- Then execute the next cell to print the entire list with a for loop
</div>
```
for myvar in mylist:
print(myvar+" has been recorded")
```
### Copying a list (or any other object) needs to be done explicitly; otherwise you just create a new name for the same variable
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Try these two codes:
<br>
<b> yourlist1 = mylist.copy()</b>
<br>
<b> yourlist2 = mylist</b>
<br>
- Then modify <b>yourlist1</b> and print it along with <b>mylist</b>
<br>
- Now modify <b>yourlist2</b> and print it along with <b>mylist</b>
</div>
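A minimal illustration of the difference (a sketch; <b>yourlist2</b> is just another name for the same object, while <b>.copy()</b> creates an independent list):

```
mylist = ['temperature', 'wind', 'salinity']

yourlist1 = mylist.copy()  # an independent copy
yourlist2 = mylist         # just a second name for the same list

yourlist1.append('pressure')  # does NOT affect mylist
yourlist2.append('humidity')  # DOES affect mylist

print(mylist)     # ['temperature', 'wind', 'salinity', 'humidity']
print(yourlist1)  # ['temperature', 'wind', 'salinity', 'pressure']
```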
***
## Tuples: ordered, unchangeable, allows duplicates
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Execute the code below and print the tuple
<br>
- Print the type of <b>mytuple</b>
<br>
- Print one element of <b>mytuple</b>
</div>
```
mytuple = ('latitude', 'longitude', 'time') # note the use of ( )
```
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br> - Now try reassigning an element of <b>mytuple</b>
</div>
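If you try it, Python raises a <b>TypeError</b>; a small sketch catching it:

```
mytuple = ('latitude', 'longitude', 'time')

try:
    mytuple[0] = 'depth'  # tuples are unchangeable
except TypeError as err:
    msg = str(err)
    print('Reassignment failed:', msg)

# Tuples still support reading and their usual read-only operations
print(mytuple[1], len(mytuple))  # longitude 3
```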
***
## Dictionaries: unordered, changeable, no duplicates allowed
### Indexed pair of keys and values
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Execute the code below, and print the dictionary
<br>
- Add a new element to <b>mydict</b> with <b>mydict['units']='C'</b>
<br>
- Print one element of <b>mydict</b>. <i>Hint: the key is the key</i>
</div>
```
mydict = {'instrument': 'temperature sensor', 'measurement':'SST','depth': 5}
```
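A quick sketch of accessing and extending the dictionary above:

```
mydict = {'instrument': 'temperature sensor', 'measurement': 'SST', 'depth': 5}

# Access a value by its key
print(mydict['measurement'])  # SST

# Add a new key/value pair
mydict['units'] = 'C'

# Iterate over key/value pairs
for key, value in mydict.items():
    print(key, '->', value)
```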
Certain libraries have their own specific data structures - arrays, data frames, and datasets. We will see examples of each when we come back to talk about each library.
***
***
# Few words about `Objects`, `Attributes` & `Methods`
## `Python` is an object-oriented programming language. This means almost everything is an object, i.e. an instance of a class. Variables are objects, and therefore they have `attributes` & `methods`
### `Properties` or `Attributes` are accessed with `.attribute` after the object
### `Methods` are functions, & are accessed with `.method(arguments)` after the object
We're not going to teach you how to create classes with properties or methods, but how to access them, because we will use them extensively
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>Execute the code in the next cell to access the attributes and one method of the class <b>date</b>
</div>
```
today = date.today()
print(today)
## Date object attributes
print(today.year, today.month, today.day)
## Date object method 'ctime' - do not need arguments
print(today.ctime())
```
# Masakhane - Machine Translation for African Languages (Using JoeyNMT)
## Note before beginning:
### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus.
### - The tl;dr: Go to the **"TODO"** comments which will tell you what to update to get up and running
### - If you actually want to have a clue what you're doing, read the text and peek at the links
### - With 100 epochs, it should take around 7 hours to run in Google Colab
### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com
### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)
## Retrieve your data & make a parallel corpus
If you want to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and converting them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.
Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe.
```
from google.colab import drive
drive.mount('/content/drive')
# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:
# These will also become the suffix's of all vocab and corpus files used throughout
import os
source_language = "en"
target_language = "nya"
lc = False # If True, lowercase the data.
seed = 42 # Random seed for shuffling.
tag = "baseline" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
os.environ["tag"] = tag
# This will save it to a folder in our gdrive instead!
!mkdir -p "/content/drive/My Drive/masakhane/$src-$tgt-$tag"
g_drive_path = "/content/drive/My Drive/masakhane/%s-%s-%s" % (source_language, target_language, tag)
os.environ["gdrive_path"] = g_drive_path
models_path = '%s/models/%s%s_transformer'% (g_drive_path, source_language, target_language)
# model temporary directory for training
model_temp_dir = "/content/drive/My Drive/masakhane/model-temp"
# model permanent storage on the drive
!mkdir -p "$gdrive_path/models/${src}${tgt}_transformer/"
!echo $gdrive_path
#TODO: Skip for retrain
# Install opus-tools
! pip install opustools-pkg
#TODO: Skip for retrain
# Downloading our corpus
! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q
# extract the corpus file
! gunzip JW300_latest_xml_$src-$tgt.xml.gz
# extract the corpus file
! gunzip JW300_latest_xml_$tgt-$src.xml.gz
#TODO: Skip for retrain
# Download the global test set.
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en
# And the specific test set for this language pair.
os.environ["trg"] = target_language
os.environ["src"] = source_language
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en
! mv test.en-$trg.en test.en
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg
! mv test.en-$trg.$trg test.$trg
#TODO: Skip for retrain
# Read the test data to filter from train and dev splits.
# Store english portion in set for quick filtering checks.
en_test_sents = set()
filter_test_sents = "test.en-any.en"
j = 0
with open(filter_test_sents) as f:
for line in f:
en_test_sents.add(line.strip())
j += 1
print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))
#TODO: Skip for retrain
import pandas as pd
# Parallel text files to dataframe
source_file = 'jw300.' + source_language
target_file = 'jw300.' + target_language
source = []
target = []
skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.
with open(source_file) as f:
for i, line in enumerate(f):
# Skip sentences that are contained in the test set.
if line.strip() not in en_test_sents:
source.append(line.strip())
else:
skip_lines.append(i)
with open(target_file) as f:
for j, line in enumerate(f):
# Only add to corpus if corresponding source was not skipped.
if j not in skip_lines:
target.append(line.strip())
print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i+1))
df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])
# If you get "TypeError: data argument can't be an iterator", it is due to your zip/pandas version; run the line below instead
#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])
df.head(10)
```
## Pre-processing and export
It is generally a good idea to remove duplicate and conflicting translations from the corpus. In practice, public corpora like these contain a fair number of both, which need to be cleaned out.
In addition we will split our data into dev/test/train and export to the filesystem.
```
#TODO: Skip for retrain
# drop duplicate translations
df_pp = df.drop_duplicates()
# drop conflicting translations
# (this is optional and something that you might want to comment out
# depending on the size of your corpus)
df_pp.drop_duplicates(subset='source_sentence', inplace=True)
df_pp.drop_duplicates(subset='target_sentence', inplace=True)
# Shuffle the data to remove bias in dev set selection.
df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)
#TODO: Skip for retrain
# Install fuzzy wuzzy to remove "almost duplicate" sentences in the
# test and training sets.
! pip install fuzzywuzzy
! pip install python-Levenshtein
import time
from fuzzywuzzy import process
import numpy as np
# reset the index of the training set after previous filtering
df_pp.reset_index(drop=False, inplace=True)
# Remove samples from the training data set if they "almost overlap" with the
# samples in the test set.
# Filtering function. Adjust pad to narrow down the candidate matches to
# within a certain length of characters of the given sample.
def fuzzfilter(sample, candidates, pad):
candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad]
if len(candidates) > 0:
return process.extractOne(sample, candidates)[1]
else:
return np.nan
# NOTE - This might run slow depending on the size of your training set. We are
# printing some information to help you track how long it would take.
scores = []
start_time = time.time()
for idx, row in df_pp.iterrows():
scores.append(fuzzfilter(row['source_sentence'], list(en_test_sents), 5))
if idx % 1000 == 0:
hours, rem = divmod(time.time() - start_time, 3600)
minutes, seconds = divmod(rem, 60)
print("{:0>2}:{:0>2}:{:05.2f}".format(int(hours),int(minutes),seconds), "%0.2f percent complete" % (100.0*float(idx)/float(len(df_pp))))
# Filter out "almost overlapping samples"
df_pp['scores'] = scores
df_pp = df_pp[df_pp['scores'] < 95]
#TODO: Skip for retrain
# This section does the split between train/dev for the parallel corpora then saves them as separate files
# We use 1000 sentences for the dev set and the given test set.
import csv
# Do the split between dev/train and create parallel corpora
num_dev_patterns = 1000
# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.
if lc: # Julia: making lowercasing optional
df_pp["source_sentence"] = df_pp["source_sentence"].str.lower()
df_pp["target_sentence"] = df_pp["target_sentence"].str.lower()
# Julia: test sets are already generated
dev = df_pp.tail(num_dev_patterns) # Herman: Error in original
stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)
with open("train."+source_language, "w") as src_file, open("train."+target_language, "w") as trg_file:
for index, row in stripped.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
with open("dev."+source_language, "w") as src_file, open("dev."+target_language, "w") as trg_file:
for index, row in dev.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
#stripped[["source_sentence"]].to_csv("train."+source_language, header=False, index=False) # Herman: Added `header=False` everywhere
#stripped[["target_sentence"]].to_csv("train."+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.
#dev[["source_sentence"]].to_csv("dev."+source_language, header=False, index=False)
#dev[["target_sentence"]].to_csv("dev."+target_language, header=False, index=False)
# Doublecheck the format below. There should be no extra quotation marks or weird characters.
! head train.*
! head dev.*
```
---
## Installation of JoeyNMT
JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io)
```
# Install JoeyNMT
! git clone https://github.com/joeynmt/joeynmt.git
! cd joeynmt; pip3 install .
```
# Preprocessing the Data into Subword BPE Tokens
- One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [ (Sennrich, 2015) ](https://arxiv.org/abs/1508.07909).
- It was also shown that optimizing the number of BPE codes significantly improves results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)
- Below we have the scripts for doing BPE tokenization of our data. We use 4000 tokens as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything. Simply running the below will be suitable.
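For intuition on what BPE tokenization does (the `subword-nmt` commands below handle this for you), here is a minimal toy sketch of applying an ordered list of learned merge operations to a single word; the merge list here is made up for illustration:

```python
def apply_bpe(word, merges, separator="@@"):
    """Greedily apply learned BPE merge operations to one word.

    `merges` is an ordered list of symbol pairs, highest priority first,
    as produced by BPE learning on the training corpus."""
    symbols = list(word)
    for a, b in merges:
        i = 0
        merged = []
        while i < len(symbols):
            # Merge adjacent symbols that match the current pair.
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    # Mark all but the last subword so the split is reversible.
    return [s + separator for s in symbols[:-1]] + [symbols[-1]]

# Frequent character sequences end up merged into larger units,
# while rare words fall apart into smaller pieces.
print(apply_bpe("lowest", [("l", "o"), ("lo", "w"), ("e", "s"), ("es", "t")]))
# ['low@@', 'est']
```

This is why BPE helps agglutinative languages: a rare inflected form can still be composed from frequent subword units.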
```
#TODO: Skip for retrain
# One of the huge boosts in NMT performance was to use a different method of tokenizing.
# Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance
# Do subword NMT
from os import path
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
# Learn BPEs on the training data.
os.environ["data_path"] = path.join("joeynmt", "data", source_language + target_language) # Herman!
! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt
# Apply BPE splits to the development and test data.
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt
# Create directory, move everything we care about to the correct location
! mkdir -p $data_path
! cp train.* $data_path
! cp test.* $data_path
! cp dev.* $data_path
! cp bpe.codes.4000 $data_path
! ls $data_path
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.4000 "$gdrive_path"
! ls "$gdrive_path"
# Create that vocab using build_vocab
! sudo chmod 777 joeynmt/scripts/build_vocab.py
! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path "$gdrive_path/vocab.txt"
# Some output
! echo "BPE Nyanja Sentences"
! tail -n 5 test.bpe.$tgt
! echo "Combined BPE Vocab"
! tail -n 10 "$gdrive_path/vocab.txt" # Herman
```
# Creating the JoeyNMT Config
JoeyNMT requires a YAML config. We provide a template below, with a number of defaults already set that you may play with!
- We used Transformer architecture
- We set our dropout to reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))
Things worth playing with:
- The batch size (also recommended to change for low-resourced languages)
- The number of epochs (we've set it at 30 just so it runs in about an hour, for testing purposes)
- The decoder options (beam_size, alpha)
- Evaluation metrics (BLEU versus ChrF)
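For intuition on the decoder's `alpha` option: it controls length normalisation of beam-search scores. A minimal sketch of one common normalisation scheme (not JoeyNMT's exact implementation):

```python
def length_normalized_score(log_prob, length, alpha=1.0):
    """Length-normalised beam score. Without normalisation (alpha=0),
    beam search favours short hypotheses, since every added token can
    only lower the total log-probability."""
    return log_prob / (length ** alpha)

# Two hypotheses: the longer one has a lower raw log-probability,
# but wins once scores are normalised by length (alpha=1).
short = length_normalized_score(-4.0, 4)   # -1.0 per token
long_ = length_normalized_score(-5.4, 9)   # -0.6 per token
print(long_ > short)  # True
```

Larger `alpha` values push the decoder toward longer outputs; `alpha: 1.0` in the config below is a reasonable default.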
```
def get_last_checkpoint(directory):
last_checkpoint = ''
try:
for filename in os.listdir(directory):
if 'best' in filename and filename.endswith(".ckpt"):
return filename
if not 'best' in filename and filename.endswith(".ckpt"):
if not last_checkpoint or int(filename.split('.')[0]) > int(last_checkpoint.split('.')[0]):
last_checkpoint = filename
except FileNotFoundError as e:
print('Error occurred:', e)
return last_checkpoint
# Copy the created models from the temporary storage to main storage on google drive for persistent storage
# The contents of the folder will be overwritten when you start training
!cp -r "/content/drive/My Drive/masakhane/model-temp/"* "$gdrive_path/models/${src}${tgt}_transformer/"
last_checkpoint = get_last_checkpoint(models_path)
print('Last checkpoint :',last_checkpoint)
# This creates the config file for our JoeyNMT system. It might seem overwhelming so we've provided a couple of useful parameters you'll need to update
# (You can of course play with all the parameters if you'd like!)
name = '%s%s' % (source_language, target_language)
gdrive_path = os.environ["gdrive_path"]
# Create the config
config = """
name: "{name}_transformer"
data:
src: "{source_language}"
trg: "{target_language}"
train: "{gdrive_path}/train.bpe"
dev: "{gdrive_path}/dev.bpe"
test: "{gdrive_path}/test.bpe"
level: "bpe"
lowercase: False
max_sent_length: 100
src_vocab: "{gdrive_path}/vocab.txt"
trg_vocab: "{gdrive_path}/vocab.txt"
testing:
beam_size: 5
alpha: 1.0
training:
#load_model: "{gdrive_path}/models/{name}_transformer/{last_checkpoint}" # TODO: uncomment to load a pre-trained model from the last checkpoint
random_seed: 42
optimizer: "adam"
normalization: "tokens"
adam_betas: [0.9, 0.999]
scheduling: "plateau" # TODO: try switching from plateau to Noam scheduling
patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.
learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)
learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)
decrease_factor: 0.7
loss: "crossentropy"
learning_rate: 0.0003
learning_rate_min: 0.00000001
weight_decay: 0.0
label_smoothing: 0.1
batch_size: 4096
batch_type: "token"
eval_batch_size: 3600
eval_batch_type: "token"
batch_multiplier: 1
early_stopping_metric: "ppl"
epochs: 50 # TODO: Decrease when just playing around and checking that things work. Around 30 is sufficient to check whether it is working at all
validation_freq: 1000 # TODO: Set to at least once per epoch.
logging_freq: 100
eval_metric: "bleu"
model_dir: "{model_temp_dir}"
overwrite: True # TODO: Set to True if you want to overwrite possibly existing models.
shuffle: True
use_cuda: True
max_output_length: 100
print_valid_sents: [0, 1, 2, 3]
keep_last_ckpts: 3
model:
initializer: "xavier"
bias_initializer: "zeros"
init_gain: 1.0
embed_initializer: "xavier"
embed_init_gain: 1.0
tied_embeddings: True
tied_softmax: True
encoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
decoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
""".format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language, model_temp_dir=model_temp_dir, last_checkpoint=last_checkpoint)
with open("joeynmt/configs/transformer_{name}.yaml".format(name=name),'w') as f:
f.write(config)
```
# Train the Model
This single line of joeynmt runs the training using the config we made above
```
# Train the model
# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints!
!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml
# Copy the created models from the temporary storage to main storage on google drive for persistent storage
!cp -r "/content/drive/My Drive/masakhane/model-temp/"* "$gdrive_path/models/${src}${tgt}_transformer/"
# Output our validation accuracy
! cat "$gdrive_path/models/${src}${tgt}_transformer/validations.txt"
# Test our model
! cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${src}${tgt}_transformer/config.yaml"
```
# Evaluate the Performance of MPNN models
Gather all of the models, regardless of how we trained them, and evaluate their performance.
```
%matplotlib inline
from matplotlib import pyplot as plt
from datetime import datetime
from sklearn import metrics
from tqdm import tqdm
from glob import glob
import pandas as pd
import numpy as np
import json
import os
```
## Find the Models and Summarize Them
Each trained model lives in a subdirectory containing its predictions (`test_predictions.csv`) and data on its configuration.
```
models = glob(os.path.join('**', 'test_predictions.csv'), recursive=True)
print(f'Found {len(models)} models')
def generate_summary(path):
"""Generate the summary of a model, given path to its output
Args:
path (str): Path to the model's output predictions file
Returns:
(dict) Model information
"""
# Store the directory first
dir_name = os.path.dirname(path)
output = {'path': dir_name}
# Get the host and run parameters
for f in ['host_info.json', 'run_params.json']:
with open(os.path.join(dir_name, f)) as fp:
output.update(json.load(fp))
# Compute the number of nodes
output['n_nodes'] = output['total_ranks'] // output['ranks_per_node'] \
if 'total_ranks' in output else 1
# Convert the start time to a datetime
output['start_time'] = datetime.fromisoformat(output['start_time'])
# Get the log information
log_file = os.path.join(dir_name, 'log.csv')
log = pd.read_csv(log_file)
output['completed_epochs'] = len(log)
output['val_loss'] = log['val_loss'].min()
output['loss'] = log['loss'].min()
output['epoch_time'] = np.percentile(log['epoch_time'], 50)
output['total_train_time'] = log['epoch_time'].sum()
output['total_node_hours'] = output['total_train_time'] * output['n_nodes']
# Compute performance on hold-out set
results = pd.read_csv(os.path.join(output['path'], 'test_predictions.csv'))
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error', 'median_absolute_error']:
v = getattr(metrics, m)(results['y_true'], results['y_pred'])
output[m] = v
return output
model_info = pd.DataFrame([generate_summary(m) for m in models])
print(f'Found {len(model_info)} models')
```
## Print out Best Performer
We are going to pick the one that has the best performance on the test set
### Coarse Network
See how we did on the "node per water" network
```
model = model_info.query('network_choice=="coarse"').sort_values('mean_absolute_error').iloc[0]
print(f'Model being evaluated: {model["path"]}')
model[['path', 'network_choice', 'activation', 'message_steps', 'dropout', 'features', 'batch_size']]
model[['loss', 'val_loss', 'mean_squared_error']]
```
Plot the logs
```
log = pd.read_csv(os.path.join(model['path'], 'log.csv'))
fig, ax = plt.subplots(figsize=(3.5, 2.5))
ax.semilogy(log['epoch'], log['loss'], label='Train')
ax.semilogy(log['epoch'], log['val_loss'], label='Validation')
ax.legend()
ax.set_xlabel('Epoch')
ax.set_ylabel('Loss')
```
*Finding*: Huge variance in validation loss is indicative of overfitting
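One way to quantify that finding from the training log (column names as in the `log.csv` read above; the numbers here are synthetic stand-ins):

```python
import pandas as pd

# Synthetic stand-in for log.csv: training loss keeps dropping while
# validation loss stagnates and jumps around -- the overfitting pattern.
log = pd.DataFrame({
    'loss':     [1.00, 0.60, 0.40, 0.25, 0.15],
    'val_loss': [1.10, 0.90, 0.95, 0.88, 1.05],
})

# Generalization gap: a growing val/train loss ratio flags overfitting,
# and a large spread in val_loss over the last epochs flags instability.
gap = (log['val_loss'] / log['loss']).iloc[-1]
val_spread = log['val_loss'].tail(3).std()
print(f'final val/train ratio: {gap:.1f}, late val_loss std: {val_spread:.3f}')
```

Dropout, weight decay, or early stopping on `val_loss` are the usual remedies when this ratio keeps growing.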
Plot the performance on the test set
```
results = pd.read_csv(os.path.join(model['path'], 'test_predictions.csv'))
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(results['y_true'], results['y_pred'])
print(f'{m}: {v: .2f}')
```
Plot the true vs predicted
```
fig, ax = plt.subplots()
ax.scatter(results['y_true'], results['y_pred'], s=0.5, alpha=0.2)
ax.plot(ax.get_xlim(), ax.get_ylim(), 'k--')
ax.set_xlabel('$E$, True')
ax.set_ylabel('$E$, ML')
fig.set_size_inches(3.5, 3.5)
```
Plot only the largest cluster size
```
subset = results.query(f'n_waters == {results["n_waters"].max()}')
print(f'Scores for the {len(subset)} largest molecules with {results["n_waters"].max()} waters')
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(subset['y_true'], subset['y_pred'])
print(f'{m}: {v: .2f}')
fig, ax = plt.subplots()
errors = subset['y_pred'] - subset['y_true']
bins = np.linspace(-10, 10, 256)
ax.hist(errors, bins=bins, density=False)
ax.set_xlabel('Error (kcal/mol)')
ax.set_ylabel('Frequency')
fig.set_size_inches(3.5, 2)
fig, ax = plt.subplots(figsize=(3.5, 3.5))
ax.scatter(subset['y_true'], subset['y_pred'], s=0.5, alpha=0.1)
ax.set_ylim(-340, -305)
ax.set_xlim(ax.get_ylim())
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_xlim(), 'k--')
ax.set_xlabel('$E$ (kcal/mol), True')
ax.set_ylabel('$E$ (kcal/mol), ML')
fig.tight_layout()
```
### Atomic Network
See how we did for the "node per atom" network
```
model = model_info.query('network_choice=="atomic"').sort_values('mean_absolute_error').iloc[0]
print(f'Model being evaluated: {model["path"]}')
model[['path', 'network_choice', 'activation', 'message_steps', 'dropout', 'features', 'batch_size']]
model[['loss', 'val_loss', 'mean_squared_error']]
```
Plot the logs
```
log = pd.read_csv(os.path.join(model['path'], 'log.csv'))
fig, ax = plt.subplots()
ax.semilogy(log['epoch'], log['loss'], label='Train')
ax.semilogy(log['epoch'], log['val_loss'], label='Validation')
ax.legend()
ax.set_xlabel('Epoch')
ax.set_ylabel('Loss')
```
*Finding*: Huge variance in validation loss is indicative of overfitting
Plot the performance on the test set
```
results = pd.read_csv(os.path.join(model['path'], 'test_predictions.csv'))
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(results['y_true'], results['y_pred'])
print(f'{m}: {v: .2f}')
```
Plot the true vs predicted
```
fig, ax = plt.subplots(figsize=(3.5, 3.5))
ax.set_title('Performance on hold-out set')
ax.scatter(results['y_true'], results['y_pred'], s=0.5, alpha=0.2)
ax.plot(ax.get_xlim(), ax.get_ylim(), 'k--')
ax.set_xlabel('$E$, True')
ax.set_ylabel('$E$, ML')
fig.set_size_inches(3.5, 3.5)
```
Plot only the largest cluster size
```
subset = results.query(f'n_waters == {results["n_waters"].max()}')
print(f'Scores for the {len(subset)} largest molecules with {results["n_waters"].max()} waters')
for m in ['r2_score', 'mean_squared_error', 'mean_absolute_error']:
v = getattr(metrics, m)(subset['y_true'], subset['y_pred'])
print(f'{m}: {v: .2f}')
fig, ax = plt.subplots()
errors = subset['y_pred'] - subset['y_true']
bins = np.linspace(-10, 10, 256)
ax.hist(errors, bins=bins, density=False)
ax.set_xlabel('Error (kcal/mol)')
ax.set_ylabel('Frequency')
fig.set_size_inches(3.5, 2)
fig, ax = plt.subplots(figsize=(3.5, 3.5))
ax.set_title('Clusters with 30 waters')
ax.scatter(subset['y_true'], subset['y_pred'], s=0.5, alpha=0.1)
ax.set_ylim(-340, -305)
ax.set_xlim(ax.get_ylim())
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_xlim(), 'k--')
ax.set_xlabel('$E$ (kcal/mol), True')
ax.set_ylabel('$E$ (kcal/mol), ML')
fig.tight_layout()
```
Make a publication-ready figure
```
fig, axs = plt.subplots(1, 3, figsize=(6.5, 2.5))
# Predicted vs actual plots
n_waters = results["n_waters"].max()
subset = results.query(f'n_waters == {n_waters}')
for d, ax, title in zip([results, subset], axs,
['Full Dataset', '30-Water Clusters']):
ax.set_title(title)
ax.scatter(d['y_true'], d['y_pred'], s=0.7, alpha=0.2, edgecolor='none')
max_ = max(ax.get_xlim()[1], ax.get_ylim()[1])
min_ = min(ax.get_xlim()[0], ax.get_ylim()[0])
ax.set_xlim([min_, max_])
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_xlim(), 'k--')
ax.set_xlabel('$E$ (kcal/mol), True')
ax.set_ylabel('$E$ (kcal/mol), ML')
mae = metrics.mean_absolute_error(d['y_true'], d['y_pred'])
r2 = metrics.r2_score(d['y_true'], d['y_pred'])
ax.text(0.99, 0, f'MAE: {mae:.2f}\n$R^2$: {r2:.2f}',
ha='right', va='bottom', transform=ax.transAxes,
fontsize=10)
# Box and wisker plot
ax = axs[2]
error_stats = []
for s, subset in results.groupby('n_waters'):
error = np.abs(subset['y_pred'] - subset['y_true']) / s
error_stats.append({'size': s, 'mae': error.mean()})
error_stats = pd.DataFrame(error_stats)
ax.plot(error_stats['size'], error_stats['mae'], '--o', ms=3)
ax.set_xlabel('# Waters')
ax.set_ylabel('MAE (kcal/mol/water)')
# Add figure labels
for ax, l in zip(axs[:2], ['a', 'b']):
ax.text(0.02, 0.9, f'({l})', transform=ax.transAxes)
axs[2].text(0.82, 0.9, '(c)', transform=axs[2].transAxes)
fig.tight_layout()
fig.savefig(os.path.join('figures', 'mpnn-performance.png'), dpi=320)
```
## Make the Box Plot
To match Jenna's
```
results['abs_error_per_water'] = np.abs(results['y_true'] - results['y_pred']) / results['n_waters']
def make_box_plot(df, metric='abs_error_per_water'):
boxplot = df.query('n_waters >= 10 and n_waters <= 30').boxplot(metric, 'n_waters', grid=False, fontsize=20, figsize=(12,6), return_type='both')
plt.ylim(-0.01,0.7)
plt.ylabel('Absolute Error\n(kcal/mol/water)', fontsize=22, fontweight='bold', labelpad=15)
plt.xlabel('Cluster Size', fontsize=22, fontweight='bold', labelpad=15)
plt.xticks(range(1,23,2), ['10','12','14','16','18','20','22','24','26','28','30'])
plt.xlim(0, 22)
plt.suptitle('')
plt.title('')
plt.tight_layout()
plt.savefig('figures/mpnn_boxplot-horz.png',dpi=600)
make_box_plot(results)
```
## Evaluate Hyperparameter Sweeps
We did some manual hyperparameter tuning for the atomic model
### Batch Sizes
Evaluate different batch sizes to get a tradeoff between accuracy and using the full GPU
```
base_query = ('epochs==32 and shuffle_buffer_size==2097152 and activation=="sigmoid" '
'and message_steps==4 and network_choice=="atomic" and dropout==0 and features==64')
model_info.query(base_query).sort_values('val_loss')[['batch_size', 'loss', 'val_loss', 'mean_squared_error', 'epoch_time']]
```
*Finding*: We get decent accuracy with a batch size of 1024 and still use 90% of the GPU
### Activation Function
We evaluated different activation functions for the message steps
```
base_query = ('batch_size==1024 and epochs==32 and shuffle_buffer_size==2097152 '
'and message_steps==4 and network_choice=="atomic" and dropout==0 and features==64')
model_info.query(base_query).sort_values('mean_squared_error')[['activation', 'loss', 'val_loss', 'mean_squared_error', 'epoch_time']]
```
*Finding*: We should go with softplus: it is the fastest and the most accurate
### Number of Message Passing Layers
We compared increasing the number of message passing layers
```
base_query = ('hostname=="lambda3" and shuffle_buffer_size==2097152 and batch_size==1024 and activation=="softplus" and epochs==32 '
'and network_choice=="atomic"')
model_info.query(base_query).sort_values('message_steps')[['network_choice', 'message_steps',
'loss', 'val_loss', 'mean_squared_error',
'epoch_time']]
fig, ax = plt.subplots()
for label, subset in model_info.query(base_query).sort_values('message_steps').groupby('network_choice'):
ax.plot(subset['message_steps'], subset['mean_absolute_error'], '-o', label=label)
ax.set_xscale('log', base=2)
ax.set_xlabel('Message Steps')
ax.set_ylabel('Mean Absolute Error')
ax.legend()
```
*Finding*: We need many message passing layers, which can get expensive
# Multi-center analysis
### Imports
```
import sys
sys.path.append('../')
from PAINTeR import connectivity # in-house lib used for the RPN-signature
from PAINTeR import plot # in-house lib used for the RPN-signature
from PAINTeR import model # in-house lib used for the RPN-signature
import numpy as np # hi old friend
import pandas as pd
from sklearn.preprocessing import StandardScaler
from nilearn.connectome import ConnectivityMeasure
from matplotlib.colors import ListedColormap
from matplotlib.colors import Normalize
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
from sklearn.linear_model import ElasticNet, Ridge
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.model_selection import LeaveOneOut, KFold, GroupKFold, LeavePGroupsOut
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error, r2_score, explained_variance_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import cross_validate
```
### Processing parameters
```
thres_mean_FD = 0.15 # mm
scrub_threshold = 0.15 # mm
thres_perc_scrub = 30 # % scrubbed out
```
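These thresholds are meant to be applied per subject; a minimal sketch of the intended exclusion logic (the `mean_FD` / `perc_scrubbed` column names and the values are illustrative assumptions):

```python
import pandas as pd

thres_mean_FD = 0.15    # mm
thres_perc_scrub = 30   # % scrubbed out

# Hypothetical per-subject motion summary.
qc = pd.DataFrame({
    'subject':       ['s01', 's02', 's03'],
    'mean_FD':       [0.08, 0.21, 0.12],
    'perc_scrubbed': [12.0, 8.0, 45.0],
})

# Keep only subjects below both motion thresholds.
keep = qc[(qc['mean_FD'] <= thres_mean_FD) &
          (qc['perc_scrubbed'] <= thres_perc_scrub)]
print(keep['subject'].tolist())  # ['s01']
```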
### Load all behavioral data
```
# load bochum data
df_bochum = pd.read_csv("../res/bochum_sample_excl.csv")
df_essen = pd.read_csv("../res/essen_sample_excl.csv")
df_szeged = pd.read_csv("../res/szeged_sample_excl.csv")
df_bochum['study']='bochum'
df_essen['study']='essen'
df_szeged['study']='szeged'
df=pd.concat((df_bochum, df_essen, df_szeged), sort=False)
df=df.reset_index()
df.groupby('study').hist('mean_QST_pain_sensitivity', bins=6)
```
### Load standardized scrubbed timeseries
```
timeseries = []
perc_scrubbed = []
for i, f in enumerate(df['ts_file']):
f = '..' + f.split('/..')[1]
f_scrub = f.split('.tsv')[0] + '-scrubbed.tsv'
ts = pd.read_csv(f_scrub).iloc[:,1:] # here we can omit global signal...
fd_file = df["fd_file"].values[i]
fd_file = '..' + fd_file.split('/..')[1]
fd = pd.read_csv(fd_file).values.ravel().tolist()
fd = [0] + fd
perc_scrubbed.append(100 - 100*len(ts)/len(fd)) # percent of frames scrubbed out
timeseries.append(ts.values)
#region names
labels=ts.columns.values
l = pd.read_csv('../data/atlas_relabeled.tsv', sep="\t")
modules=np.insert(l['modules'].values, 0, "GlobSig")
# plot a specific timeseries
sub_idx=10
pd.DataFrame(timeseries[sub_idx], columns=ts.columns.values).loc[:, ['AINS_pd', 'AINS_v', 'PINS_v']].plot()
```
### Calculate connectivity
```
correlation_measure = ConnectivityMeasure(kind='partial correlation', vectorize=True, discard_diagonal=True)
X = correlation_measure.fit_transform(timeseries) # these are the features
mat=correlation_measure.mean_
#mat=mat[1:, 1:] # first row and column are the global signal
mat[range(mat.shape[0]), range(mat.shape[0])] = 0 # zero diag
# 3d plot in browser window
#coords = plotting.find_parcellation_cut_coords("../data/atlas_relabeled.nii.gz")
#view = plotting.view_connectome(mat, coords)
#view.open_in_browser()
plot.plot_matrix(mat, labels, modules)
y = df.mean_QST_pain_sensitivity
sns.distplot(y[df.study=='bochum'], hist=False, rug=True)
sns.distplot(y[df.study=='essen'], hist=False, rug=True)
sns.distplot(y[df.study=='szeged'], hist=False, rug=True)
print(X.shape, len(y))
```
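For reference, with `vectorize=True` and `discard_diagonal=True` the feature count in `X` is the number of unique region pairs, i.e. the lower triangle of the symmetric connectivity matrix:

```python
def n_connectivity_features(n_regions):
    # Lower triangle of an n x n symmetric matrix, diagonal discarded.
    return n_regions * (n_regions - 1) // 2

# e.g. an atlas with 122 regions (illustrative number) yields:
print(n_connectivity_features(122))  # 7381
```

This quadratic growth in features relative to the sample size is why the pipelines below start with feature selection.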
### Group data to get balanced splits in a 30-fold cross-validation
```
plt.figure(figsize=(12, 0.3))
sns.heatmap([df.study.astype("category").cat.codes.values]).set_title('study center')
plt.show()
n_szeged = np.sum(df.study == 'szeged') # size of the smallest study
n_essen = np.sum(df.study == 'essen')
n_bochum = np.sum(df.study == 'bochum')
print(n_bochum, n_essen, n_szeged)
groups=np.zeros(len(df), dtype=int)
g=0
i=0
while i < n_bochum:
groups[i] = g
#groups[i+1] = g
i += 1
g += 1
g=0
i=n_bochum
while i < n_bochum+n_essen:
groups[i] = g
#groups[i+1] = g
i += 1
g += 1
g=0
i=n_bochum+n_essen
while i < len(df):
groups[i] = g
i += 1
g += 1
plt.figure(figsize=(12, 0.3))
sns.heatmap([groups]).set_title('groups')
plt.show()
groups
```
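The three while-loops above simply give each subject a running within-study index; a more compact equivalent construction (a sketch assuming the same bochum/essen/szeged ordering of rows):

```python
import numpy as np

def make_groups(study_sizes):
    """Within-study running index: group k then pools roughly one subject
    per study, so GroupKFold folds stay balanced across centers."""
    return np.concatenate([np.arange(n) for n in study_sizes]).astype(int)

# Toy sizes instead of n_bochum, n_essen, n_szeged:
groups = make_groups([5, 4, 3])
print(groups)  # [0 1 2 3 4 0 1 2 3 0 1 2]
```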
## Model training - non-nested
```
def pipe_scale_fsel_elnet(scaler=preprocessing.RobustScaler(),
fsel=SelectKBest(f_regression),
model=ElasticNet(max_iter=100000),
p_grid = {'fsel__k': [25, 50, 100, 1000, 3000, 'all'],
'model__alpha': [ 0.001, 0.01, 0.1, 1, 10],
'model__l1_ratio': [0.0001, .25, .5, .75, 0.9999]
}):
mymodel = Pipeline(
[('scaler', scaler),
('fsel', fsel),
('model', model)])
return mymodel, p_grid
model, p_grid = pipe_scale_fsel_elnet()
cv = GroupKFold(30)
clf = GridSearchCV(estimator=model, param_grid=p_grid, cv=cv,
scoring="neg_mean_squared_error", verbose=True, return_train_score=False,
n_jobs=-1)
clf.fit(X, y, groups=groups)
print("**** Non-nested analysis ****")
print("** Best hyperparameters: " + str(clf.best_params_))
print("** Score on full data as training set:\t" + str(-mean_squared_error(y_pred=clf.best_estimator_.predict(X), y_true=y)))
print("** Score on mean as model: " + str(-mean_squared_error(np.repeat(y.mean(), len(y)), y)))
print("** Best Non-nested cross-validated score on test:\t" + str(clf.best_score_))
print("XXXXX Explained Variance: " + str(
1 - clf.best_score_ / -mean_squared_error(np.repeat(y.mean(), len(y)), y)))
cv_pred = cross_val_predict(clf.best_estimator_, X, y, cv=cv, groups=groups, n_jobs=-1)
plot.plot_prediction(y, cv_pred, sd=True, covar=[])
#for train_index, test_index in group_kfold.split(X, y, groups):
# #print("TRAIN:", train_index, "TEST:", test_index)
# #print(df.study[train_index].values)
# print('test:', df.study[test_index].values)
```
## Model training - nested
```
def pipe_scale_fsel_elnet(scaler=preprocessing.RobustScaler(),
fsel=SelectKBest(f_regression),
model=ElasticNet(max_iter=100000),
p_grid = {'fsel__k': [25, 2000, 4000, 6000],
'model__alpha': [ 0.001, 0.01, 0.1, 1],
'model__l1_ratio': [0.0001, .25, .5, .75, 0.9999]
}):
mymodel = Pipeline(
[('scaler', scaler),
('fsel', fsel),
('model', model)])
return mymodel, p_grid
model, p_grid = pipe_scale_fsel_elnet()
cv = GroupKFold(30)
clf = GridSearchCV(estimator=model, param_grid=p_grid, cv=cv,
scoring="neg_mean_squared_error", verbose=True, return_train_score=False,
n_jobs=-1)
clf.fit(X, y, groups=groups)
print("**** Non-nested analysis ****")
print("** Best hyperparameters: " + str(clf.best_params_))
print("** Score on full data as training set:\t" + str(-mean_squared_error(y_pred=clf.best_estimator_.predict(X), y_true=y)))
print("** Score on mean as model: " + str(-mean_squared_error(np.repeat(y.mean(), len(y)), y)))
print("** Best Non-nested cross-validated score on test:\t" + str(clf.best_score_))
print("XXXXX Explained Variance: " + str(
1 - clf.best_score_ / -mean_squared_error(np.repeat(y.mean(), len(y)), y)))
cv_pred = cross_val_predict(clf.best_estimator_, X, y, cv=cv, groups=groups, n_jobs=-1)
plot.plot_prediction(y, cv_pred, sd=True, covar=[])
#for train_index, test_index in group_kfold.split(X, y, groups):
# #print("TRAIN:", train_index, "TEST:", test_index)
# #print(df.study[train_index].values)
# print('test:', df.study[test_index].values)
def pipe_scale_fsel_elnet(scaler=preprocessing.RobustScaler(),
fsel=SelectKBest(f_regression),
model=ElasticNet(max_iter=100000),
p_grid = {'fsel__k': [10, 50, 100, 200, 500, 700, 1000, 2000, 3000, 4000, 5000, 'all'], 'model__alpha': [.001, .01, .1, 1, 10], 'model__l1_ratio': [0.001, .1, .3, .5, .7, .9, .999]
#p_grid = {'fsel__k': [1000, 2000, 5000], 'model__alpha': [.001, .005, .01, .05, .1], 'model__l1_ratio': [.999]
}):
mymodel = Pipeline(
[('scaler', scaler),
('fsel', fsel),
('model', model)])
return mymodel, p_grid
model, p_grid = pipe_scale_fsel_elnet()
outer_cv = GroupKFold(30)
inner_cv = GroupKFold(30)
clf = GridSearchCV(estimator=model, param_grid=p_grid, cv=inner_cv,
scoring="neg_mean_squared_error", verbose=True, return_train_score=False,
n_jobs=-1)
all_models = []
best_params = []
predicted = np.zeros(len(y))
nested_scores_train = np.zeros(outer_cv.get_n_splits(X))
nested_scores_test = np.zeros(outer_cv.get_n_splits(X))
print("model\tinner_cv mean score\touter cv score")
i=0
for train, test in outer_cv.split(X, y, groups=groups):
group_train = groups[train]
clf.fit(X[train], y[train], groups=group_train)
print(str(clf.best_params_) + " " + str(clf.best_score_) + " " + str(clf.score(X[test], y[test])))
all_models.append(clf.best_estimator_)
best_params.append(clf.best_params_)
predicted[test] = clf.predict(X[test])
nested_scores_train[i] = clf.best_score_
nested_scores_test[i] = clf.score(X[test], y[test])
i = i+1
print("*** Score on mean as model:\t" + str(-mean_squared_error(np.repeat(y.mean(), len(y)), y)))
print("** Mean score in the inner crossvalidation (inner_cv):\t" + str(nested_scores_train.mean()))
print("** Mean Nested Crossvalidation Score (outer_cv):\t" + str(nested_scores_test.mean()))
print("Explained Variance: " + str( 1- nested_scores_test.mean()/-mean_squared_error(np.repeat(y.mean(), len(y)), y) ))
print("Correlation: " + str(np.corrcoef(y, predicted)[0,1]))
plot.plot_prediction(y, predicted, sd=True, covar=[])
```
## Finalize and save model
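A minimal sketch of this step, assuming joblib is available: refit the best pipeline on the full data, persist it, and reload it for later prediction. The small stand-in pipeline and data below replace the notebook's `clf.best_estimator_`, `X` and `y`, and the filename is illustrative.

```python
# Sketch: finalize by refitting on all data, then save/reload with joblib.
# The toy pipeline and data stand in for clf.best_estimator_, X and y above.
import numpy as np
import joblib
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y = 2.0 * X[:, 0] + 0.1 * rng.randn(50)

best_estimator = Pipeline([('scaler', RobustScaler()),
                           ('model', ElasticNet(alpha=0.01, max_iter=100000))])
best_estimator.fit(X, y)                            # refit on the full dataset
joblib.dump(best_estimator, 'final_elnet.joblib')   # persist to disk

loaded = joblib.load('final_elnet.joblib')          # reload for later prediction
print(np.allclose(loaded.predict(X), best_estimator.predict(X)))
```

A refit on the full data is the usual finalization step once hyperparameters have been fixed by cross-validation.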
## Obtain predictive network and compare to the RPN-signature
(tune-mnist-keras)=
# Using Keras & TensorFlow with Tune
```{image} /images/tf_keras_logo.jpeg
:align: center
:alt: Keras & TensorFlow Logo
:height: 120px
:target: https://www.keras.io
```
```{contents}
:backlinks: none
:local: true
```
## Example
```
import argparse
import os
from filelock import FileLock
from tensorflow.keras.datasets import mnist
import ray
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler
from ray.tune.integration.keras import TuneReportCallback
def train_mnist(config):
# https://github.com/tensorflow/tensorflow/issues/32159
import tensorflow as tf
batch_size = 128
num_classes = 10
epochs = 12
with FileLock(os.path.expanduser("~/.data.lock")):
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential(
[
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(config["hidden"], activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(num_classes, activation="softmax"),
]
)
model.compile(
loss="sparse_categorical_crossentropy",
        optimizer=tf.keras.optimizers.SGD(learning_rate=config["lr"], momentum=config["momentum"]),
metrics=["accuracy"],
)
model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
verbose=0,
validation_data=(x_test, y_test),
callbacks=[TuneReportCallback({"mean_accuracy": "accuracy"})],
)
def tune_mnist(num_training_iterations):
sched = AsyncHyperBandScheduler(
time_attr="training_iteration", max_t=400, grace_period=20
)
analysis = tune.run(
train_mnist,
name="exp",
scheduler=sched,
metric="mean_accuracy",
mode="max",
stop={"mean_accuracy": 0.99, "training_iteration": num_training_iterations},
num_samples=10,
resources_per_trial={"cpu": 2, "gpu": 0},
config={
"threads": 2,
"lr": tune.uniform(0.001, 0.1),
"momentum": tune.uniform(0.1, 0.9),
"hidden": tune.randint(32, 512),
},
)
print("Best hyperparameters found were: ", analysis.best_config)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--smoke-test", action="store_true", help="Finish quickly for testing"
)
parser.add_argument(
"--server-address",
type=str,
default=None,
required=False,
help="The address of server to connect to if using " "Ray Client.",
)
args, _ = parser.parse_known_args()
if args.smoke_test:
ray.init(num_cpus=4)
elif args.server_address:
ray.init(f"ray://{args.server_address}")
tune_mnist(num_training_iterations=5 if args.smoke_test else 300)
```
## More Keras and TensorFlow Examples
- {doc}`/tune/examples/includes/pbt_memnn_example`: Example of training a Memory NN on bAbI with Keras using PBT.
- {doc}`/tune/examples/includes/tf_mnist_example`: Converts the Advanced TF2.0 MNIST example to use Tune
with the Trainable. This uses `tf.function`.
Original code from tensorflow: https://www.tensorflow.org/tutorials/quickstart/advanced
- {doc}`/tune/examples/includes/pbt_tune_cifar10_with_keras`:
A contributed example of tuning a Keras model on CIFAR10 with the PopulationBasedTraining scheduler.
An example showing how different online solvers perform on the hand-written digits dataset.
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
### Version
```
import sklearn
sklearn.__version__
```
### Imports
This tutorial imports [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html#sklearn.model_selection.train_test_split), [SGDClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html#sklearn.linear_model.SGDClassifier), [Perceptron](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html#sklearn.linear_model.Perceptron), [PassiveAggressiveClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PassiveAggressiveClassifier.html#sklearn.linear_model.PassiveAggressiveClassifier) and [LogisticRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression).
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier, Perceptron
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import LogisticRegression
```
### Calculations
```
heldout = [0.95, 0.90, 0.75, 0.50, 0.01]
rounds = 20
digits = datasets.load_digits()
X, y = digits.data, digits.target
classifiers = [
("SGD", SGDClassifier()),
("ASGD", SGDClassifier(average=True)),
("Perceptron", Perceptron()),
("Passive-Aggressive I", PassiveAggressiveClassifier(loss='hinge',
C=1.0)),
("Passive-Aggressive II", PassiveAggressiveClassifier(loss='squared_hinge',
C=1.0)),
("SAG", LogisticRegression(solver='sag', tol=1e-1, C=1.e4 / X.shape[0]))
]
xx = 1. - np.array(heldout)
```
### Plot Results
```
data = []
for name, clf in classifiers:
print("training %s" % name)
rng = np.random.RandomState(42)
yy = []
for i in heldout:
yy_ = []
for r in range(rounds):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=i, random_state=rng)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
yy_.append(1 - np.mean(y_pred == y_test))
yy.append(np.mean(yy_))
trace = go.Scatter(x=xx, y=yy,
mode='lines',
name=name)
data.append(trace)
layout = go.Layout(xaxis=dict(title="Proportion train"),
yaxis=dict(title="Test Error Rate")
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
```
### License
Author: Rob Zinkov <rob@zinkov.com>
License: BSD 3 clause
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Comparing Various Online Solvers.ipynb', 'scikit-learn/plot-sgd-comparison/', 'Comparing Various Online Solvers | plotly',
' ',
title = 'Comparing Various Online Solvers | plotly',
name = 'Comparing Various Online Solvers',
has_thumbnail='true', thumbnail='thumbnail/sgd-comparision.jpg',
language='scikit-learn', page_type='example_index',
display_as='linear_models', order=17,
ipynb= '~Diksha_Gabha/3220')
```
## Text Data Preprocessing
In any machine learning task, cleaning or preprocessing the data is as important as model building, if not more so. And when it comes to unstructured data like text, this process is even more important.
Objective of this notebook is to understand the various text preprocessing steps with code examples.
Some of the common text preprocessing / cleaning steps are:
* Lower casing
* Removal of Punctuations
* Removal of Stopwords
* Removal of Frequent words
* Removal of Rare words
* Stemming
* Lemmatization
* Removal of emojis
* Removal of URLs
So these are the different types of text preprocessing steps we can apply to text data. But we need not do all of them every time; we should choose the preprocessing steps carefully based on our use case, since that also plays an important role.
For example, in a sentiment analysis use case we need not remove the emojis, as they convey important information about the sentiment. Similarly, we need to decide based on our use case.
## Import libraries
```
import numpy as np
import pandas as pd
import re
import nltk
import spacy
import string
pd.options.mode.chained_assignment = None
```
## Read the data
```
pd.set_option('display.max_colwidth', 100)
df = pd.read_csv('../data/text.csv', lineterminator='\n')
df.head()
df.shape
```
## Lower Casing
Lower casing is a common text preprocessing technique. The idea is to convert the input text into the same casing format so that 'text', 'Text' and 'TEXT' are treated the same way.
```
df['text_lower'] = df['text'].str.lower()
df.head()
```
## Removal of Punctuations
Another common text preprocessing technique is to remove punctuation from the text data. This is again a text standardization process that will help to treat 'hurray' and 'hurray!' the same way.
We also need to carefully choose the list of punctuations to exclude depending on the use case. For example, `string.punctuation` in Python contains the following punctuation symbols: ``!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~``
We can add or remove more punctuations as per our need.
```
string.punctuation
# Regex pattern to remove punctuation
# each punctuation character is backslash-escaped so that regex metacharacters such as [ and ] are matched literally
regex_pattern = '[' + ''.join('\\'+c for c in string.punctuation) + ']'
print(regex_pattern)
df['text_wo_punct'] = df['text_lower'].str.replace(regex_pattern, '', regex=True)
df.head()
```
## Removal of stopwords
Stopwords are commonly occurring words in a language like 'the', 'a' and so on. They can usually be removed from the text, as they don't provide valuable information for downstream analysis. In cases like Part-of-Speech tagging, we should not remove them, as they provide very valuable information about the POS.
These stopword lists are already compiled for different languages and we can safely use them. For example, the stopword list for the English language from the nltk package can be seen below.
```
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
STOP_WORDS = stopwords.words('english')
', '.join(STOP_WORDS)
df['text_wo_punct'].iloc[0]
' '.join([word for word in df['text_wo_punct'].iloc[0].split() if word not in STOP_WORDS])
def remove_stop_word(text: str, stopwords: list) -> str:
    """Custom function to remove stopwords
    Args:
        text (str): A string
        stopwords (list): stopwords to remove
    Returns:
        str: the string with the stopwords removed
    """
    return ' '.join([word for word in text.split() if word not in stopwords])
df['text_wo_stop'] = df['text_wo_punct'].apply(lambda text: remove_stop_word(text, STOP_WORDS))
df.head()
```
## Removal of Frequent words
In the previous preprocessing step, we removed the stopwords based on language information. But if we have a domain-specific corpus, we might also have some frequent words that are of little importance to us.
So this step removes the frequent words in the given corpus. If we use something like tf-idf, this is automatically taken care of.
Let us get the most common words and then remove them in the next step
```
from collections import Counter
cnt = Counter()
for text in df['text_wo_stop'].values:
for word in text.split():
cnt[word] += 1
cnt.most_common(10)
FREQ_WORDS = [word for word, _ in cnt.most_common(10)]
FREQ_WORDS
def remove_frequent_word(text: str, freqwords: list) -> str:
"""Custom function to remove frequent words
Args:
text (str): A string
"""
return ' '.join([word for word in text.split() if word not in freqwords])
df['text_wo_stopfreq'] = df['text_wo_stop'].apply(lambda text: remove_frequent_word(text, FREQ_WORDS))
df.head()
```
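As noted above, tf-idf automatically downweights terms that occur in many documents; a minimal sketch with a toy corpus (the corpus here is illustrative):

```python
# Sketch: tf-idf assigns lower idf weight to words appearing in many documents,
# so a separate "frequent word" removal step is often unnecessary.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the cat sat", "the dog ran", "the bird flew"]  # toy corpus
vec = TfidfVectorizer()
vec.fit(corpus)
# map each vocabulary word to its learned inverse-document-frequency weight
idf = {word: vec.idf_[i] for word, i in vec.vocabulary_.items()}
# 'the' occurs in every document, so it receives the smallest idf weight
print(min(idf, key=idf.get))
```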
## Removal of Rare words
This is very similar to the previous preprocessing step, but here we will remove the rare words from the corpus.
```
pd.DataFrame(cnt.most_common(), columns=['word', 'count']).query('count == 1')
n_rare_words = 10
RARE_WORDS = [word for word, _ in cnt.most_common()[:-n_rare_words - 1:-1]]
RARE_WORDS
def remove_rare_word(text: str, rarewords: list) -> str:
"""Custom function to remove rare words
Args:
text (str): A string
"""
return ' '.join([word for word in text.split() if word not in rarewords])
df['text_wo_stopfreqrare'] = df['text_wo_stopfreq'].apply(lambda text: remove_rare_word(text, RARE_WORDS))
df[['text', 'text_wo_stopfreq', 'text_wo_stopfreqrare']].head()
```
## Stemming
Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form (From Wikipedia)
For example, if there are two words in the corpus, walks and walking, then stemming will strip the suffix to make them walk. But say in another example we have two words, console and consoling; the stemmer will remove the suffix and make them consol, which is not a proper English word.
There are several types of stemming algorithms available, and one of the most widely used is the Porter stemmer. We can use the nltk package for it.
```
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
def stem_words(text: str) -> str:
"""Custom function to stem the words
Args:
text (str): A string
"""
return ' '.join([stemmer.stem(word) for word in text.split()])
df['text_stemmed'] = df['text_lower'].apply(stem_words)
df[['text', 'text_lower', 'text_stemmed']].head()
```
We can see that words like probable, unstable, update and website have the e at the end chopped off due to stemming. This is not intended. What can we do about that? We can use lemmatization in such cases.
Also, this Porter stemmer is for the English language. If we are working with other languages, we can use the Snowball stemmer. The supported languages for the Snowball stemmer are:
```
from nltk.stem.snowball import SnowballStemmer
SnowballStemmer.languages
```
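Assuming nltk is installed, a stemmer for another language is used the same way as the English one; the example words below are illustrative:

```python
# Sketch: Snowball stemmers for other languages share the same interface.
from nltk.stem.snowball import SnowballStemmer

english_stemmer = SnowballStemmer('english')
german_stemmer = SnowballStemmer('german')
print(english_stemmer.stem('running'))  # English inflected form
print(german_stemmer.stem('laufen'))    # German inflected form
```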
## Lemmatization
Lemmatization is similar to stemming in reducing inflected words to their word stem, but differs in that it makes sure the root word (also called the lemma) belongs to the language.
As a result, it is generally slower than stemming. So depending on the speed requirement, we can choose either stemming or lemmatization.
Let us use the WordNetLemmatizer in nltk to lemmatize our sentences
```
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
def lemmatize_words(text: str) -> str:
"""Custom function to lemmatize the words
Args:
text (str): A string
"""
return ' '.join([lemmatizer.lemmatize(word) for word in text.split()])
df['text_lemmatized'] = df['text_lower'].apply(lambda text: lemmatize_words(text))
df[['text', 'text_lower', 'text_stemmed', 'text_lemmatized']].head()
```
We can see that the trailing e in the unstable and website are retained when we use lemmatization unlike stemming.
Wait. There is one more thing in lemmatization. Let us try to lemmatize running now.
```
lemmatizer.lemmatize('running')
```
Wow! It returned running as such without converting it to the root form run. This is because the lemmatization process depends on the POS tag to come up with the correct lemma. Now let us lemmatize again by providing the POS tag for the word
```
lemmatizer.lemmatize('running', 'v')
```
## Redo the lemmatization process with POS tag for our dataset.
```
import nltk
nltk.download('averaged_perceptron_tagger')
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
wordnet_map = {
'N': wordnet.NOUN,
'V': wordnet.VERB,
'J': wordnet.ADJ,
'R': wordnet.ADV
}
wordnet_map
def lemmatize_words(text: str) -> str:
"""Custom function to lemmatize the words
Args:
text (str): A string
"""
pos_tagged_text = nltk.pos_tag(text.split())
return ' '.join([lemmatizer.lemmatize(word, wordnet_map.get(pos[0], wordnet.NOUN)) for word, pos in pos_tagged_text])
pos_tagged_text = nltk.pos_tag(df['text'].iloc[0].split())
pos_tagged_text
df['text_lemmatized'] = df['text_lower'].apply(lambda text: lemmatize_words(text))
df[['text', 'text_lower', 'text_stemmed', 'text_lemmatized']].head()
```
## Removal of Emojis
With the growing usage of social media platforms, there has been an explosion in the use of emojis in our day-to-day life as well. We might need to remove these emojis for some of our textual analysis.
Thanks to [this code](https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b), please find below a helper function to remove emojis from our text.
```
# Reference : https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b
def remove_emoji(string: str) -> str:
"""Custom function to remove emojis"""
emoji_pattern = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
u"\U00002702-\U000027B0"
u"\U000024C2-\U0001F251"
"]+", flags=re.UNICODE)
return emoji_pattern.sub(r'', string)
remove_emoji("game is on 🔥🔥")
remove_emoji("Hilarious😂")
df['text_no_emoji'] = df['text'].apply(remove_emoji)
df[['text', 'text_no_emoji']].head()
```
## Removal of URLs
The next preprocessing step is to remove any URLs present in the data. For example, if we are doing a Twitter analysis, there is a good chance that the tweets will have URLs in them, and we might need to remove them for further analysis.
We can use the below code snippet to do that
```
def remove_urls(text: str) -> str:
"""Custom function to remove URLs"""
url_pattern = re.compile(r'https?://\S+|www\.\S+')
return url_pattern.sub(r'', text)
text = "Driverless AI NLP blog post on https://www.h2o.ai/blog/detecting-sarcasm-is-difficult-but-ai-may-have-an-answer/"
remove_urls(text)
df['text_no_url'] = df['text'].apply(remove_urls)
df[['text', 'text_no_url']].head()
```
## Discussion activity:
* What usecases can you think for NLP?
- analysis of speech - news article, transcription of speeches -- topic modelling vs topic classification
* What role does preprocessing play in the application of NLP?
```
from google.colab import drive
drive.mount('/content/drive/')
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.layers import Conv2D, Activation, GlobalAvgPool2D, MaxPooling2D, Dense, Flatten, Dropout
from keras.models import Sequential
file1='/content/drive/MyDrive/archive (1)/Train'
file2= '/content/drive/MyDrive/archive (1)/Test'
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
horizontal_flip = True)
train_set = train_datagen.flow_from_directory(file1,
target_size = (384, 384),
class_mode='categorical',
batch_size = 6)
test_datagen = ImageDataGenerator(horizontal_flip = True)
test_set = test_datagen.flow_from_directory(file2,
target_size = (384, 384),
class_mode='categorical',
batch_size =3)
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Dense(3, activation='relu', input_shape=[384,384,3]))
cnn.add(tf.keras.layers.Conv2D(128, kernel_size=[3,3], padding='valid', activation='relu'))
cnn.add(tf.keras.layers.MaxPooling2D(pool_size=[3,3], strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(64, kernel_size=[2,2],padding='valid', activation='relu' ))
cnn.add(tf.keras.layers.MaxPooling2D(pool_size=[2,2], strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(32, kernel_size=[2,2],padding='valid', activation='relu' ))
cnn.add(tf.keras.layers.MaxPooling2D(pool_size=[2,2], strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(16, kernel_size=[2,2],padding='valid', activation='relu' ))
cnn.add(tf.keras.layers.MaxPooling2D(pool_size=[2,2], strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(8, kernel_size=[2,2],padding='valid', activation='relu' ))
cnn.add(tf.keras.layers.MaxPooling2D(pool_size=[2,2], strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(4, kernel_size=[2,2],padding='valid', activation='relu' ))
cnn.add(tf.keras.layers.MaxPooling2D(pool_size=[2,2], strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(3, activation='softmax'))
cnn.compile(optimizer = 'adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
cnn.summary()
history = cnn.fit(train_set, validation_data =test_set, epochs=5, verbose=2)
import cv2
import matplotlib.pyplot as plt
import numpy as np
def prepare(image):
    IMG_SIZE = 384
    # read the image path passed in, rather than a hardcoded file
    img_array = cv2.imread(image, cv2.IMREAD_COLOR)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)
y=cnn.predict([prepare('/content/audi.jpg')])
print(y)
img = cv2.imread('/content/audi.jpg',0)
plt.imshow(img, interpolation = 'bicubic')
plt.xticks([]), plt.yticks([])
plt.show()
np.argmax(y)
def prepare(image):
    IMG_SIZE = 384
    # read the image path passed in, rather than a hardcoded file
    img_array = cv2.imread(image, cv2.IMREAD_COLOR)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)
x=cnn.predict([prepare('/lamborg.jpg')])
print(x)
img = cv2.imread('/lamborg.jpg',0)
plt.imshow(img, interpolation = 'bicubic')
plt.xticks([]), plt.yticks([])
plt.show()
np.argmax(x)
def prepare(image):
    IMG_SIZE = 384
    # read the image path passed in, rather than a hardcoded file
    img_array = cv2.imread(image, cv2.IMREAD_COLOR)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)
z=cnn.predict([prepare('/content/mercedez.jpg')])
print(z)
img = cv2.imread('/content/mercedez.jpg',0)
plt.imshow(img, interpolation = 'bicubic')
plt.xticks([]), plt.yticks([])
plt.show()
np.argmax(z)
```
```
# Useful for debugging
%load_ext autoreload
%autoreload 2
# Nicer plotting
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
matplotlib.rcParams['figure.figsize'] = (8,4)
```
# Distgen example
Similar to the simple example, but generating particles with Distgen
```
from distgen import Generator
YAML="""
n_particle: 10000
random_type: hammersley
start:
type: cathode
MTE:
value: 414
units: meV
total_charge:
value: 250
units: pC
r_dist:
n_sigma_cutoff: 1.5
sigma_xy:
value: 0.4
units: mm
type: radial_gaussian
t_dist:
type: superposition
dists:
d1:
type: gaussian
avg_t:
units: ps
value: -1
sigma_t:
units: ps
value: 1
d2:
type: gaussian
avg_t:
units: ps
value: 1
sigma_t:
units: ps
value: 1
"""
G = Generator(YAML)
# Tune the two dist separation
G['t_dist:dists:d1:avg_t:value'] = -1
G['t_dist:dists:d2:avg_t:value'] = 1
G.run()
GP = G.particles
GP.plot('t')
GP.plot('pz')
from impact import Impact
import matplotlib.pyplot as plt
import os
ifile = 'templates/lcls_injector/ImpactT.in'
os.path.exists(ifile)
# Make Impact object
I = Impact(ifile, initial_particles = G.particles, verbose=True)
# This will use the initial particles
I.write_initial_particles(update_header=True)
# Change some things
I.header['Nx'] = 16
I.header['Ny'] = 16
I.header['Nz'] = 16
I.header['Dt'] = 5e-13
# Turn Space Charge off
I.header['Bcurr'] = 0
# Other switches
I.timeout = 1000
# Switches for MPI
I.use_mpi=True
I.header['Nprow'] = 1
I.header['Npcol'] = 4
# Change stop location
I.stop = 1.5
#I.ele['stop_1']['s'] = I.ele['OTR2']['s']+.001
I.run()
I.input.keys()
I.output.keys()
I.output['stats'].keys()
I.output['slice_info'].keys()
```
# Particles
```
# Particles are automatically parsed in to openpmd-beamphysics ParticleGroup objects
I.output['particles']
PI = I.output['particles']['initial_particles']
PF = I.output['particles']['final_particles']
# Original particles
GP.plot('t', 'pz')
# Readback of initial particles from Impact-T.
PI.plot('t', 'pz')
# The initial time was shifted to account for this
I.header['Tini']
# Get the final particles, calculate some statistic
P = I.output['particles']['final_particles']
P['mean_energy']
# Show the units
P.units('mean_energy')
P.plot('z', 'pz')
```
# Stats
```
# Impact's own calculated statistics can be retrieved
len(I.stat('norm_emit_x')), I.stat('norm_emit_x')[-1]
# Compare these.
key1 = 'mean_z'
key2 = 'sigma_x'
units1 = str(I.units(key1))
units2 = str(I.units(key2))
plt.xlabel(key1+f' ({units1})')
plt.ylabel(key2+f' ({units2})')
plt.plot(I.stat(key1), I.stat(key2))
plt.scatter(
[I.particles[name][key1] for name in I.particles],
[I.particles[name][key2] for name in I.particles], color='red')
```
# Archive, and restart from the middle
```
afile = I.archive()
I2 = Impact(verbose=False)
I2.load_archive(afile)
# Patch in these particles
I2.initial_particles = I2.particles['YAG02']
# Turn off cathode start
I2.header['Flagimg'] = 0
I2.configure()
# Run again
I2.use_mpi=True
I2.run()
# Compare these.
key1 = 'mean_z'
key2 = 'sigma_x'
units1 = str(I.units(key1))
units2 = str(I.units(key2))
plt.xlabel(key1+f' ({units1})')
plt.ylabel(key2+f' ({units2})')
plt.plot(I.stat(key1), I.stat(key2), color='black', label='original run')
plt.plot(I2.stat(key1), I2.stat(key2), color='red', label='restart run')
plt.scatter(
[I.particles[name][key1] for name in I.particles],
[I.particles[name][key2] for name in I.particles], color='black')
plt.scatter(
[I2.particles[name][key1] for name in I2.particles],
[I2.particles[name][key2] for name in I2.particles], color='red', marker='x')
plt.legend()
# Cleanup
os.remove(afile)
```
<small><i>This notebook was prepared by Marco Guajardo. Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).</i></small>
# Challenge Notebook
## Problem: Implement a binary search tree with insert, delete, different traversals & max/min node values
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
* [Solution Notebook](#Solution-Notebook)
## Constraints
* Is this a binary tree?
* Yes
* Is the root set to None initially?
* Yes
* Do we care if the tree is balanced?
* No
* What do we return for the traversals?
* Return a list of the data in the desired order
* What type of data can the tree hold?
* Assume the tree only takes ints. In a realistic example, we'd use a hash table to convert other types to ints.
## Test Cases
### Insert
* Always start with the root
* If value is less than the root, go to the left child
* if value is more than the root, go to the right child
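The insertion rules above can be sketched as follows; this is an illustrative outline (names like `SketchNode` are made up here), not the challenge solution:

```python
# Illustrative outline of BST insertion: start at the root, go left for
# smaller values, right otherwise, and attach at the first empty spot.
class SketchNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def sketch_insert(root, value):
    if root is None:
        return SketchNode(value)
    if value < root.data:
        root.left = sketch_insert(root.left, value)
    else:
        root.right = sketch_insert(root.right, value)
    return root

root = None
for v in [50, 30, 70]:
    root = sketch_insert(root, v)
print(root.data, root.left.data, root.right.data)
```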
### Delete
* Deleting a node from a binary tree is tricky. Make sure you arrange the tree correctly when deleting a node.
* Here are some basic [instructions](http://www.algolist.net/Data_structures/Binary_search_tree/Removal)
* If the value to delete isn't on the tree return False
### Traversals
* In order traversal - left, center, right
* Pre order traversal - center, left, right
* Post order traversal - left, right, center
* Return a list for all traversals
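The three orderings can be sketched as small recursive helpers (again an illustrative outline with made-up names, not the challenge solution); each returns a list, as the constraints require:

```python
# Illustrative outline of the three traversal orderings.
class SketchNode:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def in_order(node):
    # left, center, right
    if node is None:
        return []
    return in_order(node.left) + [node.data] + in_order(node.right)

def pre_order(node):
    # center, left, right
    if node is None:
        return []
    return [node.data] + pre_order(node.left) + pre_order(node.right)

def post_order(node):
    # left, right, center
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.data]

root = SketchNode(50, SketchNode(30), SketchNode(70))
print(in_order(root), pre_order(root), post_order(root))
```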
### Max & Min
* Find the max node in the binary search tree
* Find the min node in the binary search tree
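Because of the ordering property, the maximum lives at the rightmost node and the minimum at the leftmost node; a small sketch (illustrative names, not the solution):

```python
# Illustrative outline: walk right for the max, left for the min.
class SketchNode:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def max_node(node):
    while node.right is not None:   # keep going right
        node = node.right
    return node.data

def min_node(node):
    while node.left is not None:    # keep going left
        node = node.left
    return node.data

root = SketchNode(50, SketchNode(30, SketchNode(10)), SketchNode(70))
print(max_node(root), min_node(root))
```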
### treeIsEmpty
* check if the tree is empty
## Algorithm
Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/binary_tree_implementation/binary_tree_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
## Code
```
class Node (object):
def __init__ (self, data=None):
#TODO:implement me
pass
def __str__ (self):
#TODO:implement me
pass
class BinaryTree (object):
def __init__ (self):
#TODO:implement me
pass
def insert (self, newData):
#TODO:implement me
pass
def delete (self, key):
#TODO:implement me
pass
def maxNode (self):
#TODO:implement me
pass
def minNode (self):
#TODO:implement me
pass
def printPostOrder (self):
#TODO:implement me
pass
def printPreOrder (self):
#TODO:implement me
pass
def printInOrder (self):
#TODO:implement me
pass
def treeIsEmpty (self):
#TODO: implement me
pass
```
## Unit Test
```
from nose.tools import assert_equal
class TestBinaryTree(object):
def test_insert_traversals (self):
myTree = BinaryTree()
myTree2 = BinaryTree()
for num in [50, 30, 70, 10, 40, 60, 80, 7, 25, 38]:
myTree.insert(num)
[myTree2.insert(num) for num in range (1, 100, 10)]
print("Test: insert checking with in order traversal")
expectVal = [7, 10, 25, 30, 38, 40, 50, 60, 70, 80]
assert_equal(myTree.printInOrder(), expectVal)
expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
assert_equal(myTree2.printInOrder(), expectVal)
print("Test: insert checking with post order traversal")
expectVal = [7, 25, 10, 38, 40, 30, 60, 80, 70, 50]
assert_equal(myTree.printPostOrder(), expectVal)
expectVal = [91, 81, 71, 61, 51, 41, 31, 21, 11, 1]
assert_equal(myTree2.printPostOrder(), expectVal)
print("Test: insert checking with pre order traversal")
expectVal = [50, 30, 10, 7, 25, 40, 38, 70, 60, 80]
assert_equal(myTree.printPreOrder(), expectVal)
expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
assert_equal(myTree2.printPreOrder(), expectVal)
print("Success: test_insert_traversals")
def test_max_min_nodes (self):
myTree = BinaryTree()
myTree.insert(5)
myTree.insert(1)
myTree.insert(21)
print("Test: max node")
assert_equal(myTree.maxNode(), 21)
myTree.insert(32)
assert_equal(myTree.maxNode(), 32)
print("Test: min node")
assert_equal(myTree.minNode(), 1)
print("Test: min node inserting negative number")
myTree.insert(-10)
assert_equal(myTree.minNode(), -10)
print("Success: test_max_min_nodes")
def test_delete (self):
myTree = BinaryTree()
myTree.insert(5)
print("Test: delete")
myTree.delete(5)
assert_equal(myTree.treeIsEmpty(), True)
print("Test: more complex deletions")
[myTree.insert(x) for x in range(1, 5)]
myTree.delete(2)
assert_equal(myTree.root.rightChild.data, 3)
print("Test: delete invalid value")
assert_equal(myTree.delete(100), False)
print("Success: test_delete")
def main():
testing = TestBinaryTree()
testing.test_insert_traversals()
testing.test_max_min_nodes()
testing.test_delete()
if __name__=='__main__':
main()
```
**The unit test above is expected to fail until you solve the challenge.**
## Solution Notebook
Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/binary_tree_implementation/binary_tree_solution.ipynb) for a discussion on algorithms and code solutions.
| github_jupyter |
```
#https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
```
# MNIST Dataset
### http://yann.lecun.com/exdb/mnist/
### The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
```
import matplotlib.pyplot as plt
import h5py #pip install h5py -- https://www.h5py.org/
#load train
f = h5py.File('MNISTdata.hdf5', 'r')
train_x, train_y = f['x_train'][:], f['y_train'][:,0]
f.close()
print("train_x", train_x.shape, train_x.dtype)
#each image is stored as a 784-element numpy.ndarray, i.e. a flattened 28*28 image
type(train_x)
plt.imshow(train_x[0].reshape(28, 28)), train_y[0]
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.utils.data
import torch.optim as optim
import torch.backends.cudnn as cudnn
import numpy as np
import os
import os.path
import argparse
from torch.autograd import Variable
class FNN(nn.Module):#Fully connected Neural Network
"""FNN."""
def __init__(self):
"""FNN Builder."""
super(FNN, self).__init__()
self.fc_layer = nn.Sequential(
nn.Linear(784, 100),#100 is the number of hidden nodes in the hidden layer
nn.ReLU(inplace=True),
nn.Linear(100, 10)
)
#self.layer1 = nn.Linear(784, 100)
#self.layer2 = nn.ReLU(inplace=True)
#self.layer3 = nn.Linear(100, 10)
def forward(self, x):
"""Perform forward."""
x = self.fc_layer(x)
return x
#x = self.layer1(x)
#x = self.layer2(x)
#x = self.layer3(x)
#y = self.fc_layer(x)
#return y
# 784*100 + 100*10 - NN
# 784
def calculate_accuracy(loader, is_gpu):
"""Calculate accuracy.
Args:
loader (torch.utils.data.DataLoader): training / test set loader
is_gpu (bool): whether to run on GPU
Returns:
float: overall accuracy on the loader, in percent
"""
correct = 0
total = 0
for data in loader:
inputs, labels = data
if is_gpu:
inputs = inputs.cuda()
labels = labels.cuda()
inputs, labels = Variable(inputs), Variable(labels)
outputs = net(inputs)#forward pass
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels[:,0].T).sum()
return 100*correct.item()/float(total)
parser = argparse.ArgumentParser()
# hyperparameters settings
parser.add_argument('--lr', type=float, default=0.001, help='learning rate')
parser.add_argument('--wd', type=float, default=5e-4, help='weight decay')#lr/(c+wd)
parser.add_argument('--epochs', type=int, default=50,
help='number of epochs to train')
parser.add_argument('--batch_size_train', type=int,
default=16, help='training set input batch size')
parser.add_argument('--batch_size_test', type=int,
default=16, help='test set input batch size')
parser.add_argument('--is_gpu', type=bool, default=False,
help='whether training using GPU')
import sys
sys.argv=['']
del sys
# parse the arguments
opt = parser.parse_args()
f = h5py.File('MNISTdata.hdf5','r')
x_test_set=np.float32(f['x_test'][:])
y_test_set=np.int32(np.array(f['y_test'][:,0])).reshape(-1,1)
x_train_set=np.float32(f['x_train'][:])
y_train_set=np.int32(np.array(f['y_train'][:,0])).reshape(-1,1)
f.close()
#num_samples = y_train_set.shape[0]
#y_train_set = y_train_set.reshape(1, num_samples)
#y_train_set = np.eye(10)[y_train_set.astype('int32')]
#y_train_set = y_train_set.T.reshape(10, num_samples)
#num_samples = y_test_set.shape[0]
#y_test_set = y_test_set.reshape(1, num_samples)
#y_test_set = np.eye(10)[y_test_set.astype('int32')]
#y_test_set = y_test_set.T.reshape(10, num_samples)
trainset = torch.utils.data.TensorDataset(torch.Tensor(x_train_set), torch.Tensor(y_train_set)) # create your dataset
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=opt.batch_size_train, shuffle=True)
#mini-batch gradient, stochastic gradient descent - 1 sample
testset = torch.utils.data.TensorDataset(torch.Tensor(x_test_set), torch.Tensor(y_test_set)) # create your dataset
testloader = torch.utils.data.DataLoader(
testset, batch_size=opt.batch_size_test, shuffle=False)
type(trainset), type(trainloader)
# create the FNN instance
net = FNN()
# For training on GPU, transfer net and data into the GPU
if opt.is_gpu:
net = net.cuda()
net = torch.nn.DataParallel(net, device_ids=range(torch.cuda.device_count()))
cudnn.benchmark = True
else:
print('Training on CPU')
# Loss function and optimizer
criterion = nn.CrossEntropyLoss()#N dim -> prob (softmax) -> CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=opt.lr, weight_decay=opt.wd)#a variant of SGD
for epoch in range(opt.epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
#if training on GPU, wrap the data into the cuda
if opt.is_gpu:
inputs = inputs.cuda()
labels = labels.cuda()
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)#forward
loss = criterion(outputs, labels[:, 0].long())
loss.backward()#compute gradients
optimizer.step()#descent
# calculate loss
running_loss += loss.data.item()
# Normalizing the loss by the total number of train batches
running_loss /= len(trainloader)
# Calculate training/test set accuracy of the existing model
train_accuracy = calculate_accuracy(trainloader, opt.is_gpu)
test_accuracy = calculate_accuracy(testloader, opt.is_gpu)
print("Iteration: {0} | Loss: {1} | Training accuracy: {2}% | Test accuracy: {3}%".format(
epoch+1, running_loss, train_accuracy, test_accuracy))
loss, loss.requires_grad
outputs
labels[:, 0].long()
```
# Without Pytorch
```
import h5py
import numpy as np
import argparse
def sigmoid(x):
"""
sigmoid activation function
"""
return np.exp(x)/(1.0+np.exp(x))
def RELU(x):
return np.maximum(x,0)
def reluDerivative(x):
return np.array([reluDerivativeSingleElement(xi) for xi in x])
def reluDerivativeSingleElement(xi):
if xi > 0:
return 1
elif xi <= 0:
return 0
def compute_loss(Y,V):
L_sum = np.sum(np.multiply(Y, np.log(V)))
m = Y.shape[1]
L = -(1./m) * L_sum
return L
def feed_forward(X, params):
tempt={}
tempt["Z"]=np.matmul(params["W"], X) + params["b1"]
tempt["H"]=sigmoid(tempt["Z"])
#tempt["H"]=RELU(tempt["Z"])
tempt["U"]=np.matmul(params["C"], tempt["H"]) + params["b2"]
tempt["V"]=np.exp(tempt["U"]) / np.sum(np.exp(tempt["U"]), axis=0)
return tempt
def back_propagate(X, Y, params, tempt, m_batch):
# X is m*n matrix
# Y is m*1 matrix
# tempt is the value in each neural cell
dU=tempt["V"]-Y # the loss of output layer
dC=(1. / m_batch) * np.matmul(dU, tempt["H"].T)
db2=(1. / m_batch) * np.sum(dU, axis=1, keepdims=True)
dH=np.matmul(params["C"].T, dU)
dZ = dH * sigmoid(tempt["Z"]) * (1 - sigmoid(tempt["Z"]))
#dZ=dH*reluDerivative(tempt["Z"])
dW = (1. / m_batch) * np.matmul(dZ, X.T)
db1 = (1. / m_batch) * np.sum(dZ, axis=1, keepdims=True)
grads={"dW":dW, "db1":db1, "dC":dC, "db2":db2}
return grads
#hyperparameters
epochs=10
batch_size=1
batchs=np.int32(60000/batch_size)
LR=0.01
dh=100#number of hidden nodes
#getting 60000 samples of training data and 10000 samples of testing data
f=h5py.File('MNISTdata.hdf5','r')
x_test_set=np.float32(f['x_test'][:])
y_test_set=np.int32(np.array(f['y_test'][:,0])).reshape(-1,1)
x_train_set=np.float32(f['x_train'][:])
y_train_set=np.int32(np.array(f['y_train'][:,0])).reshape(-1,1)
f.close()
X=np.vstack((x_train_set,x_test_set))
Y=np.vstack((y_train_set,y_test_set))
num_samples=Y.shape[0]
Y=Y.reshape(1,num_samples)
Y_new = np.eye(10)[Y.astype('int32')]
Y_new = Y_new.T.reshape(10, num_samples)
X_train, X_test=X[:60000].T, X[60000:].T
Y_train, Y_test=Y_new[:,:60000], Y_new[:,60000:]
#building fully connected neural network with one hidden layer
#initialization of parameters
params={"b1":np.zeros((dh,1)),
"W":np.random.randn(dh,784)*np.sqrt(1. / 784),
"b2":np.zeros((10,1)),
"C":np.random.randn(10,dh)*np.sqrt(1. / dh)}
#training the network
for num_epoches in range(epochs):
if (num_epoches > 5):
LR = 0.001
if (num_epoches > 10):
LR = 0.0001
if (num_epoches > 15):
LR = 0.00001
#shuffle the training data
shuffle_index=np.random.permutation(X_train.shape[1])
X_train= X_train[:, shuffle_index]
Y_train=Y_train[:, shuffle_index]
for num_batch in range(batchs):
left_index=num_batch*batch_size
right_index=min(left_index+batch_size,x_train_set.shape[0]-1)
m_batch=right_index-left_index
X=X_train[:,left_index:right_index]
Y=Y_train[:,left_index:right_index]
tempt=feed_forward(X, params)
grads = back_propagate(X, Y, params, tempt, 1)
#gradient descent
params["W"] = params["W"] - LR * grads["dW"]
params["b1"] = params["b1"] - LR * grads["db1"]
params["C"] = params["C"] - LR * grads["dC"]
params["b2"] = params["b2"] - LR * grads["db2"]
#compute loss on training data
tempt = feed_forward(X_train, params)
train_loss = compute_loss(Y_train, tempt["V"])
#compute loss on test set
tempt=feed_forward(X_test, params)
test_loss = compute_loss(Y_test, tempt["V"])
total_correct=0
for n in range(Y_test.shape[1]):
p = tempt["V"][:,n]
prediction = np.argmax(p)
if prediction == np.argmax(Y_test[:,n]):
total_correct+=1
accuracy = np.float32(total_correct) / (Y_test.shape[1])
#print(params)
print("Epoch {}: training loss = {}, test loss = {}, accuracy={}".format(
num_epoches + 1, train_loss, test_loss, accuracy))
```
# ML Model with JD Data
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
from scipy import stats
#read/write data from/to local files
prefix_path = 'JD_data/'
# 'skus' table
skus = pd.read_csv(prefix_path + 'JD_sku_data.csv')
# 'users' table
users = pd.read_csv(prefix_path + 'JD_user_data.csv')
# 'clicks' table
clicks = pd.read_csv(prefix_path + 'JD_click_data.csv')
# 'orders' table
orders = pd.read_csv(prefix_path + 'JD_order_data.csv')
# 'delivery' table
delivery = pd.read_csv(prefix_path + 'JD_delivery_data.csv')
# 'inventory' table
inventory = pd.read_csv(prefix_path + 'JD_inventory_data.csv')
# 'network' table
network = pd.read_csv(prefix_path + 'JD_network_data.csv')
orders['order_date'] = pd.to_datetime(orders['order_date'])
orders['weekday'] = orders['order_date'].dt.dayofweek
df_temp = orders[['weekday','final_unit_price']]
#Add dummy variables
df_temp1 = pd.get_dummies(df_temp['weekday'], prefix='weekday')
cols_to_keep = ['final_unit_price']
df_temp = df_temp[cols_to_keep].join(df_temp1.iloc[:,0:])#not df_temp1.ix[:,0:], consider the gender case
df_temp['intercept'] = 1
train_cols_ = df_temp.columns[1:]#can write ['x1', 'x2'] manually
train_df = df_temp[train_cols_]
opt2 = parser.parse_args()
trainset_JD = torch.utils.data.TensorDataset(torch.Tensor(train_df.values), torch.Tensor(df_temp['final_unit_price'].values)) # create your dataset
trainloader_JD = torch.utils.data.DataLoader(
trainset_JD, batch_size=opt2.batch_size_train, shuffle=True)
class FNN_JD(nn.Module):
"""FNN."""
def __init__(self):
"""FNN Builder."""
super(FNN_JD, self).__init__()
self.fc_layer = nn.Sequential(
nn.Linear(8, 4),
nn.ReLU(inplace=True),
nn.Linear(4, 1)
)
#self.fc_layer = nn.Sequential(
# nn.Linear(8, 4),
# nn.ReLU(inplace=True),
# nn.Linear(4, 2),
# nn.ReLU(inplace=True),
# nn.Linear(2, 1)
#)
def forward(self, x):
"""Perform forward."""
x = self.fc_layer(x)
return x
# create the FNN instance
net_JD = FNN_JD()
# For training on GPU, transfer net and data into the GPU
if opt2.is_gpu:
net_JD = net_JD.cuda()
net_JD = torch.nn.DataParallel(net_JD, device_ids=range(torch.cuda.device_count()))
cudnn.benchmark = True
else:
print('Training on CPU')
# Loss function and optimizer
criterion_JD = nn.MSELoss()
optimizer_JD = optim.Adam(net_JD.parameters(), lr=opt2.lr, weight_decay=opt2.wd)
train_df
for epoch in range(opt2.epochs):
running_loss = 0.0
for i, data in enumerate(trainloader_JD, 0):
# get the inputs
inputs, prices = data
#if training on GPU, wrap the data into the cuda
if opt2.is_gpu:
inputs = inputs.cuda()
prices = prices.cuda()
# wrap them in Variable
inputs, prices = Variable(inputs), Variable(prices)
# zero the parameter gradients
optimizer_JD.zero_grad()
# forward + backward + optimize
outputs = net_JD(inputs)
loss = criterion_JD(outputs[:,0], prices)
loss.backward()
optimizer_JD.step()
# calculate loss
running_loss += loss.data.item()
# Normalizing the loss by the total number of train batches
#running_loss /= len(trainloader)
# Calculate training/test set accuracy of the existing model
#train_accuracy = calculate_accuracy(trainloader, opt.is_gpu)
print("Iteration: {0} | Loss: {1}".format(
epoch+1, running_loss))
#sum of squared error
opt2.batch_size_train * 197859128
```
## Ways to improve accuracy:
### 1. Hyperparameter tuning: a different optimization algorithm (e.g., SGD) and learning rate, a different loss function, or a different batch size
### 2. Different network structures and different activation layers
### 3. More features/inputs
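The first of these ideas — trying different optimizers, learning rates, and batch sizes — is often organized as a small grid search over configurations. A minimal sketch (the search-space values below are hypothetical, and the training call itself is left out):

```python
import itertools

# Hypothetical search space -- the values are illustrative, not tuned.
search_space = {
    "lr": [0.01, 0.001, 0.0001],
    "batch_size": [16, 64, 256],
    "optimizer": ["adam", "sgd"],
}

def grid(space):
    """Yield one configuration dict per combination of hyperparameter values."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
print(len(configs))  # 3 * 3 * 2 = 18 configurations to try
```

Each `config` dict would then parameterize one run of the training loop above, keeping whichever settings give the best validation accuracy.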
# Compare with Linear Regression
```
import statsmodels.api as sm
df_temp = orders[['weekday','final_unit_price']]
#Add dummy variables
df_temp1 = pd.get_dummies(df_temp['weekday'], prefix='weekday')
cols_to_keep = ['final_unit_price']
df_temp = df_temp[cols_to_keep].join(df_temp1.iloc[:,1:])#not df_temp1.ix[:,0:], consider the gender case
df_temp['intercept'] = 1
train_cols_ = df_temp.columns[1:]#can write ['x1', 'x2'] manually
train_df = df_temp[train_cols_]
linear_model = sm.OLS(df_temp['final_unit_price'], train_df)
res = linear_model.fit()
print(res.summary())
res.params
coef = res.params.values
x = train_df.values
y = df_temp['final_unit_price']
loss = 0
for i in range(len(y)):
predict = np.dot(coef, x[i])
loss += (predict - y[i])**2
loss
# 8*4 + 4*1
# 7
```
# List, Set, and Dictionary Comprehensions
In our prior session we discussed a variety of loop patterns.
One of the most common patterns that we encounter in practice is the need to iterate through a list of values, transform the elements of the list using some operation, filter the results, and return a new list of values.
## Example
Let's examine again our example with the NBA teams and franchise names:
```
nba_teams = [
"Atlanta Hawks", "Boston Celtics", "Brooklyn Nets", "Charlotte Hornets",
"Chicago Bulls", "Cleveland Cavaliers", "Dallas Mavericks",
"Denver Nuggets", "Detroit Pistons", "Golden State Warriors",
"Houston Rockets", "Indiana Pacers", "LA Clippers", "Los Angeles Lakers",
"Memphis Grizzlies", "Miami Heat", "Milwaukee Bucks",
"Minnesota Timberwolves", "New Orleans Pelicans", "New York Knicks",
"Oklahoma City Thunder", "Orlando Magic", "Philadelphia 76ers",
"Phoenix Suns", "Portland Trail Blazers", "Sacramento Kings",
"San Antonio Spurs", "Toronto Raptors", "Utah Jazz", "Washington Wizards"
]
print("The list contains", len(nba_teams), "teams")
franchise_names = [] # We create an empty list
for team in nba_teams: # We iterate over all elements of the list
# Do some operation on the list element "team"
# and get back the result "franchise"
franchise = team.split()[-1]
# Append the "franchise" element in the list that we created before the loop
franchise_names.append(franchise)
```
And below we re-write the code above as a **list comprehension**.
```
franchise_names = [ team.split()[-1] for team in nba_teams ]
```
In other words, list comprehensions give us the ability to write a very common loop pattern as a one-liner. However, it is not just about brevity; when we see code that uses a list comprehension, we understand quickly that the code is processing one list to create another, and that the resulting elements appear in a specific order. Such clarity is not guaranteed with a loop, as loops have many uses.
## Defining List Comprehensions
The syntax of list comprehensions is based on the way mathematicians define sets and lists, a syntax that leaves it clear what the contents should be.
For example, let `S` be the set of the squares of all integers from 0 to 9. In math notation, we write:
+ `S = {x² : x in {0 ... 9}}`
Python's list comprehensions give a very natural way to write statements just like these. It may look strange early on, but it becomes a very natural and concise way of creating lists, without having to write for-loops.
Let's see again the comparison with for loops:
```
# This code below will create a list with the squares
# of the numbers from 0 to 9
S = [] # we create an empty list
for i in range(10): # We iterate over all numbers from 0 to 9
S.append(i*i) # We add in the list the square of the number i
print(S) # we print the list
S = [i*i for i in range(10)]
print(S)
```
Let's do one more example. The `V` is the powers of 2 from $2^0$ until $2^{12}$:
+ `V = (1, 2, 4, 8, ..., 2¹²)`
```
V=[] # Create a list
for i in range(13): # Change i to be from 0 to 12
V.append(2**i) # Add 2**i in the new list
print(V)
# And rewritten as a list comprehension:
V = [2**i for i in range(13)]
print(V)
```
Again notice the structure:
```python
newlist = []
for i in somelist:
x = do_something_with(i)
newlist.append(x)
```
gets rewritten as
```python
newlist = [do_something_with(i) for i in somelist]
```
## The *if* statement within a list comprehension
Now let's consider the following case. We want to process the list of NBA teams and keep in a list only the teams whose franchise name starts with a given letter.
In the example below, we will try to find all the teams that start with the letter `B`.
```
nba_teams = [
"Atlanta Hawks", "Boston Celtics", "Brooklyn Nets", "Charlotte Hornets",
"Chicago Bulls", "Cleveland Cavaliers", "Dallas Mavericks",
"Denver Nuggets", "Detroit Pistons", "Golden State Warriors",
"Houston Rockets", "Indiana Pacers", "LA Clippers", "Los Angeles Lakers",
"Memphis Grizzlies", "Miami Heat", "Milwaukee Bucks",
"Minnesota Timberwolves", "New Orleans Pelicans", "New York Knicks",
"Oklahoma City Thunder", "Orlando Magic", "Philadelphia 76ers",
"Phoenix Suns", "Portland Trail Blazers", "Sacramento Kings",
"San Antonio Spurs", "Toronto Raptors", "Utah Jazz", "Washington Wizards"
]
franchise_names = []
look_for = 'B' # the letter we are looking for
for team in nba_teams:
franchise = team.split()[-1]
if franchise.startswith(look_for):
franchise_names.append(franchise)
print(franchise_names)
```
This pattern, where we do not add *all* the elements to the resulting list, is also very common, and it too can be expressed as a list comprehension.
```
look_for = 'B'
franchise_names = [team.split()[-1] for team in nba_teams if team.split()[-1].startswith(look_for)]
print(franchise_names)
# Alternatively, you can even break the lines within a comprehension
# This may help with readability
franchise_names = [team.split()[-1]
for team in nba_teams
if team.split()[-1].startswith(look_for)]
print(franchise_names)
```
Here is another example with a list comprehension. `S` is the set of the squares of all integers from 0 to 9, and we define `M` to be all the elements of `S` that are even. In math notation:
+ `S = {x² : x in {0 ... 9}}`
+ `M = {x | x in S and x even}`
Now let's write the above as list comprehensions. **Note that the list comprehension for deriving M uses an *if* statement to filter out those values that aren't of interest**, restricting to only the even squares.
```
S = [i*i for i in range(10)]
print(S)
M = []
for i in S: # iterate through all elements in S
if i%2 == 0: # if i is an even number
M.append(i) # ..add it to the list
print(M)
M = [x for x in S if x%2 == 0]
print(M)
```
These are simple examples using numerical computation. Let's see a more "practical" use: in the following operation we transform a string into a list of values, a more complex operation:
```
sentence = 'The quick brown fox jumps over the lazy dog'
words = [(w.upper(), w.lower(), len(w)) for w in sentence.split()]
words
```
So, what does the code do here? It takes the string `sentence` as input, creates a list of words, and for each word creates a tuple with the word in uppercase, the word in lowercase, and the length of the word.
## Set and Dictionary Comprehensions
In addition to _list_ comprehensions, we also have the same principle for sets and dictionaries. We can create sets and dictionaries in the same way, but now we do not use square brackets to surround the comprehension, but use braces instead.
```
# Creating a set instead of a list.
S = {i*i for i in range(10)}
S
# Dictionary comprehension, where team name becomes the key, and franchise name the value
teams_franchise = {team:team.split()[-1] for team in nba_teams}
teams_franchise
# Dictionary comprehension, where each word becomes the key, and its length the value
words = {w:len(w) for w in sentence.split()}
words
```
## Exercise
You are given the sentence 'The quick brown fox jumps over the lazy dog',
```
sentence = 'The quick brown fox jumps over the lazy dog'
```
**Question 1**: List each word and its length from the string 'The quick brown fox jumps over the lazy dog', conditioned on the length of the word being four characters and above
**Question 2**: List only words with the letter o in them
```
# List each word and its length from the string
# 'The quick brown fox jumps over the lazy dog',
# conditioned on the length of the word being four characters and above
```
### Solution
```
[ (word, len(word)) for word in sentence.split() if len(word)>=4]
# List only words with the letter o in them
```
### Solution
```
[ word for word in sentence.split() if 'o' in word]
```
## Exercise
We will work now on a more challenging exercise. This will not only require the use of comprehensions, but will also ask you to put together things that we learned earlier in the course, especially when we studied strings.
**Question 1**: You are given the `wsj` article below. Write a list comprehension for getting the words that appear more than once.
* Use the `.split()` command for splitting, without passing a parameter.
* When counting words, case does not matter (i.e., YAHOO is the same as Yahoo).
**Question 2**: Find all the *characters* in the article that are not letters or numbers. You can use the isdigit() and isalpha() functions, which work on strings. (e.g, `"Panos".isalpha()` and `"1234".isdigit()` return True)
```
wsj = """
Yahoo Inc. disclosed a massive security breach by a “state-sponsored actor” affecting at least 500 million users, potentially the largest such data breach on record and the latest hurdle for the beaten-down internet company as it works through the sale of its core business.
Yahoo said certain user account information—including names, email addresses, telephone numbers, dates of birth, hashed passwords and, in some cases, encrypted or unencrypted security questions and answers—was stolen from the company’s network in late 2014 by what it believes is a state-sponsored actor.
Yahoo said it is notifying potentially affected users and has taken steps to secure their accounts by invalidating unencrypted security questions and answers so they can’t be used to access an account and asking potentially affected users to change their passwords.
Yahoo recommended users who haven’t changed their passwords since 2014 do so. It also encouraged users change their passwords as well as security questions and answers for any other accounts on which they use the same or similar information used for their Yahoo account.
The company, which is working with law enforcement, said the continuing investigation indicates that stolen information didn't include unprotected passwords, payment-card data or bank account information.
With 500 million user accounts affected, this is the largest-ever publicly disclosed data breach, according to Paul Stephens, director of policy and advocacy with Privacy Rights Clearing House, a not-for-profit group that compiles information on data breaches.
No evidence has been found to suggest the state-sponsored actor is currently in Yahoo’s network, and Yahoo didn’t name the country it suspected was involved. In August, a hacker called “Peace” appeared in online forums, offering to sell 200 million of the company’s usernames and passwords for about $1,900 in total. Peace had previously sold data taken from breaches at Myspace and LinkedIn Corp.
"""
# getting the words that appear more than once
```
### Solution
```
words = wsj.lower().split()
recurring = [w for w in words if words.count(w)>1]
print(recurring)
print(sorted(set(recurring)))
# Find all the *characters* in the article that are not letters or numbers
```
### Solution
```
# Let's use a set comprehension here, to eliminate duplicates
nonalphanumeric = {c for c in wsj if not c.isdigit() and not c.isalpha()}
print(nonalphanumeric)
```
# *Electric Circuits I - Week 10*
### Problem 1
(Problem 7.19 - Nilsson) For the circuit below:
<img src="./figures/J13C1.png" width="400">
a) Determine the voltage $v_0(t)$ across the $48\;mH$ inductor for $t\geq0$.\
b) Determine the current $i_0(t)$ through the $48\;mH$ inductor for $t\geq0$.\
c) Determine the energy dissipated by the $2.5\;k\Omega$ resistor over the interval $0\leq t \leq\infty$.
Link to the circuit simulation: https://tinyurl.com/yj69udn8
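Before computing anything numerically, it helps to write out the standard first-order RL natural-response formulas that the code below evaluates (a sketch, using the same variable names as the code):

```latex
L_{eq} = L_1 + \frac{L_2 L_3}{L_2 + L_3}, \qquad
\tau = \frac{L_{eq}}{R}, \qquad
i_L(t) = i_L(\infty) + \left[\, i_L(0) - i_L(\infty) \,\right] e^{-t/\tau}, \quad t \geq 0
```

Here $i_L(0) = i_1(0) = 5\;mA$ and $i_L(\infty) = 0$, since there is no source in the circuit for $t \geq 0$.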
```
# inductance values
L1 = 20e-3
L2 = 80e-3
L3 = 48e-3
# valores iniciais das correntes
i1_0 = 5e-3
i2_0 = 5e-3
i3_0 = 0
# equivalent inductance
Leq1 = (L2*L3)/(L2+L3)
Leq = L1 + Leq1
print('Leq = ', Leq/1e-3, ' mH')
R = 2.5e3
# time constant
τ = Leq/R
print('τ = ', τ, ' s')
import sympy as sp
iL_inf = 0
iL_0 = i1_0
# define the time variable
t = sp.symbols('t')
# define i(t)
iL = iL_inf + (iL_0 - iL_inf)*sp.exp(-t/τ)
print('Current through the equivalent inductor:')
print('iL(t) = ', iL/1e-3 , ' mA')
# compute v0
v0 = Leq1*sp.diff(iL,t)
print('v0(t) = ', v0 , ' V')
# inductor currents as a function of the voltage across their terminals
i1 = iL
i2 = (1/L2)*sp.integrate(v0, (t, 0, t)) + i2_0
i3 = (1/L3)*sp.integrate(v0, (t, 0, t)) + i3_0
print('Currents in the inductors:')
print('i1(t) = ', i1/1e-3 , ' mA')
print('i2(t) = ', i2/1e-3 , ' mA')
print('i3(t) = ', i3/1e-3 , ' mA')
# compute the stored energies at t = 0
E1_0 = (1/2)*L1*(i1.evalf(subs={t:0}))**2
E2_0 = (1/2)*L2*(i2.evalf(subs={t:0}))**2
E3_0 = (1/2)*L3*(i3.evalf(subs={t:0}))**2
print('Initial energy stored in the inductors:')
print('E1(0) = %.2f μJ' %(E1_0/1e-6))
print('E2(0) = %.2f μJ' %(E2_0/1e-6))
print('E3(0) = %.2f μJ' %(E3_0/1e-6))
# compute the stored energies as t -> oo
E1_inf = (1/2)*L1*(i1.evalf(subs={t:100}))**2
E2_inf = (1/2)*L2*(i2.evalf(subs={t:100}))**2
E3_inf = (1/2)*L3*(i3.evalf(subs={t:100}))**2
print('Final energy stored in the inductors:')
print('E1(oo) = %.2f μJ' %(E1_inf/1e-6))
print('E2(oo) = %.2f μJ' %(E2_inf/1e-6))
print('E3(oo) = %.2f μJ' %(E3_inf/1e-6))
# change in the energy stored in the inductors
ΔE = (E1_inf-E1_0) + (E2_inf-E2_0) + (E3_inf-E3_0)
print('Change in the energy stored in the inductors:')
print('ΔE = %.2f μJ' %(ΔE/1e-6))
# voltage across the resistor, vR(t)
vR = R*i1
# power dissipated by the resistor
p = vR*i1
# energy dissipated by the resistor
E = sp.integrate(p, (t, 0, sp.oo))
print('Energy dissipated by the resistor:')
print('E = %.2f μJ' %(E/1e-6))
```
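As a sanity check, conservation of energy requires the energy dissipated in the resistor to equal the net decrease in the total energy stored in the inductors:

```latex
E_R = \int_0^{\infty} R \, i_1^2(t) \, dt
    = \big[ E_1(0) + E_2(0) + E_3(0) \big] - \big[ E_1(\infty) + E_2(\infty) + E_3(\infty) \big]
    = -\Delta E
```

So the value of `E` printed by the last cell should match `-ΔE` computed earlier.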
___
<a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>
___
# SF Salaries Exercise
Welcome to a quick exercise for you to practice your pandas skills! We will be using the [SF Salaries Dataset](https://www.kaggle.com/kaggle/sf-salaries) from Kaggle! Just follow along and complete the tasks outlined in bold below. The tasks will get harder and harder as you go along.
** Import pandas as pd.**
```
import pandas as pd
```
** Read Salaries.csv as a dataframe called sal.**
```
sal = pd.read_csv('data/Salaries.csv')
```
** Check the head of the DataFrame. **
```
sal.head()
```
** Use the .info() method to find out how many entries there are.**
```
sal.info()
```
**What is the average BasePay ?**
```
sal['BasePay'].mean()
```
** What is the highest amount of OvertimePay in the dataset ? **
```
sal['OvertimePay'].max()
```
** What is the job title of JOSEPH DRISCOLL ? Note: Use all caps, otherwise you may get an answer that doesn't match up (there is also a lowercase Joseph Driscoll). **
```
sal[sal['EmployeeName'] == 'JOSEPH DRISCOLL']['JobTitle']
```
** How much does JOSEPH DRISCOLL make (including benefits)? **
```
sal[sal['EmployeeName'] == 'JOSEPH DRISCOLL']['TotalPayBenefits']
```
** What is the name of highest paid person (including benefits)?**
```
sal[sal["TotalPayBenefits"]==sal["TotalPayBenefits"].max()]
```
** What is the name of lowest paid person (including benefits)? Do you notice something strange about how much he or she is paid?**
```
sal[sal["TotalPayBenefits"]==sal["TotalPayBenefits"].min()]
```
** What was the average (mean) BasePay of all employees per year? (2011-2014) ? **
```
sal.groupby('Year').mean()['BasePay']
```
** How many unique job titles are there? **
```
sal['JobTitle'].nunique()
```
** What are the top 5 most common jobs? **
```
sal['JobTitle'].value_counts().sort_values(ascending=False).head()
```
** How many Job Titles were represented by only one person in 2013? (e.g. Job Titles with only one occurrence in 2013?) **
```
(sal[sal['Year']==2013]['JobTitle'].value_counts()==1).sum()
```
** How many people have the word Chief in their job title? (This is pretty tricky) **
```
def check(x):
'''Convert the job title to lower case, split it, and check whether 'chief' is one of the words.'''
x=x.lower().split()
if 'chief' in x:
return True
else:
return False
a=list()
for i in sal['JobTitle']:
a.append(check(i))
sum(a)
```
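The loop above can also be written in a vectorized way with pandas string methods. The mini-DataFrame below is a hypothetical stand-in for `sal`, just to make the sketch self-contained:

```python
import pandas as pd

# Hypothetical mini-frame standing in for the real `sal` DataFrame.
sal_demo = pd.DataFrame({"JobTitle": ["CHIEF OF POLICE", "Captain", "Asst Chief", "Clerk"]})

# Lower-case each title, split it into words, and test for the word 'chief'.
mask = sal_demo["JobTitle"].str.lower().str.split().apply(lambda words: "chief" in words)
print(mask.sum())  # 2
```

Applying the same chain to `sal['JobTitle']` and summing the mask should give the same count as the explicit loop.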
** Bonus: Is there a correlation between length of the Job Title string and Salary? **
```
sal['len'] = sal['JobTitle'].apply(len)
sal[['len','TotalPayBenefits']].corr()
```
**Plotting**
```
from matplotlib import pyplot as plt
%matplotlib inline
plt.scatter(sal['len'],sal['TotalPayBenefits'],color='r')
plt.xlabel('Length of JobTitles')
plt.ylabel('TotalPayBenefits')
```
# Great Job!
```
from __future__ import absolute_import, division, print_function, unicode_literals
from IPython import display
from matplotlib import pyplot as plt
from scipy.ndimage.filters import gaussian_filter1d
import pandas as pd
import numpy as np
import datetime
import tensorflow as tf
!rm -rf ./logs/
# Load the TensorBoard notebook extension
%load_ext tensorboard
higgs_path = tf.keras.utils.get_file('HIGGSSmall.csv.gz', 'https://github.com/PacktWorkshops/The-Reinforcement-Learning-Workshop/blob/master/Chapter03/Dataset/HIGGSSmall.csv.gz?raw=true')
N_TEST = int(1e3)
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(N_TRAIN)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
N_FEATURES = 28
ds = tf.data.experimental.CsvDataset(higgs_path,[float(),]*(N_FEATURES+1), compression_type="GZIP")
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
packed_ds = ds.batch(N_TRAIN).map(pack_row).unbatch()
validate_ds = packed_ds.take(N_VALIDATION).cache()
test_ds = packed_ds.skip(N_VALIDATION).take(N_TEST).cache()
train_ds = packed_ds.skip(N_VALIDATION+N_TEST).take(N_TRAIN).cache()
test_ds = test_ds.batch(BATCH_SIZE)
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
def compile_and_fit(model, name, max_epochs=3000):
optimizer = tf.keras.optimizers.Adam(lr_schedule)
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, profile_batch=0)
history = model.fit(train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tensorboard_callback],
verbose=2)
return history
regularization_model = tf.keras.Sequential([
tf.keras.layers.Dense(512, kernel_regularizer=tf.keras.regularizers.l2(0.0001),
activation='elu', input_shape=(N_FEATURES,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(512, kernel_regularizer=tf.keras.regularizers.l2(0.0001),
activation='elu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(512, kernel_regularizer=tf.keras.regularizers.l2(0.0001),
activation='elu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(512, kernel_regularizer=tf.keras.regularizers.l2(0.0001),
activation='elu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1)
])
compile_and_fit(regularization_model, "regularizers/regularization", max_epochs=9000)
test_accuracy = tf.keras.metrics.Accuracy()
for (features, labels) in test_ds:
logits = regularization_model(features)
probabilities = tf.keras.activations.sigmoid(logits)
predictions = 1*(probabilities.numpy() > 0.5)
test_accuracy(predictions, labels)
regularization_model_accuracy = test_accuracy.result()
print("Test set accuracy: {:.3%}".format(regularization_model_accuracy))
%tensorboard --logdir logs/fit
```
```
import pandas as pd
df = pd.read_csv('data/small_corpus.csv')
df['reviews']= df['reviews'].astype(str)
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
def classify(item):
output = classifier(item)[0]
label = output['label']
score = output['score']
return ','.join([label,str(score)])
df['label_score'] = df['reviews'].apply(lambda x : classify(x[:512]))
def prediction_to_class(label_score,threshold):
val = label_score.split(',')
label = val[0]
score = float(val[1])
if label == "NEGATIVE" and score > threshold:
return 0
elif label == "POSITIVE" and score > threshold:
return 2
else:
return 1
df['predicted'] = df['label_score'].apply(lambda x : prediction_to_class(x,0.75))
```
# Results
```
def score_to_Target(value):
if value >= 5:
return 2
if value <= 4 and value >= 2:
return 1
else:
return 0
df['rating_classes'] = df['ratings'].apply(lambda x:score_to_Target(x))
df['predicted'] = df['label_score'].apply(lambda x : prediction_to_class(x,0.99))
rating_classes = list(df["rating_classes"])
predicated_values = list(df["predicted"])
target_names = ["negative", "neutral", "positive"]
from sklearn.metrics import classification_report
print(classification_report(rating_classes, predicated_values, target_names=target_names))
import numpy as np
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(rating_classes, predicated_values, labels=[0, 1, 2])
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
"""
given a sklearn confusion matrix (cm), make a nice plot
Arguments
---------
cm: confusion matrix from sklearn.metrics.confusion_matrix
target_names: given classification classes such as [0, 1, 2]
the class names, for example: ['high', 'medium', 'low']
title: the text to display at the top of the matrix
cmap: the gradient of the values displayed from matplotlib.pyplot.cm
see http://matplotlib.org/examples/color/colormaps_reference.html
plt.get_cmap('jet') or plt.cm.Blues
normalize: If False, plot the raw numbers
If True, plot the proportions
Usage
-----
plot_confusion_matrix(cm = cm, # confusion matrix created by
# sklearn.metrics.confusion_matrix
normalize = True, # show proportions
target_names = y_labels_vals, # list of names of the classes
title = best_estimator_name) # title of graph
Citation
---------
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
"""
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
plot_confusion_matrix(cm = cm,
normalize = False,
target_names = ["negative", "neutral", "positive"],
title = "Confusion Matrix")
```
<h1 align=center>The Cobweb Model</h1>
Presentation follows <a href="http://www.parisschoolofeconomics.eu/docs/guesnerie-roger/hommes94.pdf">Hommes, <em>JEBO 1994</em></a>. Let $p_t$ denote the <em>observed price</em> of goods and $p_t^e$ the <em>expected price</em> of goods in period $t$. Similarly, let $q_t^d$ denote the <em>quantity demanded</em> of all goods in period $t$ and $q_t^s$ the <em>quantity supplied</em> of all goods in period $t$.
\begin{align}
q_t^d =& D(p_t) \tag{1} \\
q_t^s =& S(p_t^e) \tag{2} \\
q_t^d =& q_t^s \tag{3} \\
p_t^e =& p_{t-1}^e + w\big(p_{t-1} - p_{t-1}^e\big) = (1 - w)p_{t-1}^e + w p_{t-1} \tag{4}
\end{align}
Equation 1 says that the quantity demanded of goods in period $t$ is some function of the <em>observed price</em> in period $t$. Equation 2, meanwhile, states that the quantity of goods supplied in period $t$ is a function of the <em>expected price</em> in period $t$. Equation 3 is a market clearing equilibrium condition. Finally, equation 4 is an adaptive expectation formation rule that specifies how goods producers form their expectations about the price of goods in period $t$ as a function of past prices.
Combine the equations as follows. Note that equation 3 implies that...
$$ D(p_t) = q_t^d = q_t^s = S(p_t^e) $$
...and therefore, assuming the demand function $D$ is invertible, we can write the observed price of goods in period $t$ as...
$$ p_t = D^{-1}\big(S(p_t^e)\big). \tag{5}$$
Substituting equation 5 into equation 4 we arrive at the following difference equation
$$ p_{t+1}^e = w D^{-1}\big(S(p_t^e)\big) + (1 - w)p_t^e. \tag{7}$$
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import functools
import ipywidgets
import matplotlib.pyplot as plt
import numpy as np
from scipy import optimize
import seaborn as sns
import cobweb
def observed_price(D_inverse, S, expected_price, **params):
"""The observed price of goods in a particular period."""
actual_price = D_inverse(S(expected_price, **params), **params)
return actual_price
def adaptive_expectations(D_inverse, S, expected_price, w, **params):
"""An adaptive expectations price forecasting rule."""
actual_price = observed_price(D_inverse, S, expected_price, **params)
price_forecast = w * actual_price + (1 - w) * expected_price
return price_forecast
```
<h2> Non-linear supply functions </h2>
When thinking about supply it helps to start with the following considerations...
<ol>
<li> ...when prices are low, the quantity supplied increases slowly because of fixed costs of production (think startup costs, etc).
<li> ...when prices are high, supply also increases slowly because of capacity constraints.
</ol>
These considerations motivate our focus on "S-shaped" supply functions...
$$ S_{\gamma}(p_t^e) = -\tan^{-1}(-\gamma \bar{p}) + \tan^{-1}(\gamma (p_t^e - \bar{p})). \tag{10}$$
The parameter $0 < \gamma < \infty$ controls the "steepness" of the supply function.
```
def quantity_supply(expected_price, gamma, p_bar, **params):
"""The quantity of goods supplied in period t given the epxected price."""
return -np.arctan(-gamma * p_bar) + np.arctan(gamma * (expected_price - p_bar))
```
<h3> Exploring supply shocks </h3>
Interactively change the value of $\gamma$ to see the impact on the shape of the supply function.
```
ipywidgets.interact?
interactive_quantity_supply_plot = ipywidgets.interact(cobweb.quantity_supply_plot,
S=ipywidgets.fixed(quantity_supply),
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
<h2> Special case: Linear demand functions </h2>
Suppose the quantity demanded of goods is a simple, decreasing linear function of the observed price.
$$ q_t^d = D(p_t) = a - b p_t \implies p_t = D^{-1}(q_t^d) = \frac{a}{b} - \frac{1}{b}q_t^d \tag{11} $$
...where $-\infty < a < \infty$ and $0 < b < \infty$.
```
def quantity_demand(observed_price, a, b):
"""The quantity demand of goods in period t given the price."""
quantity = a - b * observed_price
return quantity
def inverse_demand(quantity_demand, a, b, **params):
"""The price of goods in period t given the quantity demanded."""
price = (a / b) - (1 / b) * quantity_demand
return price
```
<h3> Exploring demand shocks </h3>
Interactively change the values of $a$ and $b$ to get a feel for how they impact demand. Shocks to $a$ shift the entire demand curve; shocks to $b$ change the slope of the demand curve (higher $b$ implies greater sensitivity to price; lower $b$ implies less sensitivity to price).
```
interactive_quantity_demand_plot = ipywidgets.interact(cobweb.quantity_demand_plot,
D=ipywidgets.fixed(quantity_demand),
a=cobweb.a_float_slider,
b=cobweb.b_float_slider)
```
<h2> Supply and demand </h2>
Market clearing equilibrium price, $p^*$, satisfies...
$$ D(p_t) = S(p_t^e). $$
Really this is also an equilibrium in beliefs because we also require that $p_t = p_t^e$!
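The widget above depends on the external `cobweb` module, so as a static, self-contained check, the market-clearing price can also be found numerically by bisection. The parameter values here (a=2, b=0.5, gamma=3, p_bar=1) are illustrative assumptions, not values taken from the notebook:

```python
import math

def quantity_supply(p, gamma=3.0, p_bar=1.0):
    """S-shaped supply function (equation 10); parameter values are illustrative."""
    return -math.atan(-gamma * p_bar) + math.atan(gamma * (p - p_bar))

def quantity_demand(p, a=2.0, b=0.5):
    """Linear demand function (equation 11); parameter values are illustrative."""
    return a - b * p

def excess_demand(p):
    """D(p) - S(p); zero at the market-clearing price."""
    return quantity_demand(p) - quantity_supply(p)

def bisect(f, lo, hi, tol=1e-10):
    """Simple bisection root-finder; assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p_star = bisect(excess_demand, 0.0, 10.0)
print(round(p_star, 4))
```

At $p^*$ the market clears and, since $p_t = p_t^e = p^*$, beliefs are self-fulfilling.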
```
interactive_supply_demand_plot = ipywidgets.interact(cobweb.supply_demand_plot,
D=ipywidgets.fixed(quantity_demand),
S=ipywidgets.fixed(quantity_supply),
a=cobweb.a_float_slider,
b=cobweb.b_float_slider,
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
<h2> Analyzing dynamics of the model via simulation... </h2>
The model has no closed-form solution (i.e., we cannot solve for $p_t^e$ as a function of time and model parameters). But we can simulate equation 7 above to better understand the dynamics of the model...
We can simulate our model and plot time series for different parameter values. Questions for discussion...
<ol>
<li> Can you find a two-cycle? What does this mean?</li>
<li> Can you find higher cycles? Perhaps a four-cycle? Maybe even a three-cycle?</li>
<li> Do simulations with similar initial conditions converge or diverge over time? </li>
</ol>
Can we relate these things to other SFI MOOCS on non-linear dynamics and chaos? Surely yes!
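The widget below relies on the external `cobweb` module; as a self-contained alternative, equation 7 can be iterated directly. This sketch uses illustrative parameter values (a=2, b=0.5, gamma=3, p_bar=1, w=0.2) that are assumptions rather than values from the notebook:

```python
import math

def inverse_demand(q, a=2.0, b=0.5):
    """p = D^{-1}(q) for linear demand q = a - b*p (equation 11)."""
    return a / b - q / b

def quantity_supply(p_e, gamma=3.0, p_bar=1.0):
    """S-shaped supply function (equation 10)."""
    return -math.atan(-gamma * p_bar) + math.atan(gamma * (p_e - p_bar))

def simulate(p0, w, T=200):
    """Iterate p_{t+1}^e = w * D^{-1}(S(p_t^e)) + (1 - w) * p_t^e (equation 7)."""
    path = [p0]
    for _ in range(T):
        p_e = path[-1]
        path.append(w * inverse_demand(quantity_supply(p_e)) + (1 - w) * p_e)
    return path

# A small expectation weight lets the forecast settle down to the steady state;
# larger values of w can generate two-cycles, higher cycles, and chaos.
path = simulate(p0=0.5, w=0.2)
print(path[-1])
```

Sweeping `w` upward in this loop is one way to hunt for the cycles asked about above.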
```
model = functools.partial(adaptive_expectations, inverse_demand, quantity_supply)
interactive_time_series_plot = ipywidgets.interact(cobweb.time_series_plot,
F=ipywidgets.fixed(model),
X0=cobweb.initial_expected_price_slider,
T=cobweb.T_int_slider,
a=cobweb.a_float_slider,
b=cobweb.b_float_slider,
w=cobweb.w_float_slider,
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
<h2> Forecast errors </h2>
How do we measure forecast error? What does the distribution of forecast errors look like for different parameters? Could an agent learn to avoid chaos? Specifically, suppose an agent learned to tune the value of $w$ in order to minimize its mean forecast error. Would this eliminate chaotic dynamics?
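One natural measure is the one-step error $p_t - p_t^e$, where $p_t = D^{-1}(S(p_t^e))$. This self-contained sketch (with illustrative parameter values a=2, b=0.5, gamma=3, p_bar=1, which are assumptions rather than notebook values) compares the mean absolute forecast error across expectation weights:

```python
import math

def inverse_demand(q, a=2.0, b=0.5):
    """p = D^{-1}(q) for linear demand q = a - b*p."""
    return a / b - q / b

def quantity_supply(p_e, gamma=3.0, p_bar=1.0):
    """S-shaped supply function."""
    return -math.atan(-gamma * p_bar) + math.atan(gamma * (p_e - p_bar))

def mean_abs_forecast_error(w, p0=0.5, T=500, burn_in=100):
    """Average |p_t - p_t^e| along a simulated path, after a burn-in."""
    p_e, errors = p0, []
    for t in range(T):
        p_obs = inverse_demand(quantity_supply(p_e))   # realized price (equation 5)
        if t >= burn_in:
            errors.append(abs(p_obs - p_e))
        p_e = w * p_obs + (1 - w) * p_e                # adaptive update (equation 4)
    return sum(errors) / len(errors)

for w in (0.1, 0.3, 0.5, 0.9):
    print(w, round(mean_abs_forecast_error(w), 4))
```

When the steady state is stable the error vanishes; in the cyclic and chaotic regimes it stays bounded away from zero, which is exactly the trade-off an error-minimizing agent tuning $w$ would face.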
```
interactive_forecast_error_plot = ipywidgets.interact(cobweb.forecast_error_plot,
D_inverse=ipywidgets.fixed(inverse_demand),
S=ipywidgets.fixed(quantity_supply),
F=ipywidgets.fixed(model),
X0=cobweb.initial_expected_price_slider,
T=cobweb.T_int_slider,
a=cobweb.a_float_slider,
b=cobweb.b_float_slider,
w=cobweb.w_float_slider,
gamma=cobweb.gamma_float_slider,
p_bar=cobweb.p_bar_float_slider)
```
<h2> Other things of possible interest? </h2>
Impulse response functions?
Compare and contrast model predictions for rational expectations, naive expectations, and adaptive expectations. Depending on what Cars might have in mind, we could also add other expectation formation rules from his more recent work and have students analyze those...
# Efficient Interpolation & Exploration with STONED SELFIES
### By: AkshatKumar Nigam, Robert Pollice, Mario Krenn, Gabriel dos Passos Gomes, and Alan Aspuru-Guzik
Paper Link: https://doi.org/10.26434/chemrxiv.13383266.v2 \
Paper Github: https://github.com/aspuru-guzik-group/stoned-selfies
<img src="https://github.com/aspuru-guzik-group/stoned-selfies/blob/main/readme_docs/fig_main_algo.png?raw=True" width="900" />
# Experiment Imports / Functions:
```
import time
import selfies
import rdkit
import random
import numpy as np
from rdkit import Chem
from selfies import encoder, decoder
from rdkit.Chem import MolFromSmiles as smi2mol
from rdkit.Chem import AllChem
from rdkit.DataStructs.cDataStructs import TanimotoSimilarity
from rdkit.Chem import Mol
from rdkit.Chem.AtomPairs.Sheridan import GetBPFingerprint, GetBTFingerprint
from rdkit.Chem.Pharm2D import Generate, Gobbi_Pharm2D
from rdkit.Chem import Draw
from rdkit.Chem import MolToSmiles as mol2smi
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.*')
def randomize_smiles(mol):
'''Returns a random (dearomatized) SMILES given an rdkit mol object of a molecule.
Parameters:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object (None if invalid smile string smi)
Returns:
smiles (string) : a randomized, kekulized (non-canonical) SMILES string for mol
'''
if not mol:
return None
Chem.Kekulize(mol)
return rdkit.Chem.MolToSmiles(mol, canonical=False, doRandom=True, isomericSmiles=False, kekuleSmiles=True)
def sanitize_smiles(smi):
'''Return a canonical smile representation of smi
Parameters:
smi (string) : smile string to be canonicalized
Returns:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object (None if invalid smile string smi)
smi_canon (string) : Canonicalized smile representation of smi (None if invalid smile string smi)
conversion_successful (bool): True/False to indicate if conversion was successful
'''
try:
mol = smi2mol(smi, sanitize=True)
smi_canon = mol2smi(mol, isomericSmiles=False, canonical=True)
return (mol, smi_canon, True)
except:
return (None, None, False)
def get_selfie_chars(selfie):
'''Obtain a list of all selfie characters in string selfie
Parameters:
selfie (string) : A selfie string - representing a molecule
Example:
>>> get_selfie_chars('[C][=C][C][=C][C][=C][Ring1][Branch1_1]')
['[C]', '[=C]', '[C]', '[=C]', '[C]', '[=C]', '[Ring1]', '[Branch1_1]']
Returns:
chars_selfie: list of selfie characters present in molecule selfie
'''
chars_selfie = [] # A list of all SELFIES symbols from string selfie
while selfie != '':
chars_selfie.append(selfie[selfie.find('['): selfie.find(']')+1])
selfie = selfie[selfie.find(']')+1:]
return chars_selfie
class _FingerprintCalculator:
''' Calculate the fingerprint for a molecule, given the fingerprint type
Parameters:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object (None if invalid smile string smi)
fp_type (string) : Fingerprint type (choices: AP, PHCO, BPF, BTF, PATH, ECFP4, ECFP6, FCFP4, FCFP6)
Returns:
RDKit fingerprint object
'''
def get_fingerprint(self, mol: Mol, fp_type: str):
method_name = 'get_' + fp_type
method = getattr(self, method_name, None)
if method is None:
raise Exception(f'{fp_type} is not a supported fingerprint type.')
return method(mol)
def get_AP(self, mol: Mol):
return AllChem.GetAtomPairFingerprint(mol, maxLength=10)
def get_PHCO(self, mol: Mol):
return Generate.Gen2DFingerprint(mol, Gobbi_Pharm2D.factory)
def get_BPF(self, mol: Mol):
return GetBPFingerprint(mol)
def get_BTF(self, mol: Mol):
return GetBTFingerprint(mol)
def get_PATH(self, mol: Mol):
return AllChem.RDKFingerprint(mol)
def get_ECFP4(self, mol: Mol):
return AllChem.GetMorganFingerprint(mol, 2)
def get_ECFP6(self, mol: Mol):
return AllChem.GetMorganFingerprint(mol, 3)
def get_FCFP4(self, mol: Mol):
return AllChem.GetMorganFingerprint(mol, 2, useFeatures=True)
def get_FCFP6(self, mol: Mol):
return AllChem.GetMorganFingerprint(mol, 3, useFeatures=True)
def get_fingerprint(mol: Mol, fp_type: str):
''' Fingerprint getter method. Fingerprint is returned after using object of
class '_FingerprintCalculator'
Parameters:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object (None if invalid smile string smi)
fp_type (string) : Fingerprint type (choices: AP, PHCO, BPF, BTF, PATH, ECFP4, ECFP6, FCFP4, FCFP6)
Returns:
RDKit fingerprint object
'''
return _FingerprintCalculator().get_fingerprint(mol=mol, fp_type=fp_type)
def mutate_selfie(selfie, max_molecules_len, write_fail_cases=False):
'''Return a mutated selfie string (only one mutation on selfie is performed)
Mutations are done until a valid molecule is obtained
Rules of mutation: With a 33.3% probability, either:
1. Add a random SELFIE character in the string
2. Replace a random SELFIE character with another
3. Delete a random character
Parameters:
selfie (string) : SELFIE string to be mutated
max_molecules_len (int) : Mutations of SELFIE string are allowed up to this length
write_fail_cases (bool) : If true, failed mutations are recorded in "selfie_failure_cases.txt"
Returns:
selfie_mutated (string) : Mutated SELFIE string
smiles_canon (string) : canonical smile of mutated SELFIE string
'''
valid=False
fail_counter = 0
chars_selfie = get_selfie_chars(selfie)
while not valid:
fail_counter += 1
alphabet = list(selfies.get_semantic_robust_alphabet()) # semantically robust SELFIES symbols
choice_ls = [1, 2, 3] # 1=Insert; 2=Replace; 3=Delete
random_choice = np.random.choice(choice_ls, 1)[0]
# Insert a character in a Random Location
if random_choice == 1:
random_index = np.random.randint(len(chars_selfie)+1)
random_character = np.random.choice(alphabet, size=1)[0]
selfie_mutated_chars = chars_selfie[:random_index] + [random_character] + chars_selfie[random_index:]
# Replace a random character
elif random_choice == 2:
random_index = np.random.randint(len(chars_selfie))
random_character = np.random.choice(alphabet, size=1)[0]
if random_index == 0:
selfie_mutated_chars = [random_character] + chars_selfie[random_index+1:]
else:
selfie_mutated_chars = chars_selfie[:random_index] + [random_character] + chars_selfie[random_index+1:]
# Delete a random character
elif random_choice == 3:
random_index = np.random.randint(len(chars_selfie))
if random_index == 0:
selfie_mutated_chars = chars_selfie[random_index+1:]
else:
selfie_mutated_chars = chars_selfie[:random_index] + chars_selfie[random_index+1:]
else:
raise Exception('Invalid Operation trying to be performed')
selfie_mutated = "".join(x for x in selfie_mutated_chars)
sf = "".join(x for x in chars_selfie)
try:
smiles = decoder(selfie_mutated)
mol, smiles_canon, done = sanitize_smiles(smiles)
if len(selfie_mutated_chars) > max_molecules_len or smiles_canon=="":
done = False
if done:
valid = True
else:
valid = False
except:
valid=False
if fail_counter > 1 and write_fail_cases == True:
f = open("selfie_failure_cases.txt", "a+")
f.write('Tried to mutate SELFIE: '+str(sf)+' To Obtain: '+str(selfie_mutated) + '\n')
f.close()
return (selfie_mutated, smiles_canon)
def get_mutated_SELFIES(selfies_ls, num_mutations):
''' Mutate all the SELFIES in 'selfies_ls' 'num_mutations' number of times.
Parameters:
selfies_ls (list) : A list of SELFIES
num_mutations (int) : number of mutations to perform on each SELFIES within 'selfies_ls'
Returns:
selfies_ls (list) : A list of mutated SELFIES
'''
for _ in range(num_mutations):
selfie_ls_mut_ls = []
for str_ in selfies_ls:
str_chars = get_selfie_chars(str_)
max_molecules_len = len(str_chars) + num_mutations
selfie_mutated, _ = mutate_selfie(str_, max_molecules_len)
selfie_ls_mut_ls.append(selfie_mutated)
selfies_ls = selfie_ls_mut_ls.copy()
return selfies_ls
def get_fp_scores(smiles_back, target_smi, fp_type):
'''Calculate the Tanimoto fingerprint (using fp_type fingerprint) similarity between a list
of SMILES and a known target structure (target_smi).
Parameters:
smiles_back (list) : A list of valid SMILES strings
target_smi (string) : A valid SMILES string. Each smile in 'smiles_back' will be compared to this structure
fp_type (string) : Type of fingerprint (choices: AP, PHCO, BPF, BTF, PATH, ECFP4, ECFP6, FCFP4, FCFP6)
Returns:
smiles_back_scores (list of floats) : List of fingerprint similarities
'''
smiles_back_scores = []
target = Chem.MolFromSmiles(target_smi)
fp_target = get_fingerprint(target, fp_type)
for item in smiles_back:
mol = Chem.MolFromSmiles(item)
fp_mol = get_fingerprint(mol, fp_type)
score = TanimotoSimilarity(fp_mol, fp_target)
smiles_back_scores.append(score)
return smiles_back_scores
```
# Formation of Local Chemical Subspaces:
The task here is to generate multiple molecules in the chemical subspace of a structure; we use the molecule Celecoxib from the paper. We run three experiments:
1. Generating the chemical subspace of Celecoxib, without restrictions,
2. Generating the chemical subspace of Celecoxib while filtering out hard-to-synthesize structures, and
3. Generating the chemical subspace of Celecoxib while preserving a specific substructure.
### 1. Generating the chemical subspace of Celecoxib, without any restrictions
```
smi = 'CC1=CC=C(C=C1)C2=CC(=NN2C3=CC=C(C=C3)S(=O)(=O)N)C(F)(F)F' # Celecoxib
fp_type = 'ECFP4'
total_time = time.time()
# num_random_samples = 50000 # For a more exhaustive search!
num_random_samples = 1000
num_mutation_ls = [1, 2, 3, 4, 5]
mol = Chem.MolFromSmiles(smi)
if mol == None:
raise Exception('Invalid starting structure encountered')
start_time = time.time()
randomized_smile_orderings = [randomize_smiles(mol) for _ in range(num_random_samples)]
# Convert all the molecules to SELFIES
selfies_ls = [encoder(x) for x in randomized_smile_orderings]
print('Randomized molecules (in SELFIES) time: ', time.time()-start_time)
all_smiles_collect = []
all_smiles_collect_broken = []
start_time = time.time()
for num_mutations in num_mutation_ls:
# Mutate the SELFIES:
selfies_mut = get_mutated_SELFIES(selfies_ls.copy(), num_mutations=num_mutations)
# Convert back to SMILES:
smiles_back = [decoder(x) for x in selfies_mut]
all_smiles_collect = all_smiles_collect + smiles_back
all_smiles_collect_broken.append(smiles_back)
print('Mutation obtainment time (back to smiles): ', time.time()-start_time)
# Work on: all_smiles_collect
start_time = time.time()
canon_smi_ls = []
for item in all_smiles_collect:
mol, smi_canon, did_convert = sanitize_smiles(item)
if mol == None or smi_canon == '' or did_convert == False:
raise Exception('Invalid smile string found')
canon_smi_ls.append(smi_canon)
canon_smi_ls = list(set(canon_smi_ls))
print('Unique mutated structure obtainment time: ', time.time()-start_time)
start_time = time.time()
canon_smi_ls_scores = get_fp_scores(canon_smi_ls, target_smi=smi, fp_type=fp_type)
print('Fingerprint calculation time: ', time.time()-start_time)
print('Total time: ', time.time()-total_time)
# Molecules with fingerprint similarity > 0.8
indices_thresh_8 = [i for i,x in enumerate(canon_smi_ls_scores) if x > 0.8]
mols_8 = [Chem.MolFromSmiles(canon_smi_ls[idx]) for idx in indices_thresh_8]
# Molecules with fingerprint similarity > 0.6
indices_thresh_6 = [i for i,x in enumerate(canon_smi_ls_scores) if x > 0.6 and x < 0.8]
mols_6 = [Chem.MolFromSmiles(canon_smi_ls[idx]) for idx in indices_thresh_6]
# Molecules with fingerprint similarity > 0.4
indices_thresh_4 = [i for i,x in enumerate(canon_smi_ls_scores) if x > 0.4 and x < 0.6]
mols_4 = [Chem.MolFromSmiles(canon_smi_ls[idx]) for idx in indices_thresh_4]
```
### Visualizing Molecules with Similarity > 0.8
```
img=Draw.MolsToGridImage(mols_8[:8],molsPerRow=4,subImgSize=(200,200))
img
```
### Visualizing Molecules with Similarity > 0.6 & Less than 0.8
```
img=Draw.MolsToGridImage(mols_6[:8],molsPerRow=4,subImgSize=(200,200))
img
```
### Visualizing Molecules with Similarity > 0.4 & Less than 0.6
```
img=Draw.MolsToGridImage(mols_4[:8],molsPerRow=4,subImgSize=(200,200))
img
```
### 2. Generating the chemical subspace of Celecoxib while filtering out hard-to-synthesize structures.
For this example, we make use of SYBA to retain only the most readily synthesizable structures:
1. Code: https://github.com/lich-uct/syba
2. Paper: https://jcheminf.biomedcentral.com/articles/10.1186/s13321-020-00439-2
```
from syba.syba import SybaClassifier
syba = SybaClassifier()
syba.fitDefaultScore()
syba_scores = []
for item in canon_smi_ls:
syba_scores.append(syba.predict(smi=item))
A = np.argsort(syba_scores)
smi_arranged = [canon_smi_ls[i] for i in A]
smi_arranged = smi_arranged[-20:]
mols_ = [Chem.MolFromSmiles(x) for x in smi_arranged]
img=Draw.MolsToGridImage(mols_,molsPerRow=4,subImgSize=(200,200))
img
```
### 3. Generating the chemical subspace of Celecoxib while preserving a specific substructure
We will preserve the structure marked in red for Celecoxib:
<img src="https://github.com/aspuru-guzik-group/stoned-selfies/blob/main/data/struct_pres.png?raw=True" width="250" />
We write a function with RDKit that detects the highlighted substructure (`substructure_preserver`). While performing mutations with SELFIES, we check whether the function returns True (the substructure is present); otherwise, the algorithm retries with a different mutation. Have a look at the specific line:
```
if len(selfie_mutated_chars) > max_molecules_len or smiles_canon=="" or substructure_preserver(mol)==False:
```
```
def substructure_preserver(mol):
"""
Check whether mol contains the required substructure.
Return True: the substructure is present
Return False: the substructure is absent
"""
if mol.HasSubstructMatch(rdkit.Chem.MolFromSmarts('NS(=O)(=O)c1ccc(-n2cccn2)cc1')) == True:
return True # The molecule has the substructure!
else:
return False # Molecule does not have substructure!
def mutate_selfie(selfie, max_molecules_len, write_fail_cases=False):
'''Return a mutated selfie string (only one mutation on selfie is performed)
Mutations are done until a valid molecule is obtained
Rules of mutation: With a 33.3% probability, either:
1. Add a random SELFIE character in the string
2. Replace a random SELFIE character with another
3. Delete a random character
Parameters:
selfie (string) : SELFIE string to be mutated
max_molecules_len (int) : Mutations of SELFIE string are allowed up to this length
write_fail_cases (bool) : If true, failed mutations are recorded in "selfie_failure_cases.txt"
Returns:
selfie_mutated (string) : Mutated SELFIE string
smiles_canon (string) : canonical smile of mutated SELFIE string
'''
valid=False
fail_counter = 0
chars_selfie = get_selfie_chars(selfie)
while not valid:
fail_counter += 1
alphabet = list(selfies.get_semantic_robust_alphabet()) # semantically robust SELFIES symbols
choice_ls = [1, 2, 3] # 1=Insert; 2=Replace; 3=Delete
random_choice = np.random.choice(choice_ls, 1)[0]
# Insert a character in a Random Location
if random_choice == 1:
random_index = np.random.randint(len(chars_selfie)+1)
random_character = np.random.choice(alphabet, size=1)[0]
selfie_mutated_chars = chars_selfie[:random_index] + [random_character] + chars_selfie[random_index:]
# Replace a random character
elif random_choice == 2:
random_index = np.random.randint(len(chars_selfie))
random_character = np.random.choice(alphabet, size=1)[0]
if random_index == 0:
selfie_mutated_chars = [random_character] + chars_selfie[random_index+1:]
else:
selfie_mutated_chars = chars_selfie[:random_index] + [random_character] + chars_selfie[random_index+1:]
# Delete a random character
elif random_choice == 3:
random_index = np.random.randint(len(chars_selfie))
if random_index == 0:
selfie_mutated_chars = chars_selfie[random_index+1:]
else:
selfie_mutated_chars = chars_selfie[:random_index] + chars_selfie[random_index+1:]
else:
raise Exception('Invalid Operation trying to be performed')
selfie_mutated = "".join(x for x in selfie_mutated_chars)
sf = "".join(x for x in chars_selfie)
try:
smiles = decoder(selfie_mutated)
mol, smiles_canon, done = sanitize_smiles(smiles)
if len(selfie_mutated_chars) > max_molecules_len or smiles_canon=="" or substructure_preserver(mol)==False:
done = False
if done:
valid = True
else:
valid = False
except:
valid=False
if fail_counter > 1 and write_fail_cases == True:
f = open("selfie_failure_cases.txt", "a+")
f.write('Tried to mutate SELFIE: '+str(sf)+' To Obtain: '+str(selfie_mutated) + '\n')
f.close()
return (selfie_mutated, smiles_canon)
smi = 'CC1=CC=C(C=C1)C2=CC(=NN2C3=CC=C(C=C3)S(=O)(=O)N)C(F)(F)F' # Celecoxib
fp_type = 'ECFP4'
total_time = time.time()
# num_random_samples = 50000 # For a more exhaustive search!
num_random_samples = 100
num_mutation_ls = [1, 2, 3, 4, 5]
mol = Chem.MolFromSmiles(smi)
if mol == None:
raise Exception('Invalid starting structure encountered')
start_time = time.time()
randomized_smile_orderings = [randomize_smiles(mol) for _ in range(num_random_samples)]
# Convert all the molecules to SELFIES
selfies_ls = [encoder(x) for x in randomized_smile_orderings]
print('Randomized molecules (in SELFIES) time: ', time.time()-start_time)
all_smiles_collect = []
all_smiles_collect_broken = []
start_time = time.time()
for num_mutations in num_mutation_ls:
# Mutate the SELFIES:
selfies_mut = get_mutated_SELFIES(selfies_ls.copy(), num_mutations=num_mutations)
# Convert back to SMILES:
smiles_back = [decoder(x) for x in selfies_mut]
all_smiles_collect = all_smiles_collect + smiles_back
all_smiles_collect_broken.append(smiles_back)
print('Mutation obtainment time (back to smiles): ', time.time()-start_time)
# Work on: all_smiles_collect
start_time = time.time()
canon_smi_ls = []
for item in all_smiles_collect:
mol, smi_canon, did_convert = sanitize_smiles(item)
if mol == None or smi_canon == '' or did_convert == False:
raise Exception('Invalid smile string found')
canon_smi_ls.append(smi_canon)
canon_smi_ls = list(set(canon_smi_ls))
print('Unique mutated structure obtainment time: ', time.time()-start_time)
start_time = time.time()
canon_smi_ls_scores = get_fp_scores(canon_smi_ls, target_smi=smi, fp_type=fp_type)
print('Fingerprint calculation time: ', time.time()-start_time)
print('Total time: ', time.time()-total_time)
# Molecules with fingerprint similarity > 0.8
indices_thresh_8 = [i for i,x in enumerate(canon_smi_ls_scores) if x > 0.8]
mols_8 = [Chem.MolFromSmiles(canon_smi_ls[idx]) for idx in indices_thresh_8]
# Molecules with fingerprint similarity > 0.6
indices_thresh_6 = [i for i,x in enumerate(canon_smi_ls_scores) if x > 0.6 and x < 0.8]
mols_6 = [Chem.MolFromSmiles(canon_smi_ls[idx]) for idx in indices_thresh_6]
# Molecules with fingerprint similarity > 0.4
indices_thresh_4 = [i for i,x in enumerate(canon_smi_ls_scores) if x > 0.4 and x < 0.6]
mols_4 = [Chem.MolFromSmiles(canon_smi_ls[idx]) for idx in indices_thresh_4]
img=Draw.MolsToGridImage(mols_8[:8],molsPerRow=4,subImgSize=(200,200))
img
img=Draw.MolsToGridImage(mols_6[:8],molsPerRow=4,subImgSize=(200,200))
img
img=Draw.MolsToGridImage(mols_4[:8],molsPerRow=4,subImgSize=(200,200))
img
```
# Chemical Path Formation:
## Imports for Chemical Path Formation
```
import os
import random
from random import randrange
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import rdkit
from rdkit import Chem, RDLogger
from rdkit.Chem import AllChem, Descriptors
from rdkit.Chem import MolFromSmiles as smi2mol
from rdkit.Chem import MolToSmiles as mol2smi
from rdkit.DataStructs.cDataStructs import TanimotoSimilarity
import selfies
from selfies import encoder, decoder
RDLogger.DisableLog('rdApp.*')
def get_ECFP4(mol):
''' Return rdkit ECFP4 fingerprint object for mol
Parameters:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object
Returns:
rdkit ECFP4 fingerprint object for mol
'''
return AllChem.GetMorganFingerprint(mol, 2)
def sanitize_smiles(smi):
'''Return a canonical smile representation of smi
Parameters:
smi (string) : smile string to be canonicalized
Returns:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object (None if invalid smile string smi)
smi_canon (string) : Canonicalized smile representation of smi (None if invalid smile string smi)
conversion_successful (bool): True/False to indicate if conversion was successful
'''
try:
mol = smi2mol(smi, sanitize=True)
smi_canon = mol2smi(mol, isomericSmiles=False, canonical=True)
return (mol, smi_canon, True)
except:
return (None, None, False)
def get_fp_scores(smiles_back, target_smi):
'''Calculate the Tanimoto fingerprint (ECFP4 fingerprint) similarity between a list
of SMILES and a known target structure (target_smi).
Parameters:
smiles_back (list) : A list of valid SMILES strings
target_smi (string) : A valid SMILES string. Each SMILES in 'smiles_back' will be compared to this structure
Returns:
smiles_back_scores (list of floats) : List of fingerprint similarities
'''
smiles_back_scores = []
target = Chem.MolFromSmiles(target_smi)
fp_target = get_ECFP4(target)
for item in smiles_back:
mol = Chem.MolFromSmiles(item)
fp_mol = get_ECFP4(mol)
score = TanimotoSimilarity(fp_mol, fp_target)
smiles_back_scores.append(score)
return smiles_back_scores
def get_selfie_chars(selfie):
'''Obtain a list of all selfie characters in string selfie
Parameters:
selfie (string) : A selfie string - representing a molecule
Example:
>>> get_selfie_chars('[C][=C][C][=C][C][=C][Ring1][Branch1_1]')
['[C]', '[=C]', '[C]', '[=C]', '[C]', '[=C]', '[Ring1]', '[Branch1_1]']
Returns:
chars_selfie: list of selfie characters present in molecule selfie
'''
chars_selfie = [] # A list of all SELFIES symbols from string selfie
while selfie != '':
chars_selfie.append(selfie[selfie.find('['): selfie.find(']')+1])
selfie = selfie[selfie.find(']')+1:]
return chars_selfie
def randomize_smiles(mol):
'''Returns a random (dearomatized) SMILES given an rdkit mol object of a molecule.
Parameters:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object (None if invalid smile string smi)
Returns:
smiles (string) : Randomized, kekulized SMILES string (None if mol is None)
'''
if not mol:
return None
Chem.Kekulize(mol)
return rdkit.Chem.MolToSmiles(mol, canonical=False, doRandom=True, isomericSmiles=False, kekuleSmiles=True)
def get_random_smiles(smi, num_random_samples):
''' Obtain 'num_random_samples' non-unique SMILES orderings of smi
Parameters:
smi (string) : Input SMILES string (needs to be a valid molecule)
num_random_samples (int): Number of unique SMILES orderings to form
Returns:
randomized_smile_orderings (list) : list of SMILES strings
'''
mol = Chem.MolFromSmiles(smi)
if mol == None:
raise Exception('Invalid starting structure encountered')
randomized_smile_orderings = [randomize_smiles(mol) for _ in range(num_random_samples)]
randomized_smile_orderings = list(set(randomized_smile_orderings)) # Only consider unique SMILE strings
return randomized_smile_orderings
def obtain_path(starting_smile, target_smile, filter_path=False):
''' Obtain a path/chemical path from starting_smile to target_smile
Parameters:
starting_smile (string) : SMILES string (needs to be a valid molecule)
target_smile (string) : SMILES string (needs to be a valid molecule)
filter_path (bool) : If True, a chemical path is returned, else only a path
Returns:
path_smiles (list) : A list of smiles in path between starting_smile & target_smile
path_fp_scores (list of floats) : Fingerprint similarity to 'target_smile' for each smiles in path_smiles
smiles_path (list) : A list of smiles in CHEMICAL path between starting_smile & target_smile (if filter_path==False, then empty)
filtered_path_score (list of floats): Fingerprint similarity to 'target_smile' for each smiles in smiles_path (if filter_path==False, then empty)
'''
starting_selfie = encoder(starting_smile)
target_selfie = encoder(target_smile)
starting_selfie_chars = get_selfie_chars(starting_selfie)
target_selfie_chars = get_selfie_chars(target_selfie)
# Pad the smaller string
if len(starting_selfie_chars) < len(target_selfie_chars):
for _ in range(len(target_selfie_chars)-len(starting_selfie_chars)):
starting_selfie_chars.append(' ')
else:
for _ in range(len(starting_selfie_chars)-len(target_selfie_chars)):
target_selfie_chars.append(' ')
indices_diff = [i for i in range(len(starting_selfie_chars)) if starting_selfie_chars[i] != target_selfie_chars[i]]
path = {}
path[0] = starting_selfie_chars
for iter_ in range(len(indices_diff)):
idx = np.random.choice(indices_diff, 1)[0] # Index to be operated on
indices_diff.remove(idx) # Remove that index
# Select the last member of path:
path_member = path[iter_].copy()
# Mutate that character to the correct value:
path_member[idx] = target_selfie_chars[idx]
path[iter_+1] = path_member.copy()
# Collapse path to make them into SELFIE strings
paths_selfies = []
for i in range(len(path)):
selfie_str = ''.join(x for x in path[i])
paths_selfies.append(selfie_str.replace(' ', ''))
if paths_selfies[-1] != target_selfie:
raise Exception("Unable to discover target structure!")
# Obtain similarity scores, and only choose the increasing members:
path_smiles = [decoder(x) for x in paths_selfies]
path_fp_scores = []
filtered_path_score = []
smiles_path = []
if filter_path:
path_fp_scores = get_fp_scores(path_smiles, target_smile)
filtered_path_score = []
smiles_path = []
for i in range(1, len(path_fp_scores)-1):
if i == 1:
filtered_path_score.append(path_fp_scores[1])
smiles_path.append(path_smiles[i])
continue
if filtered_path_score[-1] < path_fp_scores[i]:
filtered_path_score.append(path_fp_scores[i])
smiles_path.append(path_smiles[i])
return path_smiles, path_fp_scores, smiles_path, filtered_path_score
def get_compr_paths(starting_smile, target_smile, num_tries, num_random_samples, collect_bidirectional):
''' Obtaining multiple paths/chemical paths from starting_smile to target_smile.
Parameters:
starting_smile (string) : SMILES string (needs to be a valid molecule)
target_smile (string) : SMILES string (needs to be a valid molecule)
num_tries (int) : Number of path/chemical path attempts between the exact same smiles
num_random_samples (int) : Number of different SMILES string orderings to consider for starting_smile & target_smile
collect_bidirectional (bool): If True, also forms paths from target_smile -> starting_smile (doubles the number of paths)
Returns:
smiles_paths_dir1 (list): list paths containing smiles in path between starting_smile -> target_smile
smiles_paths_dir2 (list): list paths containing smiles in path between target_smile -> starting_smile
'''
starting_smile_rand_ord = get_random_smiles(starting_smile, num_random_samples=num_random_samples)
target_smile_rand_ord = get_random_smiles(target_smile, num_random_samples=num_random_samples)
smiles_paths_dir1 = [] # All paths from starting_smile -> target_smile
for smi_start in starting_smile_rand_ord:
for smi_target in target_smile_rand_ord:
if Chem.MolFromSmiles(smi_start) == None or Chem.MolFromSmiles(smi_target) == None:
raise Exception('Invalid structures')
for _ in range(num_tries):
path, _, _, _ = obtain_path(smi_start, smi_target, filter_path=True)
smiles_paths_dir1.append(path)
smiles_paths_dir2 = [] # All paths from target_smile -> starting_smile
if collect_bidirectional == True:
starting_smile_rand_ord = get_random_smiles(target_smile, num_random_samples=num_random_samples)
target_smile_rand_ord = get_random_smiles(starting_smile, num_random_samples=num_random_samples)
for smi_start in starting_smile_rand_ord:
for smi_target in target_smile_rand_ord:
if Chem.MolFromSmiles(smi_start) == None or Chem.MolFromSmiles(smi_target) == None:
raise Exception('Invalid structures')
for _ in range(num_tries):
path, _, _, _ = obtain_path(smi_start, smi_target, filter_path=True)
smiles_paths_dir2.append(path)
return smiles_paths_dir1, smiles_paths_dir2
```
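At its core, `obtain_path` above is a token-level interpolation: pad the shorter SELFIES token list with blanks, then flip mismatched positions to the target's tokens one at a time in random order, recording each intermediate. A self-contained sketch of just that interpolation step (the token lists here are hypothetical stand-ins for SELFIES symbols; no chemical validity is checked):

```python
import random

def token_path(start, target, seed=0):
    """Interpolate between two token lists, flipping one mismatched
    position per step; returns the full list of intermediates."""
    rng = random.Random(seed)
    start, target = list(start), list(target)
    # Pad the shorter list with blanks, mirroring obtain_path
    while len(start) < len(target):
        start.append(' ')
    while len(target) < len(start):
        target.append(' ')
    diff = [i for i in range(len(start)) if start[i] != target[i]]
    path = [start.copy()]
    while diff:
        idx = rng.choice(diff)     # index to be operated on
        diff.remove(idx)
        member = path[-1].copy()
        member[idx] = target[idx]  # mutate to the correct token
        path.append(member)
    return path

path = token_path(['[C]', '[C]', '[O]'], ['[C]', '[N]'])
# path[0] is the start; path[-1] equals the padded target ['[C]', '[N]', ' ']
```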
## Analyzing QED & LogP values for a Chemical Path between Tadalafil & Sildenafil:
The get_compr_paths() function generates multiple chemical paths between two input SMILES.
```
def get_ECFP4(mol):
return AllChem.GetMorganFingerprint(mol, 2)
def get_fp_scores(smiles_back, target_smi):
smiles_back_scores = []
target = Chem.MolFromSmiles(target_smi)
fp_target = get_ECFP4(target)
for item in smiles_back:
mol = Chem.MolFromSmiles(item)
fp_mol = get_ECFP4(mol)
score = TanimotoSimilarity(fp_mol, fp_target)
smiles_back_scores.append(score)
return smiles_back_scores
def get_logP(mol):
'''Calculate logP of a molecule
Parameters:
mol (rdkit.Chem.rdchem.Mol) : RdKit mol object for which logP is to be calculated
Returns:
float : logP of molecule (mol)
'''
return Descriptors.MolLogP(mol)
def get_selfie_chars(selfie):
'''Obtain a list of all selfie characters in string selfie
Parameters:
selfie (string) : A selfie string - representing a molecule
Example:
>>> get_selfie_chars('[C][=C][C][=C][C][=C][Ring1][Branch1_1]')
['[C]', '[=C]', '[C]', '[=C]', '[C]', '[=C]', '[Ring1]', '[Branch1_1]']
Returns:
chars_selfie: list of selfie characters present in molecule selfie
'''
chars_selfie = [] # A list of all SELFIES symbols from string selfie
while selfie != '':
chars_selfie.append(selfie[selfie.find('['): selfie.find(']')+1])
selfie = selfie[selfie.find(']')+1:]
return chars_selfie
starting_smile = 'CN1CC(=O)N2C(C1=O)CC3=C(C2C4=CC5=C(C=C4)OCO5)NC6=CC=CC=C36' # Tadalafil
target_smile = 'CCCC1=NN(C2=C1N=C(NC2=O)C3=C(C=CC(=C3)S(=O)(=O)N4CCN(CC4)C)OCC)C' # Sildenafil
mol_starting = Chem.MolFromSmiles(starting_smile)
mol_target = Chem.MolFromSmiles(target_smile)
qed_starting = Chem.QED.qed(mol_starting)
qed_target = Chem.QED.qed(mol_target)
logP_starting = get_logP(mol_starting)
logP_target = get_logP(mol_target)
scores_start_1 = get_fp_scores([starting_smile], starting_smile) # similarity to starting structure
scores_target_1 = get_fp_scores([starting_smile], target_smile) # similarity to target
data = np.array([scores_target_1, scores_start_1])
avg_score_1 = np.average(data, axis=0)
better_score_1 = avg_score_1 - (np.abs(data[0] - data[1]))
better_score_1 = ((1/9) * better_score_1**3) - ((7/9) * better_score_1**2) + ((19/12) * better_score_1)
scores_start_2 = get_fp_scores([target_smile], starting_smile) # similarity to starting structure
scores_target_2 = get_fp_scores([target_smile], target_smile) # similarity to target
data = np.array([scores_target_2, scores_start_2])
avg_score_2 = np.average(data, axis=0)
better_score_2 = avg_score_2 - (np.abs(data[0] - data[1]))
better_score_2 = ((1/9) * better_score_2**3) - ((7/9) * better_score_2**2) + ((19/12) * better_score_2)
print('Starting logP:{} QED:{}'.format(logP_starting, qed_starting))
print('Target logP:{} QED:{}'.format(logP_target, qed_target))
num_tries = 2
num_random_samples = 2
collect_bidirectional = True # Doubles the number of paths: source->target & target->source
print('Initiating path collection')
smiles_paths_dir1, smiles_paths_dir2 = get_compr_paths(starting_smile, target_smile, num_tries, num_random_samples, collect_bidirectional)
print('Path collection complete')
# Find the median molecule & plot:
all_smiles_dir_1 = [item for sublist in smiles_paths_dir1 for item in sublist] # all SMILES strings from dir1
all_smiles_dir_2 = [item for sublist in smiles_paths_dir2 for item in sublist] # all SMILES strings from dir2
all_smiles = all_smiles_dir_1 + all_smiles_dir_2
logP_path = [get_logP(Chem.MolFromSmiles(x)) for x in all_smiles]
QED_path = [Chem.QED.qed(Chem.MolFromSmiles(x)) for x in all_smiles]
scores_start = get_fp_scores(all_smiles, starting_smile) # similarity to starting structure
scores_target = get_fp_scores(all_smiles, target_smile) # similarity to target
data = np.array([scores_target, scores_start])
avg_score = np.average(data, axis=0)
better_score = avg_score - (np.abs(data[0] - data[1]))
better_score = ((1/9) * better_score**3) - ((7/9) * better_score**2) + ((19/12) * better_score)
# Filter based on better score:
apply_score_threshold = False
if apply_score_threshold:
indices_threshold = []
for i in range(len(better_score)):
if better_score[i] >= -20: # -20 effectively disables filtering; use e.g. 0.2 as a real threshold
indices_threshold.append(i)
all_smiles = [all_smiles[i] for i in indices_threshold]
logP_path = [get_logP(Chem.MolFromSmiles(x)) for x in all_smiles]
QED_path = [Chem.QED.qed(Chem.MolFromSmiles(x)) for x in all_smiles]
scores_start = get_fp_scores(all_smiles, starting_smile) # similarity to starting structure
scores_target = get_fp_scores(all_smiles, target_smile) # similarity to target
data = np.array([scores_target, scores_start])
avg_score = np.average(data, axis=0)
better_score = avg_score - (np.abs(data[0] - data[1]))
better_score = ((1/9) * better_score**3) - ((7/9) * better_score**2) + ((19/12) * better_score)
print('Min {} Max {}'.format(min(better_score), max(better_score)))
# raise Exception('get vmax value')
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
cm = plt.cm.get_cmap('viridis')
sc = ax.scatter(logP_path, QED_path, c=better_score.tolist(), cmap=cm, s=13)
clb = plt.colorbar(sc)
sc = ax.plot([logP_starting, logP_target], [qed_starting, qed_target], 'o', c='black', markersize=7, linewidth=3) # TARGETS
clb.set_label('Joint Similarity', fontsize=10)
ax.set_xlabel('LogP', fontsize=10)
ax.set_ylabel('QED', fontsize=10)
plt.xlim([-4, 8])
ax.grid(True)
fig.tight_layout()
plt.show()
alphabet = list(selfies.get_semantic_robust_alphabet()) # semantically robust SELFIES alphabet
max_len_random_struct = max([len(get_selfie_chars(encoder(starting_smile))), len(get_selfie_chars(encoder(target_smile)))])
min_len_random_struct = min([len(get_selfie_chars(encoder(starting_smile))), len(get_selfie_chars(encoder(target_smile)))])
num_samples = len(logP_path)
random_selfies = []
for _ in range(num_samples):
selfie = ''
for i in range(random.randint(min_len_random_struct, max_len_random_struct)): # max_molecules_len = max random selfie string length
selfie = selfie + np.random.choice(alphabet, size=1)[0]
random_selfies.append(selfie)
random_smiles = [decoder(x) for x in random_selfies]
scores_start_rnd = get_fp_scores(random_smiles, starting_smile) # similarity to starting structure
scores_target_rnd = get_fp_scores(random_smiles, target_smile) # similarity to target
data_rnd = np.array([scores_target_rnd, scores_start_rnd])
avg_score_rnd = np.average(data_rnd, axis=0)
better_score_random = avg_score_rnd - (np.abs(data_rnd[0] - data_rnd[1]))
better_score_random = ((1/9) * better_score_random**3) - ((7/9) * better_score_random**2) + ((19/12) * better_score_random)
logP_path_random = [get_logP(Chem.MolFromSmiles(x)) for x in random_smiles]
QED_path_random = [Chem.QED.qed(Chem.MolFromSmiles(x)) for x in random_smiles]
# DISTRIBUTION PLOTS!
A = sns.kdeplot(logP_path_random, bw_method=0.2, label="Random SELFIES")
A = sns.kdeplot(logP_path, bw_method=0.2, label="Chemical path", color='yellowgreen')
plt.axvline(logP_starting, 0, 1.0, c='black') # vertical line
plt.axvline(logP_target, 0, 1.0, c='black') # vertical line
A.set_xlabel('LogP', fontsize=10)
A.set_ylabel('Density', fontsize=10)
plt.xlim([-4, 8])
plt.legend()
# plt.savefig('./final_saved/logP_distrb.svg', dpi=500)
plt.show()
B = sns.kdeplot(QED_path_random, bw_method=0.2, label="Random SELFIES")
B = sns.kdeplot(QED_path, bw_method=0.2, label="Chemical path", color='yellowgreen')
plt.axvline(qed_starting, 0, 1.0, c='black') # vertical line
plt.axvline(qed_target, 0, 1.0, c='black') # vertical line
B.set_xlabel('QED', fontsize=10)
B.set_ylabel('Density', fontsize=10)
plt.xlim([0, 1])
plt.legend()
plt.show()
```
# Median Molecule Formation:
### Imports for Obtaining median molecules between two input SMILES
```
def get_ECFP4(mol):
return AllChem.GetMorganFingerprint(mol, 2)
def get_fp_scores(smiles_back, target_smi):
smiles_back_scores = []
target = Chem.MolFromSmiles(target_smi)
fp_target = get_ECFP4(target)
for item in smiles_back:
mol = Chem.MolFromSmiles(item)
fp_mol = get_ECFP4(mol)
score = TanimotoSimilarity(fp_mol, fp_target)
smiles_back_scores.append(score)
return smiles_back_scores
def sanitize_smiles(smi):
try:
mol = smi2mol(smi, sanitize=True)
smi_canon = mol2smi(mol, isomericSmiles=False, canonical=True)
return (mol, smi_canon, True)
except:
return (None, None, False)
def get_median_mols(starting_smile, target_smile, num_tries, num_random_samples, collect_bidirectional, num_top_iter):
smiles_paths_dir1, smiles_paths_dir2 = get_compr_paths(starting_smile, target_smile, num_tries, num_random_samples, collect_bidirectional)
# Find the median molecule & plot:
all_smiles_dir_1 = [item for sublist in smiles_paths_dir1 for item in sublist] # all SMILES strings from dir1
all_smiles_dir_2 = [item for sublist in smiles_paths_dir2 for item in sublist] # all SMILES strings from dir2
all_smiles = [] # Collection of valid smile strings
for smi in all_smiles_dir_1 + all_smiles_dir_2:
if Chem.MolFromSmiles(smi) != None:
mol, smi_canon, _ = sanitize_smiles(smi)
all_smiles.append(smi_canon)
all_smiles = list(set(all_smiles))
scores_start = get_fp_scores(all_smiles, starting_smile) # similarity to starting structure
scores_target = get_fp_scores(all_smiles, target_smile) # similarity to target
data = np.array([scores_target, scores_start])
avg_score = np.average(data, axis=0)
better_score = avg_score - (np.abs(data[0] - data[1]))
better_score = ((1/9) * better_score**3) - ((7/9) * better_score**2) + ((19/12) * better_score)
best_idx = better_score.argsort()[-num_top_iter:][::-1]
best_smi = [all_smiles[i] for i in best_idx]
best_scores = [better_score[i] for i in best_idx]
return best_smi, best_scores
```
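The `better_score` used inside `get_median_mols` is a joint similarity: the average of a candidate's similarity to both endpoints, penalized by the absolute difference between them, then passed through a fixed cubic rescaling. A sketch of that scoring step in isolation (the coefficients are taken from the code above; the score values below are made up):

```python
def joint_similarity(sim_to_start, sim_to_target):
    """Average minus absolute difference, then the cubic rescaling
    (1/9)x^3 - (7/9)x^2 + (19/12)x used in the notebook."""
    x = (sim_to_start + sim_to_target) / 2 - abs(sim_to_start - sim_to_target)
    return (1 / 9) * x ** 3 - (7 / 9) * x ** 2 + (19 / 12) * x

# A balanced candidate outscores a lopsided one with the same mean similarity
print(joint_similarity(0.5, 0.5) > joint_similarity(0.9, 0.1))  # True
```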
## Obtain the best median molecules (i.e. possessing high joint similarity) for Tadalafil & Sildenafil
```
num_tries = 6
num_random_samples = 6
collect_bidirectional = True # Doubles the number of paths: source->target & target->source
apply_filter = False
num_top_iter = 12 # Number of molecules that are selected after each iteration
smi_1 = 'CN1CC(=O)N2C(C1=O)CC3=C(C2C4=CC5=C(C=C4)OCO5)NC6=CC=CC=C36' # Tadalafil
smi_2 = 'CCCC1=NN(C2=C1N=C(NC2=O)C3=C(C=CC(=C3)S(=O)(=O)N4CCN(CC4)C)OCC)C' # Sildenafil
# SMILES, joint-sim score
smiles_, best_scores = get_median_mols(smi_1, smi_2, num_tries, num_random_samples, collect_bidirectional, num_top_iter)
mol_ = [Chem.MolFromSmiles(x) for x in smiles_]
img=Draw.MolsToGridImage(mol_[:40],molsPerRow=4,subImgSize=(200,200))
img
```
```
import pandas as pd
import numpy as np
log = pd.read_csv('../data/log.xls', header=None, names=['user_id', 'time', 'bet', 'win'])
log.time = log.time.str.replace('[', '', regex=False)
log.time = pd.to_datetime(log.time)
log.head()
users = pd.read_csv('../data/users.xls', encoding="koi8-r", sep='\t', names=['user_id', 'email', 'geo'])
users.head()
sum(log.time.isna())
log_backup = log.copy()
log_backup.dropna(axis=1).shape
log_backup = log.copy()
log_backup.dropna(axis=0).shape
log_backup = log.copy()
for col in ['user_id', 'time']:
if sum(log_backup[col].isna()) > 0:
log_backup.drop(col, axis=1, inplace=True)
log_backup.shape
log_backup = log.copy()
log_backup.drop_duplicates(subset=['user_id', 'time']).shape
log.time.max().hour
# log = pd.read_csv('../data/log.xls', header=None, names=['user_id', 'time', 'bet', 'win'])
# log = log.dropna()
# log.columns = ['user_id', 'time', 'bet', 'win']
# log['time'] = log['time'].apply(lambda x: x[1:])
# log['time'] = pd.to_datetime(log['time'])
# log['time'] = log['time'].apply(lambda x: x.minute)
# log.time.head()
log['time'].dt.minute.value_counts().index[0]
log['time'].dt.month.value_counts(ascending=True).index[0]
log.time.dt.dayofweek.value_counts()[[5, 6]].sum()
def time_to_daytime(value):
if 0 <= value < 6:
return 'night'
elif 6 <= value < 12:
return 'morning'
elif 12 <= value < 18:
return 'afternoon'
elif 18 <= value < 24:
return 'evening'
log_backup = log.copy()
log_backup.dropna(axis=0, subset=['time'], inplace=True)
log_backup['time_of_day'] = log_backup.time.dt.hour.apply(time_to_daytime)
log_backup.time_of_day.value_counts().index[-1]
log.bet.fillna(0).value_counts()[0]
log.bet[0]
import math
def fill_win(win, bet):
if not math.isnan(win):
return win
else:
if math.isnan(bet):
return 0
else:
return -bet
log['win'] = log.apply(lambda row: fill_win(row.win, row.bet), axis=1)
log[log.win < 0].shape[0]
def calculate_sum_win(win, bet):
if win < 0:
return win
else:
return win - bet
log['net'] = log.apply(lambda row: calculate_sum_win(row.win, row.bet), axis=1)
round(log[log.win > 0].net.mean(), 0)
round(log[log.win > 0].net.median(), 0)
net_above_0 = log[log.win > 0]
net_above_0.net.plot(kind='box')
log.bet.mean(skipna=True)
log['bet'].dropna().mean()
log.bet.sum() / log.bet.dropna().shape[0]
np.mean(log.bet)
log.bet.mean()
log_backup = log.copy()
log_backup[['win', 'bet']] = log_backup[['win', 'bet']].fillna(0)
log_backup['net'] = log_backup.win - log_backup.bet
log_backup[log_backup.bet > 0].shape[0] * 100 / len(log_backup)
log_backup[log_backup.bet > 0].net.mean()
log_backup[log_backup.net < 0].net.mean()
print(f"% of loses = {log_backup[log_backup.net < 0].shape[0] * 100 / len(log_backup)}")
print(f"% of wins = {log_backup[log_backup.net > 0].shape[0] * 100 / len(log_backup)}")
log_backup = pd.read_csv('log.csv', header=None, names=['user_id', 'time', 'bet', 'win'])
min_bet = log_backup.bet.min()
min_bet_amount = log_backup[log_backup.bet == min_bet].shape[0]
log = pd.read_csv('../data/log.xls', header=None, names=['user_id', 'time', 'bet', 'win'])
log.time = log.time.str.replace('[', '', regex=False)
log.time = pd.to_datetime(log.time)
log.head()
users = pd.read_csv('../data/users.xls', encoding="koi8-r", sep='\t', names=['user_id', 'email', 'geo'])
users.head()
# Bring user_id to a single format in both datasets
users.user_id = users.user_id.apply(lambda x: x.lower())
# Drop erroneous user_id values
log = log[log.user_id != '#error']
log.user_id = log.user_id.str.split(' - ').apply(lambda x: x[1])
df = pd.merge(users, log, on='user_id')
df.shape
df.groupby('user_id').win.median().median()
df[['win', 'bet']] = df[['win', 'bet']].fillna(0)
df['net'] = df.apply(lambda row: calculate_sum_win(row.win, row.bet), axis=1)
df.head()
df.groupby('user_id').net.sum().median()
no_bets_per_person = []
for user in df.user_id.unique():
data = df[df.user_id == user]
no_bets = len(data[data.bet == 0])
if no_bets < len(data):
no_bets_per_person.append(no_bets)
np.mean(no_bets_per_person)
# another solution
# after the transformations, merge the data from the two files
users_log = pd.merge(log, users, on ='user_id')
# build two dataframes counting zero-bet and non-zero-bet visits per user
group = users_log[users_log.bet==0].groupby('user_id').bet.count()
group_not_null = users_log[users_log.bet>0].groupby('user_id').bet.count()
# and merge them on user_id
joined=pd.merge(group, group_not_null, on=['user_id'])
# Keep only rows where the visitor placed at least one bet
joined = joined[joined['bet_y']>0]
# And compute the average number of visits without a bet
joined['bet_x'].sum()/len(joined)
group = df[df.bet==0].groupby('user_id').bet.count()
group_not_null = df[df.bet>0].groupby('user_id').bet.count()
joined=pd.merge(group, group_not_null, on=['user_id'])
joined.index
time = []
df.sort_values('user_id', inplace=True)
for user in df.user_id.unique():
data = df[df.user_id == user]
no_bets = len(data[data.bet == 0])
if no_bets < len(data):
min_time_zero_bets = data[data.bet == 0].time.min()
min_time_norm_bets = data[data.bet != 0].time.min()
if min_time_norm_bets > min_time_zero_bets:
time.append(min_time_norm_bets - min_time_zero_bets)
else:
time.append(pd.Timedelta(0))
np.mean(time)
mean_bets_per_city = df[df.bet > 0].groupby('geo').bet.mean()
mean_bets_per_city.max() / mean_bets_per_city.min()
# log = pd.read_csv("../data/log.xls", header=None, names=['user_id','time','bet','win'])
# users = pd.read_csv("../data/users.xls", encoding='KOI8-R', sep='\t', names=['user_id','email','geo'])
# users.user_id = users.user_id.apply(lambda x: x.lower())
# log = log[log.user_id != '#error']
# log.user_id = log.user_id.str.split(' - ').apply(lambda x: x[1])
# df = pd.merge(log, users, on ='user_id')
# sample2 = df.groupby('geo').user_id.count()
df['time'].dt.minute.value_counts()
```
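The missing-value conventions used above (a missing win means the bet was lost; net winnings subtract the stake from a positive win) can be sketched without pandas. The row values below are made up for illustration:

```python
import math

def fill_win(win, bet):
    """Missing win becomes a loss of the bet, or 0 if bet is also missing."""
    if not math.isnan(win):
        return win
    return 0.0 if math.isnan(bet) else -bet

def net_result(win, bet):
    """A loss stays as-is; a win nets out the stake."""
    return win if win < 0 else win - bet

nan = float('nan')
rows = [(100.0, 40.0), (nan, 25.0), (nan, nan)]  # (win, bet) pairs
results = [net_result(fill_win(w, b), 0.0 if math.isnan(b) else b)
           for w, b in rows]
print(results)  # [60.0, -25.0, 0.0]
```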
# Band Ratios Conflations
This notebook steps through how band ratio measures are underdetermined.
By 'underdetermined', we mean that the same value, or same change in value between measures, can arise from different underlying causes.
This shows that band ratios are a non-specific measure.
As an example case, we use the theta-beta ratio.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from fooof import FOOOF
from fooof.sim import gen_power_spectrum
from fooof.plts.spectra import (plot_spectrum, plot_spectra,
plot_spectrum_shading, plot_spectra_shading)
# Import custom project code
import sys
sys.path.append('../bratios')
from ratios import calc_band_ratio
from paths import FIGS_PATHS as fp
# Settings
SAVE_FIG = False
PLOT_TITLES = True # Whether to plot titles on each axis
# Plot settings
shade_color = '#0365C0'
# Band Settings
theta_band = [4, 8]
beta_band = [20, 30]
# Set up index helpers
cf_ind = 0
pw_ind = 1
bw_ind = 2
# Simulated power spectra settings
freq_range = [1, 35]
freq_res = 0.1
nlv = 0
# Define default aperiodic values
ap_def = [0, 1]
# Define default periodic values
theta_def = [6, 0.4, 1]
alpha_def = [10, 0.5, 0.75]
beta_def = [25, 0.3, 1.5]
```
## Comparing Band Ratio Values
First, let's consider a hypothetical investigation comparing band ratio measures between two groups.
The typical interpretation of finding a difference between measured band ratios is that the relative power of the two oscillation bands differs: that is, the change in ratio could come from a change in the power of the low band, the power of the high band, or both.
Here, we will show that there are actually many more ways in which one could measure this difference.
A numerically identical change in theta / beta ratio can be obtained from:
#### Periodic Changes
- a change in theta power
- a change in theta bandwidth
- a change in beta center frequency
- a change in beta power
- a change in beta bandwidth
#### Aperiodic Changes
- a change in aperiodic exponent
- with or without oscillations present
Note that the specific values in the simulations below have been tuned to create numerically identical changes in measured band ratio.
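For reference, a band ratio is simply the average power in the low band divided by the average power in the high band. The project's actual implementation is `calc_band_ratio` in `ratios.py`; the stand-in below, with a toy 1/f spectrum, only illustrates the definition:

```python
def band_ratio(freqs, powers, low_band, high_band):
    """Mean power in low_band divided by mean power in high_band."""
    def mean_in(band):
        vals = [p for f, p in zip(freqs, powers) if band[0] <= f <= band[1]]
        return sum(vals) / len(vals)
    return mean_in(low_band) / mean_in(high_band)

freqs = list(range(1, 36))
powers = [1 / f for f in freqs]  # toy aperiodic (1/f) spectrum
tbr = band_ratio(freqs, powers, [4, 8], [20, 30])
# Low frequencies carry more power in a 1/f spectrum, so tbr > 1
```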
```
# Create a baseline PSD, with oscillations, to compare to
freqs, ps_base = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_def],
nlv, freq_res)
```
### Periodic Changes
```
## CF
# Change in center frequency - high band
beta_cf = beta_def.copy(); beta_cf[cf_ind] = 19.388
freqs, ps_be_cf = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_cf],
nlv, freq_res)
## PW
# Changes in oscillation power - low band
theta_pw = theta_def.copy(); theta_pw[pw_ind] = 0.5041
freqs, ps_th_pw = gen_power_spectrum(freq_range, ap_def,
[theta_pw, alpha_def, beta_def],
nlv, freq_res)
# Changes in oscillation power - high band
beta_pw = beta_def.copy(); beta_pw[pw_ind] = 0.1403
freqs, ps_be_pw = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_pw],
nlv, freq_res)
## BW
# Changes in oscillation bandwidth - low band
theta_bw = theta_def.copy(); theta_bw[bw_ind] = 1.61
freqs, ps_th_bw = gen_power_spectrum(freq_range, ap_def,
[theta_bw, alpha_def, beta_def],
nlv, freq_res)
# Changes in oscillation bandwidth - high band
beta_bw = beta_def.copy(); beta_bw[bw_ind] = 0.609
freqs, ps_be_bw = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_def, beta_bw],
nlv, freq_res)
# Changes in other band - center frequency
alpha_cf = alpha_def.copy(); alpha_cf[cf_ind] = 8.212
freqs, ps_al_cf = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_cf, beta_def],
nlv, freq_res)
# Changes in other band - bandwidth
alpha_bw = alpha_def.copy(); alpha_bw[bw_ind] = 1.8845
freqs, ps_al_bw = gen_power_spectrum(freq_range, ap_def,
[theta_def, alpha_bw, beta_def],
nlv, freq_res)
# Collect all the power spectra together
spectra_data = {'Theta Frequency' : None,
'Theta Power' : ps_th_pw,
'Theta Bandwidth' : ps_th_bw,
'Alpha Frequency' : ps_al_cf,
'Alpha Power' : None,
'Alpha Bandwidth' : ps_al_bw,
'Beta Frequency' : ps_be_cf,
'Beta Power' : ps_be_pw,
'Beta Bandwidth' : ps_be_bw}
# Calculate the theta / beta ratio of the baseline power spectrum
base_br = calc_band_ratio(freqs, ps_base, theta_band, beta_band)
# Calculate changes in theta / beta ratios
diffreqs = {}
for label, spectra in spectra_data.items():
if np.all(spectra):
comp_br = calc_band_ratio(freqs, spectra, theta_band, beta_band)
diffreqs[label] = base_br - comp_br
# Check the computed ratio values of the base & (last compared) spectrum
print('TBR of base spectrum is: {:1.3f}'.format(base_br))
print('TBR of last comp spectrum is: {:1.3f}'.format(comp_br))
# Check TBR difference measures from periodic changes
for label, diff in diffreqs.items():
print('TBR difference from {:20} is \t {:1.3f}'.format(label, diff))
# Create figure of periodic changes
title_settings = {'fontsize': 16, 'fontweight': 'bold'}
fig, ax = plt.subplots(3, 3, figsize=(15, 14))
for axis, (title, data) in zip(ax.flatten(), spectra_data.items()):
if not np.all(data): continue
plot_spectra_shading(freqs, [ps_base, data], [theta_band, beta_band],
shade_colors=shade_color,
log_freqs=False, log_powers=True, ax=axis)
if PLOT_TITLES:
axis.set_title(title, **title_settings)
axis.set_xlim([0, 35])
axis.set_ylim([-1.75, 0])
axis.xaxis.label.set_visible(False)
axis.yaxis.label.set_visible(False)
# Turn off empty axes
ax[0, 0].axis('off')
ax[1, 1].axis('off')
fig.subplots_adjust(hspace=.3)
fig.subplots_adjust(wspace=.3)
if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'Underdetermined-Periodic', 'pdf'))
```
Each panel above plots two PSDs, where the blue curve is the same reference power spectrum plotted in all panels, and the orange is a unique comparison spectrum.
The difference between TBR from the blue and orange curve is the same (see cell above) across each panel.
This shows that multiple spectral parameters could change to arrive at identical differences in a ratio measure.
#### Periodic Notes
Note that for a given change (or direction of change) in theta / beta ratio (TBR), there is only one center frequency change that can produce it.
This holds for the case simulated here, in which the baseline spectrum has oscillations entirely within the band ranges. In this example the change is a relative increase in 'theta', and there is no way to increase relative theta by changing theta CF alone. That is a consequence of the chosen comparison spectrum; in another scenario, a theta CF change could also shift the measured ratio.
### Aperiodic Changes
The same change in ratio can also be driven from changes in aperiodic properties.
This can happen with or without oscillations even being present.
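To see this without any FOOOF machinery: for an idealized, oscillation-free 1/f^χ spectrum, steepening the exponent χ alone raises low-frequency power relative to high-frequency power, and so inflates a theta/beta-style ratio. A hedged sketch (the exponent values are illustrative, not the tuned values used in the simulations below):

```python
def power_law_ratio(exponent, low_band=(4, 8), high_band=(20, 30)):
    """Theta/beta-style ratio of an oscillation-free 1/f**exponent spectrum."""
    def mean_power(band):
        fs = list(range(band[0], band[1] + 1))
        return sum(f ** -exponent for f in fs) / len(fs)
    return mean_power(low_band) / mean_power(high_band)

# A steeper aperiodic exponent alone increases the measured ratio
print(power_law_ratio(1.0) < power_law_ratio(1.5))  # True
```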
```
# Change in aperiodic exponent
ap_shift = [0.13, 1.1099]
freqs, ps_ap_ex = gen_power_spectrum(freq_range, ap_shift,
[theta_def, alpha_def, beta_def],
nlv, freq_res)
# Use a new base and transformation, without any oscillations
freqs, ps_new_base = gen_power_spectrum(freq_range, ap_def, [],
nlv, freq_res)
ap_shift = [0.13, 1.1417]
freqs, ps_new_apch = gen_power_spectrum(freq_range, ap_shift, [],
nlv, freq_res)
# Calculate the differences in ratio from baseline spectra
d_ap_osc = base_br - calc_band_ratio(freqs, ps_ap_ex, theta_band, beta_band)
d_ap_no_osc = calc_band_ratio(freqs, ps_new_base, theta_band, beta_band) - \
calc_band_ratio(freqs, ps_new_apch, theta_band, beta_band)
# Check TBR difference measures from aperiodic changes
base_text = 'TBR difference from the aperiodic component '
print(base_text + 'with oscillations is \t {:1.3f}'.format(d_ap_osc))
print(base_text + 'without oscillations is \t {:1.3f}'.format(d_ap_no_osc))
# Collect together components to plot
ap_bases = [ps_base, ps_new_base]
ap_diffs = [ps_ap_ex, ps_new_apch]
# Create aperiodic differences figure
fig, ax = plt.subplots(2, 1, figsize=(5, 9))
for ps_bs, ps_diff, axis in zip(ap_bases, ap_diffs, ax.flatten()):
    plot_spectra_shading(freqs, [ps_bs, ps_diff], [theta_band, beta_band],
                         shade_colors=shade_color,
                         log_freqs=False, log_powers=True, ax=axis)
    if PLOT_TITLES:
        axis.set_title('Aperiodic Exponent', **title_settings)
    # Plot aesthetics
    axis.set_xlim([0, 35])
    axis.set_ylim([-1.75, 0])
    axis.xaxis.label.set_visible(False)
    axis.yaxis.label.set_visible(False)
fig.subplots_adjust(wspace=.3)
if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'Underdetermined-Aperiodic', 'pdf'))
```
#### Conclusions
In this example, we have explored changes to measured band ratios by varying different spectral parameters.
Given an observed change in a band ratio measure, there is no way to tell which spectral parameter has actually changed.
Variations in multiple spectral parameters can lead to the exact same change in ratio measure.
There is no reason to think the change even reflects oscillatory activity, given that aperiodic shifts can drive this effect.
In this notebook, we simulated variations in one parameter at a time, but in practice, all of these changes could happen together.
In subsequent notebooks, we will further characterize these findings: by simulating changes in each parameter to estimate how strongly each one affects ratio measures, and by simulating concurrent changes in multiple parameters to explore their interactions.
## Same Ratio, Different Spectra
So far we have seen how multiple possible changes in power spectra can lead to the same measured difference in band ratio measures across power spectra.
What if we calculate band ratio measures and find that they are the same? Can we infer that the analyzed power spectra are in some ways equivalent?
Next, let's examine if and how different power spectra can have the same band ratio value.
```
# Create a collection of spectra with different properties, with the same measured ratio value
freqs, ps1 = gen_power_spectrum(freq_range, [0, 0.9059],
[theta_def, alpha_def, beta_def],
nlv, freq_res)
freqs, ps2 = gen_power_spectrum(freq_range, [0, 0.9059],
[[6, 0.5, 2], alpha_def, [25, 0.3544, 5]],
nlv, freq_res)
freqs, ps3 = gen_power_spectrum(freq_range, [0.25, 1.2029],
[[6, 0.10, 1], alpha_def, beta_def],
nlv, freq_res)
freqs, ps4 = gen_power_spectrum(freq_range, [0.25, 1.2029],
[theta_def, alpha_def, [25, 0.66444, 1.5]],
nlv, freq_res)
# Collect the generated spectra together
spectra_list = [ps1, ps2, ps3, ps4]
# Calculate the ratio value for each spectrum
for spectrum in spectra_list:
    print('Ratio value:\t {:1.3f}'.format(calc_band_ratio(freqs, spectrum, theta_band, beta_band)))
# Plot all the power spectra together
plot_spectra_shading(freqs, spectra_list, [theta_band, beta_band],
shade_colors=shade_color, linewidth=3,
log_freqs=False, log_powers=True)
if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'EquivalentRatioSpectra', 'pdf'))
```
In the plot above, we can see four different power spectra.
However, each of these power spectra has the exact same measured theta / beta ratio value.
Thus we can conclude that measuring the same band ratio value for different power spectra should not be taken to imply that they are in any way equivalent.
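One quick way to see why equal ratios cannot imply equivalent spectra is that a band ratio is scale-invariant: any overall rescaling of a power spectrum (an offset shift in log power) cancels out of the measure. A minimal sketch, using plain NumPy with a mean-power band ratio assumed in place of `calc_band_ratio`:

```python
import numpy as np

# Minimal stand-in for calc_band_ratio: mean low-band power / mean high-band power
freqs = np.arange(1, 35.5, 0.5)
theta_band, beta_band = (4, 8), (20, 30)

def band_ratio(ps, low_band, high_band):
    low = ps[(freqs >= low_band[0]) & (freqs <= low_band[1])].mean()
    high = ps[(freqs >= high_band[0]) & (freqs <= high_band[1])].mean()
    return low / high

# A 1/f spectrum, and the same spectrum scaled by a constant: the two differ
# at every single frequency...
ps_a = 1 / freqs
ps_b = 2.5 * ps_a

# ...yet they have exactly the same band ratio, since the scale cancels
assert np.allclose(band_ratio(ps_a, theta_band, beta_band),
                   band_ratio(ps_b, theta_band, beta_band))
```

Beyond this exact cancellation, the spectra simulated above show that many non-trivial combinations of aperiodic and peak parameters can also land on the same ratio value.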