# SVM
```
import numpy as np
import sympy as sym
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(1)
```
## Simple Example Application
For a simple dataset (i.e., one that is linearly separable and contains no noise points):
**Algorithm:**
Input: a linearly separable training set $T=\{(x_1,y_1),(x_2,y_2),...,(x_N,y_N)\}$, where $x_i \in \mathcal{X}=\mathbb{R}^n$, $y_i \in \mathcal{Y}=\{+1,-1\}$, $i=1,2,...,N$
Output: the separating hyperplane and the classification decision function
(1) Construct and solve the constrained optimization problem
$\underset{\alpha}{min}$ $\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>-\sum_{i=1}^{N}\alpha_i$
s.t. $\sum_{i=1}^{N}\alpha_i y_i=0$
$\alpha_i \geq 0,i=1,2,...,N$
to obtain the optimal $\alpha^{*}=(\alpha_1^{*},\alpha_2^{*},...,\alpha_N^{*})$.
The samples whose components satisfy $\alpha_j^{*}>0$ are the support vectors.
(2) Compute
$w^{*} = \sum_{i=1}^{N}\alpha_i^{*}y_ix_i$
Choose one positive component $\alpha_j^{*}>0$ of $\alpha^{*}$ and compute
$b^{*}=y_j-\sum_{i=1}^{N}\alpha_i^{*}y_i<x_i \cdot x_j>$
(3) The separating hyperplane is
$w^{*}\cdot x + b^{*}=0$
and the classification decision function is
$f(x)=sign(w^{*}\cdot x + b^{*})$
where sign maps positive values to 1 and negative values to -1.
```
def loadSimpleDataSet():
    """
    Load the simple toy dataset.
    Returns:
        the training features and labels
    """
    train_x = np.array([[3,3],[4,3],[1,1]]).T
    train_y = np.array([[1,1,-1]]).T
    return train_x,train_y

train_x,train_y = loadSimpleDataSet()
print("train_x shape is : ",train_x.shape)
print("train_y shape is : ",train_y.shape)
plt.scatter(train_x[0,:],train_x[1,:],c=np.squeeze(train_y))
```
To conveniently evaluate $\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>$,
we first compute the pairwise inner products of train_x, train_y, and the alphas, then multiply them elementwise and sum.
Compute the inner products of train_x:
```
Inner_train_x = np.dot(train_x.T,train_x)
print("Train_x is:\n",train_x)
print("Inner train x is:\n",Inner_train_x)
```
Compute the inner products of train_y:
```
Inner_train_y = np.dot(train_y,train_y.T)
print("Train y is:\n",train_y)
print("Inner train y is:\n",Inner_train_y)
```
Compute the inner product of the alphas (the Lagrange multipliers). Note that we hold all multipliers fixed except two: the theory tells us to fix all but two alphas and then iteratively update that pair within the full set of alphas. This example is simple enough, with only 3 sample points (in fact $\alpha_1,\alpha_3$ are the support vectors).
Substituting the constraint:
$\sum_{i=1}^3\alpha_i y_i=\alpha_1y_1+\alpha_2y_2+\alpha_3y_3 =0 \Rightarrow $
--
$\alpha_3 = -(\alpha_1y_1+\alpha_2y_2)/y_3 $
--
```
alphas_sym = sym.symbols('alpha1:4')
alphas = np.array([alphas_sym]).T
alphas[-1]= -np.sum(alphas[:-1,:]*train_y[:-1,:]) / train_y[-1,:]
Inner_alphas = np.dot(alphas,alphas.T)
print("alphas is: \n",alphas)
print("Inner alphas is:\n",Inner_alphas)
```
Now solve for the optimal $\alpha^{*}=(\alpha_1^{*},\alpha_2^{*},...,\alpha_n^{*})$:
$\underset{\alpha}{min}$ $\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j <x_i \cdot x_j>-\sum_{i=1}^{N}\alpha_i$
**Note:**
This uses the sympy library; for details see [Yuzipi — the Sympy symbolic computation library](https://blog.csdn.net/pipisorry/article/details/39123247)
or [Sympy](https://www.sympy.org/en/index.html)
```
def compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y):
    """
    Parameters:
        alphas: initial Lagrange multipliers, shape (n,1),
            where n is the number of examples.
        Inner_alphas: inner product of the alphas.
        Inner_train_x: inner product of the training features.
        Inner_train_y: inner product of the training labels.
    return:
        s_alpha: the simplified value of the dual function
    """
    s_alpha = sym.simplify(1/2*np.sum(Inner_alphas * Inner_train_x*Inner_train_y) - (np.sum(alphas)))
    return s_alpha

s_alpha = compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y)
print('s_alpha is:\n ',s_alpha)
```
Now take the partial derivative with respect to each alpha and set it to zero.
```
def Derivative_alphas(alphas,s_alpha):
    """
    Parameters:
        alphas: Lagrange multipliers.
        s_alpha: dual function
    return:
        bool value.
        True: all constraints are met, i.e. every Lagrange multiplier > 0.
        False: some constraint is violated, i.e. some Lagrange multiplier < 0.
    """
    cache_derivative_alpha = []
    for alpha in alphas.squeeze()[:-1]: # skip the last (substituted) element.
        derivative = s_alpha.diff(alpha) # diff: derivative
        cache_derivative_alpha.append(derivative)
    derivative_alpha = sym.solve(cache_derivative_alpha,set=True) # solve for the alphas.
    print('derivative_alpha is: ',derivative_alpha)
    # check alpha > 0
    check_alpha_np = np.array(list(derivative_alpha[1])) > 0
    return check_alpha_np.all()

check_alpha = Derivative_alphas(alphas,s_alpha)
print("Constraint lagrange multiplier is: ",check_alpha)
```
We can see that $\alpha_2<0$, which violates $\alpha_2 \geqslant 0$, so this extremum cannot be used.
-------------
Since the stationary point violates the non-negativity constraint on the Lagrange multipliers, we instead fix one $\alpha_i$ at a time, set the other free $\alpha$'s to 0, solve the zero-derivative equation for the current $\alpha_i$, and substitute it back into the dual function to obtain a value. We then compare all the resulting values, pick the $\alpha_i$ that attains the minimum, and from among those with $\alpha_i>0$ recover the alphas that were fixed at the start.
**Algorithm:**
Input: the array of Lagrange multipliers, excluding the alphas fixed at the start
Output: the optimal Lagrange multipliers, i.e. the support vectors
(1) Append a row or column of zeros to the input array
- alphas_zeros = np.zeros((alphas.shape[0],1))[:-1]
- alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros]
(2) Apply a "mask" to the augmented array, so that one $\alpha$ is kept and all the other $\alpha$'s become 0.
- mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array.
- mask_alpha.mask[i] = True # masked alpha
- Using a masked array with sympy raises a warning about treating masked values as None; this is harmless and should not change the alpha symbols in the dual function.
(3) Substitute the masked array into the dual function, take the partial derivative with respect to $\alpha_i$, and set it to 0 to solve for $\alpha_i$
(4) Substitute the solved $\alpha_i$ (with all other alphas equal to 0) back into the dual function to get its value
(5) Compare all the resulting dual-function values and pick the alpha set attaining the minimum; then compute the alphas that were fixed at the start.
```
def choose_best_alphas(alphas,s_alpha):
    """
    Parameters:
        alphas: Lagrange multipliers.
        s_alpha: dual function
    return:
        best_vector: the best support vector multipliers.
    """
    # add a zero column to alphas. about 2 lines.
    alphas_zeros = np.zeros((alphas.shape[0],1))[:-1]
    alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros]
    # cache some parameters.
    cache_alphas_add = np.zeros((alphas.shape[0],1))[:-1] # cache the solved alphas.
    cache_alphas_compute_result = np.zeros((alphas.shape[0],1))[:-1] # cache the dual function values
    cache_alphas_to_compute = alphas_add_zeros.copy() # values used to find the minimum dual function value.
    for i in range(alphas_add_zeros.shape[0]):
        mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array.
        mask_alpha.mask[i] = True # masked alpha
        value = sym.solve(s_alpha.subs(mask_alpha).diff())[0] # solve for alpha_i
        cache_alphas_add[i] = value
        cache_alphas_to_compute[i][1] = value
        cache_alphas_compute_result[i][0] = s_alpha.subs(cache_alphas_to_compute) # evaluate the dual function.
        cache_alphas_to_compute[i][1] = 0 # make sure the other alphas stay 0.
    min_alpha_value_index = cache_alphas_compute_result.argmin()
    best_vector = np.array([cache_alphas_add[min_alpha_value_index]] + [- cache_alphas_add[min_alpha_value_index] / train_y[-1]])
    return [min_alpha_value_index]+[2],best_vector

min_alpha_value_index,best_vector = choose_best_alphas(alphas,s_alpha)
print(min_alpha_value_index)
print('support vector machine is:',alphas[min_alpha_value_index])
```
$w^{*} = \sum_{i=1}^{N}\alpha_i^{*}y_ix_i$
```
w = np.sum(np.multiply(best_vector , train_y[min_alpha_value_index].T) * train_x[:,min_alpha_value_index],axis=1)
print("W is: ",w)
```
Choose one positive component $\alpha_j^{*}>0$ of $\alpha^{*}$ and compute
$b^{*}=y_j-\sum_{i=1}^{N}\alpha_i^{*}y_i<x_i \cdot x_j>$
Here I choose alpha1.
```
b = train_y[0] - np.sum(best_vector.T * np.dot(train_x[:,min_alpha_value_index].T,train_x[:,min_alpha_value_index])[0]
                        * train_y[min_alpha_value_index].T)
print("b is: ",b)
```
So the hyperplane is:
$f(x)=sign[wx+b]$
# SMO
Here we implement a simple version of the SMO algorithm — "simple" meaning it is slower than SVC and its automatic parameter selection is not as good as SVC's. Still, by tuning the parameters it can reach results comparable to SVC.
### 算法:
#### 1. SMO chooses as its first variable $\alpha_1$ the sample point that most severely violates the KKT conditions, i.e. one of:
$\alpha_i=0\Leftrightarrow y_ig(x_i)\geqslant1$
$0<\alpha_i<C\Leftrightarrow y_ig(x_i)=1$
$\alpha_i=C \Leftrightarrow y_ig(x_i)\leqslant1$
where:
$g(x_i)=\sum_{j=1}^{N}\alpha_jy_jK(x_j,x_i)+b$
**Note:**
- Initially the $\alpha_i$ are all set to 0, one per sample.
- The check is carried out within a tolerance $\varepsilon$.
- During the check we first traverse all sample points satisfying $0<\alpha_i<C$, i.e. the support vectors on the margin boundary, looking for the worst KKT violator.
- If no point satisfies $0<\alpha_i<C$, we traverse all sample points looking for the worst KKT violator.
- Here the *worst KKT violator* can be taken as the point with the smallest $y_ig(x_i)$, which becomes $\alpha_1$.
#### 2. SMO chooses the second variable $\alpha_2$ so that it changes sufficiently
Since $\alpha_2^{new}$ depends on $|E_1-E_2|$, we want $|E_1-E_2|$ to be as large as possible. To speed things up, a simple shortcut is:
if $E_1$ is positive, choose the smallest $E_i$ as $E_2$; if $E_1$ is negative, choose the largest $E_i$ as $E_2$. To save computation time, the $E_i$ are kept in a list.
**Note:**
- If the $\alpha_2$ found this way does not decrease the objective enough, a heuristic fallback continues the search: try each support vector on the margin boundary as $\alpha_2$ in turn until the objective decreases sufficiently; if that still fails, discard this $\alpha_1$ and choose a new one.
- This simple SMO implementation does not handle that special case.
#### 3. Compute $\alpha_1^{new},\alpha_2^{new}$
Computing $\alpha_1^{new},\alpha_2^{new}$ prepares us to compute $b_i$ and $E_i$.
3.1 Compute the bounds on $\alpha_2$:
- if $y_1 \neq y_2$: $L=max(0,\alpha_2^{old}-\alpha_1^{old})$, $H=min(C,C+\alpha_2^{old}-\alpha_1^{old})$
- if $y_1 = y_2$: $L=max(0,\alpha_2^{old}+\alpha_1^{old}-C)$, $H=min(C,\alpha_2^{old}+\alpha_1^{old})$
3.2 Compute $\alpha_2^{new,unc} = \alpha_2^{old}+\frac{y_2(E_1-E_2)}{\eta}$
where:
$\eta = K_{11}+K_{22}-2K_{12}$, with $K_{ij}$ denoting the kernel function (e.g. a Gaussian or polynomial kernel).
3.3 Clip $\alpha_2$
$\alpha_2^{new}=\left\{\begin{matrix}
H, &\alpha_2^{new,unc}>H \\
\alpha_2^{new,unc},& L\leqslant \alpha_2^{new,unc}\leqslant H \\
L,& \alpha_2^{new,unc}<L
\end{matrix}\right.$
3.4 Compute $\alpha_1^{new}$
$\alpha_1^{new}=\alpha_1^{old}+y_1y_2(\alpha_2^{old}-\alpha_2^{new})$
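A small numeric sketch of the $\alpha_2$ update and clipping above (not from the original notebook; the sample points, errors, and bounds are made up), using a linear kernel:

```python
import numpy as np

def clip(alpha, L, H):
    """Clip alpha2 into the feasible interval [L, H]."""
    return float(np.clip(alpha, L, H))

# two hypothetical sample points, linear kernel K(a, b) = a . b
x1, x2 = np.array([3.0, 3.0]), np.array([4.0, 3.0])
y2 = -1.0
K = lambda a, b: float(a @ b)

eta = K(x1, x1) + K(x2, x2) - 2 * K(x1, x2)  # K11 + K22 - 2*K12 = 18 + 25 - 42 = 1
alpha2_old, E1, E2 = 0.2, 0.5, -0.3
alpha2_unclipped = alpha2_old + y2 * (E1 - E2) / eta  # 0.2 - 0.8 = -0.6
alpha2_new = clip(alpha2_unclipped, 0.0, 1.0)         # falls below L, so clipped to 0
```

The unconstrained update can land outside $[L, H]$, which is exactly why the clipping step exists.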
#### 4. Compute the threshold b and the errors $E_i$
$b_1^{new}=-E_1-y_1K_{11}(\alpha_1^{new}-\alpha_1^{old})-y_2K_{21}(\alpha_2^{new}-\alpha_2^{old})+b^{old}$
$b_2^{new}=-E_2-y_1K_{12}(\alpha_1^{new}-\alpha_1^{old})-y_2K_{22}(\alpha_2^{new}-\alpha_2^{old})+b^{old}$
If $\alpha_1^{new}$ and $\alpha_2^{new}$ both satisfy $0<\alpha_i^{new}<C,i=1,2$,
then $b_1^{new}=b_2^{new}=b^{new}$.
If $\alpha_1^{new}$ and $\alpha_2^{new}$ are 0 or C, then every value between $b_1^{new}$ and $b_2^{new}$
satisfies the KKT threshold conditions, and we take the midpoint as $b^{new}$.
$E_i^{new}=(\sum_{j \in S}y_j\alpha_jK(x_j,x_i))+b^{new}-y_i$
where $S$ is the set of all support vectors $x_j$.
#### 5. Update the parameters
Update $\alpha_i$, $E_i$, and $b$.
#### Note:
After training, the vast majority of the $\alpha_i$ components are 0; the few nonzero components correspond to the support vectors.
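A minimal sketch of how the support vectors are read off after training (the alpha values and the tolerance threshold are made up for illustration; only components above a small tolerance are kept, since tiny residual values are numerical noise):

```python
import numpy as np

alpha = np.array([0.0, 0.25, 1e-12, 0.75, 0.0])  # hypothetical trained multipliers
sv_idx = np.where(alpha > 1e-8)[0]               # indices of the support vectors
print(sv_idx)
```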
### A simple SMO example
Load the iris data from scikit-learn; the train/test split changes on every run.
```
# data
def create_data():
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df['label'] = iris.target
    df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
    data = np.array(df.iloc[:100, [0, 1, -1]])
    for i in range(len(data)):
        if data[i,-1] == 0:
            data[i,-1] = -1
    return data[:,:2], data[:,-1]

X, y = create_data()
# split into training and test samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
plt.scatter(X[:,0],X[:,1],c=y)
```
### Building the SMO algorithm
```
class SVM:
    def __init__(self,max_iter = 100,kernel = 'linear',C=1.,is_print=False,sigma=1):
        """
        Parameters:
            max_iter: maximum number of iterations
            kernel: kernel function; only "linear" and "gaussian" are defined here
            sigma: parameter of the Gaussian kernel
            C: penalty coefficient (slack variable)
            is_print: whether to print progress
        """
        self.max_iter = max_iter
        self.kernel = kernel
        self.C = C # slack penalty C
        self.is_print = is_print
        self.sigma = sigma

    def init_args(self,features,labels):
        """
        self.m: number of samples
        self.n: number of features
        """
        self.m,self.n = features.shape
        self.X = features
        self.Y = labels
        self.b = 0.
        self.alpha = np.zeros(self.m) + 0.0001
        # keep the E_i values in a list
        self.E = [self._E(i) for i in range(self.m)]

    def _g(self,i):
        """
        Prediction g(x_i).
        """
        g_x = np.sum(self.alpha*self.Y*self._kernel(self.X[i],self.X)) + self.b
        return g_x

    def _E(self,i):
        """
        E(x) is the difference between the prediction g(x) and the label y.
        """
        g_x = self._g(i) - self.Y[i]
        return g_x

    def _kernel(self,x1,x2):
        """
        Evaluate the kernel.
        """
        if self.kernel == "linear":
            return np.sum(np.multiply(x1,x2),axis=1)
        if self.kernel == "gaussian":
            return np.exp(-np.sum((x1-x2)**2,axis=1)/(2*self.sigma**2))

    def _KKT(self,i):
        """
        Check the KKT conditions.
        """
        # the check only holds within a tolerance epsilon, hence the rounding
        y_g = np.round(np.float64(np.multiply(self._g(i),self.Y[i])))
        if self.alpha[i] == 0:
            return y_g >= 1,y_g
        elif 0<self.alpha[i]<self.C:
            return y_g == 1,y_g
        elif self.alpha[i] == self.C:
            return y_g <=1,y_g
        else:
            raise ValueError

    def _init_alpha(self):
        """
        The outer loop first traverses all samples with 0<a<C and checks the KKT conditions.
        Samples with 0<a<C are the support vectors on the margin boundary.
        """
        # given how np.where works here, alpha must have shape (m,)
        index_array = np.where(np.logical_and(self.alpha>0,self.alpha<self.C))[0]
        if len(index_array) !=0:
            cache_list = []
            for i in index_array:
                bool_,y_g = self._KKT(i)
                if not bool_:
                    cache_list.append((y_g,i))
        # otherwise traverse the whole sample set
        else:
            cache_list = []
            for i in range(self.m):
                bool_,y_g = self._KKT(i)
                if not bool_:
                    cache_list.append((y_g,i))
        # pick the worst KKT violator, i.e. the sample with the smallest g(x_i)*y_i
        min_i = sorted(cache_list,key=lambda x:x[0])[0][1]
        # choose the second variable alpha2
        E1 = self.E[min_i]
        if E1 > 0:
            j = np.argmin(self.E)
        else:
            j = np.argmax(self.E)
        return min_i,j

    def _prune(self,alpha,L,H):
        """
        Clip alpha into [L, H].
        """
        if alpha > H:
            return H
        elif L<=alpha<=H:
            return alpha
        elif alpha < L:
            return L
        else:
            raise ValueError

    def fit(self,features, labels):
        self.init_args(features, labels)
        for t in range(self.max_iter):
            # find alpha1 and alpha2
            i1,i2 = self._init_alpha()
            # compute the bounds
            if self.Y[i1] == self.Y[i2]: # same sign
                L = max(0,self.alpha[i2]+self.alpha[i1]-self.C)
                H = min(self.C,self.alpha[i2]+self.alpha[i1])
            else:
                L = max(0,self.alpha[i2]-self.alpha[i1])
                H = min(self.C,self.C+self.alpha[i2]-self.alpha[i1])
            # compute the threshold b_i and the errors E_i
            E1 = self.E[i1]
            E2 = self.E[i2]
            eta = self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1]) + \
                  self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2]) - \
                  2 * self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2])
            if eta <=0:
                continue
            alpha2_new_nuc = self.alpha[i2] + (self.Y[i2] * (E1-E2) /eta)
            # clip alpha2_new_nuc
            alpha2_new = self._prune(alpha2_new_nuc,L,H)
            alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * (self.alpha[i2]-alpha2_new)
            # compute b_i
            b1_new = -E1-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1])*(alpha1_new - self.alpha[i1])\
                     - self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i1])*(alpha2_new - self.alpha[i2]) + self.b
            b2_new = -E2-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2])*(alpha1_new - self.alpha[i1])\
                     - self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2])*(alpha2_new - self.alpha[i2]) + self.b
            if 0 < alpha1_new < self.C:
                b_new = b1_new
            elif 0 < alpha2_new < self.C:
                b_new = b2_new
            else:
                # take the midpoint
                b_new = (b1_new + b2_new) / 2
            # update the parameters
            self.alpha[i1] = alpha1_new
            self.alpha[i2] = alpha2_new
            self.b = b_new
            self.E[i1] = self._E(i1)
            self.E[i2] = self._E(i2)
        if self.is_print:
            print("Train Done!")

    def predict(self,data):
        predict_y = np.sum(self.alpha*self.Y*self._kernel(data,self.X)) + self.b
        return np.sign(predict_y)[0]

    def score(self,test_X,test_Y):
        m,n = test_X.shape
        count = 0
        for i in range(m):
            predict_i = self.predict(test_X[i])
            if predict_i == float(test_Y[i]):
                count +=1
        return count / m
```
Since the iris train/test split changes on every run, we average the accuracy over several runs and compare with SVC.
```
count = 0
failed2 = []
for i in range(20):
    X, y = create_data()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
    svm = SVM(max_iter=200,C=2,kernel='linear')
    svm.fit(X_train,y_train)
    test_accourate = svm.score(X_test,y_test)
    train_accourate = svm.score(X_train,y_train)
    if test_accourate < 0.8:
        failed2.append((X_train, X_test, y_train, y_test)) # save datasets with low accuracy
    print("Test accourate:",test_accourate)
    print("Train accourate:",train_accourate)
    print('--------------------------')
    count += test_accourate
print("Test average accourate is: ",count/20)
```
We can see that some splits score high while others score very low; we saved the low-accuracy splits so we can take one out and experiment with it.
```
failed2X_train, failed2X_test, failed2y_train, failed2y_test= failed2[2]
```
We can see that, after increasing C, the accuracy is still respectable, which shows that this simple SMO implementation works. When we measured the
average accuracy, C was held fixed, so its value may simply be unsuitable for some splits.
```
svm = SVM(max_iter=200,C=5,kernel='linear')
svm.fit(failed2X_train,failed2y_train)
accourate = svm.score(failed2X_test,failed2y_test)
accourate
```
Test with scikit-learn's SVC.
### Scikit-SVC
基于scikit-learn的[SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.decision_function)
例子1:
```
from sklearn.svm import SVC
count = 0
for i in range(10):
    X, y = create_data()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
    clf = SVC(kernel="linear",C=2)
    clf.fit(X_train, y_train)
    accourate = clf.score(X_test, y_test)
    print("accourate",accourate)
    count += accourate
print("average accourate is: ",count/10)
```
Of course, since this is a simple SMO implementation, its average accuracy will not match SVC's, but we can tune C and the kernel to improve it.
## Multilabel classification
Multi-label: one instance can have several labels; for example, a movie can be both an action film and a romance.
Multi-class classification: there are several classes, but each sample belongs to exactly one.
Multi-label classification: each sample can carry several labels.
For multi-class classification, the last layer uses a softmax for prediction, and training uses categorical_crossentropy as the loss.
For multi-label classification, the last layer uses sigmoids for prediction, and training uses binary_crossentropy as the loss.
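A quick numpy illustration of the softmax-versus-sigmoid contrast described above (the logits are made up): softmax couples the outputs so they sum to 1 (pick one class), while a per-label sigmoid scores each label independently:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])

softmax = np.exp(logits) / np.exp(logits).sum()  # multi-class: probabilities sum to 1
sigmoid = 1.0 / (1.0 + np.exp(-logits))          # multi-label: each label judged on its own

print(softmax.sum())  # 1.0
print(sigmoid)        # each entry in (0, 1); the sum is unconstrained
```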
This example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process:
- pick the number of labels: n ~ Poisson(n_labels)
- n times, choose a class c: c ~ Multinomial(theta)
- pick the document length: k ~ Poisson(length)
- k times, choose a word: w ~ Multinomial(theta_c)
In the above process, rejection sampling is used to make sure that n is more than 2, and that the document length is never zero. Likewise, we reject classes which have already been chosen. The documents that are assigned to both classes are plotted surrounded by two colored circles.
The classification is performed by projecting to the first two principal components found by [PCA](http://www.cnblogs.com/jerrylead/archive/2011/04/18/2020209.html) and [CCA](https://files-cdn.cnblogs.com/files/jerrylead/%E5%85%B8%E5%9E%8B%E5%85%B3%E8%81%94%E5%88%86%E6%9E%90.pdf) for visualisation purposes, followed by using the [sklearn.multiclass.OneVsRestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.multiclass.OneVsRestClassifier.html#sklearn.multiclass.OneVsRestClassifier) metaclassifier using two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA is used to perform a supervised one.
Note: in the plot, “unlabeled samples” does not mean that we don’t know the labels (as in semi-supervised learning) but that the samples simply do not have a label.
```
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

def plot_hyperplance(clf,min_x,max_x,linestyle,label):
    # get the separating hyperplane
    # 0 = w0*x0 + w1*x1 + b
    w = clf.coef_[0]
    a = -w[0] /w[1]
    xx = np.linspace(min_x -5,max_x + 5)
    yy = a * xx -(clf.intercept_[0]) / w[1] # clf.intercept_[0] is the parameter b
    plt.plot(xx,yy,linestyle,label=label)

def plot_subfigure(X,Y,subplot,title,transform):
    if transform == "pca": # PCA performs an unsupervised analysis (ignores the labels)
        X = PCA(n_components=2).fit_transform(X)
        print("PCA",X.shape)
    elif transform == "cca": # CCA performs a supervised analysis (uses the labels), i.e. it models the relation between the labels
        X = CCA(n_components=2).fit(X, Y).transform(X)
        print("CCA",X.shape)
    else:
        raise ValueError
    min_x = np.min(X[:, 0])
    max_x = np.max(X[:, 0])
    min_y = np.min(X[:, 1])
    max_y = np.max(X[:, 1])
    classif = OneVsRestClassifier(SVC(kernel='linear')) # train SVMs one-vs-rest
    classif.fit(X, Y)
    plt.subplot(2, 2, subplot)
    plt.title(title)
    zero_class = np.where(Y[:, 0]) # indices labeled with the first class
    one_class = np.where(Y[:, 1]) # indices labeled with the second class
    plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0))
    plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
                facecolors='none', linewidths=2, label='Class 1')
    plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
                facecolors='none', linewidths=2, label='Class 2')
    # classif.estimators_[0] is the first estimator, giving the first decision boundary
    plot_hyperplance(classif.estimators_[0], min_x, max_x, 'k--',
                     'Boundary\nfor class 1')
    # classif.estimators_[1] is the second estimator, giving the second decision boundary
    plot_hyperplance(classif.estimators_[1], min_x, max_x, 'k-.',
                     'Boundary\nfor class 2')
    plt.xticks(())
    plt.yticks(())
    plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
    plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
    if subplot == 2:
        plt.xlabel('First principal component')
        plt.ylabel('Second principal component')
        plt.legend(loc="upper left")
```
**make_multilabel_classification:**
make_multilabel_classification(n_samples=100, n_features=20, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None)
```
plt.figure(figsize=(8, 6))
# If allow_unlabeled is True, some instances may belong to no class at all ([[0,0]] in one-hot form)
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True,
                                      random_state=1)
print("Original:",X.shape)
plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=False,
                                      random_state=1)
print("Original:",X.shape)
plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")
plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()
```
Since this is multi-label (one instance can carry several labels), whether a point has label 1, label 2, or no label at all ("unlabeled samples"), we would intuitively expect CCA to outperform PCA in the plots (with or without unlabeled samples), because CCA takes the correlation between labels into account.
Because each instance has 2 labels, we can draw 2 decision boundaries (obtained via classif.estimators_[index]) using $x_1 = -\frac{w_0}{w_1}x_0-\frac{b}{w_1}$.
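A sketch of that boundary computation with made-up coefficients (in the notebook, `w` and `b` come from `clf.coef_[0]` and `clf.intercept_[0]`):

```python
import numpy as np

w = np.array([2.0, 4.0])  # hypothetical clf.coef_[0]
b = 1.0                   # hypothetical clf.intercept_[0]

x0 = np.linspace(-1.0, 1.0, 5)
x1 = -(w[0] / w[1]) * x0 - b / w[1]  # points on the line w0*x0 + w1*x1 + b = 0

# every (x0, x1) pair satisfies the hyperplane equation
print(w[0] * x0 + w[1] * x1 + b)  # ~ all zeros
```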
```
/* These are a list of Redis commands that can be executed in the Redis CLI or from Scala with Redis_Scala */
//Redis CLI
//launch by going to your redis file and using the cli command redis-cli
//to check your connection type
// PING
// you should receive PONG in return
/*
SET for one Key/Value Pair:
SET foo 100
GET foo
100
INCR foo
GET foo
101
DECR foo
GET Foo
100
SET bar "Sup"
GET bar
"Sup"
EXISTS bar
1
DEL bar
EXISTS bar
0
FLUSHALL
EXISTS foo
0
*/
/*
Multiple k/v pairs: MSET
MSET key1 "Hello" key2 "World"
GET key1
"Hello"
GET key2
"World"
APPEND key1 "World"
GET key1
"HelloWorld"
*/
/*
LISTS: Ordered key value pairs
LPUSH key value --> Beginning of list (index = 0)
RPUSH key value --> End of list (index = last value)
LLEN key --> Length of list
LRANGE key starting_index ending_index --> returns values from start to stop
LPUSH Eagles "Wentz"
LPUSH Eagles "Sanders"
LPUSH Eagles "Reagor"
LRANGE Eagles 1 2
"Sanders"
"Wentz"
LRANGE Eagles 0 1
"Reagor"
"Sanders"
RPUSH Eagles "Johnson"
LRANGE Eagles 0 -1
"Reagor"
"Sanders"
"Wentz"
"Johnson"
LLEN Eagles
4
LPOP Eagles
"Reagor"
RPOP Eagles
"Johnson"
LRANGE Eagles 0 -1
"Sanders"
"Wentz"
LINSERT Eagles BEFORE "Wentz" "Kelce"
LRANGE Eagles 0 -1
"Sanders"
"Kelce"
"Wentz"
*/
/*
SETS: Unordered key value pairs
SADD key value--> add a value to a key
SREM key value--> remove a value from a key
SMEMBERS key --> list all members in a set
SCARD key --> return lengths of set
SMOVE key new_key value --> moves value from one key to a new_key
SDIFF key1 key2 --> tell you what key1 has that key 2 does not
SINTER key1 key2 --> tell you what is in common between the two sets
SADD sixers "Simmons"
SADD sixers "Embiid"
SADD sixers "Harris"
SADD sixers "Butler"
SREM sixers "Butler"
SMEMBERS sixers
"Simmons"
"Embiid"
"Harris"
SISMEMBER sixers "Butler"
0
SCARD sixers
3
SADD bulls "Michael Jordan"
SADD bulls "Scottie Pippen"
SADD whitesox "Michael Jordan"
SADD whitesox "Frank Thomas"
SDIFF bulls whitesox
"Scottie Pippen"
SDIFF whitesox bulls
"Frank Thomas"
SINTER whitesox bulls
"Michael Jordan"
*/
/*
Sorted Sets: Same as sets but all members are associated with a score
ZADD key score value --> adds set with score to key
ZRANK key value --> returns ranking by score (higher score = higher rank )
ZRANGE key start stop --> return list of all values in that range
ZADD players 361 "Christian McCaffery"
ZADD players 283 "Derrick Henry"
ZADD players 269 "Aaron Jones"
ZADD players 137 "David Mongomery"
ZADD players 137 "LeVeon Bell"
ZRANK players "Christian McCaffery"
4
ZRANK players "LeVeon Bell"
1
ZRANK players "David Mongomery"
0
ZRANGE players 0 -1
1) "David Mongomery"
2) "LeVeon Bell"
3) "Aaron Jones"
4) "Derrick Henry"
5) "Christian McCaffery"
ZINCRBY players 1 "David Mongomery"
138
ZRANGE players 0 -1
1) "LeVeon Bell"
2) "David Mongomery"
3) "Aaron Jones"
4) "Derrick Henry"
5) "Christian McCaffery"
*/
/*
HASHES - reminds me of classes
HSET hash key value --> set a field in a hash
HGET hash key --> returns the value of a field
HGETALL hash --> returns all fields and values of a hash
HVALS hash --> returns all values for hash
HKEYS hash --> returns all keys for hash
HMSET hash key1 value1 key2 value2 ... --> set multiple fields at once
HSET user:James name "James Kocher"
HGET user:James name
"James Kocher"
HSET user:James email "JKocher13@yahoo.com"
HGETALL user:James
1) "name"
2) "James Kocher"
3) "email"
4) "JKocher13@yahoo.com"
*/
/* You will need to be running sbt to build your Scala if you wish to use Redis_Scala.
You can copy and paste this into your build.sbt:
libraryDependencies ++= Seq(
  "net.debasishg" %% "redisclient" % "3.20")
In your Scala file run the following import command:
import com.redis._
Then make the connection to your Redis database by running the following command:
val r = new RedisClient("localhost", 6379)
*/
```
### Python basic syntax
- Variable declaration, identifiers, data types, type conversion, operators, etc.
### 1. Comments and print
```
# comment: a line starting with # is not executed as code
# used to explain code, or to keep a line from running
# shortcut: command + /
# block selection: shift + arrow keys
print(1)
# print(2)
print(3)
# output: the print function
# used to inspect the value stored in a variable partway through the code
a = 1
b = 2
print(b)
c = 3
b = 4
print(b)
# options of the print function
# docstring: the documentation of a function. Shortcut: shift + tab
# autocomplete: type part of a function name and press tab to complete it
print(1, 2, sep="-", end='\t')
print(3)
```
### 2. Variable declaration
- The act of assigning a value to a storage location in RAM
```
a = 1
b = 2
a + b
```
How the code above works:
1. The moment a = 1 is declared, an index named a with value 1 is created in RAM
2. The moment b = 2 is declared, an index named b with value 2 is created in RAM
3. When a + b is requested, the values in each storage location are looked up to perform the operation
```
# variables can be declared in several ways
d, e = 3, 4
f = g = 5
```
### 3. Identifiers
- Names of variables, functions, classes, modules, etc.
- Identifier rules
    - May use lowercase letters, uppercase letters, digits, and _ (underscore)
    - May not start with a digit
    - May not use reserved words, e.g. def, class, try, except, ...
- Conventions (for Python; they differ slightly between languages)
    - snake case: fast_campus — used when declaring variables and functions
    - camel case: fastCampus — used when declaring classes
### 4. Data types
- The type of a storage location is set so RAM is used efficiently
- Dynamic typing: the data type is set automatically from the value assigned at declaration
- Basic data types
    - int, float, bool, str
- Collection data types
    - list, tuple, dict
```
a = 1
b = "python"
type(a), type(b)
# basic data types: int, float, bool, str
a = 1
b = 1.2
c = True
d = "data"
type(a), type(b), type(c), type(d)
a + b
a + d # int and str cannot be added together
# example of a function available on the string data type
e = d.upper()
d, e
# a few of the main functions available on strings
f = " Fast campuS "
# lower: convert to lowercase
f.lower()
# strip: remove surrounding whitespace
f.strip()
# replace: substitute a substring
f.replace("Fast","Slow")
# to see the applicable functions: dir()
# offset index (masking): []
# a string is an ordered collection of characters
g = "abcdefg"
# indices start at 0; negative indices count from the end, with -1 the last character
# use a colon (:) to set an index range
# a double colon (::) means stepping (skipping)
g[1], g[-2], g[2:5], g[:2], g[-2:], g[::2]
numbers = "123456789"
# to print 97531 here:
# 1. reverse the order, 2. step by 2
numbers[::-2]
```
### Collection data types: list, tuple, dict
- list [] : ordered and mutable
- tuple () : ordered and immutable
- dict {} : unordered, made of (key: value) pairs
```
# list
ls = [1, 2, 3, 4, "five", [5, 6], True, 1.2]
type(ls), ls
# lists are ordered, so offset indexing works
ls[3], ls[4], ls[1:3], ls[::-1]
# functions used on lists
ls2 = [1, 5, 2, 4]
# append: add a value at the end
ls2.append(3)
# sort: sort in ascending order
ls2.sort()
# there is no separate descending sort, so reverse the ascending list
# odd-looking results here mean the append cell was run several times, adding 3 repeatedly
ls2[::-1]
# pop: return the last element and remove it
num = ls2.pop()
num, ls2
# shortcut: ctrl + enter -> run the current cell without moving the cursor
# copying a list
ls3 = [1, 2, 3]
ls4 = ls3
ls3, ls4
ls3[2] = 5
# we changed a value inside ls3, yet ls4 changed too
ls4
# a plain assignment copies the address, not the actual values: a shallow copy.
# to copy the actual values, use the list's copy function: a deep copy.
ls5 = ls3.copy()
ls3[2] = 7
ls3, ls5
```
### tuple: like a list, but immutable
- Declared with ( ), but values written without parentheses are also recognized as a tuple
- For the same data, a tuple uses less memory than a list, so when the values will not change it is preferable to use a tuple.
```
tp1 = 1, 2, 3
tp2 = (4, 5, 6)
type(tp1), type(tp2), tp1, tp2
# offset indexing works on tuples too
tp1[1], tp1[::-1]
# compare the memory usage of a list and a tuple
import sys
ls = [1, 2, 3]
tp = (1, 2, 3)
print(sys.getsizeof(ls), sys.getsizeof(tp))
```
### dictionary
- Declared with { }
- Unordered; made of {key: value} pairs
```
# at declaration, only integer and string types may be used as keys
dicex = {
    1 : "one",
    "two" : 2,
    "three" : [1, 2, 3,],
}
type(dicex), dicex
# in a dict, the key is the index
dicex["two"], dicex["three"]
dicex["two"] = 22
dicex
# declaring the data below as a list versus a dict:
# what takes two lists can be done with one dict
# cities: Seoul, Busan, Daegu
# population: 9,700,000, 3,400,000, 2,400,000
ls_city = ["Seoul", "Busan", "Daegu"]
ls_pop = [9700000, 3400000, 2400000]
ls_city, ls_pop
dict_citypop = {
    "Seoul" : 9700000,
    "Busan" : 3400000,
    "Daegu" : 2400000
}
dict_citypop
# if a list holds a single type, functions can compute over it directly
sum(ls_pop)
# a dict holds both keys and values, so use the values function before computing
dict_citypop.values()
sum(dict_citypop.values())
```
### 5. Type conversion
- How to convert between data types
- int, float, bool, str, list, tuple, dict
```
# operations only work between matching data types
a = 1
b = "2"
a + b
# so the types must be aligned first
a + int(b)
# to concatenate as strings instead
str(a) + b
# zip: pairs up data at the same index; the pairs come out as tuples
zip(ls_city, ls_pop)
# the output form can be chosen: any of the 3 collection data types
list(zip(ls_city, ls_pop))
tuple(zip(ls_city, ls_pop))
newdict = dict(zip(ls_city, ls_pop))
newdict
# to unpack a dict again, use the keys or values functions
list(newdict.keys())
tuple(newdict.values())
```
### 6. Operators
- Symbols for performing a particular operation on two (or more) pieces of data
- Arithmetic operators: +, -, *, /, // (floor division), % (modulo), ** (power) -> listed from lowest to highest precedence
- Assignment operators: accumulate into a variable: +=, //=, **=, etc.
- Comparison operators: <, >, <=, >=, !=, ==; evaluate to True or False
- Logical operators: operate on True/False; and, or, not (3 kinds)
- Membership operators: check whether data is present; in, not in
- Precedence order: arithmetic, assignment, comparison, membership, logical
```
# arithmetic: same as the standard rules (parentheses first)
(1 + 4) / 2 ** 2
# assignment
a = 10
a += 10
a
# comparison
b = 30
a > b, a < b, a == b, a != b
# logical
True or False, True and False, not False
# membership
ls_ex = ["Kim", "Lee", "Park"]
"Kim" in ls_ex, "Chang" in ls_ex, "Lee" not in ls_ex
# extra: random functions
import random
random.randint(1, 10)
# extra: the input function
data = input("insert string : ")
data
# for fun: a "book of answers"
# 1. write the solutions as a list
# 2. read a question from input
# 3. generate a random index within the number of solutions
# 4. print the solution at that index
import random
solution = ["Get some sleep", "Go make money", "Go exercise", "There is no answer", "Be born again"]
input("What's your problem? : ")
idx = random.randint(0, len(solution) - 1)
solution[idx]
```
This notebook fine-tunes VGG16 by adding a couple of Dense layers and training them to classify between cats and dogs.
This gives a better classification accuracy of around 95% on the validation dataset.
```
%load_ext autoreload
%autoreload 2
import numpy as np
import tensorflow as tf
from tensorflow.contrib.keras import layers
from tensorflow.contrib.keras import models
from tensorflow.contrib.keras import optimizers
from tensorflow.contrib.keras import applications
from tensorflow.contrib.keras.python.keras.preprocessing import image
from tensorflow.contrib.keras.python.keras.applications import imagenet_utils
def get_batches(dirpath, gen=image.ImageDataGenerator(), shuffle=True, batch_size=64, class_mode='categorical'):
    return gen.flow_from_directory(dirpath, target_size=(224,224), class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
batch_size = 64
train_batches = get_batches('./data/train', batch_size=batch_size)
val_batches = get_batches('./data/valid', batch_size=batch_size)
```
Model creation
```
vgg16 = applications.VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
##
finetune_in = vgg16.output
x = layers.Flatten(name='flatten')(finetune_in)
x = layers.Dense(4096, activation='relu', name='fc1')(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(4096, activation='relu', name='fc2')(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
predictions = layers.Dense(train_batches.num_class, activation='softmax', name='predictions')(x)
model = models.Model(inputs=vgg16.input, outputs=predictions)
##
```
We freeze all but the last few layers, so only the newly added classifier head is trained:
```
for layer in model.layers[:-7]:
    layer.trainable = False
model.summary()
for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
epochs = 1
steps_per_epoch = train_batches.samples // train_batches.batch_size
validation_steps = val_batches.samples // val_batches.batch_size
model.fit_generator(train_batches, validation_data=val_batches, epochs=epochs,
steps_per_epoch=steps_per_epoch,validation_steps=validation_steps)
```
This give us a validation score of around: `val_loss: 0.2865 - val_acc: 0.9536`
## Gen submission file
```
import submission
test_batches, steps = submission.test_batches()
preds = model.predict_generator(test_batches, steps)
preds.shape
submission.gen_file(preds, test_batches)
```
This gave a score of around `0.39` on the public leaderboard
| github_jupyter |
# Using Simulation to Estimate the Power of an A/B experiment
> A tutorial on estimating power of an A/B experiment
- toc: false
- badges: true
- comments: true
- categories: [a/b testing, python]
- image: images/chart-preview.png
# About
This article was originally posted in my medium blog post [here](https://medium.com/analytics-vidhya/using-simulation-to-estimate-the-power-of-an-a-b-experiment-d38adf32b29c)
The power of an experiment measures its ability to detect a specific alternate hypothesis. For example, an e-commerce company is trying to increase the time users spend on the website by changing the design of the website. They plan to use the well-known two-sample t-test. Power helps in answering the question: will the t-test be able to detect a difference in mean time spent (if it exists) by rejecting the null hypothesis?
Let's state the hypotheses
**Null Hypothesis H<sub>0</sub>**: New design has no effect on the time users spend on the website
**Alternate Hypothesis H<sub>a</sub>**: New design impacts the time users spend on the website
When an A/B experiment is run to measure the impact of the website redesign,
we want to ensure that the experiment has at least 80% power. The following parameters impact the power of the experiment:
**1. Sample size(n):** The larger the sample size, the smaller the standard error and the narrower the sampling distribution. Increasing the sample size increases the power of the experiment
**2. Effect size(𝛿):** The difference between the means of the sampling distributions under the null and alternative hypotheses. The smaller the effect size, the more samples are needed to detect an effect at a predefined power
**3. Alpha(𝛼):** The significance level, typically set at 0.05; this is the cutoff at which we accept or reject our null hypothesis. Making alpha smaller requires more samples to detect an effect at a predefined power
**4. Beta(β):** The probability of a type II error; power is defined as 1-β
Why is power analysis done to determine the sample size before running an experiment?
1. Running experiments is expensive and time consuming
2. Increases the chance of finding significant effect
3. Increases the chance of replicating an effect detected in an experiment
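The parameter relationships described above can be checked directly with statsmodels' `tt_ind_solve_power` (the same function used for the exact method later in this article). A quick sketch; the effect size of 0.1 is an illustrative assumption:

```python
from statsmodels.stats.power import tt_ind_solve_power

# Solve for power (power=None) while varying one parameter at a time.
# effect_size is Cohen's d; 0.1 is chosen purely for illustration.
p_n500 = tt_ind_solve_power(effect_size=0.1, nobs1=500, alpha=0.05, power=None)
p_n2000 = tt_ind_solve_power(effect_size=0.1, nobs1=2000, alpha=0.05, power=None)
p_strict_alpha = tt_ind_solve_power(effect_size=0.1, nobs1=500, alpha=0.01, power=None)

print(p_n500, p_n2000)   # power grows with sample size
print(p_strict_alpha)    # a stricter alpha lowers power at the same n
```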
For example, the time users spend currently on the website is normally distributed with mean 2 minutes and standard deviation 1 minute. The product manager wants to design an experiment to understand if the redesigned website helps in increasing the time spent on the website.
The experiment should be able to detect a minimum of a 5% change in time spent on the website. For a test like this, an exact solution is available to estimate the sample size since the sampling distribution is known. Here we will use the simulation method to estimate the sample size and validate it using the exact method.
The following steps estimate the power of two-sample t-test:
1. Simulate data for the model under null 𝒩(2,1) and alternate hypothesis 𝒩(2+𝛿,1)
2. Perform t-test on the sample and record whether the t-test rejects the null hypothesis
3. Run the simulation many times and count the fraction of runs in which the t-test rejects the null hypothesis; this fraction estimates the power.
### Code to compute power of experiment for a specified sample size, effect size and significance level:
The power of the experiment is 58.8% with a sample size of 1000
```
import numpy as np
import scipy.stats as st
# Initialize delta(minimum lift the product manager expect), control_mean, control_sd
delta=0.05
control_mean=2
control_sd=1
sample_size=1000
alpha=0.05#significance of the experiment
n_sim=1000#Total number of samples to simulate
np.random.seed(123)#set seed
def simulate_data(control_mean,control_sd,sample_size,n_sim):
# Simulate the time spent under the null hypothesis
control_time_spent = np.random.normal(loc=control_mean, scale=control_sd, size=(sample_size,n_sim))
# Simulate the time spent under the alternate hypothesis
treatment_time_spent = np.random.normal(loc=control_mean*(1+delta), scale=control_sd, size=(sample_size,n_sim))
return control_time_spent,treatment_time_spent
# Run the t-test and get the p_value
control_time_spent, treatment_time_spent=simulate_data(control_mean,control_sd,sample_size,n_sim)
t_stat, p_value = st.ttest_ind(control_time_spent, treatment_time_spent)
power=(p_value<alpha).sum()/n_sim
print("Power of the experiment {:.1%}".format(power))
#Power of the experiment 58.8%
```
### Code to compute sample size required to reach 80% power for specified effect size and significance level:
Based on the simulation method we need 1560 users per group to reach a power of 80%, and this closely matches the sample size estimated using the exact method
```
#increment sample size till required power is reached
sample_size=1000
np.random.seed(123)
while True:
control_time_spent, treatment_time_spent=simulate_data(control_mean,control_sd,sample_size,n_sim)
t_stat, p_value = st.ttest_ind(control_time_spent, treatment_time_spent)
power=(p_value<alpha).sum()/n_sim
if power>.80:
print("Minimum sample size required to reach significance {}".format(sample_size))
break
else:
sample_size+=10
#Minimum sample size required to reach significance 1560
```
### Code to compute sample size using exact method:
```
# Analytical solution to compute sample size
from statsmodels.stats.power import tt_ind_solve_power
treat_mean=control_mean*(1+delta)
mean_diff=treat_mean-control_mean
cohen_d=mean_diff/np.sqrt((control_sd**2+control_sd**2)/2)
n = tt_ind_solve_power(effect_size=cohen_d, alpha=alpha, power=0.8, ratio=1, alternative='two-sided')
print('Minimum sample size required to reach significance: {:.0f}'.format(round(n)))
```
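As a quick cross-check of the simulation result, we can plug the simulated sample size back into `tt_ind_solve_power` with `power=None` and confirm the power is close to 80% (a sketch; the exact value depends on the 10-user step size used in the simulation loop):

```python
from statsmodels.stats.power import tt_ind_solve_power

delta, control_mean, control_sd = 0.05, 2, 1
cohen_d = (control_mean * delta) / control_sd  # 0.1, as in the exact-method cell
power_at_1560 = tt_ind_solve_power(effect_size=cohen_d, nobs1=1560, alpha=0.05, power=None)
print('Power at n=1560: {:.1%}'.format(power_at_1560))  # close to 80%
```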
### Conclusion
This article explained how simulation can be used to estimate the power of an A/B experiment when a closed-form solution doesn't exist.
| github_jupyter |
```
midx = '1a'
# midx = '1a1'
# midx = '1a2'
# midx = '1a3'
import socket
import timeit
import time
from datetime import datetime
import os
import glob
from collections import OrderedDict
import numpy as np
import pandas as pd
import pickle
import gc
import cv2
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import seaborn as sns
sns.set_style("white")
import random
import PIL
import pathlib
import torch
from torch.autograd import Variable
import torch.optim as optim
from torch.utils import data
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from torchvision.utils import make_grid
from torch import nn
from torch.nn import functional as F
from torch.optim.lr_scheduler import LambdaLR, ReduceLROnPlateau, StepLR
from torch.utils.data.sampler import WeightedRandomSampler
import torchvision
import albumentations as A
from skimage.exposure import histogram, equalize_hist, equalize_adapthist
from skimage.morphology import dilation, remove_small_objects, remove_small_holes, label
import pretrainedmodels
from xception import xception
from tensorboardX import SummaryWriter
from scipy.special import logit
from sklearn.metrics import jaccard_similarity_score, f1_score
from sklearn.preprocessing import MultiLabelBinarizer
import imgaug as ia
from imgaug import augmenters as iaa
import multiprocessing
import threading
from dataloaders import utils
from dataloaders import custom_transforms as tr
# from losses import CombinedLoss, BCELoss2d
import lovasz_losses as L
directory = './'
ori_size = 512
up_size = 512
image_size = 512
interp = cv2.INTER_AREA
# methods=[("area", cv2.INTER_AREA),
# ("nearest", cv2.INTER_NEAREST),
# ("linear", cv2.INTER_LINEAR),
# ("cubic", cv2.INTER_CUBIC),
# ("lanczos4", cv2.INTER_LANCZOS4)]
y_pad = image_size - up_size
y_min_pad = int(y_pad / 2)
y_max_pad = y_pad - y_min_pad
x_pad = image_size - up_size
x_min_pad = int(x_pad / 2)
x_max_pad = x_pad - x_min_pad
print(ori_size, up_size, image_size)
PATH_TO_TRAIN = './train/'
PATH_TO_TEST = './test/'
PATH_TO_EXTERNAL2 = './external_data2/'
PATH_TO_EXTERNAL3 = './external_data3/'
PATH_TO_TARGET = './train.csv'
PATH_TO_TARGETXX = './HPAv18Y.csv'
PATH_TO_SUB = './sample_submission.csv'
LABEL_MAP = {
0: "Nucleoplasm" ,
1: "Nuclear membrane" ,
2: "Nucleoli" ,
3: "Nucleoli fibrillar center",
4: "Nuclear speckles" ,
5: "Nuclear bodies" ,
6: "Endoplasmic reticulum" ,
7: "Golgi apparatus" ,
8: "Peroxisomes" ,
9: "Endosomes" ,
10: "Lysosomes" ,
11: "Intermediate filaments" ,
12: "Actin filaments" ,
13: "Focal adhesion sites" ,
14: "Microtubules" ,
15: "Microtubule ends" ,
16: "Cytokinetic bridge" ,
17: "Mitotic spindle" ,
18: "Microtubule organizing center",
19: "Centrosome",
20: "Lipid droplets" ,
21: "Plasma membrane" ,
22: "Cell junctions" ,
23: "Mitochondria" ,
24: "Aggresome" ,
25: "Cytosol" ,
26: "Cytoplasmic bodies",
27: "Rods & rings"}
LOC_MAP = {}
for k in LABEL_MAP.keys(): LOC_MAP[LABEL_MAP[k]] = k
# from Tomomi
dxx = pd.read_csv(PATH_TO_TARGETXX, index_col = None)
dxx.set_index('Id',inplace=True)
print(dxx.head())
print(dxx.shape)
# dataloader bombs out on iteration 63914, so limit size here
# dxx = dxx.iloc[:50000]
# dxx = dxx.iloc[50000:]
# dxx = dxx.iloc[37154:]
print(dxx.shape)
def image_histogram_equalization(image, number_bins=256):
# from http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html
# get image histogram
image_histogram, bins = np.histogram(image.flatten(), number_bins, density=True)
cdf = image_histogram.cumsum() # cumulative distribution function
cdf = 255 * cdf / cdf[-1] # normalize
# use linear interpolation of cdf to find new pixel values
image_equalized = np.interp(image.flatten(), bins[:-1], cdf)
# return image_equalized.reshape(image.shape), cdf
return image_equalized.reshape(image.shape)
def equalize(arr):
arr = arr.astype('float')
# usually do not touch the alpha channel
# but here we do since it is yellow
for i in range(arr.shape[-1]):
# arr[...,i] = 255 * equalize_hist(arr[...,i])
arr[...,i] = image_histogram_equalization(arr[...,i])
return arr
def normalize(arr, q=0.01):
arr = arr.astype('float')
# usually do not touch the alpha channel
# but here we do since it is yellow
# print('arr before',arr.shape,arr.min(),arr.mean(),arr.max())
for i in range(arr.shape[-1]):
# arr[...,i] = 255 * equalize_hist(arr[...,i])
ai = arr[...,i]
# print('ai ' + str(i) + ' before',i,ai.shape,ai.min(),ai.mean(),ai.max())
qlow = np.percentile(ai,100*q)
qhigh = np.percentile(ai,100*(1.0-q))
if qlow == qhigh:
arr[...,i] = 0.
else:
arr[...,i] = 255.*(np.clip(ai,qlow,qhigh) - qlow)/(qhigh - qlow)
# print('ai ' + str(i) + ' after',i,ai.shape,ai.min(),ai.mean(),ai.max())
# print('arr after',arr.shape,arr.min(),arr.mean(),arr.max())
return arr
def standardize(arr):
arr = arr.astype('float')
# usually do not touch the alpha channel
# but here we do since it is yellow
# print('arr before',arr.shape,arr.min(),arr.mean(),arr.max())
mean, std = arr.mean(), arr.std()
for i in range(arr.shape[-1]):
    # arr[...,i] = 255 * equalize_hist(arr[...,i])
    arr[...,i] = (arr[...,i] - mean)/(std + 1e-6)  # assign back; the original stored this in a temporary and discarded it
# print('arr after',arr.shape,arr.min(),arr.mean(),arr.max())
return arr
class MultiBandMultiLabelDataset(Dataset):
# BANDS_NAMES = ['_red.png','_green.png','_blue.png','_yellow.png']
BANDS_NAMES = ['_red','_green','_blue']
def __len__(self):
return len(self.images_df)
def __init__(self, images_df,
base_path,
image_transform=None,
augmentator=None,
train_mode=True,
external=0
):
if not isinstance(base_path, pathlib.Path):
base_path = pathlib.Path(base_path)
self.images_df = images_df.reset_index()
self.image_transform = image_transform
self.augmentator = augmentator
self.images_df.Id = self.images_df.Id.apply(lambda x: base_path / x)
self.mlb = MultiLabelBinarizer(classes=list(LABEL_MAP.keys()))
self.train_mode = train_mode
self.external = external
if self.external == 2: self.suffix = '.jpg'
else: self.suffix = '.png'
self.cache = {}
def __getitem__(self, index):
# print('index class',index.__class__)
if isinstance(index, torch.Tensor): index = index.item()
if index in self.cache:
X, y = self.cache[index]
else:
y = None
X = self._load_multiband_image(index)
if self.train_mode:
y = self._load_multilabel_target(index)
self.cache[index] = (X,y)
# augmentator can be for instance imgaug augmentation object
if self.augmentator is not None:
# print('getitem before aug',X.shape,np.min(X),np.mean(X),np.max(X))
# X = self.augmentator(np.array(X))
X = self.augmentator(image=X)['image']
# print('getitem after aug',X.shape,np.min(X),np.mean(X),np.max(X))
if self.image_transform is not None:
X = self.image_transform(X)
return X, y
def _load_multiband_image(self, index):
row = self.images_df.iloc[index]
if self.external == 1:
p = str(row.Id.absolute()) + self.suffix
band3image = PIL.Image.open(p)
else:
image_bands = []
for i,band_name in enumerate(self.BANDS_NAMES):
p = str(row.Id.absolute()) + band_name + self.suffix
pil_channel = PIL.Image.open(p)
if self.external == 2:
pa = np.array(pil_channel)[...,i]
# pa = np.array(pil_channel)
# print(i,band_name,pil_channel.mode,pa.shape,pa.min(),pa.mean(),pa.max())
if pa.max() > 0:
pil_channel = PIL.Image.fromarray(pa.astype('uint8'),'L')
pil_channel = pil_channel.convert("L")
image_bands.append(pil_channel)
# pretend its a RBGA image to support 4 channels
# band4image = PIL.Image.merge('RGBA', bands=image_bands)
band3image = PIL.Image.merge('RGB', bands=image_bands)
band3image = band3image.resize((image_size,image_size), PIL.Image.ANTIALIAS)
# normalize each channel
# arr = np.array(band4image)
arr = np.array(band3image)
# # average red and yellow channels, orange
# arr[...,0] = (arr[...,0] + arr[...,3])/2.0
# arr = arr[...,:3]
# arr = np.array(band3image)
# print('arr shape',arr.shape)
# if index==0: print(index,'hist before',histogram(arr))
# arr = normalize(arr)
# arr = standardize(arr)
# arr = equalize(arr)
# # average red and yellow channels, orange
# arr[...,0] = (arr[...,0] + arr[...,3])/2.0
# arr = arr[...,:3]
# if index==0: print(index,'hist after',histogram(arr))
band3image = PIL.Image.fromarray(arr.astype('uint8'),'RGB')
# band4image = PIL.Image.fromarray(arr.astype('uint8'),'RGBA')
# histogram equalize each channel
# arr = np.array(band4image)
# # print('arr',arr.shape)
# # if index==0: print(index,'hist before',histogram(arr))
# arr = equalize(arr)
# # if index==0: print(index,'hist after',histogram(arr))
# band4image = PIL.Image.fromarray(arr.astype('uint8'),'RGBA')
# return band4image
return band3image
# return arr
# band3image = PIL.Image.new("RGB", band4image.size, (255, 255, 255))
# band3image.paste(band4image, mask=band4image.split()[3])
# band3image = band3image.resize((image_size,image_size), PIL.Image.ANTIALIAS)
# return band3image
def _load_multilabel_target(self, index):
y = self.images_df.iloc[index].Target.split(' ')
# print(y)
try:
yl = list(map(int, y))
except:
yl = []
return yl
def collate_func(self, batch):
labels = None
images = [x[0] for x in batch]
if self.train_mode:
labels = [x[1] for x in batch]
labels_one_hot = self.mlb.fit_transform(labels)
labels = torch.FloatTensor(labels_one_hot)
# return torch.stack(images)[:,:4,:,:], labels
return torch.stack(images), labels
imean = (0.08069, 0.05258, 0.05487)
istd = (0.13704, 0.10145, 0.15313)
train_aug = A.Compose([
# A.Rotate((0,30),p=0.75),
A.RandomRotate90(p=1),
A.HorizontalFlip(p=0.5),
A.ShiftScaleRotate(p=0.9),
# A.RandomBrightness(0.05),
# A.RandomContrast(0.05),
A.Normalize(mean=imean, std=istd,max_pixel_value=255.)
])
test_aug = A.Compose([
A.Normalize(mean=imean, std=istd, max_pixel_value=255.)
])
composed_transforms_train = transforms.Compose([
# transforms.Resize(size=final_size),
# # transforms.RandomResizedCrop(size=224),
# transforms.RandomHorizontalFlip(p=0.5),
# transforms.RandomVerticalFlip(p=0.5),
# # transforms.RandomRotation(degrees=45),
# transforms.RandomAffine(degrees=45, translate=(0.1,0.1), shear=10, scale=(0.9,1.1)),
transforms.ToTensor()
# transforms.Normalize(mean=[0.456]*4, std=[0.224]*4)
])
composed_transforms_test = transforms.Compose([
# transforms.Resize(size=final_size),
transforms.ToTensor()
# transforms.Normalize(mean=[0.456]*4, std=[0.224]*4)
])
eps = 1e-5
gpu_id = 0
thresh = 0.1
# save_dir_root = os.path.join(os.path.dirname(os.path.abspath(__file__)))
# exp_name = os.path.dirname(os.path.abspath(__file__)).split('/')[-1]
save_dir_root = './'
gc.collect()
fold = -1
if gpu_id >= 0:
print('Using GPU: {} '.format(gpu_id))
torch.cuda.set_device(device=gpu_id)
torch.cuda.empty_cache()
from os import listdir
from os.path import isfile, join
file_list_x = [f for f in listdir(PATH_TO_EXTERNAL2) if isfile(join(PATH_TO_EXTERNAL2, f))]
print(file_list_x[:15],len(file_list_x))
db_xx = MultiBandMultiLabelDataset(dxx,
base_path=PATH_TO_EXTERNAL2,
# augmentator=test_aug,
image_transform=composed_transforms_test,
external=2)
xxloader = DataLoader(db_xx, collate_fn=db_xx.collate_func,
batch_size=1, shuffle=False,
num_workers=1)
id_list = []
im_list = []
y_list = []
for i, (im, y) in enumerate(xxloader):
# if i % 1000 == 0: print(i,id)
# if i < 63914: continue
id = str(db_xx.images_df.Id[i])
im = im.cpu().detach().numpy()[0].transpose(1,2,0)*255
# print(im.shape,im.min(),im.mean(),im.max())
im = PIL.Image.fromarray(im.astype('uint8'),'RGB')
id = PATH_TO_EXTERNAL3 + id[15:]
im.save(id+'.png',"PNG")
# y = y.cpu().detach().numpy()
# id_list.append(id)
# im_list.append(im)
# y_list.append(y)
if i % 1000 == 0: print(i,id)
# if i % 1000 == 0: print(i,id,s,y)
# if i==10: break
```
| github_jupyter |
```
#default_exp per_patient
```
# Per-Patient
> Create a FHIR bundle of medications for a single patient.
As the first step in converting FHIR resources to the CDISC "Concomitant/Prior Medications" CM domain, we'll create a `Bundle` containing one `Patient` and any number of `MedicationAdministration`, `MedicationDispense`, `MedicationRequest` and `MedicationStatement` resources.
So that subsequent use of this bundle doesn't have to make any FHIR server requests, the bundle will also contain;
- `Medication` referenced by `medicationReference`
- `Condition` and/or `Observation` referenced by `reasonReference`
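For reference, a `collection` bundle of this kind has roughly the following shape (a minimal sketch with made-up ids, not a complete set of FHIR fields):

```python
# Minimal sketch of a FHIR Bundle of type "collection"; ids are illustrative only.
bundle = {
    'resourceType': 'Bundle',
    'type': 'collection',
    'entry': [
        {'resource': {'resourceType': 'Patient', 'id': 'example-patient'}},
        {'resource': {'resourceType': 'MedicationRequest', 'status': 'active',
                      'subject': {'reference': 'Patient/example-patient'},
                      'medicationReference': {'reference': 'Medication/example-med'}}},
        {'resource': {'resourceType': 'Medication', 'id': 'example-med'}},
    ],
}
```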
## See also
[vulcan_medication_bundle_getting_started.ipynb](https://colab.research.google.com/github/pete88b/smart-on-fhir-client-py-demo/blob/main/vulcan_medication_bundle_getting_started.ipynb) explains;
- Why we are not using `List` and
- Why we are reading FHIR resources as raw JSON
## TODO: Remove non-concomitant medications from the list
Identifying concomitant medications might get quite complicated - I'm assuming we won't be able to cover all logic needed when pulling data from the FHIR servers. I think it makes sense to pull all medications, then add a concomitant medication filter as a subsequent step.
### How are we defining concomitant medications?
Any medication
- that is not the medication being investigated
- that is being taken while a patient is participating in a study
We might also want to list a subset of concomitant medications - i.e. those listed in exclusion criteria, or relevant medications that the study would like to follow (e.g. concomitant use of ACE inhibitors might be important but a single dose of paracetamol might not).
To know if the medication was being taken while the patient was/is participating in a study, we could compare
- study participation from `ResearchSubject.period`
- study duration from `ResearchStudy.period`
- if either start or end date are missing from `ResearchSubject.period`
- user specified start and end date
- if `ResearchStudy` etc are not in FHIR?
with start and end time of medication "administration"
- `MedicationStatement.effectiveX`, `MedicationStatement.dateAsserted`, `MedicationStatement.dosage`
- don't forget `MedicationX.status` not-taken etc
- `MedicationRequest.authoredOn`, `MedicationRequest.encounter`, `MedicationRequest.dosageInstruction`, `MedicationRequest.basedOn`, `MedicationRequest.dispenseRequest` ...
- Don't forget `MedicationRequest.doNotPerform`
- `MedicationDispense.daysSupply`, `MedicationDispense.whenPrepared`, `MedicationDispense.whenHandedOver`, `MedicationDispense.dosageInstruction`, `MedicationDispense.partOf`, `MedicationDispense.authorizingPrescription`
- `MedicationAdministration.effectiveX`, partOf, supportingInformation ...
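Whichever fields we settle on, the core of the check is an interval overlap between study participation and the medication period. A minimal sketch, assuming both periods have already been resolved to dates and treating a missing bound as open-ended:

```python
from datetime import date

def periods_overlap(study_start, study_end, med_start, med_end):
    """True if the medication period overlaps study participation.
    None bounds are treated as open-ended."""
    return ((med_start or date.min) <= (study_end or date.max) and
            (study_start or date.min) <= (med_end or date.max))

# taken during the study (open-ended medication end)
periods_overlap(date(2021, 1, 1), date(2021, 6, 30), date(2021, 3, 1), None)
# finished well before the study started
periods_overlap(date(2021, 1, 1), date(2021, 6, 30), date(2020, 1, 1), date(2020, 2, 1))
```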
## TODO: think about a "human in the loop" to help with things ↑ that will be hard to reliably automate
## Next steps
Might we want to
- define some kind of order of entries in the bundle
- think about how we handle resources that fail validation
- We can use https://inferno.healthit.gov/validator/ to validate the bundes created
- TODO: Can we discuss how we want to action this output?
## Required resources?
The CM tab (https://wiki.cdisc.org/display/FHIR2CDISCUG/FHIR+to+CDISC+Mapping+User+Guide+Home FHIR-to-CDISC Mappings xlsx) lists the following resources;
- ~~`ResearchSubject` with `ResearchStudy`~~ NOT YET?
- `Subject`
- `MedicationStatement`
- `MedicationRequest`
- `MedicationDispense`
- `MedicationAdministration`
- ~~`Immunization`~~ don't think we're doing immunization yet?
- `Medication` referenced by `medicationReference`
- `Condition` or `Observation` referenced by `reasonReference`
TODO: For now, I'm just pulling the resources that Jay highlighted as required - we can easily add the others (o:
```
#export
from vulcan_medication_bundle.core import *
from pathlib import Path
import json
from io import StringIO
import requests
#demo
import pandas as pd
# api_base = 'http://hapi.fhir.org/baseR4'
api_base, patient_id = 'https://r4.smarthealthit.org', '11f2b925-43b2-45e4-ac34-7811a9eb9c1b'
bundle = get_as_raw_json(api_base, 'MedicationRequest', dict(subject=patient_id))
print('Patient', patient_id, 'has', len(bundle['entry']), 'MedicationRequest resources')
# uncomment and run this cell to see the bundle as raw JSON
# bundle
```
## Create and save a single patient medication bundle
TODO: extract ref for `statusReasonReference` - see https://www.hl7.org/fhir/medicationdispense.html? maybe
```
#export
def create_single_patient_medication_bundle(api_base, patient_id):
"Return a Bundle containing one Patient and any number of MedicationX resources"
result = new_bundle()
references = []
for resource_type, url_suffix in [
['Patient', dict(_id=patient_id)],
['MedicationRequest', dict(subject=f'Patient/{patient_id}')],
['MedicationDispense', dict(subject=f'Patient/{patient_id}')],
['MedicationAdministration', dict(subject=f'Patient/{patient_id}')],
['MedicationStatement', dict(subject=f'Patient/{patient_id}')]]:
try:
single_resource_bundle = get_as_raw_json(api_base, resource_type, url_suffix)
while single_resource_bundle is not None and single_resource_bundle['total'] > 0:
result['entry'].extend(single_resource_bundle['entry'])
# TODO: xxx medicationReference and reasonReference might not be enough
references.extend(extract_references(single_resource_bundle, ['medicationReference', 'reasonReference']))
single_resource_bundle = get_next_as_raw_json(single_resource_bundle)
except Exception as ex:
print(f'Failed to get {resource_type}, {url_suffix} from {api_base}\n{ex}')
for reference in set(references):
try:
result['entry'].extend(get_by_reference(api_base, reference))
except Exception as ex:
print(f'Failed to reference {reference} from {api_base}\n{ex}')
return result
bundle = create_single_patient_medication_bundle(api_base, patient_id)
bundle
```
### What should we do about "bad" references?
Some examples of "bad" references
- Invalid reference value – points to a server that doesn't exist
- Server can’t find resource by ID
- Unknown reference format ...
We could
- Raise an error as soon as we hit a bad reference
- you won't get any data if there is even 1 problem with a reference )o:
- Silently ignore problems and just get what we can
- you will know that references are missing (o: but won't know why )o:
- Build up a list of issues that can be returned with the patient bundle
- you can choose what to do about each kind of issue
- TODO: this is probably the preferred option
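A minimal sketch of that third option (assumption: the fetch function raises on a bad reference, mirroring how `get_by_reference` behaves; `resolve_references` is not part of the existing modules):

```python
def resolve_references(references, fetch):
    """Fetch every reference we can, recording failures instead of raising."""
    entries, issues = [], []
    for reference in set(references):
        try:
            entries.extend(fetch(reference))
        except Exception as ex:
            issues.append({'reference': reference, 'error': str(ex)})
    return entries, issues

# toy fetch function standing in for get_by_reference
def fetch(reference):
    if reference.startswith('Unknown/'):
        raise ValueError('server cannot find resource')
    return [{'resource': {'id': reference}}]

entries, issues = resolve_references(['Medication/1', 'Unknown/2'], fetch)
# one resolved entry, one recorded issue
```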
```
#export
def save_single_patient_medication_bundle(bundle, output_path='data'):
"Write a patient medication bundle to file."
Path(output_path).mkdir(exist_ok=True)
patient = bundle['entry'][0]['resource']
if patient['resourceType'] != 'Patient':
raise Exception(f'expected a patient but found {patient}')
patient_id = patient['id']
f_name = f'{output_path}/patient_medication_bundle_{patient_id}.json'
with open(f_name, 'w') as f:
json.dump(bundle, f, indent=2)
print('Bundle saved to', f_name)
```
Now we can save the JSON bundle to file to pass on to the next step of the process (o:
```
save_single_patient_medication_bundle(bundle)
```
## Bundle cleanup
The result of `create_single_patient_medication_bundle` is a `collection`, so we need to remove `search` elements from each `entry`. This removes some validation errors reported by https://inferno.healthit.gov/validator/ - thanks Mike (o:
TODO: Do we care what the `search` element is telling us? i.e. what if it's not `match`?
```
#export
def handle_entry_search(bundle):
"Remove `search` elements from each `entry`"
for entry in bundle['entry']:
if 'search' in entry: del entry['search']
return bundle
```
## Bundle filtering
TODO: Moved to 20a_status_filter - clean this up
### Medication status filter
Remove medication if the status tells us it was not or will not be taken.
- https://www.hl7.org/fhir/valueset-medicationrequest-status.html
- https://www.hl7.org/fhir/valueset-medicationdispense-status.html
- https://www.hl7.org/fhir/valueset-medication-admin-status.html
- https://www.hl7.org/fhir/valueset-medication-statement-status.html
#### Statuses that we want to remove from the bundle
- MedicationRequest (Include: active, on-hold, completed, entered-in-error, unknown)
- cancelled
- The prescription has been withdrawn before any administrations have occurred
- stopped
- Actions implied by the prescription are to be permanently halted, before all of the administrations occurred.
- TODO: This is a ? **halted, before all ...** i.e. might some of the administrations have occurred?
- draft
- The prescription is not yet 'actionable'
- MedicationDispense (Include: on-hold, completed, unknown)
- preparation
- The core event has not started yet,
- in-progress
- The dispensed product is ready for pickup
- cancelled
- The dispensed product was not and will never be picked up by the patient
- entered-in-error
- The dispense was entered in error and therefore nullified
- stopped
- Actions implied by the dispense have been permanently halted, before all of them occurred
- TODO: This is a ? **halted, before all ...** i.e. might some of the actions have occurred?
- declined
- The dispense was declined and not performed.
- MedicationAdministration (Include: in-progress, on-hold, completed, stopped, unknown)
- not-done
- The administration was terminated prior to any impact on the subject
- entered-in-error
- The administration was entered in error and therefore nullified
- MedicationStatement (Include: active, completed, entered-in-error, intended, stopped, on-hold, unknown)
- not-taken
- The medication was not consumed by the patient
#### What if these statuses are not appropriate for every study?
It's possible that a study needs to see medication records confirming that a medication was not taken.
i.e. If previous treatment with a medication is an exclusion criterion, absence of a medication record might not be enough to be sure the patient didn't take it.
So we'll need to make filters configurable ...
#### Should we always run the status filter?
The CMOCCUR part of FHIR-to-CDISC Mappings xlsx includes status filtering instructions.
i.e. we might not want to implement status filtering on the patient medication bundle.
So we'll need to make filters optional ...
```
#export
def medication_status_filter(entry):
"Remove medications if the status tells us the medication was not or will not be taken"
statuses_to_remove_map = dict(
MedicationRequest=['cancelled','stopped','draft'],
MedicationDispense=['preparation','in-progress','cancelled','entered-in-error','stopped','declined'],
MedicationAdministration=['not-done','entered-in-error'],
MedicationStatement=['not-taken'])
resource = entry.get('resource', {})
resourceType, status = resource.get('resourceType'), resource.get('status')
statuses_to_remove = statuses_to_remove_map.get(resourceType)
if statuses_to_remove is not None and status in statuses_to_remove:
print('Removing', resourceType, 'with status', status)
return False
return True
```
### "Do Not Perform" filter
```
#export
def do_not_perform_filter(entry):
"Remove medications that have the `doNotPerform` flag set to true"
resource = entry.get('resource', {})
if resource.get('doNotPerform', False):
print('Removing', resource.get('resourceType'), 'with doNotPerform = true')
return False
return True
```
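`filter_bundle` itself is imported from `core` and not shown here; a minimal sketch of how these predicate filters are likely applied (an assumption about its behaviour, not the real implementation):

```python
def filter_bundle(bundle, predicate):
    # keep only the entries the predicate accepts
    bundle['entry'] = [entry for entry in bundle['entry'] if predicate(entry)]
    return bundle

def do_not_perform_filter(entry):
    # same logic as the exported filter above
    resource = entry.get('resource', {})
    return not resource.get('doNotPerform', False)

bundle = {'entry': [
    {'resource': {'resourceType': 'MedicationRequest', 'doNotPerform': True}},
    {'resource': {'resourceType': 'MedicationStatement'}},
]}
bundle = filter_bundle(bundle, do_not_perform_filter)
# only the MedicationStatement entry remains
```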
## Create medication bundles for all subjects in a study
When the HAPI FHIR server is available, we should be able to do something like
```
api_base = 'http://hapi.fhir.org/baseR4'
```
Find a patient in a study
```
get_as_raw_json(api_base, 'ResearchSubject')
```
List all resources associated with a study
```
research_study_id = 1171831
# Note: &_revinclude=* gives us everything referring to the study
get_as_raw_json(api_base, 'ResearchStudy', dict(_id=research_study_id, _revinclude='*'))
```
Pick a patient from the above bundle and pull medication requests ↓
```
# 'subject': {'reference': 'Patient/0c4a1143-8d1c-42ed-b509-eac97d77c9b2'
get_as_raw_json(api_base, 'MedicationRequest', dict(subject='0c4a1143-8d1c-42ed-b509-eac97d77c9b2'))
```
Create medication bundles for all subjects in a study ↓
```
study_and_subject_bundle = get_as_raw_json(
api_base, 'ResearchStudy',
dict(_id=research_study_id, _revinclude='ResearchSubject:study'))
for i, entry in enumerate(study_and_subject_bundle['entry']):
resource = entry.get('resource', {})
if resource.get('resourceType', 'unk') != 'ResearchSubject': continue
patient_reference = resource.get('individual',{}).get('reference')[8:]
bundle = create_single_patient_medication_bundle(api_base, patient_reference)
bundle = handle_entry_search(bundle)
bundle = filter_bundle(bundle, medication_status_filter)
bundle = filter_bundle(bundle, do_not_perform_filter)
save_single_patient_medication_bundle(bundle)
if i>1: break # stop early (o:
```
Note: ↑ We're starting to build a bundle processing pipeline (by adding calls to `handle_entry_search` and `filter_bundle`) - and we'll add more functions like this to remove non-concomitant medications etc
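The `[8:]` slice in the loop above assumes every `individual.reference` starts with `Patient/`. A slightly more defensive parse might look like this (a sketch; `reference_id` is not part of the existing modules):

```python
def reference_id(reference, expected_type='Patient'):
    """Return the id part of 'Type/id', or None if the type doesn't match."""
    prefix = expected_type + '/'
    if reference and reference.startswith(prefix):
        return reference[len(prefix):]
    return None

reference_id('Patient/0c4a1143-8d1c-42ed-b509-eac97d77c9b2')  # the bare id
reference_id('Group/123')  # None, since it isn't a Patient reference
```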
## Convert FHIR bundle to SDTM csv
Jay Gustafson built https://mylinks-prod-sdtmtool.azurewebsites.net/TransformBundle that allows parsing a FHIR bundle into SDTM csv content.
Also, you can POST a raw json string to https://mylinks-prod-sdtmtool.azurewebsites.net/TransformBundle/Process and it will return a JSON object containing the SDTM csv content in the following structure:
```
{'cmcsv': '"STUDYID","DOMAIN","USUBJID",...\r\n"RWD-STUDY-01","CM","RWD-SUBJECT-01-30",...\r\n',
'suppcmcsv': '"STUDYID","RDOMAIN","USUBJID","IDVAR","IDVARVAL","QNAM","QLABEL","QVAL"\r\n"RWD-STUDY-01","CM","RWD-SUBJECT-01-30","CMSEQ","1","CMSOURCE","Resource Name","MedicationRequest"\r\n...',
'dmcsv': '"STUDYID","DOMAIN","USUBJID",...\r\n"RWD-STUDY-01","DM","RWD-SUBJECT-01-30",...\r\n'}
```
```
#demo
response = requests.post('https://mylinks-prod-sdtmtool.azurewebsites.net/TransformBundle/Process', json=bundle)
```
### View the response as a table
```
#demo
pd.read_csv(StringIO(response.json()['cmcsv']))
#hide
from nbdev.export import notebook2script
notebook2script()
```
| github_jupyter |
## Introduction
**Offer Recommender example:**
___
In this example we will show how to:
- Setup the required environment for accessing the ecosystem prediction server.
- View and track business performance of the Offer Recommender.
## Setup
**Setting up import path:**
___
Add the path of the ecosystem notebook wrappers. It needs to point to the ecosystem notebook wrapper to allow access to the packages required for running the prediction server via Python.
- **notebook_path:** Path to notebook repository.
```
notebook_path = "/path/to/ecosystem/notebook/repository"
# ---- Uneditable ----
import sys
sys.path.append(notebook_path)
# ---- Uneditable ----
```
**Import required packages:**
___
Import and load all packages required for the following use case.
```
# ---- Uneditable ----
import pymongo
from bson.son import SON
import pprint
import pandas as pd
import json
import numpy
import operator
import datetime
import time
import os
import matplotlib.pyplot as plt
from prediction import jwt_access
from prediction import notebook_functions
from prediction.apis import functions
from prediction.apis import data_munging_engine
from prediction.apis import data_management_engine
from prediction.apis import worker_h2o
from prediction.apis import prediction_engine
from prediction.apis import worker_file_service
%matplotlib inline
# ---- Uneditable ----
```
**Setup prediction server access:**
___
Create access token for prediction server.
- **url:** Url for the prediction server to access.
- **username:** Username for prediction server.
- **password:** Password for prediction server.
```
url = "http://demo.ecosystem.ai:3001/api"
username = "user@ecosystem.ai"
password = "cd486be3-9955-4364-8ccc-a9ab3ffbc168"
# ---- Uneditable ----
auth = jwt_access.Authenticate(url, username, password)
# ---- Uneditable ----
database = "master"
collection = "bank_customer"
field = "{}"
limit = 100
projections = "{}"
skip = 0
output = data_management_engine.get_data(auth, database, collection, field, limit, projections, skip)
df = pd.DataFrame(output)
df.head()
counts = df["education"].value_counts()
counts.plot(kind="bar")
counts = df["gender"].value_counts()
counts.plot(kind="bar")
counts = df["changeIndicatorThree"].value_counts()
counts.plot(kind="bar")
counts = df["language"].value_counts()
counts.plot(kind="bar")
counts = df["numberOfProducts"].value_counts()
counts.plot(kind="bar")
counts = df["changeIndicatorSix"].value_counts()
counts.plot(kind="bar")
counts = df["numberOfChildren"].value_counts()
counts.plot(kind="bar")
counts = df["numberOfAddresses"].value_counts()
counts.plot(kind="bar")
counts = df["segment_enum"].value_counts()
counts.plot(kind="bar")
counts = df["region"].value_counts()
counts.plot(kind="bar")
counts = df["age"].value_counts()
counts.plot(kind="bar")
counts = df["proprtyOwnership"].value_counts()
counts.plot(kind="bar")
```
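The repeated `value_counts()`/`plot()` cells above can be consolidated into a single loop over the categorical columns, one bar chart per column. This is a sketch using a tiny stand-in frame; in the notebook, `df` comes from `data_management_engine.get_data`.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt

# Tiny stand-in frame for illustration.
df = pd.DataFrame({
    "education": ["primary", "tertiary", "tertiary"],
    "gender": ["F", "M", "F"],
})

columns = ["education", "gender"]  # extend with the remaining fields as needed
fig, axes = plt.subplots(1, len(columns), figsize=(5 * len(columns), 4))
for ax, col in zip(axes, columns):
    # One bar chart of category counts per column.
    df[col].value_counts().plot(kind="bar", ax=ax, title=col)
fig.tight_layout()
```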
```
import jax.numpy as jnp
import numpy as np
import jax
import urdf_loader
chain = urdf_loader.read_chain_from_urdf('data/kuka_iiwa.urdf', 'lbr_iiwa_link_0', 'lbr_iiwa_link_7')
kinematics = urdf_loader.make_kinematic_chain_function(chain)
kinematics_j = jax.jit(kinematics)
zero_pose = jnp.array([0., 0., 0., 0., 0., 0., 0.])
kinematics(zero_pose)
joint_configurations = np.random.rand(100, 7)
ans = jax.vmap(kinematics_j)(joint_configurations)
%timeit jax.vmap(kinematics_j)(joint_configurations)
%timeit jax.vmap(kinematics)(joint_configurations)
%timeit kinematics_j(zero_pose)
```
The problem that I nominally want to solve is:
$$
\min_{x(t)} \int_0^T L\left(x(t), u(t)\right)dt + \Phi\left(x(T)\right) \\
s.t. \\
\dot{x}(t) = f\left(x(t), u(t)\right)\\
x_l < x(t) < x_u
$$
We can discretize the problem with $h=t_{k+1} - t_k$:
$$
\min_{\hat{x}, \hat{u}} \sum_k^{N-1} \frac{h}{2}\left(L_{k+1} + L_k\right) + \Phi\left(x_N\right)
\\
s.t.\\
x_c = \left(\frac{1}{2}\left(x_{k+1} + x_k\right) - \frac{h}{8}\left(f_{k+1} - f_k\right)\right) \\
0 = \frac{3}{2h}\left(\left(x_{k+1} - x_k\right) - \frac{h}{6}\left(f_{k+1} + 4f_c + f_k\right) \right) \\
x_l < x_k < x_u
$$
Including the equality constraint gives the following problem:
$$
\min_{\hat{x}, \hat{u}, \hat{\lambda}} \sum_k^{N-1} \frac{h}{2}\left(L_{k+1} + L_k\right) + \sum_k^{N-1} \lambda_k \left(\left(x_{k+1} - x_k\right) - \frac{h}{6}\left(f_{k+1} + 4f_c + f_k\right) \right) + \Phi\left(x_N\right)
\\
s.t.\\
x_l < x_k < x_u
$$
The first-order conditions are then:
$$
\frac{\partial V}{\partial x_0} =
\frac{h}{2}\frac{\partial L_0}{\partial x_0} +
\lambda_0 \left(-1 - \frac{h}{6}\left(4\frac{\partial f_c}{\partial x_0} + \frac{\partial f_0}{\partial x_0}\right)\right)
$$
$$
\frac{\partial V}{\partial x_{k+1}} =
\frac{h}{2}\frac{\partial L_{k+1}}{\partial x_{k+1}} +
\lambda_{k+1} \left(-1 - \frac{h}{6}\left(4\frac{\partial f_{c+1}}{\partial x_{k+1}}+\frac{\partial f_{k+1}}{\partial x_{k+1}}\right)\right)
+
\frac{h}{2}\frac{\partial L_{k+1}}{\partial x_{k+1}} +
\lambda_k \left(1 - \frac{h}{6}\left(\frac{\partial f_{k+1}}{\partial x_{k+1}} + 4\frac{\partial f_c}{\partial x_{k+1}}\right)\right)
$$
$$
\frac{\partial V}{\partial x_N} = \frac{\partial \Phi}{\partial x_N}
$$
$$
\frac{\partial V}{\partial u_0} =
\frac{h}{2}\frac{\partial L_0}{\partial u_0} +
\lambda_0 \left(- \frac{h}{6}\left(4\frac{\partial f_c}{\partial u_0} + \frac{\partial f_0}{\partial u_0}\right)\right)
$$
$$
\frac{\partial V}{\partial u_{k+1}} =
\frac{h}{2}\frac{\partial L_{k+1}}{\partial u_{k+1}} +
\lambda_{k+1} \left(- \frac{h}{6}\left(4\frac{\partial f_{c+1}}{\partial u_{k+1}}+\frac{\partial f_{k+1}}{\partial u_{k+1}}\right)\right)
+
\frac{h}{2}\frac{\partial L_{k+1}}{\partial u_{k+1}} +
\lambda_k \left(- \frac{h}{6}\left(\frac{\partial f_{k+1}}{\partial u_{k+1}} + 4\frac{\partial f_c}{\partial u_{k+1}}\right)\right)
$$
$$
\frac{\partial V}{\partial \lambda_k} = \left(x_{k+1} - x_k\right) - \frac{h}{6}\left(f_{k+1} + 4f_c + f_k\right)
$$
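The collocation equations above can be sketched as a single per-interval defect function for a generic dynamics function `f(x, u)`. This is written with plain numpy for clarity; swapping in `jax.numpy` would make it differentiable, which is what the first-order conditions need. The piecewise-linear control interpolation for `u_c` is an assumption, not stated above.

```python
import numpy as np

def hermite_simpson_defect(x_k, x_kp1, u_k, u_kp1, h, f):
    f_k, f_kp1 = f(x_k, u_k), f(x_kp1, u_kp1)
    # Interior (collocation) point x_c from the discretization above.
    x_c = 0.5 * (x_kp1 + x_k) - (h / 8.0) * (f_kp1 - f_k)
    u_c = 0.5 * (u_kp1 + u_k)  # assumption: piecewise-linear control
    f_c = f(x_c, u_c)
    # Simpson-quadrature defect; zero when the dynamics are satisfied exactly.
    return (x_kp1 - x_k) - (h / 6.0) * (f_kp1 + 4.0 * f_c + f_k)

# Constant dynamics x' = u are integrated exactly, so the defect vanishes.
f = lambda x, u: u
defect = hermite_simpson_defect(np.array([0.0]), np.array([0.1]),
                                np.array([1.0]), np.array([1.0]), 0.1, f)
```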
```
Plan is to just take the Hamiltonian
and discretize it temporally, then use the magic of
```
> Code to accompany **Chapter 10: Defending Against Adversarial Inputs**
# Fashion-MNIST - Generating Adversarial Examples on a Drop-out Network
This notebook demonstrates how to generate adversarial examples using a network that incorporates randomised drop-out.
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images/255.0
test_images = test_images/255.0
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Create a Simple Network with drop-out for Image Classification
We need to use the Keras __functional API__ (rather than the sequential API) to access the
dropout capability with `training = True` at test time.
The cell below has drop-out enabled at training time only. You can experiment by moving the drop-out layer
or adding drop-out to test time by replacing the `Dropout` line as indicated in the comments.
```
from tensorflow.keras.layers import Input, Dense, Flatten, Dropout
from tensorflow.keras.models import Model
inputs = Input(shape=(28,28))
x = Flatten()(inputs)
x = Dense(56, activation='relu')(x)
x = Dropout(0.2)(x) # Use this line for drop-out at training time only
# x = Dropout(0.2)(x, training=True) # Use this line instead for drop-out at test and training time
x = Dense(56, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
print(model)
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
```
Train the model and evaluate it.
If drop-out is included at test time, the evaluation results will vary from run to run.
```
model.fit(train_images, train_labels, epochs=6)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Model accuracy based on test data:', test_acc)
```
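To see why test-time drop-out makes the evaluation vary, here is a numpy stand-in for a single dense layer followed by drop-out: repeated forward passes with fresh masks give different outputs, and averaging them (Monte Carlo drop-out) gives a more stable estimate. The layer sizes and drop rate are illustrative assumptions, not taken from the model above.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 10))   # stand-in weights for one dense layer
x = rng.random(784)              # stand-in flattened 28x28 input

def forward(x, drop_rate=0.2):
    mask = rng.random(x.shape) >= drop_rate        # fresh drop-out mask each pass
    return (x * mask / (1.0 - drop_rate)) @ W      # inverted-dropout scaling

runs = np.stack([forward(x) for _ in range(50)])   # 50 stochastic passes
mc_mean = runs.mean(axis=0)                        # Monte Carlo estimate
```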
## Create Some Adversarial Examples Using the Model
```
# Import helper function
import sys
sys.path.append('..')
from strengtheningdnns.adversarial_utils import generate_adversarial_data
import foolbox
fmodel = foolbox.models.TensorFlowModel.from_keras(model, bounds=(0, 255))
num_images = 1000
x_images = train_images[0:num_images, :]
attack_criterion = foolbox.criteria.Misclassification()
attack_fn = foolbox.attacks.GradientSignAttack(fmodel, criterion=attack_criterion)
x_adv_images, x_adv_perturbs, x_labels = generate_adversarial_data(original_images = x_images,
predictions = model.predict(x_images),
attack_fn = attack_fn)
```
## Take a Peek at some Results
The adversarial examples plotted should all be misclassified. However, if the model is running with drop-out at test
time as well (see the model creation above), some may be classified correctly because the model's behaviour is stochastic.
```
images_to_plot = x_adv_images
import matplotlib.pyplot as plt
adversarial_predictions = model.predict(images_to_plot)
plt.figure(figsize=(15, 30))
for i in range(30):
plt.subplot(10,5,i+1)
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(images_to_plot[i], cmap=plt.cm.binary)
predicted_label = np.argmax(adversarial_predictions[i])
original_label = x_labels[i]
if predicted_label == original_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} ({})".format(class_names[predicted_label],
class_names[original_label]),
color=color)
```
Save the images if you wish so you can load them later.
```
np.save('../resources/test_images_GSAttack_dropout', x_adv_images)
```
```
# Install TensorFlow
# !pip install -q tensorflow-gpu==2.0.0-beta1
try:
%tensorflow_version 2.x # Colab only.
except Exception:
pass
import tensorflow as tf
print(tf.__version__)
# Load in the data
from sklearn.datasets import load_breast_cancer
# load the data
data = load_breast_cancer()
# check the type of 'data'
type(data)
# note: it is a Bunch object
# this basically acts like a dictionary where you can treat the keys like attributes
data.keys()
# 'data' (the attribute) means the input data
data.data.shape
# it has 569 samples, 30 features
# 'targets'
data.target
# note how the targets are just 0s and 1s
# normally, when you have K targets, they are labeled 0..K-1
# their meaning is not lost
data.target_names
# there are also 569 corresponding targets
data.target.shape
# you can also determine the meaning of each feature
data.feature_names
# normally we would put all of our imports at the top
# but this lets us tell a story
from sklearn.model_selection import train_test_split
# split the data into train and test sets
# this lets us simulate how our model will perform in the future
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.33)
N, D = X_train.shape
# Scale the data
# you'll learn why scaling is needed in a later course
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Now all the fun Tensorflow stuff
# Build the model
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(D,)),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Alternatively, you can do:
# model = tf.keras.models.Sequential()
# model.add(tf.keras.layers.Dense(1, input_shape=(D,), activation='sigmoid'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
# Train the model
r = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100)
# Evaluate the model - evaluate() returns loss and accuracy
print("Train score:", model.evaluate(X_train, y_train))
print("Test score:", model.evaluate(X_test, y_test))
# Plot what's returned by model.fit()
import matplotlib.pyplot as plt
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot the accuracy too
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
```
# Part 2: Making Predictions
This goes with the lecture "Making Predictions"
```
# Make predictions
P = model.predict(X_test)
print(P) # they are outputs of the sigmoid, interpreted as probabilities p(y = 1 | x)
# Round to get the actual predictions
# Note: has to be flattened since the targets are size (N,) while the predictions are size (N,1)
import numpy as np
P = np.round(P).flatten()
print(P)
# Calculate the accuracy, compare it to evaluate() output
print("Manually calculated accuracy:", np.mean(P == y_test))
print("Evaluate output:", model.evaluate(X_test, y_test))
```
# Part 3: Saving and Loading a Model
This goes with the lecture "Saving and Loading a Model"
```
# Let's now save our model to a file
model.save('linearclassifier.h5')
# Check that the model file exists
!ls -lh
# Let's load the model and confirm that it still works
# Note: there is a bug in Keras where load/save only works if you DON'T use the Input() layer explicitly
# So, make sure you define the model with ONLY Dense(1, input_shape=(D,))
# At least, until the bug is fixed
# https://github.com/keras-team/keras/issues/10417
model = tf.keras.models.load_model('linearclassifier.h5')
print(model.layers)
model.evaluate(X_test, y_test)
# Download the file - requires Chrome (at this point)
from google.colab import files
files.download('linearclassifier.h5')
```
# Sklearn
## sklearn.datasets
Documentation: http://scikit-learn.org/stable/datasets/
```
from sklearn import datasets
%pylab inline
```
### Generating datasets
**Ways to generate data:**
* make_classification
* make_regression
* make_circles
* make_checkerboard
* etc
#### datasets.make_circles
```
circles = datasets.make_circles()
print("features: {}".format(circles[0][:10]))
print("target: {}".format(circles[1][:10]))
from matplotlib.colors import ListedColormap
colors = ListedColormap(['blue', 'green'])
pyplot.figure(figsize(8, 8))
pyplot.scatter(list(map(lambda x: x[0], circles[0])), list(map(lambda x: x[1], circles[0])), c = circles[1], cmap = colors)
def plot_2d_dataset(data, colors):
pyplot.figure(figsize(8, 8))
pyplot.scatter(list(map(lambda x: x[0], data[0])), list(map(lambda x: x[1], data[0])), c = data[1], cmap = colors)
noisy_circles = datasets.make_circles(noise = 0.15)
plot_2d_dataset(noisy_circles, colors)
```
#### datasets.make_classification
```
simple_classification_problem = datasets.make_classification(n_features = 2, n_informative = 1,
n_redundant = 1, n_clusters_per_class = 1,
random_state = 1 )
plot_2d_dataset(simple_classification_problem, colors)
classification_problem = datasets.make_classification(n_features = 2, n_informative = 2, n_classes = 4,
n_redundant = 0, n_clusters_per_class = 1, random_state = 1)
colors = ListedColormap(['red', 'blue', 'green', 'orange'])
plot_2d_dataset(classification_problem, colors)
```
### "Toy" datasets
**Datasets:**
* load_iris
* load_boston
* load_diabetes
* load_digits
* load_linnerud
* etc
#### datasets.load_iris
```
iris = datasets.load_iris()
iris
iris.keys()
print(iris.DESCR)
print("feature names: {}".format(iris.feature_names))
print("target names: {names}".format(names = iris.target_names))
iris.data[:10]
iris.target
```
### Visualizing the dataset
```
from pandas import DataFrame
iris_frame = DataFrame(iris.data)
iris_frame.columns = iris.feature_names
iris_frame['target'] = iris.target
iris_frame.head()
iris_frame.target = iris_frame.target.apply(lambda x : iris.target_names[x])
iris_frame.head()
iris_frame[iris_frame.target == 'setosa'].hist('sepal length (cm)')
pyplot.figure(figsize(20, 24))
plot_number = 0
for feature_name in iris['feature_names']:
for target_name in iris['target_names']:
plot_number += 1
pyplot.subplot(4, 3, plot_number)
pyplot.hist(iris_frame[iris_frame.target == target_name][feature_name])
pyplot.title(target_name)
pyplot.xlabel('cm')
pyplot.ylabel(feature_name[:-4])
```
### Bonus: the seaborn library
```
import seaborn as sns
sns.pairplot(iris_frame, hue = 'target')
?sns.set()
sns.set(font_scale = 1.3)
data = sns.load_dataset("iris")
sns.pairplot(data, hue = "species")
```
#### **If the seaborn library caught your interest:**
* installation: https://stanford.edu/~mwaskom/software/seaborn/installing.html
* installation via Anaconda: https://anaconda.org/anaconda/seaborn
* tutorial: https://stanford.edu/~mwaskom/software/seaborn/tutorial.html
* examples: https://stanford.edu/~mwaskom/software/seaborn/examples/
# Responsible ML - Homomorphic Encryption
## Install prerequisites
Before running the notebook, make sure the correct versions of these libraries are installed.
```
!pip install encrypted-inference --upgrade
```
## Setup Azure ML
In the next cell, we create a new Workspace config object using the `<subscription_id>`, `<resource_group_name>`, and `<workspace_name>`. This will fetch the matching Workspace and prompt you for authentication. Please click on the link and input the provided details.
For more information on **Workspace**, please visit: [Microsoft Workspace Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py)
`<subscription_id>` = You can get this ID from the landing page of your Resource Group.
`<resource_group_name>` = This is the name of your Resource Group.
`<workspace_name>` = This is the name of your Workspace.
```
from azureml.core.workspace import Workspace
import warnings
warnings.filterwarnings('ignore')
try:
ws = Workspace(
subscription_id = '<subscription_id>',
resource_group = '<resource_group>',
workspace_name = '<workspace_name>')
# Writes workspace config file
ws.write_config()
print('Library configuration succeeded')
except Exception as e:
print(e)
print('Workspace not found')
```
# Homomorphic Encryption
Homomorphic Encryption refers to a new type of encryption technology that allows computation to be performed directly on encrypted data, without requiring any decryption in the process.
<img src="./images/encrypted.png" alt="Forest" style="display: inline-block;margin-left: auto;margin-right: auto;width:45%">
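As a toy illustration of the homomorphic property only: textbook RSA (without padding) is multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product. This is NOT the lattice-based scheme used by the `encrypted-inference` library; it just shows the idea of computing on data the server cannot read. The tiny key below is a classic classroom example.

```python
# Toy demo: textbook RSA is multiplicatively homomorphic.
p, q = 61, 53
n = p * q                 # 3233
e, d = 17, 2753           # classic small demo key pair (e*d = 1 mod (p-1)(q-1))
encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)
product_cipher = (c1 * c2) % n        # computed entirely on ciphertexts
result = decrypt(product_cipher)      # -> 42, without ever exposing 7 or 6
```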
## Fetch Model from registry
Next, fetch the latest model from our model registry.
```
from azureml.core.model import Model
from scripts.utils import *
tabular = fetch_registered_dataset(ws)
synth_df, Y = prepareDataset(tabular)
X_train, X_test, Y_train, Y_test, A_train, A_test = split_dataset(synth_df, Y)
model = Model(ws, 'loan_approval_grid_model_30')
model.version
```
## Create managed-endpoints directory
Create a new directory to hold the configuration files for deploying a managed endpoint.
```
import os
managed_endpoints = './managed-endpoints'
# Working directory
if not os.path.exists(managed_endpoints):
os.makedirs(managed_endpoints)
if os.path.exists(os.path.join(managed_endpoints,".amlignore")):
os.remove(os.path.join(managed_endpoints,".amlignore"))
```
## Create Scoring File
Creating the scoring file is the next step before deploying the service. This file is responsible for the actual generation of predictions using the model. The values or scores generated can represent predictions of future values, but they might also represent a likely category or outcome.
The first thing to do in the scoring file is to fetch the model. This is done by calling `Model.get_model_path()` and passing the model name as a parameter.
After the model has been loaded, the `model.predict()` function should be called to start the scoring process.
For more information on **Machine Learning - Score**, please visit: [Microsoft Machine Learning - Score Documentation](https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/machine-learning-score)
```
%%writefile $managed_endpoints/score.py
import os
import json
import pandas as pd
from azureml.core.model import Model
import joblib
from azure.storage.blob import BlobServiceClient
from encrypted.inference.eiserver import EIServer
def init():
global model
# this name is model.id of model that we want to deploy
model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "loan_approval_grid_model_30.pkl")
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
global server
server = EIServer(model.coef_, model.intercept_, verbose=True)
def run(raw_data):
json_properties = json.loads(raw_data)
key_id = json_properties['key_id']
conn_str = json_properties['conn_str']
container = json_properties['container']
data = json_properties['data']
# download the Galois keys from blob storage
blob_service_client = BlobServiceClient.from_connection_string(conn_str=conn_str)
blob_client = blob_service_client.get_blob_client(container=container, blob=key_id)
public_keys = blob_client.download_blob().readall()
result = {}
# make prediction
result = server.predict(data, public_keys)
# you can return any data type as long as it is JSON-serializable
return result
```
## Create the environment definition
The following file contains the details of the environment to host the model and code.
```
%%writefile $managed_endpoints/score-new.yml
name: loan-managed-env
channels:
- conda-forge
dependencies:
- python=3.7
- numpy
- pip
- scikit-learn==0.22.1
- scipy
- pip:
- azureml-defaults
- azureml-sdk[notebooks,automl]
- pandas
- inference-schema[numpy-support]
- joblib
- numpy
- scipy
- encrypted-inference==0.9
- azure-storage-blob
```
## Define the endpoint configuration
Specific inputs are required to deploy a model on an online endpoint:
1. Model files.
1. The code that's required to score the model.
1. An environment in which your model runs.
1. Settings to specify the instance type and scaling capacity.
```
%%writefile $managed_endpoints/endpointconfig.yml
name: loan-managed-endpoint
type: online
auth_mode: key
traffic:
blue: 100
deployments:
#blue deployment
- name: blue
model: azureml:loan_approval_grid_model_30:1
code_configuration:
code:
local_path: ./
scoring_script: score.py
environment:
name: loan-managed-env
version: 1
path: ./
conda_file: file:./score-new.yml
docker:
image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
instance_type: Standard_DS3_v2
scale_settings:
scale_type: manual
instance_count: 1
min_instances: 1
max_instances: 2
```
## Deployment
<img align="center" src="./images/MLOPs-2.gif"/>
## Deploy your managed online endpoint to Azure
This deployment might take up to 15 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment will finish processing more quickly.
```
!az ml endpoint create -g [your resource group name] -w [your AML workspace name] -n loan-managed-endpoint -f ./managed-endpoints/endpointconfig.yml
```
## Create public and private keys
In order to work with Homomorphic Encryption we need to generate our private and public keys to satisfy the encryption process.
`EILinearRegressionClient` allows us to create a homomorphic encryption based client, and public keys.
To make the generated public keys available to the deployed service, we need to get them into the data store. The Workspace will already have a default data store. The function `ws.get_default_datastore()` returns an instance of the data store associated with the Workspace.
For more information on **Datastore**, please visit: [Microsoft Datastore Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore?view=azure-ml-py)
For more information on **How to deploy an encrypted inferencing web service**, please visit: [Microsoft How to deploy an encrypted inferencing web service Documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-homomorphic-encryption-seal)
```
import os
import azureml.core
from azureml.core import Workspace, Datastore
from encrypted.inference.eiclient import EILinearRegressionClient
# Create a new Encrypted inference client and a new secret key.
edp = EILinearRegressionClient(verbose=True)
public_keys_blob, public_keys_data = edp.get_public_keys()
datastore = ws.get_default_datastore()
container_name = datastore.container_name
# Create a local file and write the keys to it
public_keys = open(public_keys_blob, "wb")
public_keys.write(public_keys_data)
public_keys.close()
# Upload the file to blob store
datastore.upload_files([public_keys_blob])
# Delete the local file
os.remove(public_keys_blob)
sample_index = 4
print(X_test.iloc[sample_index].to_frame())
inputData = X_test.iloc[sample_index]
sample_data = (X_test.to_numpy())
raw_data = edp.encrypt(sample_data[sample_index])
```
## Testing the Service with Encrypted data
Now with test data, we can get it into a suitable format to consume the endpoint. The encrypted payload, the public-key blob name, and the blob storage connection details are packaged into a single JSON request.
```
import json
#pass the connection string for blob storage to give the server access to the uploaded public keys
conn_str_template = 'DefaultEndpointsProtocol={};AccountName={};AccountKey={};EndpointSuffix=core.windows.net'
conn_str = conn_str_template.format(datastore.protocol, datastore.account_name, datastore.account_key)
#build the json
data = json.dumps({"data": raw_data, "key_id" : public_keys_blob, "conn_str" : conn_str, "container" : container_name })
```
## Generate a sample request JSON file
Export some test data to a JSON file we can send to the endpoint.
```
with open(os.path.join(managed_endpoints, 'sample-request.json'), 'w') as file:
file.write(data)
```
## Invoke the endpoint to score data by using your model
You can use either the invoke command or a REST client of your choice to invoke the endpoint and score against it.
```
!az ml endpoint invoke -g [your resource group name] -w [your AML workspace name] -n loan-managed-endpoint --request-file ./managed-endpoints/sample-request.json > ./managed-endpoints/sample-response.json
```
## Decrypting Service Response
The below cell uses the `decrypt()` function to decrypt the response from the deployed service.
```
import numpy as np
import json
eresult = None
with open(os.path.join(managed_endpoints, 'sample-response.json'), 'r') as file:
eresult = json.loads(json.loads(file.read()))
results = edp.decrypt(eresult)
print ('Decrypted the results ', results)
#Threshold the decrypted score to identify the prediction result
prediction = 'Deny'
if results[0] > 0:
prediction = 'Approve'
actual = 'Deny'
if Y_test[sample_index] == 1:
actual = 'Approve'
print('Prediction:', prediction)
print('Actual:', actual)
```
## Optional: Deploy to Azure Container Instance
```
!cp $managed_endpoints/score.py ./score.py
```
## Deployment dependencies
The first step is to define the dependencies that are needed for the service to run; they are defined by calling `CondaDependencies.create()`. This create function receives as parameters the pip and conda packages to install on the remote machine. Secondly, the output of this function is persisted into a `.yml` file that will be leveraged later in the process.
Now it's time to create a `InferenceConfig` object by calling its constructor and passing the runtime type, the path to the `entry_script` (score.py), and the `conda_file` (the previously created file that holds the environment dependencies).
The `CondaDependencies.create()` function initializes a new CondaDependencies object.
For more information on **CondaDependencies**, please visit: [Microsoft CondaDependencies Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py)
For more information on **InferenceConfig**, please visit: [Microsoft InferenceConfig Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py)
```
from azureml.core.model import InferenceConfig, Model
from azureml.core.conda_dependencies import CondaDependencies
azureml_pip_packages = ['azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',
'azureml-interpret', 'azureml-dataprep','azureml-dataprep[fuse,pandas]','joblib',
'matplotlib','scikit-learn==0.22.1','seaborn','fairlearn','encrypted-inference==0.9','azure-storage-blob']
# Define dependencies needed in the remote environment
myenv = CondaDependencies.create(pip_packages=azureml_pip_packages)
# Write dependencies to yml file
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
# Create an inference config object based on the score.py and myenv.yml from previous steps
inference_config = InferenceConfig(runtime= "python",
entry_script="score.py",
conda_file="myenv.yml")
```
## Deploy model to Azure Container Instance
In order to deploy the to an Azure Container Instance, the function `Model.deploy()` should be called, passing along the workspace object, service name and list of models to deploy.
`Webservice` defines base functionality for deploying models as web service endpoints in Azure Machine Learning. Webservice constructor is used to retrieve a cloud representation of a Webservice object associated with the provided Workspace.
The `AciWebservice` class represents a machine learning model deployed as a web service endpoint on Azure Container Instances. A deployed service is created from a model, script, and associated files. The resulting web service is a load-balanced, HTTP endpoint with a REST API. You can send data to this API and receive the prediction returned by the model.
For more information on **Model**, please visit: [Microsoft Model Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py)
For more information on **Webservice**, please visit: [Microsoft Webservice Class Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice(class)?view=azure-ml-py)
For more information on **AciWebservice**, please visit: [Microsoft AciWebservice Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice?view=azure-ml-py)
**Note:** Please wait for the execution of the cell to finish before moving forward.
```
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.exceptions import WebserviceException
from azureml.core.model import Model
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
description = "Loan approval service")
service_name_aci = 'loan-approval-aci'
print(service_name_aci)
try:
aci_service = Webservice(ws, service_name_aci)
print(aci_service.state)
except WebserviceException:
aci_service = Model.deploy(ws, service_name_aci, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
```
## Testing the Service with Encrypted data
Now with test data, we can get it into a suitable format to consume the web service. First an instance of the web service should be obtained by calling the constructor `Webservice()` with the Workspace object and the service name as parameters.
For more information on **Webservice**, please visit: [Microsoft Webservice Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice?view=azure-ml-py)
```
import json
from azureml.core import Webservice
service = Webservice(ws, service_name_aci)
#pass the connection string for blob storage to give the server access to the uploaded public keys
conn_str_template = 'DefaultEndpointsProtocol={};AccountName={};AccountKey={};EndpointSuffix=core.windows.net'
conn_str = conn_str_template.format(datastore.protocol, datastore.account_name, datastore.account_key)
#build the json
data = json.dumps({"data": raw_data, "key_id" : public_keys_blob, "conn_str" : conn_str, "container" : container_name })
data = bytes(data, encoding='ASCII')
print ('Making an encrypted inference web service call ')
eresult = service.run(input_data=data)
print ('Received encrypted inference results')
print('Encrypted results:', eresult[0][0:100], '...')
```
## Decrypting Service Response
The below cell uses the `decrypt()` function to decrypt the response from the deployed ACI Service.
```
import numpy as np
results = edp.decrypt(eresult)
print ('Decrypted the results ', results)
#Map the decrypted score to a prediction (positive score -> Approve)
prediction = 'Deny'
if results[0] > 0:
prediction = 'Approve'
actual = 'Deny'
if Y_test[sample_index] == 1:
actual = 'Approve'
print('Prediction :', prediction)
print('Actual :', actual)
```
# Writing OER sets to file
---
### Import Modules
```
import os
print(os.getcwd())
import sys
import time; ti = time.time()
import json
import pandas as pd
import numpy as np
# #########################################################
from methods import (
get_df_features_targets,
get_df_jobs,
get_df_jobs_paths,
get_df_atoms_sorted_ind,
)
from methods import create_name_str_from_tup
from methods import get_df_jobs_paths, get_df_jobs_data
# #########################################################
from local_methods import write_other_jobs_in_set
from methods import isnotebook
isnotebook_i = isnotebook()
if isnotebook_i:
from tqdm.notebook import tqdm
verbose = True
else:
from tqdm import tqdm
verbose = False
```
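The `isnotebook` helper above comes from the project's `methods` module; its implementation is not shown here. A common standalone variant of the same check (an assumption, not necessarily the project's exact code) looks like this:

```
def isnotebook():
    """Return True when running inside a Jupyter/IPython notebook kernel."""
    try:
        # get_ipython is injected into the namespace by IPython itself
        shell = get_ipython().__class__.__name__
        return shell == "ZMQInteractiveShell"
    except NameError:
        return False  # plain Python interpreter, no IPython at all

print(isnotebook())  # -> False when run as a regular script
```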
### Read Data
```
df_jobs = get_df_jobs()
df_jobs_paths = get_df_jobs_paths()
df_features_targets = get_df_features_targets()
df_atoms = get_df_atoms_sorted_ind()
df_jobs_data = get_df_jobs_data()
df_atoms = df_atoms.set_index("job_id")
```
### Main loop | writing OER sets
```
iterator = tqdm(df_features_targets.index, desc="1st loop")
for i_cnt, index_i in enumerate(iterator):
row_i = df_features_targets.loc[index_i]
# #####################################################
job_id_o_i = row_i.data.job_id_o.iloc[0]
job_id_bare_i = row_i.data.job_id_bare.iloc[0]
job_id_oh_i = row_i.data.job_id_oh.iloc[0]
# #####################################################
if job_id_bare_i is None:
continue
oh_exists = False
if job_id_oh_i is not None:
oh_exists = True
# #####################################################
df_atoms__o = df_atoms.loc[job_id_o_i]
df_atoms__bare = df_atoms.loc[job_id_bare_i]
# #####################################################
atoms__o = df_atoms__o.atoms_sorted_good
atoms__bare = df_atoms__bare.atoms_sorted_good
if oh_exists:
df_atoms__oh = df_atoms.loc[job_id_oh_i]
atoms__oh = df_atoms__oh.atoms_sorted_good
# #########################################################
# #########################################################
# dir_name = create_name_str_from_tup(name_i)
dir_name = create_name_str_from_tup(index_i)
dir_path = os.path.join(
os.environ["PROJ_irox_oer"],
"dft_workflow/job_analysis/prepare_oer_sets",
"out_data/oer_group_files",
dir_name)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
# #####################################################
atoms__o.write(
os.path.join(dir_path, "atoms__o.traj"))
atoms__o.write(
os.path.join(dir_path, "atoms__o.cif"))
atoms__bare.write(
os.path.join(dir_path, "atoms__bare.traj"))
atoms__bare.write(
os.path.join(dir_path, "atoms__bare.cif"))
if oh_exists:
atoms__oh.write(
os.path.join(dir_path, "atoms__oh.traj"))
atoms__oh.write(
os.path.join(dir_path, "atoms__oh.cif"))
# #####################################################
data_dict_to_write = dict(
job_id_o=job_id_o_i,
job_id_bare=job_id_bare_i,
job_id_oh=job_id_oh_i,
)
data_path = os.path.join(dir_path, "data.json")
with open(data_path, "w") as outfile:
json.dump(data_dict_to_write, outfile, indent=2)
# #####################################################
# Write other jobs in OER set
write_other_jobs_in_set(
job_id_bare_i,
dir_path=dir_path,
df_jobs=df_jobs, df_atoms=df_atoms,
df_jobs_paths=df_jobs_paths,
df_jobs_data=df_jobs_data,
)
```
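Each OER-set folder written above contains a small `data.json` mapping the three job ids. A minimal sketch of reading such a folder back (the `load_oer_set` helper and the toy payload below are illustrative, not part of the workflow):

```
import json
import os
import tempfile

def load_oer_set(dir_path):
    """Read the job-id mapping written for one OER set."""
    with open(os.path.join(dir_path, "data.json")) as f:
        return json.load(f)

# Round-trip check against a throwaway directory standing in for an OER-set folder
with tempfile.TemporaryDirectory() as d:
    payload = {"job_id_o": "toy_id_o", "job_id_bare": "toy_id_bare", "job_id_oh": None}
    with open(os.path.join(d, "data.json"), "w") as f:
        json.dump(payload, f, indent=2)
    print(load_oer_set(d)["job_id_o"])  # -> toy_id_o
```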
# Writing top systems to file ROUGH TEMP
```
# TOP SYSTEMS
if False:
# if True:
df_features_targets = df_features_targets.loc[
[
("slac", "tefovuto_94", 16.0),
# slac__nifupidu_92__032
# sherlock__bihetofu_24__036
('slac', 'hobukuno_29', 16.0),
('sherlock', 'ramufalu_44', 56.0),
('slac', 'nifupidu_92', 32.0),
('sherlock', 'bihetofu_24', 36.0),
('slac', 'dotivela_46', 32.0),
('slac', 'vovumota_03', 33.0),
('slac', 'ralutiwa_59', 32.0),
('sherlock', 'bebodira_65', 16.0),
('sherlock', 'soregawu_05', 62.0),
('slac', 'hivovaru_77', 26.0),
('sherlock', 'vegarebo_06', 50.0),
('slac', 'ralutiwa_59', 30.0),
('sherlock', 'kamevuse_75', 49.0),
('nersc', 'hesegula_40', 94.0),
('slac', 'fewirefe_11', 39.0),
('sherlock', 'vipikema_98', 60.0),
('slac', 'gulipita_22', 48.0),
('sherlock', 'rofetaso_24', 48.0),
('slac', 'runopeno_56', 32.0),
('slac', 'magiwuni_58', 26.0),
]
]
for name_i, row_i in df_features_targets.iterrows():
# #####################################################
job_id_o_i = row_i.data.job_id_o.iloc[0]
job_id_bare_i = row_i.data.job_id_bare.iloc[0]
job_id_oh_i = row_i.data.job_id_oh.iloc[0]
# #####################################################
oh_exists = False
if job_id_oh_i is not None:
oh_exists = True
# #####################################################
df_atoms__o = df_atoms.loc[job_id_o_i]
df_atoms__bare = df_atoms.loc[job_id_bare_i]
# #####################################################
atoms__o = df_atoms__o.atoms_sorted_good
atoms__bare = df_atoms__bare.atoms_sorted_good
if oh_exists:
df_atoms__oh = df_atoms.loc[job_id_oh_i]
atoms__oh = df_atoms__oh.atoms_sorted_good
# #########################################################
# #########################################################
dir_name = create_name_str_from_tup(name_i)
dir_path = os.path.join(
os.environ["PROJ_irox_oer"],
"dft_workflow/job_analysis/prepare_oer_sets",
"out_data/top_overpot_sys")
# dir_name)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
# atoms__o.write(
# os.path.join(dir_path, dir_name + "_o.cif"))
# atoms__bare.write(
# os.path.join(dir_path, dir_name + "_bare.cif"))
if oh_exists:
atoms__oh.write(
os.path.join(dir_path, dir_name + "_oh.cif"))
```
# MISC | Writing random cifs to file to open in VESTA
```
df_subset = df_features_targets.sample(n=6)
if False:
for name_i, row_i in df_subset.iterrows():
tmp = 42
job_id_oh_i = row_i[("data", "job_id_oh", "", )]
# # #####################################################
# job_id_o_i = row_i.data.job_id_o.iloc[0]
# job_id_bare_i = row_i.data.job_id_bare.iloc[0]
# job_id_oh_i = row_i.data.job_id_oh.iloc[0]
# # #####################################################
# if job_id_bare_i is None:
# continue
oh_exists = False
if job_id_oh_i is not None:
oh_exists = True
# # #####################################################
# df_atoms__o = df_atoms.loc[job_id_o_i]
# df_atoms__bare = df_atoms.loc[job_id_bare_i]
# # #####################################################
# atoms__o = df_atoms__o.atoms_sorted_good
# atoms__bare = df_atoms__bare.atoms_sorted_good
if oh_exists:
df_atoms__oh = df_atoms.loc[job_id_oh_i]
atoms__oh = df_atoms__oh.atoms_sorted_good
# #########################################################
# #########################################################
file_name_i = create_name_str_from_tup(name_i)
print(file_name_i)
dir_path = os.path.join(
os.environ["PROJ_irox_oer"],
"dft_workflow/job_analysis/prepare_oer_sets",
"out_data/misc_cif_files_oh")
# dir_name)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
# #####################################################
# atoms__o.write(
# os.path.join(dir_path, "atoms__o.traj"))
# atoms__o.write(
# os.path.join(dir_path, "atoms__o.cif"))
# atoms__bare.write(
# os.path.join(dir_path, "atoms__bare.traj"))
# atoms__bare.write(
# os.path.join(dir_path, "atoms__bare.cif"))
if oh_exists:
atoms__oh.write(
os.path.join(dir_path, file_name_i + ".cif"))
# os.path.join(dir_path, "atoms__oh.traj"))
# atoms__oh.write(
# os.path.join(dir_path, "atoms__oh.cif"))
# #########################################################
print(20 * "# # ")
print("All done!")
print("Run time:", np.round((time.time() - ti) / 60, 3), "min")
print("write_oer_sets.ipynb")
print(20 * "# # ")
# #########################################################
```
```
# import os
# print(os.getcwd())
# import sys
# import pickle
# pd.set_option('display.max_columns', None)
# # pd.set_option('display.max_rows', None)
```
# Multistage Hypersolver pre-train (separated training and ablation study)
```
%load_ext autoreload
%autoreload 2
import time
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from math import pi as π
import sys; sys.path.append(2*'../') # go n dirs back
from src import *
from torchdyn.core import NeuralODE
from torchdyn.datasets import *
from torchdyn.numerics import odeint, Euler, HyperEuler
from torchdyn.numerics.solvers import Midpoint, SolverTemplate
from torchdyn.numerics.hypersolvers import HyperMidpoint
device = 'cpu' # feel free to change!
class MultiStageHypersolver(SolverTemplate):
"""
Explicit multistage ODE stepper: inner stage is a vector field corrector
while the outer stage is a residual approximator of the ODE solver
"""
def __init__(self, inner_stage: nn.Module, outer_stage: nn.Module,
base_solver=Midpoint, dtype=torch.float32):
super().__init__(order=base_solver().order)
self.dtype = dtype
self.stepping_class = 'fixed'
self.base_solver = base_solver
self.inner_stage = inner_stage
self.outer_stage = outer_stage
def step(self, f, x, t, dt, k1=None):
# Correct vector field with inner stage and propagate
self.vector_field = f
_, _x_sol, _ = self.base_solver().step(self.corrected_vector_field, x, t, dt, k1=k1)
# Residual correction with outer stage
x_sol = _x_sol + dt**self.base_solver().order * self.outer_stage(t, f(t, x))
return _, x_sol, _
def corrected_vector_field(self, t, x):
return self.vector_field(t, x) + self.inner_stage(t, x)
class HyperNetwork(nn.Module):
"""Simple hypernetwork using as input the current state, vector field and controller"""
def __init__(self, net, sys):
super().__init__()
self.net = net
self.sys = sys
def forward(self, t, x):
xfu = torch.cat([x, self.sys.cur_f, self.sys.cur_u], -1)
return self.net(xfu)
# System we have
sys = CartPole(RandConstController())
# Real system
sys_nominal = CartPole(RandConstController())
sys_nominal.frictioncart = 0.1
sys_nominal.frictionpole = 0.03
# Initial distribution
x0 = 2*π # limit of the state distribution (in rads and rads/second)
init_dist = torch.distributions.Uniform(torch.Tensor([-x0, -x0, -x0, -x0]), torch.Tensor([x0, x0, x0, x0]))
u_min, u_max = -30, 30
```
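The `step` method above computes `x_next = base_solver.step(f + inner)(x) + dt**order * outer(t, f(t, x))`. A torch-free numeric sketch of that update with an Euler base solver and zero stages (so it reduces to plain Euler; all names here are illustrative):

```
def euler_step(f, x, t, dt):
    # order-1 explicit Euler base step
    return x + dt * f(t, x)

def multistage_step(f, g_inner, g_outer, x, t, dt, order=1):
    corrected = lambda s, y: f(s, y) + g_inner(s, y)   # inner stage corrects the field
    x_base = euler_step(corrected, x, t, dt)           # base solver on corrected field
    return x_base + dt**order * g_outer(t, f(t, x))    # outer stage adds a residual

f = lambda t, x: -x                # dx/dt = -x
zero = lambda *args: 0.0           # zero stages recover the plain base solver
print(multistage_step(f, zero, zero, x=1.0, t=0.0, dt=0.1))  # -> 0.9
```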
## Training loop (inner stage only)
We train via stochastic exploration of initial states and control inputs. (This will take some time.)
```
hdim = 32
base_net_inner = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))
base_net_outer = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))
# base_net_inner = nn.Sequential(nn.Linear(9, 32), nn.Softplus(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 4))
# base_net_outer = nn.Sequential(nn.Linear(9, 32), nn.Softplus(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 4))
inner_stage = HyperNetwork(base_net_inner, sys)
outer_stage = HyperNetwork(base_net_outer, sys)
# Avoid outer stage being trained
def dummy_stage(t, x):
return 0
# Use only inner stage
multistagehs = MultiStageHypersolver(inner_stage, dummy_stage, base_solver=Midpoint)
opt = torch.optim.Adam(multistagehs.inner_stage.parameters(), lr=1e-2) # only train inner
scheduler = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[3e5, 4e5], gamma=0.1)
# opt = torch.optim.Adam(multistagehs.inner_stage.parameters(), lr=1e-3)
loss_func = nn.MSELoss()
epochs = 50000
bs = 128
dt = 0.05
span = torch.linspace(0, dt, 2)
losses = []
for i in range(epochs):
x0 = init_dist.sample((bs,)).to(device)
val = torch.Tensor(bs, 1).uniform_(u_min, u_max).to(device)
sys.u.u0 = val
sys_nominal.u.u0 = val
# Compute residuals
_, sol_gt = odeint(sys_nominal._dynamics, x0, span, solver='rk4')[-1]
_, sol_hs = odeint(sys._dynamics, x0, span, solver=multistagehs)[-1]
loss = loss_func(sol_gt, sol_hs)
# Optimization step
loss.backward(); opt.step(); opt.zero_grad(); scheduler.step()
print(f'Step: {i}, Residual loss: {loss:.8f}', end='\r')
losses.append(loss.detach().cpu().item())
fig, ax = plt.subplots(1, 1)
ax.plot(losses)
ax.set_yscale('log')
```
## Outer stage training only
This is essentially fine-tuning once the inner stage has been trained.
```
base_net_outer = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))
outer_stage = HyperNetwork(base_net_outer, sys)
# Attach the new outer stage to the pre-trained hypersolver
multistagehs.outer_stage = outer_stage
opt = torch.optim.Adam(multistagehs.outer_stage.parameters(), lr=1e-2) # only train outer stage
scheduler = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[3e5, 4e5], gamma=0.1)
loss_func = nn.MSELoss()
epochs = 50000
bs = 128
dt = 0.05
span = torch.linspace(0, dt, 2)
losses = []
for i in range(epochs):
x0 = init_dist.sample((bs,)).to(device)
val = torch.Tensor(bs, 1).uniform_(u_min, u_max).to(device)
sys.u.u0 = val
sys_nominal.u.u0 = val
# Compute residuals
_, sol_gt = odeint(sys_nominal._dynamics, x0, span, solver='rk4')[-1]
_, sol_hs = odeint(sys._dynamics, x0, span, solver=multistagehs)[-1]
loss = loss_func(sol_gt, sol_hs)
# Optimization step
loss.backward(); opt.step(); opt.zero_grad(); scheduler.step()
print(f'Step: {i}, Residual loss: {loss:.8f}', end='\r')
losses.append(loss.detach().cpu().item())
fig, ax = plt.subplots(1, 1)
ax.plot(losses)
ax.set_yscale('log')
# Save the model
torch.save(multistagehs, 'saved_models/hs_multistage_separated_snake.pt')
```
## Training residual one-step dynamic model
We can reuse the same `MultiStageHypersolver` with the inner-stage corrector set to zero
```
hdim = 32
base_net_outer = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))
outer_stage = HyperNetwork(base_net_outer, sys)
# Zero inner stage so that only the outer stage (residual dynamics) is active
def dummy_stage(t, x):
    return 0
residual_dynamics_solver = MultiStageHypersolver(dummy_stage, outer_stage, base_solver=Midpoint)
opt = torch.optim.Adam(residual_dynamics_solver.outer_stage.parameters(), lr=1e-2)
scheduler = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[3e5, 4e5], gamma=0.1)
loss_func = nn.MSELoss()
epochs = 50000
bs = 128
dt = 0.05
span = torch.linspace(0, dt, 2)
losses = []
for i in range(epochs):
x0 = init_dist.sample((bs,)).to(device)
val = torch.Tensor(bs, 1).uniform_(u_min, u_max).to(device)
sys.u.u0 = val
sys_nominal.u.u0 = val
# Compute residuals
_, sol_gt = odeint(sys_nominal._dynamics, x0, span, solver='rk4')[-1]
_, sol_hs = odeint(sys._dynamics, x0, span, solver=residual_dynamics_solver)[-1]
loss = loss_func(sol_gt, sol_hs)
# Optimization step
loss.backward(); opt.step(); opt.zero_grad(); scheduler.step()
print(f'Step: {i}, Residual loss: {loss:.8f}', end='\r')
losses.append(loss.detach().cpu().item())
fig, ax = plt.subplots(1, 1)
ax.plot(losses)
ax.set_yscale('log')
# Save the model
torch.save(residual_dynamics_solver, 'saved_models/residual_dynamics_solver.pt')
```
<a href="https://colab.research.google.com/github/jantic/DeOldify/blob/master/ImageColorizerColabStable.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### **<font color='blue'> Stable Colorizer </font>**
#◢ DeOldify - Colorize your own photos!
####**Credits:**
Special thanks to:
Matt Robinson and María Benavente for pioneering the DeOldify image colab notebook.
Dana Kelley for doing things, breaking stuff & having an opinion on everything.
---
#◢ Verify Correct Runtime Settings
**<font color='#FF000'> IMPORTANT </font>**
In the "Runtime" menu for the notebook window, select "Change runtime type." Ensure that the following are selected:
* Runtime Type = Python 3
* Hardware Accelerator = GPU
#◢ Git clone and install DeOldify
```
!git clone https://github.com/jantic/DeOldify.git DeOldify
cd DeOldify
```
#◢ Setup
```
#NOTE: This must be the first call in order to work properly!
from deoldify import device
from deoldify.device_id import DeviceId
#choices: CPU, GPU0...GPU7
device.set(device=DeviceId.GPU0)
import torch
if not torch.cuda.is_available():
print('GPU not available.')
!pip install -r colab_requirements.txt
import fastai
from deoldify.visualize import *
torch.backends.cudnn.benchmark = True
!mkdir 'models'
!wget https://www.dropbox.com/s/mwjep3vyqk5mkjc/ColorizeStable_gen.pth?dl=0 -O ./models/ColorizeStable_gen.pth
!wget https://media.githubusercontent.com/media/jantic/DeOldify/master/resource_images/watermark.png -O ./resource_images/watermark.png
colorizer = get_image_colorizer(artistic=False)
```
#◢ Instructions
### source_url
Type in a url to a direct link of an image. Usually that means they'll end in .png, .jpg, etc. NOTE: If you want to use your own image, upload it first to a site like Imgur.
### render_factor
The default value of 35 has been carefully chosen and should work -ok- for most scenarios (but probably won't be the -best-). This determines resolution at which the color portion of the image is rendered. Lower resolution will render faster, and colors also tend to look more vibrant. Older and lower quality images in particular will generally benefit by lowering the render factor. Higher render factors are often better for higher quality images, but the colors may get slightly washed out.
### watermarked
Selected by default, this places a watermark icon of a palette at the bottom left corner of the image. This is intended to be a standard way to convey to others viewing the image that it is colorized by AI. We want to help promote this as a standard, especially as the technology continues to improve and the distinction between real and fake becomes harder to discern. This palette watermark practice was initiated and led by the company MyHeritage in the MyHeritage In Color feature (which uses a newer version of DeOldify than what you're using here).
#### How to Download a Copy
Simply right click on the displayed image and click "Save image as..."!
## Pro Tips
You can evaluate how well the image is rendered at each render_factor by using the code at the bottom (that cell under "See how well render_factor values perform on a frame here").
## Troubleshooting
If you get a 'CUDA out of memory' error, you probably have the render_factor too high.
#◢ Colorize!!
```
source_url = '' #@param {type:"string"}
render_factor = 35 #@param {type: "slider", min: 7, max: 45}
watermarked = True #@param {type:"boolean"}
if source_url is not None and source_url !='':
image_path = colorizer.plot_transformed_image_from_url(url=source_url, render_factor=render_factor, compare=True, watermarked=watermarked)
show_image_in_notebook(image_path)
else:
print('Provide an image url and try again.')
```
## See how well render_factor values perform on the image here
```
for i in range(10,45,2):
colorizer.plot_transformed_image('test_images/image.png', render_factor=i, display_render_factor=True, figsize=(8,8))
```
---
#⚙ Recommended image sources
* [/r/TheWayWeWere](https://www.reddit.com/r/TheWayWeWere/)
```
# testing scRFE
pip list
from scRFE import scRFE
from scRFE import scRFEimplot
from scRFE.scRFE import makeOneForest
import numpy as np
import pandas as pd
from anndata import read_h5ad
adata = read_h5ad('/Users/madelinepark/Downloads/Liver_droplet.h5ad')
madeForest = makeOneForest(dataMatrix=adata, classOfInterest='age', labelOfInterest='3m', nEstimators=10,
randomState=0, min_cells=15, keep_small_categories=True,
nJobs=-1, oobScore=True, Step=0.2, Cv=3, verbosity=True)
type(madeForest[4])
from scRFE.scRFE import scRFEimplot
scRFEimplot(X_new=madeForest[3], y = madeForest[4])
from scRFE.scRFE import scRFE
from scRFE.scRFE import scRFEimplot
from scRFE.scRFE import makeOneForest
scRFE(adata, classOfInterest = 'age', nEstimators = 10, Cv = 3)
```
# scRFE
```
# Imports
import numpy as np
import pandas as pd
import scanpy as sc
import random
from anndata import read_h5ad
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
from sklearn.feature_selection import RFE
from sklearn.feature_selection import RFECV
import seaborn as sns
import matplotlib.pyplot as plt
import scanpy.external as sce
import logging as logg
adata = read_h5ad('/Users/madelinepark/Downloads/Liver_droplet.h5ad')
def columnToString (dataMatrix):
cat_columns = dataMatrix.obs.select_dtypes(['category']).columns
dataMatrix.obs[cat_columns] = dataMatrix.obs[cat_columns].astype(str)
return dataMatrix
def filterNormalize (dataMatrix, classOfInterest, verbosity):
np.random.seed(644685)
# sc.pp.filter_cells(dataMatrix, min_genes=0)
# sc.pp.filter_genes(dataMatrix, min_cells=0)
dataMatrix = dataMatrix[dataMatrix.obs[classOfInterest]!='nan']
dataMatrix = dataMatrix[~dataMatrix.obs[classOfInterest].isna()]
if verbosity == True:
print ('na data removed')
return dataMatrix
filterNormalize(dataMatrix = adata, classOfInterest = 'age', verbosity = True)
def labelSplit (dataMatrix, classOfInterest, labelOfInterest, verbosity):
dataMatrix = filterNormalize (dataMatrix, classOfInterest, verbosity)
dataMatrix.obs['classification_group'] = 'B'
dataMatrix.obs.loc[dataMatrix.obs[dataMatrix.obs[classOfInterest]==labelOfInterest]
.index,'classification_group'] = 'A' #make labels based on A/B of
# classofInterest
return dataMatrix
def downsampleToSmallestCategory(dataMatrix, random_state, min_cells,
keep_small_categories, verbosity,
classOfInterest = 'classification_group'
) -> sc.AnnData:
"""
returns an annData object in which all categories in 'classOfInterest' have
the same size
classOfInterest
column with the categories to downsample
min_cells
Minimum number of cells to downsample.
Categories having less than `min_cells` are discarded unless
keep_small_categories is True
keep_small_categories
        By default, categories with fewer than min_cells cells are discarded.
        Set to True to keep them
"""
counts = dataMatrix.obs[classOfInterest].value_counts(sort=False)
if len(counts[counts < min_cells]) > 0 and keep_small_categories is False:
logg.warning(
"The following categories have less than {} cells and will be "
"ignored: {}".format(min_cells, dict(counts[counts < min_cells]))
)
min_size = min(counts[counts >= min_cells])
sample_selection = None
for sample, num_cells in counts.items():
if num_cells <= min_cells:
if keep_small_categories:
sel = dataMatrix.obs.index.isin(
dataMatrix.obs[dataMatrix.obs[classOfInterest] == sample].index)
else:
continue
else:
sel = dataMatrix.obs.index.isin(
dataMatrix.obs[dataMatrix.obs[classOfInterest] == sample]
.sample(min_size, random_state=random_state)
.index
)
if sample_selection is None:
sample_selection = sel
else:
sample_selection |= sel
    logg.info(
        "The cells in category {!r} have been down-sampled to {} cells each. "
        "The original counts were {}".format(classOfInterest, min_size, dict(counts))
    )
return dataMatrix[sample_selection].copy()
def makeOneForest (dataMatrix, classOfInterest, labelOfInterest, nEstimators,
randomState, min_cells, keep_small_categories,
nJobs, oobScore, Step, Cv, verbosity):
"""
Builds and runs a random forest for one label in a class of interest
Parameters
----------
dataMatrix : anndata object
The data file of interest
classOfInterest : str
The class you will split the data by in the set of dataMatrix.obs
labelOfInterest : str
        The specific label within the class that the random forest will run a
        "one vs all" classification on
nEstimators : int
The number of trees in the forest
randomState : int
Controls random number being used
nJobs : int
The number of jobs to run in parallel
oobScore : bool
Whether to use out-of-bag samples to estimate the generalization accuracy
Step : float
Corresponds to percentage of features to remove at each iteration
Cv : int
Determines the cross-validation splitting strategy
Returns
-------
feature_selected : list
list of top features from random forest
selector.estimator_.feature_importances_ : list
        list of Gini importances corresponding to the selected features
"""
splitDataMatrix = labelSplit (dataMatrix, classOfInterest, labelOfInterest, verbosity)
downsampledMatrix = downsampleToSmallestCategory (dataMatrix = splitDataMatrix,
random_state = randomState, min_cells = min_cells,
keep_small_categories = keep_small_categories, verbosity = verbosity,
classOfInterest = 'classification_group', )
feat_labels = downsampledMatrix.var_names
X = downsampledMatrix.X
y = downsampledMatrix.obs['classification_group'] #'A' or 'B' labels from labelSplit
clf = RandomForestClassifier(n_estimators = nEstimators, random_state = randomState,
n_jobs = nJobs, oob_score = oobScore)
selector = RFECV(clf, step = Step, cv = Cv)
clf.fit(X, y)
selector.fit(X, y)
feature_selected = feat_labels[selector.support_]
dataMatrix.obs['classification_group'] = 'B'
return feature_selected, selector.estimator_.feature_importances_
def resultWrite (classOfInterest, results_df, labelOfInterest,
feature_selected, feature_importance):
column_headings = []
column_headings.append(labelOfInterest)
column_headings.append(labelOfInterest + '_gini')
resaux = pd.DataFrame(columns = column_headings)
resaux[labelOfInterest] = feature_selected
resaux[labelOfInterest + '_gini'] = feature_importance
resaux = resaux.sort_values(by = [labelOfInterest + '_gini'], ascending = False)
resaux.reset_index(drop = True, inplace = True)
results_df = pd.concat([results_df, resaux], axis=1)
return results_df
def scRFE (adata, classOfInterest, nEstimators = 5000, randomState = 0, min_cells = 15,
keep_small_categories = True, nJobs = -1, oobScore = True, Step = 0.2, Cv = 5,
verbosity = True):
"""
Builds and runs a random forest with one vs all classification for each label
for one class of interest
Parameters
----------
adata : anndata object
The data file of interest
classOfInterest : str
The class you will split the data by in the set of dataMatrix.obs
nEstimators : int
The number of trees in the forest
randomState : int
Controls random number being used
min_cells : int
Minimum number of cells in a given class to downsample.
keep_small_categories : bool
        Whether to keep classes with a small number of observations or to remove them.
nJobs : int
The number of jobs to run in parallel
oobScore : bool
Whether to use out-of-bag samples to estimate the generalization accuracy
Step : float
Corresponds to percentage of features to remove at each iteration
Cv : int
Determines the cross-validation splitting strategy
Returns
-------
results_df : pd.DataFrame
Dataframe with results for each label in the class, formatted as
"label" for one column, then "label + gini" for the corresponding column
"""
dataMatrix = adata.copy()
dataMatrix = columnToString (dataMatrix)
dataMatrix = filterNormalize (dataMatrix, classOfInterest, verbosity)
results_df = pd.DataFrame()
for labelOfInterest in np.unique(dataMatrix.obs[classOfInterest]):
dataMatrix_labelOfInterest = dataMatrix.copy()
feature_selected, feature_importance = makeOneForest(
dataMatrix = dataMatrix_labelOfInterest, classOfInterest = classOfInterest,
labelOfInterest = labelOfInterest,
nEstimators = nEstimators, randomState = randomState, min_cells = min_cells,
keep_small_categories = keep_small_categories, nJobs = nJobs,
oobScore = oobScore, Step = Step, Cv = Cv, verbosity=verbosity)
results_df = resultWrite (classOfInterest, results_df,
labelOfInterest = labelOfInterest,
feature_selected = feature_selected,
feature_importance = feature_importance)
return results_df
adata = read_h5ad('/Users/madelinepark/Downloads/Liver_droplet.h5ad')
scRFE (adata, classOfInterest = 'age', nEstimators = 10, Cv = 3)
import logging
logging.info('%s before you %s', 'Look', 'leap!')
def logprint (verbosity):
if verbosity == True:
print('hi')
logprint(verbosity=True)
```
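The one-vs-all split performed by `labelSplit` reduces to tagging observations matching the label of interest as 'A' and everything else as 'B'. A toy pandas sketch of that step (column names mirror the code above, but the data is made up):

```
import pandas as pd

obs = pd.DataFrame({"age": ["3m", "18m", "3m", "24m"]})
obs["classification_group"] = "B"                          # everything starts as "B"
obs.loc[obs["age"] == "3m", "classification_group"] = "A"  # label of interest -> "A"
print(obs["classification_group"].tolist())  # -> ['A', 'B', 'A', 'B']
```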
```
import pickle
PIK = 'data/sirt6/final/20191217_m87e_counts.pkl'
with open(PIK, 'rb') as f:
m87e_clobs = pickle.load(f)
m87e_clobs
import pandas as pd
def extract_panda(clob_list):
dictlist = []
for i in range(len(clob_list)):
dictlist += [clob_list[i].to_dict()]
DF = pd.DataFrame(dictlist)
return DF
m87e_clobs[1].to_dict()
m87df = extract_panda(m87e_clobs)
m87df
check = ['ctx', 'hip']
for idx, row in m87df.iterrows():
for c in check:
if c in row['name']:
            m87df.loc[idx, 'brain_loc'] = c  # .loc replaces the deprecated .ix indexer
m87df
tn = m87df.name[20]
tn
'ctx' in tn
m87df.brain_loc[20]
ctx_m87df = m87df[m87df.brain_loc == 'ctx']
hip_m87df = m87df[m87df.brain_loc == 'hip']
ctx_m87df
ctx_m87_mean_cells_per_um = ctx_m87df.cells_per_area.mean()
hip_m87_mean_cells_per_um = hip_m87df.cells_per_area.mean()
print('87E CTX mean cells per um:', ctx_m87_mean_cells_per_um)
print('87E HIP mean cells per um:', hip_m87_mean_cells_per_um)
ctx_m87_mean_cells_per_mm2 = ctx_m87_mean_cells_per_um * 10e5  # 10e5 == 1e6; 1 mm^2 = 1e6 um^2
hip_m87_mean_cells_per_mm2 = hip_m87_mean_cells_per_um * 10e5
print('87E CTX mean cells per mm2:', ctx_m87_mean_cells_per_mm2)
print('87E HIP mean cells per mm2:', hip_m87_mean_cells_per_mm2)
PIK = 'data/sirt6/final/20191217_m91e_counts.pik'
with open(PIK, 'rb') as f:
m91e_clobs = pickle.load(f)
m91df = extract_panda(m91e_clobs)
m91df
m91e_clobs[1].to_dict()
check = ['Ctx', 'Hip']
for idx, row in m91df.iterrows():
for c in check:
if c in row['name']:
            m91df.loc[idx, 'brain_loc'] = c  # .loc replaces the deprecated .ix indexer
m91df
ctx_m91df = m91df[m91df.brain_loc == 'Ctx']
hip_m91df = m91df[m91df.brain_loc == 'Hip']
ctx_m91_mean_cells_per_um = ctx_m91df.cells_per_area.mean()
hip_m91_mean_cells_per_um = hip_m91df.cells_per_area.mean()
print('91E CTX mean cells per um:', ctx_m91_mean_cells_per_um)
print('91E HIP mean cells per um:', hip_m91_mean_cells_per_um)
ctx_m91_mean_cells_per_mm2 = ctx_m91_mean_cells_per_um * 10e5
hip_m91_mean_cells_per_mm2 = hip_m91_mean_cells_per_um * 10e5
print('91E CTX mean cells per mm2:', ctx_m91_mean_cells_per_mm2)
print('91E HIP mean cells per mm2:', hip_m91_mean_cells_per_mm2)
print('87E CTX mean cells per mm2:', ctx_m87_mean_cells_per_mm2)
print('87E HIP mean cells per mm2:', hip_m87_mean_cells_per_mm2)
print('91E CTX mean cells per mm2:', ctx_m91_mean_cells_per_mm2)
print('91E HIP mean cells per mm2:', hip_m91_mean_cells_per_mm2)
ctx_m87_sd = ctx_m87df.cells_per_area.std() * 10e5
hip_m87_sd = hip_m87df.cells_per_area.std() * 10e5
ctx_m91_sd = ctx_m91df.cells_per_area.std() * 10e5
hip_m91_sd = hip_m91df.cells_per_area.std() * 10e5
print('87E CTX mean cells per mm2:', ctx_m87_mean_cells_per_mm2)
print('87E CTX std cells per mm2: ', ctx_m87_sd)
print('')
print('91E CTX mean cells per mm2:', ctx_m91_mean_cells_per_mm2)
print('91E CTX std cells per mm2: ', ctx_m91_sd)
print('')
print('')
print('87E HIP mean cells per mm2:', hip_m87_mean_cells_per_mm2)
print('87E HIP std cells per mm2: ', hip_m87_sd)
print('')
print('91E HIP mean cells per mm2:', hip_m91_mean_cells_per_mm2)
print('91E HIP std cells per mm2: ', hip_m91_sd)
print('')
m87e_ctx = ctx_m87df.cells_per_area * 10e5
m87e_hip = hip_m87df.cells_per_area * 10e5
m91e_ctx = ctx_m91df.cells_per_area * 10e5
m91e_hip = hip_m91df.cells_per_area * 10e5
combined = pd.concat([m87e_ctx, m87e_hip, m91e_ctx, m91e_hip], axis=1,
                     keys=['m87e_ctx', 'm87e_hip', 'm91e_ctx', 'm91e_hip'])
combined
m91e_ctx.plot.box()
m91e_hip.plot.box()
import matplotlib.pyplot as plt
data_to_plot = [m87e_ctx, m87e_hip, m91e_ctx, m91e_hip]
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(data_to_plot)
ax.set_xticklabels(['Control Cortex', 'Control Hip', 'S6cKO Cortex', 'S6cKO Hip'])
ax.set_ylabel('Average Cell Count per mm^2')
ax.set_title('Average Cell Counts Per mm^2 in Sirt6cKO vs Littermate Control', fontsize = 12, fontweight = 'bold')
# Save the figure after the labels are set so they appear in the file
fig.savefig('fig1.png', bbox_inches='tight')
```
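The `* 10e5` factor used throughout the cells above is the µm² → mm² area conversion (`10e5 == 1e6`, and 1 mm² = 10⁶ µm²). A small sketch making that conversion explicit — the helper name is ours, not from the notebook:

```python
# 1 mm = 1000 um, so 1 mm^2 = 1,000,000 um^2.
# Writing the factor as 1000 ** 2 makes the unit conversion explicit;
# it is numerically identical to the 10e5 used above.
UM2_PER_MM2 = 1000 ** 2

def per_um2_to_per_mm2(density_per_um2):
    """Convert a cell density from cells/um^2 to cells/mm^2."""
    return density_per_um2 * UM2_PER_MM2
```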
```
#-------------------------------------------------------------------------------------------------------------------------------
# By Alexandra Lee
# (updated October 2018)
#
# Main
#
# Dataset: Pseudomonas aeruginosa gene expression from compendium
# referenced in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5069748/
#
# Condition: expression of PA1673 gene
#
# Task: To predict the expression of other (non-PA1673) genes by:
# 1. Define offset vector = avg(expression of genes corresponding to high levels of PA1673)
# - avg(expression of genes corresponding to low levels of PA1673)
# 2. scale factor = how far along the gradient of low-high PA1673 expression
# 3. prediction = baseline expression + scale factor * offset vector
#-------------------------------------------------------------------------------------------------------------------------------
import os
import pandas as pd
import numpy as np
from functions import generate_input, vae, def_offset, interpolate, plot
randomState = 123
from numpy.random import seed
seed(randomState)
# Name of analysis
analysis_name = 'PA1673_gradient_test'
# Create list of base directories
base_dirs = [os.path.join(os.path.dirname(os.getcwd()), 'data'),
os.path.join(os.path.dirname(os.getcwd()), 'encoded'),
os.path.join(os.path.dirname(os.getcwd()), 'models'),
os.path.join(os.path.dirname(os.getcwd()), 'output'),
os.path.join(os.path.dirname(os.getcwd()), 'stats'),
os.path.join(os.path.dirname(os.getcwd()), 'viz')
]
# Check if directory exists, otherwise create it
for each_dir in base_dirs:
analysis_dir = os.path.join(each_dir, analysis_name)
if os.path.exists(analysis_dir):
print('directory already exists: {}'.format(analysis_dir))
else:
os.mkdir(analysis_dir)
print('creating new directory: {}'.format(analysis_dir))
# Pre-process input
data_dir = os.path.join(base_dirs[0], analysis_name)
generate_input.generate_input_PA1673_gradient(data_dir)
# Run Tybalt
learning_rate = 0.001
batch_size = 100
epochs = 200
kappa = 0.01
intermediate_dim = 100
latent_dim = 10
epsilon_std = 1.0
base_dir = os.path.dirname(os.getcwd())
vae.tybalt_2layer_model(learning_rate, batch_size, epochs, kappa, intermediate_dim, latent_dim, epsilon_std, base_dir, analysis_name)
# Define offset vectors in gene space and latent space
data_dir = os.path.join(base_dirs[0], analysis_name)
target_gene = "PA1673"
percent_low = 5
percent_high = 95
def_offset.gene_space_offset(data_dir, target_gene, percent_low, percent_high)
model_dir = os.path.join(base_dirs[2], analysis_name)
encoded_dir = os.path.join(base_dirs[1], analysis_name)
def_offset.latent_space_offset(data_dir, model_dir, encoded_dir, target_gene, percent_low, percent_high)
# Predict gene expression using offset in gene space and latent space
out_dir = os.path.join(base_dirs[3], analysis_name)
interpolate.interpolate_in_gene_space(data_dir, target_gene, out_dir, percent_low, percent_high)
interpolate.interpolate_in_latent_space(data_dir, model_dir, encoded_dir, target_gene, out_dir, percent_low, percent_high)
# Plot prediction per sample along gradient of PA1673 expression
viz_dir = os.path.join(base_dirs[5], analysis_name)
plot.plot_corr_gradient(out_dir, viz_dir)
```
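The three prediction steps in the header comment can be sketched with synthetic arrays. The function below is an illustrative stand-in, not the project's `def_offset`/`interpolate` code:

```python
import numpy as np

def predict_with_offset(baseline, low_group, high_group, scale):
    """prediction = baseline + scale * offset, where
    offset = avg(high-PA1673 samples) - avg(low-PA1673 samples)."""
    offset = high_group.mean(axis=0) - low_group.mean(axis=0)
    return baseline + scale * offset

# Synthetic example: two genes whose expression shifts by +2 from low to high.
low = np.array([[1.0, 1.0], [1.0, 1.0]])
high = np.array([[3.0, 3.0], [3.0, 3.0]])
print(predict_with_offset(np.zeros(2), low, high, scale=0.5))  # prints [1. 1.]
```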
# Performing the Hyperparameter tuning
**Learning Objectives**
1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
## Introduction
Let's see if we can improve model performance by tuning our hyperparameters.
Hyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways to find the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
**1. Manual**
Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist has some intuition about suitable hyperparameters, which they use as a starting point; they then observe the result and use that information to try a new set of hyperparameters in an attempt to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
**2. Grid Search**
At the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter, then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
**3. Random Search**
Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range.
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires fewer trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
**4. Bayesian Optimization**
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization).
Pros
- Picks values intelligently based on results from past trials
- Less expensive because it requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
**AI Platform HyperTune**
AI Platform HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/ml-engine/docs/tensorflow/hyperparameter-tuning-overview#search_algorithms) Grid Search and Random Search.
When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters over a large search space, Bayesian Optimization is best.
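The trade-off between grid and random search can be sketched in a few lines of plain Python (the hyperparameter names and ranges are illustrative only):

```python
import itertools
import random

# Grid search: enumerate every combination of a discrete set of values.
lr_values = [0.001, 0.01, 0.1]
batch_sizes = [16, 32, 64]
grid_trials = list(itertools.product(lr_values, batch_sizes))  # 3 x 3 = 9 trials

# Random search: sample each hyperparameter independently from a range,
# keeping the trial budget fixed regardless of how many parameters we tune.
rng = random.Random(42)
random_trials = [
    (10 ** rng.uniform(-4, -1),   # learning rate, log-uniform in [1e-4, 1e-1]
     rng.randint(16, 256))        # batch size, uniform integer in [16, 256]
    for _ in range(9)
]
```

Grid search cost grows multiplicatively with each new hyperparameter, while random search keeps the number of trials fixed — one reason it often finds good settings with fewer trials.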
```
# Use the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Installing the latest version of the package
!pip install --user google-cloud-bigquery==1.25.0
```
**Note**: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
```
# Importing the necessary module
import os
from google.cloud import bigquery
# Change with your own bucket and project below:
BUCKET = "<BUCKET>"
PROJECT = "<PROJECT>"
REGION = "<YOUR REGION>"
OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET)
os.environ['BUCKET'] = BUCKET
os.environ['OUTDIR'] = OUTDIR
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = "2.6"
%%bash
# Setting up cloud SDK properties
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
```
## Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes:
1. Upload data to Google Cloud Storage
2. Move code into a trainer Python package
3. Submit training job with `gcloud` to train on AI Platform
## Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
## Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell:
```
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
# Creating a dataset
try:
bq.create_dataset(dataset)
print("Dataset created")
except:
print("Dataset already exists")
```
Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
```
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
```
Make the validation dataset 1/10 the size of the training dataset.
```
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
```
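The `FARM_FINGERPRINT(...) MOD n` pattern in both queries is a deterministic hash split: a given `pickup_datetime` always lands in the same bucket, so the samples are reproducible across runs, and the training (`MOD 1000 = 1`) and validation (`MOD 10000 = 2`) sets cannot overlap. A rough Python analogue, using `md5` as a stand-in for BigQuery's `FARM_FINGERPRINT`:

```python
import hashlib

def hash_bucket(key, num_buckets):
    """Deterministically map a string key to a bucket in [0, num_buckets)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

# Same key -> same bucket, every run: roughly 0.1% of keys fall in the
# training bucket, and a disjoint ~0.01% in the validation bucket.
key = "2014-03-01 09:17:00 UTC"
is_train = hash_bucket(key, 1000) == 1
is_valid = hash_bucket(key, 10000) == 2
```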
## Export the tables as CSV files
```
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
# List the files of the bucket
gsutil ls -l $OUTDIR
# Here, it shows the short header for each object
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
```
If all ran smoothly, you should be able to list the data bucket by running the following command:
```
# List the files of the bucket
!gsutil ls gs://$BUCKET/taxifare/data
```
## Move code into python package
Here, we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory:
- `__init__.py`
- `model.py`
- `task.py`
```
# It will list all the files in the mentioned directory with a long listing format
!ls -la taxifare/trainer
```
To use hyperparameter tuning in your training job you must perform the following steps:
1. Specify the hyperparameter tuning configuration for your training job by including a HyperparameterSpec in your TrainingInput object.
2. Include the following code in your training application:
- Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial.
- Add your hyperparameter metric to the summary for your graph.
- To submit a hyperparameter tuning job, we must modify `model.py` and `task.py` to expose any variables we want to tune as command line arguments.
### Modify model.py
```
%%writefile ./taxifare/trainer/model.py
# Importing the necessary modules
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
# Splits features and labels from feature dictionary
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# Loads dataset using the tf.data API from CSV files
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
# Prefetch overlaps the preprocessing and model execution of a training step
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
# Parse a string and return a datetime.datetime
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
# Here, tf.sqrt Computes element-wise square root of the input tensor
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
# Timestamp.weekday() function return the day of the week represented by the date in the given Timestamp object
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
# It wraps a python function into a TensorFlow op that executes it eagerly
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
# Here, tf.sqrt Computes element-wise square root of the input tensor
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
# Define train and evaluate method to evaluate performance of the model
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
eval_data_path = hparams['eval_data_path']
nnsize = hparams['nnsize']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
dnn_model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(dnn_model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = dnn_model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(dnn_model, model_export_path)
# TODO 1
hp_metric = history.history['val_rmse'][num_evals-1]
# TODO 1
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=hp_metric,
global_step=num_evals
)
return history
```
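As a quick sanity check, the custom `rmse` metric defined in `model.py` can be reproduced outside TensorFlow with plain `numpy`:

```python
import numpy as np

def rmse_np(y_true, y_pred):
    """Same computation as the Keras metric: sqrt(mean((pred - true)^2))."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

print(rmse_np([0.0, 0.0], [3.0, 4.0]))  # sqrt((9 + 16) / 2) = 3.5355...
```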
### Modify task.py
```
%%writefile taxifare/trainer/task.py
# Importing the necessary module
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
nargs = "+",
type = int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--train_data_path",
help = "GCS location pattern of train files containing eval URLs",
required = True
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
hparams["output_dir"] = os.path.join(
hparams["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
```
### Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The following example shows how to create a configuration for a metric named `rmse`:
```
%%writefile hptuning_config.yaml
# Setting parameters for hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: 10 # TODO 2
maxParallelTrials: 2 # TODO 2
hyperparameterMetricTag: rmse # TODO 2
enableTrialEarlyStopping: True
params:
- parameterName: lr
# TODO 2
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterName: nbuckets
# TODO 2
type: INTEGER
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterName: batch_size
# TODO 2
type: DISCRETE
discreteValues:
- 15
- 30
- 50
```
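The `scaleType` entries control how the tuner maps its internal unit interval onto each parameter's range. A rough sketch of what `UNIT_LOG_SCALE` implies for `lr` (illustrative, not the service's actual implementation):

```python
import math

def unit_log_scale(u, min_value, max_value):
    """Map u in [0, 1] to [min_value, max_value] on a log scale,
    so each order of magnitude of lr gets equal search effort."""
    log_min, log_max = math.log(min_value), math.log(max_value)
    return math.exp(log_min + u * (log_max - log_min))
```

With `UNIT_LINEAR_SCALE` (as for `nbuckets`) the mapping is a plain linear interpolation instead.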
#### Report your hyperparameter metric to AI Platform Training
The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.
We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.
**TensorFlow with a runtime version**
If you use an AI Platform Training runtime version and train with TensorFlow, then you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary, or by reporting it directly with the `cloudml-hypertune` package as done in `model.py` above.
You may need to install `cloudml-hypertune` on your machine to run this code locally.
```
# Installing the latest version of the package
!pip install cloudml-hypertune
```
Kindly ignore any version warnings related to the pip install command.
```
%%bash
# Testing our training code locally
EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model
rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize 32 8
ls taxifare-model/tensorboard
```
The hyperparameter training job below will take **up to 45 minutes** to complete.
```
%%bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID
REGION="us-central1"
TFVERSION="2.4"
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO 3
gcloud ai-platform jobs submit training $JOBID \
--module-name=trainer.task \
--package-path=taxifare/trainer \
--staging-bucket=gs://${BUCKET} \
--config=hptuning_config.yaml \
--python-version=3.7 \
--runtime-version=${TFVERSION} \
--region=${REGION} \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
```
Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# SimPy: Treatment Centre
> **To run all code in this notebook go to menu item `Run -> Run All Cells`.**
`simpy` uses a process-based simulation worldview. Despite its simplicity, it is a highly flexible discrete-event simulation package.
One of the benefits of a package like `simpy` is that it is written in standard python and is free and open for others to use.
* For research this is highly beneficial:
* models and methods tested against them can be shared without concerns for commerical licensing.
* experimental results (either from model or method) can be recreated by other research teams.
* The version of `simpy` in use can also be controlled. This avoids backwards compatibility problems if models are returned to after several years.
Here we will take a look at code that implements a full `simpy` model including time-dependent arrivals, results collection, control of random numbers and multiple replications.
> The full scope of what is possible in `simpy` is out of scope for this notebook. Detailed documentation for `simpy` and additional models can be found here: https://simpy.readthedocs.io/en/latest/
---
## Imports
It is recommended that you use the provided conda virtual environment `treat-sim`.
>If you are running this code in **Google Colab** then `simpy` can be pip installed.
```
# install simpy if running in Google Colab
import sys
if 'google.colab' in sys.modules:
!pip install simpy==4.0.1
import simpy
simpy.__version__
import numpy as np
import pandas as pd
import itertools
import math
import matplotlib.pyplot as plt
```
---
## FirstTreatment: A health clinic based in the US.
**This example is based on exercise 13 from Nelson (2013) page 170.**
> *Nelson. B.L. (2013). [Foundations and methods of stochastic simulation](https://www.amazon.co.uk/Foundations-Methods-Stochastic-Simulation-International/dp/1461461596/ref=sr_1_1?dchild=1&keywords=foundations+and+methods+of+stochastic+simulation&qid=1617050801&sr=8-1). Springer.*
Patients arrive at the health clinic between 6am and 12am following a non-stationary Poisson process. After 12am arriving patients are diverted elsewhere and the remaining work-in-progress is completed. On arrival, all patients quickly sign in and are **triaged**.
The health clinic expects two types of patient arrivals:
**Trauma arrivals:**
* patients with severe illness and trauma that must first be stabilised in a **trauma room**.
* these patients then undergo **treatment** in a cubicle before being discharged.
**Non-trauma arrivals**
* patients with minor illness and no trauma go through **registration** and **examination** activities
* a proportion of non-trauma patients require **treatment** in a cubicle before being discharged.
> In this model treatment of trauma and non-trauma patients is modelled separately.
## Constants and defaults for modelling **as-is**
### Distribution parameters
```
# sign-in/triage parameters
DEFAULT_TRIAGE_MEAN = 3.0
# registration parameters
DEFAULT_REG_MEAN = 5.0
DEFAULT_REG_VAR= 2.0
# examination parameters
DEFAULT_EXAM_MEAN = 16.0
DEFAULT_EXAM_VAR = 3.0
# trauma/stabilisation
DEFAULT_TRAUMA_MEAN = 90.0
# Trauma treatment
DEFAULT_TRAUMA_TREAT_MEAN = 30.0
DEFAULT_TRAUMA_TREAT_VAR = 4.0
# Non trauma treatment
DEFAULT_NON_TRAUMA_TREAT_MEAN = 13.3
DEFAULT_NON_TRAUMA_TREAT_VAR = 2.0
# prob patient requires treatment given trauma
DEFAULT_NON_TRAUMA_TREAT_P = 0.60
# proportion of patients triaged as trauma
DEFAULT_PROB_TRAUMA = 0.12
```
### Time dependent arrival rates data
The data for arrival rates varies between clinic opening at 6am and closure at 12am.
```
NSPP_PATH = 'https://raw.githubusercontent.com/TomMonks/' \
+ 'open-science-for-sim/main/src/notebooks/01_foss_sim/data/ed_arrivals.csv'
# visualise
ax = pd.read_csv(NSPP_PATH).plot(y='arrival_rate', x='period', rot=45,
kind='bar',figsize=(12,5), legend=False)
ax.set_xlabel('hour of day')
ax.set_ylabel('mean arrivals');
```
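Arrivals from a rate table like this are commonly sampled by thinning: propose arrivals at the maximum rate, then accept each proposal with probability `rate(t) / max_rate`. A minimal sketch of that approach (illustrative; the full model may implement arrivals differently):

```python
import numpy as np

def nspp_thinning(hourly_rates, run_length_mins, seed=0):
    """Sample non-stationary Poisson arrival times (in minutes) by thinning."""
    rng = np.random.default_rng(seed)
    max_rate = max(hourly_rates)
    t, arrivals = 0.0, []
    while True:
        # propose the next arrival at the maximum rate...
        t += rng.exponential(60.0 / max_rate)
        if t >= run_length_mins:
            return arrivals
        # ...and accept it with probability rate(t) / max_rate
        period = int(t // 60) % len(hourly_rates)
        if rng.uniform() < hourly_rates[period] / max_rate:
            arrivals.append(t)
```

Because rejected proposals simply advance the clock, the accepted stream has the correct time-varying intensity without any rate integration.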
### Resource counts
> Integer count variables representing the number of resources at each activity in the processes.
```
DEFAULT_N_TRIAGE = 1
DEFAULT_N_REG = 1
DEFAULT_N_EXAM = 3
DEFAULT_N_TRAUMA = 2
# Non-trauma cubicles
DEFAULT_N_CUBICLES_1 = 1
# trauma pathway cubicles
DEFAULT_N_CUBICLES_2 = 1
```
### Simulation model run settings
```
# default random number SET
DEFAULT_RNG_SET = None
N_STREAMS = 20
# default results collection period
DEFAULT_RESULTS_COLLECTION_PERIOD = 60 * 19
# number of replications.
DEFAULT_N_REPS = 5
# Show a trace of simulated events
# not recommended when running multiple replications
TRACE = True
```
## Utility functions
```
def trace(msg):
'''
Utility function for printing a trace as the
simulation model executes.
Set the TRACE constant to False, to turn tracing off.
Params:
-------
msg: str
string to print to screen.
'''
if TRACE:
print(msg)
```
## Distribution classes
To help control sampling, `numpy` distributions are packaged into classes that allow easy control of random numbers.
**Distributions included:**
* Exponential
* Log Normal
* Bernoulli
* Normal
* Uniform
```
class Exponential:
'''
Convenience class for the exponential distribution.
packages up distribution parameters, seed and random generator.
'''
def __init__(self, mean, random_seed=None):
'''
Constructor
Params:
------
mean: float
The mean of the exponential distribution
random_seed: int, optional (default=None)
A random seed to reproduce samples. If set to none then a unique
sample is created.
'''
self.rng = np.random.default_rng(seed=random_seed)
self.mean = mean
def sample(self, size=None):
'''
Generate a sample from the exponential distribution
Params:
-------
size: int, optional (default=None)
the number of samples to return. If size=None then a single
sample is returned.
'''
return self.rng.exponential(self.mean, size=size)
class Bernoulli:
'''
Convenience class for the Bernoulli distribution.
packages up distribution parameters, seed and random generator.
'''
def __init__(self, p, random_seed=None):
'''
Constructor
Params:
------
p: float
probability of drawing a 1
random_seed: int, optional (default=None)
A random seed to reproduce samples. If set to none then a unique
sample is created.
'''
self.rng = np.random.default_rng(seed=random_seed)
self.p = p
def sample(self, size=None):
'''
Generate a sample from the exponential distribution
Params:
-------
size: int, optional (default=None)
the number of samples to return. If size=None then a single
sample is returned.
'''
return self.rng.binomial(n=1, p=self.p, size=size)
class Lognormal:
"""
Encapsulates a lognormal distribution
"""
def __init__(self, mean, stdev, random_seed=None):
"""
Params:
-------
mean: float
mean of the lognormal distribution
stdev: float
standard dev of the lognormal distribution
random_seed: int, optional (default=None)
Random seed to control sampling
"""
self.rng = np.random.default_rng(seed=random_seed)
mu, sigma = self.normal_moments_from_lognormal(mean, stdev**2)
self.mu = mu
self.sigma = sigma
def normal_moments_from_lognormal(self, m, v):
'''
Returns mu and sigma of normal distribution
underlying a lognormal with mean m and variance v
source: https://blogs.sas.com/content/iml/2014/06/04/simulate-lognormal
-data-with-specified-mean-and-variance.html
Params:
-------
m: float
mean of lognormal distribution
v: float
variance of lognormal distribution
Returns:
-------
(float, float)
'''
phi = math.sqrt(v + m**2)
mu = math.log(m**2/phi)
sigma = math.sqrt(math.log(phi**2/m**2))
return mu, sigma
def sample(self):
"""
Sample from the lognormal distribution
"""
return self.rng.lognormal(self.mu, self.sigma)
class Normal:
'''
Convenience class for the normal distribution.
packages up distribution parameters, seed and random generator.
'''
def __init__(self, mean, sigma, random_seed=None):
'''
Constructor
Params:
------
mean: float
The mean of the normal distribution
sigma: float
The stdev of the normal distribution
random_seed: int, optional (default=None)
A random seed to reproduce samples. If set to none then a unique
sample is created.
'''
self.rng = np.random.default_rng(seed=random_seed)
self.mean = mean
self.sigma = sigma
def sample(self, size=None):
'''
Generate a sample from the normal distribution
Params:
-------
size: int, optional (default=None)
the number of samples to return. If size=None then a single
sample is returned.
'''
return self.rng.normal(self.mean, self.sigma, size=size)
class Uniform():
'''
Convenience class for the Uniform distribution.
packages up distribution parameters, seed and random generator.
'''
def __init__(self, low, high, random_seed=None):
'''
Constructor
Params:
------
low: float
lower range of the uniform
high: float
upper range of the uniform
random_seed: int, optional (default=None)
A random seed to reproduce samples. If set to none then a unique
sample is created.
'''
self.rand = np.random.default_rng(seed=random_seed)
self.low = low
self.high = high
def sample(self, size=None):
'''
Generate a sample from the uniform distribution
Params:
-------
size: int, optional (default=None)
the number of samples to return. If size=None then a single
sample is returned.
'''
return self.rand.uniform(low=self.low, high=self.high, size=size)
```
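As a quick check of the seeding behaviour these classes rely on, the snippet below re-creates the `Exponential` class in miniature (self-contained copy, so it runs on its own): two instances given the same `random_seed` produce identical sample streams, while a different seed produces a different stream.

```python
import numpy as np

class Exponential:
    """Minimal copy of the Exponential class above, for a seeding check."""
    def __init__(self, mean, random_seed=None):
        self.rng = np.random.default_rng(seed=random_seed)
        self.mean = mean

    def sample(self, size=None):
        return self.rng.exponential(self.mean, size=size)

# same seed -> identical samples; different seed -> different samples
d1 = Exponential(mean=3.0, random_seed=42)
d2 = Exponential(mean=3.0, random_seed=42)
d3 = Exponential(mean=3.0, random_seed=101)

a, b, c = d1.sample(5), d2.sample(5), d3.sample(5)
print(np.allclose(a, b))  # True: seeded identically
```

This is the property the model exploits later for reproducible runs and common random numbers.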
## Model parameterisation
For convenience, a container class is used to hold the large number of model parameters. The `Scenario` class includes defaults; these can easily be changed at runtime to experiment with different designs.
```
class Scenario:
'''
Container class for scenario parameters/arguments
Passed to a model and its process classes
'''
def __init__(self, random_number_set=DEFAULT_RNG_SET):
'''
The init method sets up our defaults.
Parameters:
-----------
random_number_set: int, optional (default=DEFAULT_RNG_SET)
Set to control the initial seeds of each stream of pseudo
random numbers used in the model.
'''
# sampling
self.random_number_set = random_number_set
self.init_sampling()
# count of each type of resource
self.init_resourse_counts()
def set_random_no_set(self, random_number_set):
'''
Controls the random sampling
Parameters:
----------
random_number_set: int
Used to control the set of psuedo random numbers
used by the distributions in the simulation.
'''
self.random_number_set = random_number_set
self.init_sampling()
def init_resourse_counts(self):
'''
Init the counts of resources to default values...
'''
self.n_triage = DEFAULT_N_TRIAGE
self.n_reg = DEFAULT_N_REG
self.n_exam = DEFAULT_N_EXAM
self.n_trauma = DEFAULT_N_TRAUMA
# non-trauma (1), trauma (2) treatment cubicles
self.n_cubicles_1 = DEFAULT_N_CUBICLES_1
self.n_cubicles_2 = DEFAULT_N_CUBICLES_2
def init_sampling(self):
'''
Create the distributions used by the model and initialise
the random seeds of each.
'''
# create random number streams
rng_streams = np.random.default_rng(self.random_number_set)
self.seeds = rng_streams.integers(0, 999999999, size=N_STREAMS)
# create distributions
# Triage duration
self.triage_dist = Exponential(DEFAULT_TRIAGE_MEAN,
random_seed=self.seeds[0])
# Registration duration (non-trauma only)
self.reg_dist = Lognormal(DEFAULT_REG_MEAN,
np.sqrt(DEFAULT_REG_VAR),
random_seed=self.seeds[1])
# Evaluation (non-trauma only)
self.exam_dist = Normal(DEFAULT_EXAM_MEAN,
np.sqrt(DEFAULT_EXAM_VAR),
random_seed=self.seeds[2])
# Trauma/stablisation duration (trauma only)
self.trauma_dist = Exponential(DEFAULT_TRAUMA_MEAN,
random_seed=self.seeds[3])
# Non-trauma treatment
self.nt_treat_dist = Lognormal(DEFAULT_NON_TRAUMA_TREAT_MEAN,
np.sqrt(DEFAULT_NON_TRAUMA_TREAT_VAR),
random_seed=self.seeds[4])
# treatment of trauma patients
self.treat_dist = Lognormal(DEFAULT_TRAUMA_TREAT_MEAN,
np.sqrt(DEFAULT_TRAUMA_TREAT_VAR),
random_seed=self.seeds[5])
# probability of non-trauma patient requiring treatment
self.nt_p_treat_dist = Bernoulli(DEFAULT_NON_TRAUMA_TREAT_P,
random_seed=self.seeds[6])
# probability of non-trauma versus trauma patient
self.p_trauma_dist = Bernoulli(DEFAULT_PROB_TRAUMA,
random_seed=self.seeds[7])
# init sampling for non-stationary poisson process
self.init_nspp()
def init_nspp(self):
# read arrival profile
self.arrivals = pd.read_csv(NSPP_PATH)
self.arrivals['mean_iat'] = 60 / self.arrivals['arrival_rate']
# maximum arrival rate (smallest time between arrivals)
self.lambda_max = self.arrivals['arrival_rate'].max()
# thinning exponential
self.arrival_dist = Exponential(60.0 / self.lambda_max,
random_seed=self.seeds[8])
# thinning uniform rng
self.thinning_rng = Uniform(low=0.0, high=1.0,
random_seed=self.seeds[9])
```
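The override-the-defaults pattern that `Scenario` enables can be sketched with a stripped-down stand-in (the attribute names mirror the class above, but the default values here are purely illustrative):

```python
class MiniScenario:
    """Stripped-down stand-in for Scenario, holding resource counts only."""
    def __init__(self):
        # illustrative defaults (the real class reads module-level constants)
        self.n_triage = 1
        self.n_exam = 3

base = MiniScenario()
extra_triage = MiniScenario()
extra_triage.n_triage += 1  # experiment: one extra triage bay

print(base.n_triage, extra_triage.n_triage)  # 1 2
```

Each experiment is just a mutated copy of the defaults, which keeps scenario definitions short and explicit.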
## Patient Pathways Process Logic
`simpy` uses a process-based worldview. We can easily create whatever logic we need for the model, simple or complex. Here the process logic for trauma and non-trauma patients is separated into two classes, `TraumaPathway` and `NonTraumaPathway`.
```
class TraumaPathway:
'''
Encapsulates the process for a patient with severe injuries or illness.
These patients are signed into the ED and triaged as having severe injuries
or illness.
Patients are stabilised in resus (trauma) and then sent to Treatment.
Following treatment they are discharged.
'''
def __init__(self, identifier, env, args):
'''
Constructor method
Params:
-----
identifier: int
a numeric identifier for the patient.
env: simpy.Environment
the simulation environment
args: Scenario
Container class for the simulation parameters
'''
self.identifier = identifier
self.env = env
self.args = args
# metrics
self.arrival = -np.inf
self.wait_triage = -np.inf
self.wait_trauma = -np.inf
self.wait_treat = -np.inf
self.total_time = -np.inf
self.triage_duration = -np.inf
self.trauma_duration = -np.inf
self.treat_duration = -np.inf
def execute(self):
'''
simulates the major treatment process for a patient
1. request and wait for sign-in/triage
2. trauma
3. treatment
'''
# record the time of arrival and entered the triage queue
self.arrival = self.env.now
# request sign-in/triage
with self.args.triage.request() as req:
yield req
# record the waiting time for triage
self.wait_triage = self.env.now - self.arrival
trace(f'patient {self.identifier} triaged to trauma '
f'{self.env.now:.3f}')
# sample triage duration.
self.triage_duration = self.args.triage_dist.sample()
yield self.env.timeout(self.triage_duration)
self.triage_complete()
# record the time that entered the trauma queue
start_wait = self.env.now
# request trauma room
with self.args.trauma.request() as req:
yield req
# record the waiting time for trauma
self.wait_trauma = self.env.now - start_wait
# sample stablisation duration.
self.trauma_duration = self.args.trauma_dist.sample()
yield self.env.timeout(self.trauma_duration)
self.trauma_complete()
# record the time that entered the treatment queue
start_wait = self.env.now
# request treatment cubicle
with self.args.cubicle_2.request() as req:
yield req
# record the waiting time for treatment
self.wait_treat = self.env.now - start_wait
trace(f'treatment of patient {self.identifier} at '
f'{self.env.now:.3f}')
# sample treatment duration.
self.treat_duration = self.args.treat_dist.sample()
yield self.env.timeout(self.treat_duration)
self.treatment_complete()
# total time in system
self.total_time = self.env.now - self.arrival
def triage_complete(self):
'''
Triage complete event
'''
trace(f'triage {self.identifier} complete {self.env.now:.3f}; '
f'waiting time was {self.wait_triage:.3f}')
def trauma_complete(self):
'''
Patient stay in trauma is complete.
'''
trace(f'stabilisation of patient {self.identifier} at '
f'{self.env.now:.3f}')
def treatment_complete(self):
'''
Treatment complete event
'''
trace(f'patient {self.identifier} treatment complete {self.env.now:.3f}; '
f'waiting time was {self.wait_treat:.3f}')
class NonTraumaPathway(object):
'''
Encapsulates the process a patient with minor injuries and illness.
These patients are signed into the ED and triaged as having minor
complaints and streamed to registration and then examination.
Post examination 40% are discharged while 60% proceed to treatment.
Following treatment they are discharged.
'''
def __init__(self, identifier, env, args):
'''
Constructor method
Params:
-----
identifier: int
a numeric identifier for the patient.
env: simpy.Environment
the simulation environment
args: Scenario
Container class for the simulation parameters
'''
self.identifier = identifier
self.env = env
self.args = args
# triage resource
self.triage = args.triage
# metrics
self.arrival = -np.inf
self.wait_triage = -np.inf
self.wait_reg = -np.inf
self.wait_exam = -np.inf
self.wait_treat = -np.inf
self.total_time = -np.inf
self.triage_duration = -np.inf
self.reg_duration = -np.inf
self.exam_duration = -np.inf
self.treat_duration = -np.inf
def execute(self):
'''
simulates the non-trauma/minor treatment process for a patient
1. request and wait for sign-in/triage
2. patient registration
3. examination
4.1 40% discharged
4.2 60% treatment then discharge
'''
# record the time of arrival and entered the triage queue
self.arrival = self.env.now
# request sign-in/triage
with self.triage.request() as req:
yield req
# record the waiting time for triage
self.wait_triage = self.env.now - self.arrival
trace(f'patient {self.identifier} triaged to minors '
f'{self.env.now:.3f}')
# sample triage duration.
self.triage_duration = self.args.triage_dist.sample()
yield self.env.timeout(self.triage_duration)
trace(f'triage {self.identifier} complete {self.env.now:.3f}; '
f'waiting time was {self.wait_triage:.3f}')
# record the time that entered the registration queue
start_wait = self.env.now
# request registration clerk
with self.args.registration.request() as req:
yield req
# record the waiting time for registration
self.wait_reg = self.env.now - start_wait
trace(f'registration of patient {self.identifier} at '
f'{self.env.now:.3f}')
# sample registration duration.
self.reg_duration = self.args.reg_dist.sample()
yield self.env.timeout(self.reg_duration)
trace(f'patient {self.identifier} registered at'
f'{self.env.now:.3f}; '
f'waiting time was {self.wait_reg:.3f}')
# record the time that entered the evaluation queue
start_wait = self.env.now
# request examination resource
with self.args.exam.request() as req:
yield req
# record the waiting time for examination
self.wait_exam = self.env.now - start_wait
trace(f'examination of patient {self.identifier} begins '
f'{self.env.now:.3f}')
# sample examination duration.
self.exam_duration = self.args.exam_dist.sample()
yield self.env.timeout(self.exam_duration)
trace(f'patient {self.identifier} examination complete '
f'at {self.env.now:.3f};'
f'waiting time was {self.wait_exam:.3f}')
# sample if patient requires treatment?
self.require_treat = self.args.nt_p_treat_dist.sample()
if self.require_treat:
# record the time that entered the treatment queue
start_wait = self.env.now
# request treatment cubicle
with self.args.cubicle_1.request() as req:
yield req
# record the waiting time for treatment
self.wait_treat = self.env.now - start_wait
trace(f'treatment of patient {self.identifier} begins '
f'{self.env.now:.3f}')
# sample treatment duration.
self.treat_duration = self.args.nt_treat_dist.sample()
yield self.env.timeout(self.treat_duration)
trace(f'patient {self.identifier} treatment complete '
f'at {self.env.now:.3f};'
f'waiting time was {self.wait_treat:.3f}')
# total time in system
self.total_time = self.env.now - self.arrival
```
## Main model class
The main class that a user interacts with to run the model is `TreatmentCentreModel`. This implements a `.run()` method, contains a simple algorithm for the non-stationary Poisson process for patient arrivals, and creates instances of `TraumaPathway` or `NonTraumaPathway` depending on the arrival type.
```
class TreatmentCentreModel:
'''
The treatment centre model
Patients arrive at random to a treatment centre, are triaged
and then processed in either a trauma or non-trauma pathway.
'''
def __init__(self, args):
self.env = simpy.Environment()
self.args = args
self.init_resources()
self.patients = []
self.trauma_patients = []
self.non_trauma_patients = []
self.rc_period = None
self.results = None
def init_resources(self):
'''
Init the number of resources
and store in the arguments container object
Resource list:
1. Sign-in/triage bays
2. registration clerks
3. examination bays
4. trauma bays
5. non-trauma cubicles (1)
6. trauma cubicles (2)
'''
# sign/in triage
self.args.triage = simpy.Resource(self.env,
capacity=self.args.n_triage)
# registration
self.args.registration = simpy.Resource(self.env,
capacity=self.args.n_reg)
# examination
self.args.exam = simpy.Resource(self.env,
capacity=self.args.n_exam)
# trauma
self.args.trauma = simpy.Resource(self.env,
capacity=self.args.n_trauma)
# non-trauma treatment
self.args.cubicle_1 = simpy.Resource(self.env,
capacity=self.args.n_cubicles_1)
# trauma treatment
self.args.cubicle_2 = simpy.Resource(self.env,
capacity=self.args.n_cubicles_2)
def run(self, results_collection_period=DEFAULT_RESULTS_COLLECTION_PERIOD):
'''
Conduct a single run of the model in its current
configuration
Parameters:
----------
results_collection_period, float, optional
default = DEFAULT_RESULTS_COLLECTION_PERIOD
warm_up, float, optional (default=0)
length of initial transient period to truncate
from results.
Returns:
--------
None
'''
# setup the arrival generator process
self.env.process(self.arrivals_generator())
# store the results collection period
self.rc_period = results_collection_period
# run
self.env.run(until=results_collection_period)
def arrivals_generator(self):
'''
Simulate the arrival of patients to the model
Patients either follow a TraumaPathway or
NonTraumaPathway simpy process.
Non stationary arrivals implemented via Thinning acceptance-rejection
algorithm.
'''
for patient_count in itertools.count():
# this give us the index of dataframe to use
t = int(self.env.now // 60) % self.args.arrivals.shape[0]
lambda_t = self.args.arrivals['arrival_rate'].iloc[t]
# set to a large number so that at least 1 sample is taken!
u = np.inf
interarrival_time = 0.0
# reject samples if u >= lambda_t / lambda_max
while u >= (lambda_t / self.args.lambda_max):
interarrival_time += self.args.arrival_dist.sample()
u = self.args.thinning_rng.sample()
# iat
yield self.env.timeout(interarrival_time)
trace(f'patient {patient_count} arrives at: {self.env.now:.3f}')
# sample if the patient is trauma or non-trauma
trauma = self.args.p_trauma_dist.sample()
if trauma:
# create and store a trauma patient to update KPIs.
new_patient = TraumaPathway(patient_count, self.env, self.args)
self.trauma_patients.append(new_patient)
else:
# create and store a non-trauma patient to update KPIs.
new_patient = NonTraumaPathway(patient_count, self.env,
self.args)
self.non_trauma_patients.append(new_patient)
# start the pathway process for the patient
self.env.process(new_patient.execute())
```
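The thinning acceptance-rejection loop inside `arrivals_generator` can be isolated and checked outside `simpy`. This sketch uses a made-up two-period arrival profile (10 and 20 arrivals per hour): candidates are sampled at the maximum rate and accepted with probability `lambda_t / lambda_max`, so the mean inter-arrival time comes out close to `60 / lambda_t` minutes in each period.

```python
import numpy as np

def thinned_interarrival(lambda_t, lambda_max, arrival_rng, thinning_rng):
    """One inter-arrival time via the thinning algorithm.
    Candidate gaps are drawn at rate lambda_max and rejected with
    probability 1 - lambda_t / lambda_max."""
    iat = 0.0
    u = np.inf  # force at least one candidate sample
    while u >= lambda_t / lambda_max:
        iat += arrival_rng.exponential(60.0 / lambda_max)
        u = thinning_rng.uniform()
    return iat

# invented profile: quiet period 10/hr, busy period 20/hr; lambda_max = 20
arrival_rng = np.random.default_rng(1)
thinning_rng = np.random.default_rng(2)
quiet = [thinned_interarrival(10, 20, arrival_rng, thinning_rng)
         for _ in range(2000)]
busy = [thinned_interarrival(20, 20, arrival_rng, thinning_rng)
        for _ in range(2000)]

# mean IAT should be roughly 60/10 = 6 min (quiet) and 60/20 = 3 min (busy)
print(round(np.mean(quiet), 1), round(np.mean(busy), 1))
```

In the busiest period every candidate is accepted, so thinning adds no overhead there; quieter periods simply reject more candidates.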
### Logic to process end of run results.
The class `SimulationSummary` accepts a `TreatmentCentreModel`. At the end of a run it can be used to calculate mean queuing times and the percentage of the total run that a resource was in use.
```
class SimulationSummary:
'''
End of run result processing logic of the simulation model
'''
def __init__(self, model):
'''
Constructor
Params:
------
model: TraumaCentreModel
The model.
'''
self.model = model
self.args = model.args
self.results = None
def process_run_results(self):
'''
Calculates statistics at end of run.
'''
self.results = {}
# list of all patients
patients = self.model.non_trauma_patients + self.model.trauma_patients
# mean triage times (both types of patient)
mean_triage_wait = self.get_mean_metric('wait_triage', patients)
# triage utilisation (both types of patient)
triage_util = self.get_resource_util('triage_duration',
self.args.n_triage,
patients)
# mean waiting time for registration (non_trauma)
mean_reg_wait = self.get_mean_metric('wait_reg',
self.model.non_trauma_patients)
# registration utilisation (non-trauma)
reg_util = self.get_resource_util('reg_duration',
self.args.n_reg,
self.model.non_trauma_patients)
# mean waiting time for examination (non_trauma)
mean_wait_exam = self.get_mean_metric('wait_exam',
self.model.non_trauma_patients)
# examination utilisation (non-trauma)
exam_util = self.get_resource_util('exam_duration',
self.args.n_exam,
self.model.non_trauma_patients)
# mean waiting time for treatment (non-trauma)
mean_treat_wait = self.get_mean_metric('wait_treat',
self.model.non_trauma_patients)
# treatment utilisation (non_trauma)
treat_util1 = self.get_resource_util('treat_duration',
self.args.n_cubicles_1,
self.model.non_trauma_patients)
# mean total time (non_trauma)
mean_total = self.get_mean_metric('total_time',
self.model.non_trauma_patients)
# mean waiting time for trauma
mean_trauma_wait = self.get_mean_metric('wait_trauma',
self.model.trauma_patients)
# trauma utilisation (trauma)
trauma_util = self.get_resource_util('trauma_duration',
self.args.n_trauma,
self.model.trauma_patients)
# mean waiting time for treatment (trauma)
mean_treat_wait2 = self.get_mean_metric('wait_treat',
self.model.trauma_patients)
# treatment utilisation (trauma)
treat_util2 = self.get_resource_util('treat_duration',
self.args.n_cubicles_2,
self.model.trauma_patients)
# mean total time (trauma)
mean_total2 = self.get_mean_metric('total_time',
self.model.trauma_patients)
self.results = {'00_arrivals':len(patients),
'01a_triage_wait': mean_triage_wait,
'01b_triage_util': triage_util,
'02a_registration_wait':mean_reg_wait,
'02b_registration_util': reg_util,
'03a_examination_wait':mean_wait_exam,
'03b_examination_util': exam_util,
'04a_treatment_wait(non_trauma)':mean_treat_wait,
'04b_treatment_util(non_trauma)':treat_util1,
'05_total_time(non-trauma)':mean_total,
'06a_trauma_wait':mean_trauma_wait,
'06b_trauma_util':trauma_util,
'07a_treatment_wait(trauma)':mean_treat_wait2,
'07b_treatment_util(trauma)':treat_util2,
'08_total_time(trauma)':mean_total2,
'09_throughput': self.get_throughput(patients)}
def get_mean_metric(self, metric, patients):
'''
Calculate the mean of a performance measure for the
selected cohort of patients.
Only calculates metrics for patients where it has been
measured.
Params:
-------
metric: str
The name of the metric e.g. 'wait_treat'
patients: list
A list of patients
'''
mean = np.array([getattr(p, metric) for p in patients
if getattr(p, metric) > -np.inf]).mean()
return mean
def get_resource_util(self, metric, n_resources, patients):
'''
Calculate proportion of the results collection period
where a resource was in use.
Done by tracking the duration by patient.
Only calculates metrics for patients where it has been
measured.
Params:
-------
metric: str
The name of the metric e.g. 'treatment_duration'
patients: list
A list of patients
'''
total = np.array([getattr(p, metric) for p in patients
if getattr(p, metric) > -np.inf]).sum()
return total / (self.model.rc_period * n_resources)
def get_throughput(self, patients):
'''
Returns the total number of patients that have successfully
been processed and discharged in the treatment centre
(they have a total time record)
Params:
-------
patients: list
list of all patient objects simulated.
Returns:
------
float
'''
return len([p for p in patients if p.total_time > -np.inf])
def summary_frame(self):
'''
Returns run results as a pandas.DataFrame
Returns:
-------
pd.DataFrame
'''
#append to results df
if self.results is None:
self.process_run_results()
df = pd.DataFrame({'1':self.results})
df = df.T
df.index.name = 'rep'
return df
```
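The utilisation figure returned by `get_resource_util` is just total observed busy time divided by total available resource-time, with `-inf` sentinels (the initial metric values set by the pathway classes) filtered out. A hand-checkable sketch with invented numbers:

```python
import numpy as np

# invented durations: -inf marks patients whose activity never happened,
# matching the sentinel values used by the pathway classes
treat_durations = np.array([30.0, 45.0, -np.inf, 25.0])
rc_period = 100.0   # results collection period (minutes)
n_resources = 2     # number of treatment cubicles

observed = treat_durations[treat_durations > -np.inf]
utilisation = observed.sum() / (rc_period * n_resources)
print(utilisation)  # (30 + 45 + 25) / (100 * 2) = 0.5
```

The same sentinel filter is what `get_mean_metric` and `get_throughput` use to count only patients whose activity was actually observed before the run ended.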
## Executing a model
We note that there are **many ways** to set up a `simpy` model and execute it (that is part of its fantastic flexibility). The organisation of code we show below is based on our experience of using the package in practice. The approach also allows for easy parallelisation over multiple CPU cores using `joblib`.
We include two functions: `single_run()` and `multiple_replications()`. The latter is used to repeatedly call `single_run()` and process its results.
```
def single_run(scenario, rc_period=DEFAULT_RESULTS_COLLECTION_PERIOD,
random_no_set=DEFAULT_RNG_SET):
'''
Perform a single run of the model and return the results
Parameters:
-----------
scenario: Scenario object
The scenario/paramaters to run
rc_period: int
The length of the simulation run that collects results
random_no_set: int or None, optional (default=DEFAULT_RNG_SET)
Controls the set of random seeds used by the stochastic parts of the
model. Set to different ints to get different results. Set to None
for a random set of seeds.
Returns:
--------
pandas.DataFrame:
results from single run.
'''
# set random number set - this controls sampling for the run.
scenario.set_random_no_set(random_no_set)
# create an instance of the model
model = TreatmentCentreModel(scenario)
# run the model
model.run(results_collection_period=rc_period)
# run results
summary = SimulationSummary(model)
summary_df = summary.summary_frame()
return summary_df
def multiple_replications(scenario, rc_period=DEFAULT_RESULTS_COLLECTION_PERIOD,
n_reps=5):
'''
Perform multiple replications of the model.
Params:
------
scenario: Scenario
Parameters/arguments to configure the model
rc_period: float, optional (default=DEFAULT_RESULTS_COLLECTION_PERIOD)
results collection period.
the number of minutes to run the model to collect results
n_reps: int, optional (default=5)
Number of independent replications to run.
Returns:
--------
pandas.DataFrame
'''
results = [single_run(scenario, rc_period, random_no_set=rep)
for rep in range(n_reps)]
#format and return results in a dataframe
df_results = pd.concat(results)
df_results.index = np.arange(1, len(df_results)+1)
df_results.index.name = 'rep'
return df_results
```
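The replication loop in `multiple_replications` is embarrassingly parallel: each call of `single_run` is independent given its `random_no_set`. The sketch below shows the idea with the standard library's `ThreadPoolExecutor` (`joblib.Parallel`, mentioned above, follows the same map-over-seeds pattern); `fake_single_run` is an invented stand-in for the real function so the snippet runs on its own.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def fake_single_run(random_no_set):
    """Stand-in for single_run(): returns one reproducible 'result'."""
    rng = np.random.default_rng(random_no_set)
    return rng.exponential(10.0)

n_reps = 5
with ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(fake_single_run, range(n_reps)))

# identical to the sequential loop, because each rep is seeded by its index
sequential_results = [fake_single_run(rep) for rep in range(n_reps)]
print(parallel_results == sequential_results)  # True
```

Because each replication carries its own seed, the results are identical whether the replications run sequentially or in parallel, and in whatever order they finish.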
### Single run of the model
The script below performs a single replication of the simulation model.
**Try:**
* Changing the `random_no_set` of the `single_run` call.
* Assigning the value `True` to `TRACE`
```
# Change this to True to see a trace...
TRACE = False
# create the default scenario
args = Scenario()
# use the single_run() func
# try changing `random_no_set` to see different run results
print('Running simulation ...', end=' => ')
results = single_run(args, random_no_set=42)
print('simulation complete.')
# show results (transpose replication for easier view)
results.T
```
### Multiple independent replications
Given the set-up, it is now easy to perform multiple replications of the model.
**Try**:
* Changing `n_reps`
```
%%time
args = Scenario()
#run multiple replications.
#by default it runs 5 replications.
print('Running multiple replications', end=' => ')
results = multiple_replications(args, n_reps=50)
print('done.\n')
results.head(3)
# summarise the results (2.dp)
results.mean().round(2)
```
### Visualise replications
```
fig, ax = plt.subplots(2, 1, figsize=(12,4))
ax[0].hist(results['01a_triage_wait']);
ax[0].set_ylabel('wait for triage')
ax[1].hist(results['02a_registration_wait']);
ax[1].set_ylabel('wait for registration');
```
## Scenario Analysis
The structured approach we took to organising our `simpy` model allows us to easily experiment with alternative scenarios. We could employ a formal experimental design if needed. For simplicity, here we limit ourselves to running user-chosen competing scenarios and comparing their mean performance to the base case.
> Note that our `simpy` model includes an implementation of **Common Random Numbers** across scenarios.
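The common-random-numbers property follows from the seed-spawning step in `Scenario.init_sampling`: two scenario objects given the same `random_number_set` derive identical seed streams, so their distributions see the same samples. A self-contained sketch of that step:

```python
import numpy as np

def spawn_seeds(random_number_set, n_streams=10):
    """Mirror of Scenario.init_sampling's seed-spawning step."""
    rng_streams = np.random.default_rng(random_number_set)
    return rng_streams.integers(0, 999999999, size=n_streams)

base_seeds = spawn_seeds(42)
alt_seeds = spawn_seeds(42)  # a competing scenario, same number set

# both scenarios build e.g. their triage distribution from seed stream 0,
# so they sample identical triage durations
triage_a = np.random.default_rng(base_seeds[0]).exponential(3.0, size=5)
triage_b = np.random.default_rng(alt_seeds[0]).exponential(3.0, size=5)
print(np.allclose(triage_a, triage_b))  # True
```

Scenarios therefore differ only through their parameters, not through sampling noise, which sharpens pairwise comparisons against the base case.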
```
def get_scenarios():
'''
Creates a dictionary object containing
objects of type `Scenario` to run.
Returns:
--------
dict
Contains the scenarios for the model
'''
scenarios = {}
scenarios['base'] = Scenario()
# extra triage capacity
scenarios['triage+1'] = Scenario()
scenarios['triage+1'].n_triage += 1
# extra examination capacity
scenarios['exam+1'] = Scenario()
scenarios['exam+1'].n_exam += 1
# extra non-trauma treatment capacity
scenarios['treat+1'] = Scenario()
scenarios['treat+1'].n_cubicles_1 += 1
scenarios['triage+exam'] = Scenario()
scenarios['triage+exam'].n_triage += 1
scenarios['triage+exam'].n_exam += 1
return scenarios
def run_scenario_analysis(scenarios, rc_period, n_reps):
'''
Run each of the scenarios for a specified results
collection period and replications.
Params:
------
scenarios: dict
dictionary of Scenario objects
rc_period: float
model run length
n_reps: int
Number of replications
'''
print('Scenario Analysis')
print(f'No. Scenario: {len(scenarios)}')
print(f'Replications: {n_reps}')
scenario_results = {}
for sc_name, scenario in scenarios.items():
print(f'Running {sc_name}', end=' => ')
replications = multiple_replications(scenario, rc_period=rc_period,
n_reps=n_reps)
print('done.\n')
#save the results
scenario_results[sc_name] = replications
print('Scenario analysis complete.')
return scenario_results
```
### Script to run scenario analysis
```
#number of replications
N_REPS = 20
#get the scenarios
scenarios = get_scenarios()
#run the scenario analysis
scenario_results = run_scenario_analysis(scenarios,
DEFAULT_RESULTS_COLLECTION_PERIOD,
N_REPS)
def scenario_summary_frame(scenario_results):
'''
Mean results for each performance measure by scenario
Parameters:
----------
scenario_results: dict
dictionary of replications.
Key identifies the performance measure
Returns:
-------
pd.DataFrame
'''
columns = []
summary = pd.DataFrame()
for sc_name, replications in scenario_results.items():
summary = pd.concat([summary, replications.mean()], axis=1)
columns.append(sc_name)
summary.columns = columns
return summary
# as well as rounding you may want to rename the cols/rows to
# more readable alternatives.
summary_frame = scenario_summary_frame(scenario_results)
summary_frame.round(2)
```
## End
# Step 1 - Read and Import the Archived Twitter Data
The first step is to read and import all the archived data. Download "spritzer" archived data from https://archive.org/details/twitterstream
The data is quite large, and is stored in directory/file format as:
./yyyy/mm/dd/hh/{00-99}.json.bz2
Since our dataframes will overload the computer memory if we read it all in at once, we'll need
to be careful about memory management. For example, we can read in one directory at a time,
discard data that we don't want or need in the foreseeable future, and save to a csv file; then
dump or re-use memory and go again.
After downloading the data you want to analyze, run the portions of this file first, then garbage collect
or refresh your kernel to free up memory. A csv from this ipynb file will be used as the basis for
further analysis in parts 2 and 3.
```
# import necessary modules
import pandas as pd
import csv
import json
import os
import bz2
import time
# Function to check if a 'place' or a 'coordinates' are included in a tweet
# One or both can exist in a tweet. For this code, 'place' is checked first and,
# if it exists, returns true before 'coordinates' is checked
def does_this_tweet_have_a_place(tweet):
"""Function to check if a 'place' or a 'coordinates' are included in a tweet"""
if tweet['place']:
country_code = (tweet['place']['country_code'])
#print("country code: " + country_code)
return True
elif tweet['coordinates']:
#print("geo coordinates: {}".format(tweet['coordinates']))
return True
else:
return False
# Function to read in all the tweets from any one bz-zipped json file
def read_tweets_from_bzfile(filename):
"""Function read in all the tweets from any one bz-zipped json file"""
# local variables
tweets = []
read_count = 0
kept_count = 0
# open and unzip the bz2 file
with bz2.open(filename, "rb") as data_file:
for line in data_file:
try:
# load the tweet on this line of the file
tweet = json.loads(line)
read_count += 1
#print(tweet['text'])
# check if the tweet has a place or geo coordinates
if does_this_tweet_have_a_place(tweet):
tweet['file_path'] = filename
tweets.append(tweet)
kept_count += 1
except:
pass
# print some outputs so we can watch it working
print("file read: {}".format(filename))
print(" tweets read in file: {}".format(read_count))
print(" tweets kept from file: {} ".format(kept_count))
if read_count != 0:
print(" kept tweets rate: {:0>2f} %".format(100*kept_count/read_count))
return tweets, read_count, kept_count
### uncomment and run this to test/debug the read_tweets_from_bzfile function using a single file
#tweets, n_read, n_kept = read_tweets_from_bzfile("00.json.bz2")
# Function to iterate through a directory, get all the archive files, and then
# read them in one at a time
def read_tweets_from_datetimehour_dir(rootdir):
"""Function to iterate through a directory, get all the archive files, and then read them in one at a time"""
# declare variables
tweets = [] # keep tweets as an array for now for mem management
num_read = 0
num_kept = 0
# will count the number of files as we go
num_files_read = 0
# iterate through the directories
for directory, subdirectory, filenames in os.walk(rootdir):
#print(directory)
# iterate through the filenames
for filename in filenames:
full_path_filename = os.path.join(directory, filename)
# call the read tweets function and keep track of counters
tw, nr, nk = read_tweets_from_bzfile(full_path_filename)
# append to the tweets array
tweets.extend( tw ) # important to use "extend" method
# increment the counters
num_files_read += 1
num_read += nr
num_kept += nk
print(" files read so far in this dir: {}".format(num_files_read))
print(" results so far in this dir: {} tweets".format(len(tweets)))
print("done. size of tweets array: {}".format(len(tweets)))
return tweets, num_read, num_kept, num_files_read # return stats with the tweets array
# Function to check if a file exists; was used in development
def check_if_output_file_exists(filepath):
if os.path.exists(filepath):
print("file {} exists.".format(filepath))
while True:
if os.path.isfile( filepath ):
overwrite = input('Delete old file? (If no, output will be appended)\n Y = yes, N = no\n')
if 'y' in overwrite.lower():
os.remove(filepath)
return False
elif 'n' in overwrite.lower():
return True
else:
print('input not understood; please try again')
```
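The `bz2.open(..., "rb")` plus line-by-line `json.loads` pattern used above can be verified on a tiny synthetic archive file (the file name and tweet fields here are invented to mimic the real archive format):

```python
import bz2
import json
import os
import tempfile

# write a tiny synthetic archive: one JSON tweet per line, bz2-compressed
tweets_out = [
    {"text": "hello", "place": {"country_code": "GB"}, "coordinates": None},
    {"text": "world", "place": None, "coordinates": None},
]
path = os.path.join(tempfile.mkdtemp(), "00.json.bz2")
with bz2.open(path, "wt", encoding="utf-8") as f:
    for tw in tweets_out:
        f.write(json.dumps(tw) + "\n")

# read it back the same way read_tweets_from_bzfile does
kept = []
with bz2.open(path, "rb") as data_file:
    for line in data_file:
        tweet = json.loads(line)
        if tweet["place"] or tweet["coordinates"]:
            kept.append(tweet)

print(len(kept))  # 1: only the geotagged tweet survives the filter
```

Reading line by line keeps memory flat no matter how large the compressed file is, which is the point of the streaming approach described above.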
# Main run block
Now that we've defined some key functions, we can run through it all. This will take a while.
The current set-up runs a week's worth of data. The date range should be defined in the initial declarations of
this block, with year, month, day, and hour. Change it according to the data you downloaded and want to analyze.
It's a good idea to restrict this to a smaller range for testing and verification before embarking on the
entire run you want to do.
```
# MAIN BLOCK
# instead of using os.walk, we'll specifically declare what we want to iterate through
# so that we have control of the size of this job, and to be flexible when we want to do
# smaller test runs
# we'll also create the df and save off the results by the hour, which is about the right size to not crash
# everything on a pentium i5 with 8GB of RAM
# but for sanity we'll make csv files by the day, so 7 files for the week
# set these variables to determine which directories will be read
# in this example, we are going with 1 week in December 2020
year = 2020
month = 12
day_start = 1
day_end = 7
hour_start = 0 # possible range: 0-23
hour_end = 23
# counters
total_tweets_read = 0
total_tweets_kept = 0
total_files_read = 0
# other variables
dir = ""
output_csv_file = "tweets_with_places"
tic = time.perf_counter() # start a timer
# now start iterating through files
for day in range(day_start, day_end +1):
# the dir/file structure is hard coded
output_csv_file = "tweets_with_places_" + \
str('{:0>4d}').format(year) + \
str('{:0>2d}').format(month) + \
str('{:0>2d}').format(day) + \
".csv"
write_csv_header = True # start with true, change to false after first write-out
for hour in range (hour_start, hour_end +1):
dir = os.path.join(str('{:0>4d}').format(year), \
str('{:0>2d}').format(month), \
str('{:0>2d}').format(day), \
str('{:0>2d}').format(hour))
print("starting new directory: " + dir)
if not os.path.exists(dir):
print("directory does not exist; moving on")
break
# read the file and get back only those with places or geo coordinates
tweets, tweets_read, tweets_kept, files_read = read_tweets_from_datetimehour_dir(dir)
tweets_df = pd.DataFrame( tweets )
# print some outputs and statistics
#print(tweets_df.columns)
print("total tweets: {}".format(tweets_read))
tweets_df['created_at'] = pd.to_datetime(tweets_df['created_at'])
print("date time range: {} to {}".format(\
tweets_df['created_at'].min(),tweets_df['created_at'].max()))
try:
print(" percentage tweets kept for {} d {} h: {:.2f} %".format(day, hour, 100*tweets_kept/tweets_read ))
except:
print(" no tweets read ")
# increment the counters
total_tweets_read = total_tweets_read + tweets_read
total_tweets_kept = total_tweets_kept + tweets_kept
total_files_read = total_files_read + files_read
# we can still keep lots of information from the tweet while dropping lots of extraneous or
# repeated information; this saves file size
if len(tweets_df) > 0:
filtered_df = tweets_df[[\
'created_at','id','text','source','user',\
'geo','coordinates','place','entities','lang','file_path']]
# write to the csv file
filtered_df.to_csv(output_csv_file, mode='a', header=write_csv_header)
write_csv_header = False # don't write headers after the first time
print("wrote to file")
else:
print("nothing written to file")
# print stats for the 'hour' read in
print("hour {} ended".format(hour))
print("TOTAL tweets kept, tweets read: {}, {}".format(total_tweets_kept, total_tweets_read))
print("TOTAL files read: {}".format(total_files_read))
print("TOTAL percentage tweets kept: {:.2f} %".format( 100*total_tweets_kept/total_tweets_read ))
# print stats for the 'day' read in
print("day {} ended".format(day))
print("TOTAL tweets kept, tweets read: {}, {}".format(total_tweets_kept, total_tweets_read))
print("TOTAL files read: {}".format(total_files_read))
print("TOTAL percentage tweets kept: {:.2f} %".format( 100*total_tweets_kept/total_tweets_read ))
# print overall stats
print("all files read")
print("TOTAL tweets kept, tweets read: {}, {}".format(total_tweets_kept, total_tweets_read))
print("TOTAL files read: {}".format(total_files_read))
print("TOTAL percentage tweets kept: {:.2f} %".format( 100*total_tweets_kept/total_tweets_read ))
# how long did that take?
toc = time.perf_counter()
print(f"iterating and determining place from geo coords took {toc - tic:0.4f} seconds")
```
## _*Using Qiskit Aqua for exact cover problems*_
In mathematics, given a collection $S$ of subsets of a set $X$, an exact cover is a subcollection $S_{ec} \subseteq S$ such that each element in $X$ is contained in exactly one subset $\in S_{ec}$.
We will go through three examples to show (1) how to run the optimization in the non-programming way, (2) how to run the optimization in the programming way, (3) how to run the optimization with the VQE.
We will omit the details for the support of CPLEX, which are explained in other notebooks such as maxcut.
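For intuition, here is a tiny hand-made instance (illustrative only, not the notebook's `sample.exactcover` input): with $X = \{1,2,3,4\}$ and $S = [\{1,2\}, \{3,4\}, \{1,3\}, \{4\}]$, the subcollection $[\{1,2\}, \{3,4\}]$ contains every element of $X$ exactly once, so it is an exact cover.

```python
# Toy exact-cover check (illustrative only; not the notebook's input file).
X = {1, 2, 3, 4}
S = [{1, 2}, {3, 4}, {1, 3}, {4}]
candidate = [S[0], S[1]]  # the subcollection [{1,2}, {3,4}]
# An exact cover contains each element of X in exactly one chosen subset.
counts = {x: sum(x in s for s in candidate) for x in X}
print(all(c == 1 for c in counts.values()))  # True
```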
### The problem and the brute-force method.
First, let us take a look at the list of subsets.
```
import numpy as np
import json
from qiskit import Aer
from qiskit_aqua import run_algorithm
from qiskit_aqua.input import EnergyInput
from qiskit_aqua.translators.ising import exactcover
from qiskit_aqua.algorithms import ExactEigensolver
input_file = 'sample.exactcover'
with open(input_file) as f:
list_of_subsets = json.load(f)
print(list_of_subsets)
qubitOp, offset = exactcover.get_exactcover_qubitops(list_of_subsets)
algo_input = EnergyInput(qubitOp)
```
Then we apply the brute-force method. Basically, we exhaustively try all the binary assignments. In each binary assignment, the entry of a subset is either 0 (meaning the subset is not in the cover) or 1 (meaning the subset is in the cover). We print the binary assignment that satisfies the definition of the exact cover.
```
def brute_force():
# brute-force way: try every possible assignment!
has_sol = False
def bitfield(n, L):
result = np.binary_repr(n, L)
return [int(digit) for digit in result]  # np.binary_repr has no "0b" prefix to strip
L = len(list_of_subsets)
max_assignments = 2**L  # avoid shadowing the built-in max()
for i in range(max_assignments):
cur = bitfield(i, L)
cur_v = exactcover.check_solution_satisfiability(cur, list_of_subsets)
if cur_v:
has_sol = True
break
return has_sol, cur
has_sol, cur = brute_force()
if has_sol:
print("solution is", cur)
else:
print("no solution is found")
```
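To make the enumeration concrete, here is what the `bitfield` helper produces for a hypothetical $L = 3$: all $2^3$ in/out assignments of the subsets.

```python
import numpy as np

# Sketch: every binary assignment for L = 3 subsets, where 1 means
# "subset is in the cover" and 0 means it is not.
L = 3
assignments = [[int(d) for d in np.binary_repr(i, L)] for i in range(2 ** L)]
print(assignments[0], assignments[-1])  # [0, 0, 0] [1, 1, 1]
print(len(assignments))  # 8
```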
### Part I: run the optimization in the non-programming way
```
params = {
'problem': {'name': 'ising'},
'algorithm': {'name': 'ExactEigensolver'}
}
result = run_algorithm(params, algo_input)
x = exactcover.sample_most_likely(len(list_of_subsets), result['eigvecs'][0])
ising_sol = exactcover.get_solution(x)
np.testing.assert_array_equal(ising_sol, [0, 1, 1, 0])
if exactcover.check_solution_satisfiability(ising_sol, list_of_subsets):
print("solution is", ising_sol)
else:
print("no solution is found")
```
### Part II: run the optimization in the programming way
```
algo = ExactEigensolver(algo_input.qubit_op, k=1, aux_operators=[])
result = algo.run()
x = exactcover.sample_most_likely(len(list_of_subsets), result['eigvecs'][0])
ising_sol = exactcover.get_solution(x)
np.testing.assert_array_equal(ising_sol, [0, 1, 1, 0])
if exactcover.check_solution_satisfiability(ising_sol, list_of_subsets):
print("solution is", ising_sol)
else:
print("no solution is found")
```
### Part III: run the optimization with VQE
```
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'COBYLA'
}
var_form_cfg = {
'name': 'RYRZ',
'depth': 5
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg
}
backend = Aer.get_backend('statevector_simulator')
result = run_algorithm(params, algo_input, backend=backend)
x = exactcover.sample_most_likely(len(list_of_subsets), result['eigvecs'][0])
ising_sol = exactcover.get_solution(x)
if exactcover.check_solution_satisfiability(ising_sol, list_of_subsets):
print("solution is", ising_sol)
else:
print("no solution is found")
```
```
import os
print(os.getcwd())
def update_working_directory():
from pathlib import Path
p = Path(os.getcwd()).parents[0]
os.chdir(p)
print(p)
update_working_directory()
import dill
from src.utils.plots import get_info_params
from src.visualization.rl_plots_comparison import plot_comparison_evolution_reward, plot_comparison_evolution_steps
from src.visualization.plot_happiness import plot_comparison_evolution_happiness
list_methods = ['sarsa_1', 'sarsa_2']
info_params = get_info_params({
'method': list_methods
, "n_episodes": 1000
, 'nmax_steps': 1440
, "gamma": 0.95
})
evo_training__evo_avg_reward_per_step = {}
evo_training__evo_n_steps = {}
evo_training__evo_avg_happiness = {}
for method in list_methods:
print(method)
with open("models/{}__evo_training.pkl".format(method), "rb") as input_file:
evo_training = dill.load(input_file)
evo_training__evo_avg_reward_per_step[method] = evo_training['evo_avg_reward_per_step']
evo_training__evo_n_steps[method] = evo_training['evo_n_steps']
evo_training__evo_avg_happiness[method] = evo_training['evo_avg_happiness']
print('Reward')
plot_comparison_evolution_reward(evo_training__evo_avg_reward_per_step, info_params)
print('Steps')
nmax_steps = 1440  # matches the 'nmax_steps' value passed to get_info_params above
plot_comparison_evolution_steps(evo_training__evo_n_steps, nmax_steps, info_params)
print('Happiness')
plot_comparison_evolution_happiness(evo_training__evo_avg_happiness, info_params)
evolution_steps = [1,1,1,2,2,2,3,3,3,4,4,4,5,5,5]
evolution_steps
from src.utils.maths import get_avg_n_points
avg_n_points = get_avg_n_points(evolution_steps, n_points=5)
avg_n_points
n_points_batch = len(avg_n_points) - 2
n_points_batch
import numpy as np
n_points = 5
n_moving_points = int(np.ceil(len(evolution_steps) / n_points))
n_moving_points
for i_point_batch in range(n_points_batch):
print(evolution_steps[i_point_batch*n_moving_points:(i_point_batch+1)*n_moving_points])
list_points = evolution_steps
n_points = 5
def get_min_n_points(list_points, n_points):
n_moving_points = int(np.ceil(len(list_points) / n_points))
min_n_points = []
for i_point_batch in range(n_points):
# slice list_points (the argument), not the global evolution_steps
point_batch = list_points[i_point_batch*n_moving_points:(i_point_batch+1)*n_moving_points]
min_n_points.append(min(point_batch))
return [list_points[0]] + min_n_points + [list_points[-1]]
def get_max_n_points(list_points, n_points):
n_moving_points = int(np.ceil(len(list_points) / n_points))
max_n_points = []
for i_point_batch in range(n_points):
point_batch = list_points[i_point_batch*n_moving_points:(i_point_batch+1)*n_moving_points]
max_n_points.append(max(point_batch))
return [list_points[0]] + max_n_points + [list_points[-1]]
get_min_n_points(list_points, n_points)
get_max_n_points(list_points, n_points)
```
# Overlays
Spatial overlays allow you to compare two GeoDataFrames containing polygon or multipolygon geometries
and create a new GeoDataFrame with the new geometries representing the spatial combination *and*
merged properties. This allows you to answer questions like
> What are the demographics of the census tracts within 1000 ft of the highway?
The basic idea is demonstrated by the graphic below, but keep in mind that overlays operate at the dataframe level, not on individual geometries, and the properties from both are retained.

Now we can load up two GeoDataFrames containing (multi)polygon geometries...
```
%matplotlib inline
from shapely.geometry import Point
from geopandas import datasets, GeoDataFrame, read_file
from geopandas.tools import overlay
# NYC Boros
zippath = datasets.get_path('nybb')
polydf = read_file(zippath)
# Generate some circles
b = [int(x) for x in polydf.total_bounds]
N = 10
polydf2 = GeoDataFrame([
{'geometry': Point(x, y).buffer(10000), 'value1': x + y, 'value2': x - y}
for x, y in zip(range(b[0], b[2], int((b[2] - b[0]) / N)),
range(b[1], b[3], int((b[3] - b[1]) / N)))])
```
The first dataframe contains multipolygons of the NYC boros
```
polydf.plot()
```
And the second GeoDataFrame is a sequentially generated set of circles in the same geographic space. We'll plot these with a [different color palette](https://matplotlib.org/examples/color/colormaps_reference.html).
```
polydf2.plot(cmap='tab20b')
```
The `geopandas.tools.overlay` function takes three arguments:
* df1
* df2
* how
Where `how` can be one of:
['intersection',
'union',
'identity',
'symmetric_difference',
'difference']
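To build intuition for the set operations behind these modes before running them on real GeoDataFrames, here is a pure-Python sketch using two overlapping axis-aligned squares; the same area identities hold for polygon overlays.

```python
# Areas produced by each overlay `how` mode, sketched for two
# axis-aligned rectangles (x0, y0, x1, y1).
def rect_area(r):
    (x0, y0, x1, y1) = r
    return max(0, x1 - x0) * max(0, y1 - y0)

def intersection(r1, r2):
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = r1, r2
    return (max(ax0, bx0), max(ay0, by0), min(ax1, bx1), min(ay1, by1))

A = (0, 0, 2, 2)          # square A
B = (1, 1, 3, 3)          # square B, overlapping A
inter = rect_area(intersection(A, B))          # area in both A and B
union = rect_area(A) + rect_area(B) - inter    # area in either
diff = rect_area(A) - inter                    # in A but not B
sym = union - inter                            # in exactly one of them
print(inter, union, diff, sym)  # 1 7 3 6
```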
So let's identify the areas (and attributes) where both dataframes intersect using the `overlay` method.
```
newdf = polydf.overlay(polydf2, how="intersection")
newdf.plot(cmap='tab20b')
```
And take a look at the attributes; we see that the attributes from both of the original GeoDataFrames are retained.
```
polydf.head()
polydf2.head()
newdf.head()
```
Now let's look at the other `how` operations:
```
newdf = polydf.overlay(polydf2, how="union")
newdf.plot(cmap='tab20b')
newdf = polydf.overlay(polydf2, how="identity")
newdf.plot(cmap='tab20b')
newdf = polydf.overlay(polydf2, how="symmetric_difference")
newdf.plot(cmap='tab20b')
newdf = polydf.overlay(polydf2, how="difference")
newdf.plot(cmap='tab20b')
```
## Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will:
* Use SFrames to do some feature engineering.
* Transform categorical variables into binary variables.
* Write a function to compute the number of misclassified examples in an intermediate node.
* Write a function to find the best feature to split on.
* Build a binary decision tree from scratch.
* Make predictions using the decision tree.
* Evaluate the accuracy of the decision tree.
* Visualize the decision at the root node.
**Important Note**: In this assignment, we will focus on building decision trees where the data contain **only binary (0 or 1) features**. This allows us to avoid dealing with:
* Multiple intermediate nodes in a split
* The thresholding issues of real-valued features.
This assignment **may be challenging**, so brace yourself :)
# Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
```
import graphlab
```
# Load the lending club dataset
We will be using the same [LendingClub](https://www.lendingclub.com/) dataset as in the previous assignment.
```
loans = graphlab.SFrame('lending-club-data.gl/')
```
Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
```
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
```
Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
features:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
Since we are building a binary decision tree, we will have to convert these categorical features to a binary representation in a subsequent section using 1-hot encoding.
```
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
```
Let's explore what the dataset looks like.
```
loans
```
## Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use `seed=1` so everyone gets the same results.
```
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
```
**Note:** There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in "[Learning from Imbalanced Data](http://www.ele.uri.edu/faculty/he/PDFfiles/ImbalancedLearning.pdf)" by Haibo He and Edwardo A. Garcia, *IEEE Transactions on Knowledge and Data Engineering* **21**(9) (June 26, 2009), p. 1263–1284. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
## Transform categorical data into binary features
In this assignment, we will implement **binary decision trees** (decision trees for binary features, a specific case of categorical variables taking on two values, e.g., true/false). Since all of our features are currently categorical features, we want to turn them into binary features.
For instance, the **home_ownership** feature represents the home ownership status of the loanee, which is either `own`, `mortgage` or `rent`. For example, if a data point has the feature
```
{'home_ownership': 'RENT'}
```
we want to turn this into three features:
```
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
```
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
```
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
```
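For intuition, the same one-hot transform can be sketched with plain Python dicts (a hypothetical two-row dataset; the GraphLab code above does the equivalent per feature column):

```python
# Plain-Python sketch of one-hot encoding a single categorical feature.
rows = [{'home_ownership': 'RENT'}, {'home_ownership': 'OWN'}]
categories = ['OWN', 'MORTGAGE', 'RENT']
encoded = [{'home_ownership = ' + c: int(r['home_ownership'] == c)
            for c in categories} for r in rows]
print(encoded[0]['home_ownership = RENT'])  # 1
print(encoded[0]['home_ownership = OWN'])   # 0
```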
Let's see what the feature columns look like now:
```
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
print "Number of features (after binarizing categorical variables) = %s" % len(features)
```
Let's explore what one of these columns looks like:
```
loans_data['grade.A']
```
This column is set to 1 if the loan grade is A and 0 otherwise.
**Checkpoint:** Make sure the following answers match up.
```
print "Total number of grade.A loans : %s" % loans_data['grade.A'].sum()
print "Expected answer : 6422"
```
## Train-test split
We split the data into a train test split with 80% of the data in the training set and 20% of the data in the test set. We use `seed=1` so that everyone gets the same result.
```
train_data, test_data = loans_data.random_split(.8, seed=1)
```
# Decision tree implementation
In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.
## Function to count number of mistakes while predicting majority class
Recall from the lecture that prediction at an intermediate node works by predicting the **majority class** for all data points that belong to this node.
Now, we will write a function that calculates the number of **misclassified examples** when predicting the **majority class**. This will be used to help determine which feature is the best to split on at a given node of the tree.
**Note**: Keep in mind that in order to compute the number of mistakes for a majority classifier, we only need the label (y values) of the data points in the node.
** Steps to follow **:
* ** Step 1:** Calculate the number of safe loans and risky loans.
* ** Step 2:** Since we are assuming majority class prediction, all the data points that are **not** in the majority class are considered **mistakes**.
* ** Step 3:** Return the number of **mistakes**.
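As a toy illustration of these steps in plain Python (hypothetical labels, no SFrame):

```python
# 3 safe (+1) and 2 risky (-1) loans in this node.
labels = [+1, +1, +1, -1, -1]
num_safe = labels.count(+1)
num_risky = labels.count(-1)
# The majority classifier predicts the larger class (+1 here) and
# therefore errs on every data point in the minority class.
mistakes = min(num_safe, num_risky)
print(mistakes)  # 2
```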
Now, let us write the function `intermediate_node_num_mistakes` which computes the number of misclassified examples of an intermediate node given the set of labels (y values) of the data points contained in the node. Fill in the places where you find `## YOUR CODE HERE`. There are **three** places in this function for you to fill in.
```
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
safes = labels_in_node[labels_in_node == +1].size()
# Count the number of -1's (risky loans)
riskys = labels_in_node[labels_in_node == -1].size()
# Return the number of mistakes that the majority classifier makes.
return min(safes, riskys)
```
Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your `intermediate_node_num_mistakes` function, run the following code until you get a **Test passed!**, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
```
# Test case 1
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 1 failed... try again!'
# Test case 2
example_labels = graphlab.SArray([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 2 failed... try again!'
# Test case 3
example_labels = graphlab.SArray([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 3 failed... try again!'
```
## Function to pick best feature to split on
The function **best_splitting_feature** takes 3 arguments:
1. The data (SFrame of data which includes all of the feature columns and label column)
2. The features to consider for splits (a list of strings of column names to consider for splits)
3. The name of the target/label column (string)
The function will loop through the list of possible features, and consider splitting on each of them. It will calculate the classification error of each split and return the feature that had the smallest classification error when split on.
Recall that the **classification error** is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Follow these steps:
* **Step 1:** Loop over each feature in the feature list
* **Step 2:** Within the loop, split the data into two groups: one group where all of the data has feature value 0 or False (we will call this the **left** split), and one group where all of the data has feature value 1 or True (we will call this the **right** split). Make sure the **left** split corresponds with 0 and the **right** split corresponds with 1 to ensure your implementation fits with our implementation of the tree building process.
* **Step 3:** Calculate the number of misclassified examples in both groups of data and use the above formula to compute the **classification error**.
* **Step 4:** If the computed error is smaller than the best error found so far, store this **feature and its error**.
This may seem like a lot, but we have provided pseudocode in the comments in order to help you implement the function correctly.
**Note:** Remember that since we are only dealing with binary features, we do not have to consider thresholds for real-valued features. This makes the implementation of this function much easier.
Fill in the places where you find `## YOUR CODE HERE`. There are **five** places in this function for you to fill in.
```
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_error = error
best_feature = feature
return best_feature # Return the best feature we found
```
To test your `best_splitting_feature` function, run the following code:
```
if best_splitting_feature(train_data, features, 'safe_loans') == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
```
## Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'splitting_feature' : The feature that this node splits on.
}
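For concreteness, here is a hypothetical depth-1 tree in this dictionary format, with a hand-rolled traversal of the kind the `classify` function below performs (the feature name `'grade.A'` is illustrative):

```python
# Hypothetical depth-1 tree: split once on 'grade.A', predict +1 (safe)
# on the right branch and -1 (risky) on the left.
toy_tree = {
    'is_leaf': False,
    'prediction': None,
    'splitting_feature': 'grade.A',
    'left':  {'is_leaf': True, 'prediction': -1,
              'splitting_feature': None, 'left': None, 'right': None},
    'right': {'is_leaf': True, 'prediction': +1,
              'splitting_feature': None, 'left': None, 'right': None},
}

# Walking the dictionary by hand for a data point with grade.A == 1:
x = {'grade.A': 1}
node = toy_tree
while not node['is_leaf']:
    node = node['right'] if x[node['splitting_feature']] == 1 else node['left']
print(node['prediction'])  # 1
```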
First, we will write a function that creates a leaf node given a set of target values. Fill in the places where you find `## YOUR CODE HERE`. There are **three** places in this function for you to fill in.
```
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True} ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = +1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
```
We have provided a function that learns the decision tree recursively and implements 3 stopping conditions:
1. **Stopping condition 1:** All data points in a node are from the same class.
2. **Stopping condition 2:** No more features to split on.
3. **Additional stopping condition:** In addition to the above two stopping conditions covered in lecture, in this assignment we will also consider a stopping condition based on the **max_depth** of the tree. By not letting the tree grow too deep, we will save computational effort in the learning process.
Now, we will write down the skeleton of the learning algorithm. Fill in the places where you find `## YOUR CODE HERE`. There are **seven** places in this function for you to fill in.
```
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(data[target]) == 0: ## YOUR CODE HERE
print "Stopping condition 1 reached."
# If no mistakes at current node, make current node a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
if len(remaining_features) == 0: ## YOUR CODE HERE
print "Stopping condition 2 reached."
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth: ## YOUR CODE HERE
print "Reached maximum depth. Stopping for now."
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1] ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target])
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
```
Here is a recursive function to count the nodes in your tree:
```
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
```
Run the following test code to check your implementation. Make sure you get **'Test passed'** before proceeding.
```
small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there : 13'
```
## Build the tree!
Now that all the tests are passing, we will train a tree model on the **train_data**. Limit the depth to 6 (**max_depth = 6**) to make sure the algorithm doesn't run for too long. Call this tree **my_decision_tree**.
**Warning**: This code block may take 1-2 minutes to learn.
```
# Make sure to cap the depth at 6 by using max_depth = 6
my_decision_tree = decision_tree_create(train_data, features, target, current_depth=0, max_depth=6)
```
## Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function `classify`, which takes in a learned `tree` and a test point `x` to classify. We include an option `annotate` that describes the prediction path when set to `True`.
Fill in the places where you find `## YOUR CODE HERE`. There is **one** place in this function for you to fill in.
```
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
```
Now, let's consider the first example of the test set and see what `my_decision_tree` model predicts for this data point.
```
test_data[0]
print 'Predicted class: %s ' % classify(my_decision_tree, test_data[0])
```
Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class:
```
classify(my_decision_tree, test_data[0], annotate=True)
```
** Quiz Question:** What was the feature that **my_decision_tree** first split on while making the prediction for test_data[0]?
** Quiz Question:** What was the first feature that led to a right split of test_data[0]?
** Quiz Question:** What was the last feature split on before reaching a leaf node for test_data[0]?
## Evaluating your decision tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the **classification error** is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Now, write a function called `evaluate_classification_error` that takes in as input:
1. `tree` (as described above)
2. `data` (an SFrame)
3. `target` (a string - the name of the target/label column)
This function should calculate a prediction (class label) for each row in `data` using the decision `tree` and return the classification error computed using the above formula. Fill in the places where you find `## YOUR CODE HERE`. There is **one** place in this function for you to fill in.
```
def evaluate_classification_error(tree, data, target):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
temp = prediction - data[target]
mistakes = temp.nnz()
total = temp.size()
error = float(mistakes) / total
return error
```
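If SFrame is not available, the same error computation can be sketched with plain Python lists; the predictions and labels below are made up for illustration:

```python
def classification_error(predictions, labels):
    # error = (# mistakes) / (# total examples)
    mistakes = sum(1 for p, y in zip(predictions, labels) if p != y)
    return float(mistakes) / len(labels)

# One mistake out of four examples:
print(classification_error([+1, -1, +1, +1], [+1, -1, -1, +1]))  # 0.25
```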
Now, let's use this function to evaluate the classification error on the test set.
```
evaluate_classification_error(my_decision_tree, test_data, target)
```
**Quiz Question:** Rounded to 2nd decimal point, what is the classification error of **my_decision_tree** on the **test_data**?
## Printing out a decision stump
As discussed in the lecture, we can print out a single decision stump (printing out the entire tree is left as an exercise to the curious reader).
```
def print_stump(tree, name = 'root'):
    split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
    if split_name is None:
        print("(leaf, label: %s)" % tree['prediction'])
        return None
    split_feature, split_value = split_name.split('.')
    print('                       %s' % name)
    print('         |---------------|----------------|')
    print('         |                                |')
    print('         |                                |')
    print('         |                                |')
    print('  [{0} == 0]               [{0} == 1]    '.format(split_name))
    print('         |                                |')
    print('         |                                |')
    print('         |                                |')
    print('    (%s)                         (%s)'
          % (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
             ('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree')))
print_stump(my_decision_tree)
```
**Quiz Question:** What is the feature that is used for the split at the root node?
### Exploring the intermediate left subtree
The tree is a recursive dictionary, so we do have access to all the nodes! We can use
* `my_decision_tree['left']` to go left
* `my_decision_tree['right']` to go right
```
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
```
### Exploring the left subtree of the left subtree
```
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
```
**Quiz Question:** What is the path of the **first 3 feature splits** considered along the **left-most** branch of **my_decision_tree**?
**Quiz Question:** What is the path of the **first 3 feature splits** considered along the **right-most** branch of **my_decision_tree**?
```
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['right']['right'], my_decision_tree['right']['splitting_feature'])
```
## 2 - Introductory Example - 1-Axis tracker with cumulative sky
Method <b>Gencumsky</b> has been modified to divide the yearly cumulative sky into various skies, each one representing the cumulative irradiance for the hours at which the tracker is at a certain angle. This makes the simulation much faster: for a tracker that moves between +45 and -45 degrees limit angle, if only positions every 5 degrees are considered (45, 40, 35, ..., -40, -45), then only 18 skies (and 18 simulations) need to be run for the whole year.

This procedure was presented in:
Ayala Pelaez S, Deline C, Greenberg P, Stein JS, Kostuk RK. Model and validation of single-axis tracking with bifacial PV. IEEE J Photovoltaics. 2019;9(3):715–21. https://ieeexplore.ieee.org/document/8644027 and https://www.nrel.gov/docs/fy19osti/72039.pdf (pre-print, conference version)
### Steps:
<ol>
<li> <a href='#step1'> Create a folder for your simulation, and load bifacial_radiance </a></li>
<li> <a href='#step2'> Create a Radiance Object, set Albedo and Download Weather Files </a></li>
<ul> (VERY SIMILAR TO FIXED TILT EXAMPLE UNTIL HERE) </ul>
<li> <a href='#step3'> Set Tracking Angles </a></li>
<li> <a href='#step4'> Generate the Sky </a></li>
<li> <a href='#step5'> Define a Module type </a></li>
<li> <a href='#step6'> Create the scene </a></li>
<li> <a href='#step7'> Combine Ground, Sky and Scene Objects </a></li>
<li> <a href='#step8'> Analyze and get results </a></li>
<li> <a href='#step9'> Clean Results </a></li>
</ol>
And finally: <ul> <a href='#condensed'> Condensed instructions </a></ul>
<a id='step1'></a>
## 1. Create a folder for your simulation, and load bifacial_radiance
First let's set the folder where the simulation will be saved. By default, this is the TEMP folder in the bifacial_radiance distribution.
The lines below find the location of the folder relative to this Jupyter Journal. You can alternatively point to an empty directory (a folder-selection GUI will open) or specify any other directory on your computer, for example:
#### testfolder = r'C:\Users\sayala\Documents\RadianceScenes\Tutorials\Journal2'
```
import os
from pathlib import Path
testfolder = Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_02'
# Another option using relative address; for some operating systems you might need '/' instead of '\'
# testfolder = os.path.abspath(r'..\..\bifacial_radiance\TEMP')
print ("Your simulation will be stored in %s" % testfolder)
if not os.path.exists(testfolder):
os.makedirs(testfolder)
```
This will load bifacial_radiance and other libraries from python that will be useful for this Jupyter Journal:
```
from bifacial_radiance import *
import numpy as np
```
<a id='step2'></a>
## 2. Create a Radiance Object, Set Albedo, and Download and Load Weather File
These are all repeated steps from Tutorial 1, so condensing:
```
# Create a RadianceObj 'object' named bifacial_example. no whitespace allowed
demo = RadianceObj('tutorial_2', path = str(testfolder))
albedo = 0.25
demo.setGround(albedo)
# Pull in meteorological data using pyEPW for any global lat/lon
epwfile = demo.getEPW(lat = 37.5, lon = -77.6) # This location corresponds to Richmond, VA.
# Read in the weather data pulled in above.
metdata = demo.readWeatherFile(weatherFile = epwfile)
```
<a id='step3'></a>
## TRACKING Workflow
Until now, all the steps looked the same from Tutorial 1. The following section follows similar steps, but the functions are specific for working with single axis tracking.
## 3. Set Tracking Angles
This function will read the weather file, and based on the sun position it will calculate the angle the tracker should be at for each hour. It will create metdata files for each of the tracker angles considered.
```
limit_angle = 5 # tracker rotation limit angle. Setting it ridiculously small so this runs faster.
angledelta = 5 # sampling between the limit angles.
backtrack = True
gcr = 0.33
cumulativesky = True # This is important for this example!
trackerdict = demo.set1axis(metdata = metdata, limit_angle = limit_angle, backtrack = backtrack,
gcr = gcr, cumulativesky = cumulativesky)
```
Setting backtrack to True is important in this step, so the trackers correct for self-shading when following the sun at high zenith angles.
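The geometric idea behind backtracking can be sketched independently of bifacial_radiance. This is the standard single-axis backtracking correction from the tracking literature, not the library's internal code, and is shown here only to illustrate what `backtrack = True` does:

```python
import math

def backtrack_angle(theta_ideal_deg, gcr):
    """Correct an ideal (true-tracking) rotation so neighboring rows do not
    shade each other. theta_ideal_deg: true-tracking rotation in degrees;
    gcr: ground coverage ratio (collector width / row pitch)."""
    theta = math.radians(theta_ideal_deg)
    # Rows only shade each other when |cos(theta)| < gcr.
    ratio = min(1.0, abs(math.cos(theta)) / gcr)
    correction = math.acos(ratio)  # 0 when no shading would occur
    # Rotate back toward flat, in the direction opposite the tilt.
    return math.degrees(theta - math.copysign(correction, theta))

print(backtrack_angle(60, 0.33))  # cos(60 deg) = 0.5 > gcr: no correction
print(backtrack_angle(90, 0.33))  # sun at the horizon: fully backtracked to ~0 (flat)
```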
<a id='step4'></a>
## 4. Generate the Sky
This will create the skies for each sub-metdata file created by set1axis.
```
trackerdict = demo.genCumSky1axis()
```
This is what one of the cumulative sky .cal files associated with each generated .rad file looks like:

Each of the values corresponds to the cumulative irradiance of one of those patches, for when the tracker is at that specific angle through the year.
<a id='step5'></a>
## 5. Define the Module type
Let's make a more interesting module in this example. Let's do 2-up configuration in portrait, with the modules rotating around a 10 centimeter round torque tube. Let's add a gap between the two modules in 2-UP of 10 centimeters, as well as gap between the torque tube and the modules of 5 centimeters. Along the row, the modules are separated only 2 centimeters for this example. The torquetube is painted Metal_Grey in this example (it's one of the materials available in Ground.rad, and it is 40% reflective).
Note that starting with bifacial_radiance version 0.4.0, the module object has a new geometry generation function `addTorquetube`. The old way of passing a properly formatted dictionary as a keyword argument will still work too.
```
x = 0.984 # meters
y = 1.7 # meters
moduletype = 'test-module'
numpanels = 2
zgap = 0.05
ygap = 0.10
xgap = 0.02
module = demo.makeModule(name=moduletype, x=x, y=y,xgap=xgap, ygap=ygap, zgap=zgap,
numpanels=numpanels)
module.addTorquetube(diameter=0.1, material='Metal_Grey', tubetype='round') # New torquetube generation function
print()
print(module)
print()
print(module.torquetube)
```
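As a quick sanity check on the geometry, the collector width of this 2-up module follows directly from the parameters above:

```python
numpanels = 2
y = 1.7      # module length along the collector width, meters
ygap = 0.10  # gap between the two modules, meters

# Collector width: stacked module lengths plus the gaps between them.
collector_width = numpanels * y + (numpanels - 1) * ygap
print(collector_width)  # ~3.5 m
```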
<a id='step6'></a>
## 6. Make the Scene
The scene Dictionary specifies the information of the scene. For tracking, different input parameters are expected in the dictionary, such as number of rows, number of modules per row, row azimuth, hub_height (distance between the axis of rotation of the modules and the ground).
```
hub_height = 2.3
sceneDict = {'gcr': gcr,'hub_height':hub_height, 'nMods': 20, 'nRows': 7}
```
To make the scene we have to create a Scene Object through the method makeScene1axis. This method will create a .rad file in the objects folder, with the parameters specified in sceneDict and the module created above.
```
trackerdict = demo.makeScene1axis(trackerdict = trackerdict, module = module, sceneDict = sceneDict)
```
<a id='step7'></a>
## 7. Combine Ground, Sky and Scene Objects
makeOct1axis joins the sky.rad file, ground.rad file, and the geometry.rad files created in makeScene.
```
trackerdict = demo.makeOct1axis(trackerdict = trackerdict)
```
<a id='step8'></a>
## 8. Analyze and get results
We can choose to analyze any module in the Scene we have created. The default, if no modWanted or rowWanted is passed, is to sample the center module of the center row.
For this example we will sample row 2, module 9.
```
modWanted = 9
rowWanted = 2
customname = '_Row_2_Module_09' # This is useful if we want to do various analysis.
trackerdict = demo.analysis1axis(trackerdict, modWanted=9, rowWanted = 2, customname=customname)
```
Let's look at the results with more detail. The analysis1axis routine created individual result .csv files for each angle, as well as one cumulative result .csv where the irradiance is added by sensor.
```
results = load.read1Result('cumulative_results__Row_2_Module_09.csv')
results
```
There are various things to notice:
I. The materials column has a specific format that will tell you if you are sampling the correct module:
a{ModWanted}.{rowWanted}.a{numPanel}.{moduletype}.material_key
* Since for this journal numPanels = 2, numPanel can either be 0 or 1, for the East-most and West-most module in the collector width.
* numPanel, ModWanted and RowWanted are indexed starting at 0 in the results.
* material_key is from the surface generated inside radiance. Usually it is .6457 for the top surface of the module and .2310 for the bottom one.
II. Sensors always sample in the same direction. For this N-S aligned tracker, that is East-most to West-most. For this 2-up portrait tracker, whose collector width is 3.5 meters (20 modules x 7 rows, sampling module 9 on row 2), the East-to-West sampling goes from 22.6 m to 19.81 m, a span of 2.79 m. It is not exactly 3.5 m because the sensors are spaced evenly through the collector width (CW):

III. When there is a ygap in the collector width (2-UP or more configuration), some of the sensors might end up sampling the torque tube, or the sky. You can see that in the materials column. This can also happen if the number of sensors is quite high: the edges of the module might be sampled instead. For this reason, these results must be cleaned before calculating bifacial gain. For more advanced simulations, make sure you clean each result csv file individually. We provide some options in load.py, but some are very use-specific, so you might have to develop your own cleaning tool (or let us know on issues!)
<div class="alert alert-warning">
Important: If you have torquetubes and y-gap values, make sure you clean your results.
</div>
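The even sensor spacing described in point II can be reproduced with a little arithmetic. The sensor count of 9 below is an assumption made for this sketch, not a value stated in this tutorial:

```python
collector_width = 3.5   # meters (2-up portrait module)
num_sensors = 9         # assumed number of sensors across the collector width

# Evenly spaced sensors leave a margin of one step at each module edge,
# so the collector width is divided into (num_sensors + 1) intervals.
spacing = collector_width / (num_sensors + 1)
span = spacing * (num_sensors - 1)  # distance from first to last sensor
print(spacing, span)
```

With these assumptions the first-to-last sensor span is 2.8 m, which matches the 2.79 m quoted above up to rounding.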
<a id='step9'></a>
## 9. Clean Results
We have two options for cleaning results. The simplest one is <b>load.cleanResult</b>, but there is also a deepClean for specific purposes.
cleanResult will find materials that should not have values and set them to NaN.
```
results_clean = load.cleanResult(results)
results_clean
```
These are the total irradiance values over all the hours of the year that the module at each sampling point will receive. Dividing the back irradiance average by the front irradiance average will give us the bifacial gain for the year:

Assuming that our module from Prism Solar has a bifaciality factor (rear-to-front performance) of 90%, our <u>bifacial gain</u> is:
```
bifacialityfactor = 0.9
print('Annual bifacial ratio: %0.3f ' %( np.nanmean(results_clean.Wm2Back) * bifacialityfactor / np.nanmean(results_clean.Wm2Front)) )
```
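The bifacial-gain arithmetic is easy to check on toy numbers, including the NaN entries that cleaning introduces. The irradiance values below are made up for illustration:

```python
import math

front = [1500.0, 1520.0, 1510.0]      # hypothetical cumulative front irradiance, Wh/m2
back  = [160.0, float('nan'), 150.0]  # one sensor cleaned out (e.g. it hit the torque tube)
bifacialityfactor = 0.9

def nanmean(values):
    # Average while ignoring NaN entries, like np.nanmean
    good = [v for v in values if not math.isnan(v)]
    return sum(good) / len(good)

gain = nanmean(back) * bifacialityfactor / nanmean(front)
print('Annual bifacial ratio: %0.3f' % gain)
```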
<a id='condensed'></a>
## CONDENSED VERSION
Everything we've done so far in super short condensed version:
```
albedo = 0.25
lat = 37.5
lon = -77.6
nMods = 20
nRows = 7
hub_height = 2.3
gcr = 0.33
moduletype = 'test-module' # this must already exist since we are not calling makeModule in this CONDENSED example.
#testfolder = r'C:\Users\sayala\Documents\RadianceScenes\Tutorials\Journal2'
limit_angle = 5
angledelta = 5
backtrack = True
gcr = gcr
modWanted = 9
rowWanted = 2
cumulativesky = True
import bifacial_radiance
demo = bifacial_radiance.RadianceObj('test')
demo.setGround(albedo)
epwfile = demo.getEPW(lat, lon)
metdata = demo.readWeatherFile(epwfile)
demo.set1axis(limit_angle=limit_angle, backtrack=backtrack, gcr=gcr, cumulativesky=cumulativesky)
demo.genCumSky1axis()
sceneDict = {'gcr': gcr,'hub_height':hub_height, 'nMods': nMods, 'nRows': nRows} # orientation deprecated on v.0.2.4.
demo.makeScene1axis(module=moduletype, sceneDict=sceneDict)
demo.makeOct1axis()
demo.analysis1axis(modWanted=modWanted, rowWanted=rowWanted);
```
### Helping functions
```
# Function for tagging residues as LIP/non-LIP
# Overwrites entries in the second dataset with LIP flag from the first dataset
def LIP_tag(ds_original, ds_residues):
#For every protein we take the information of where LIP residue are
for idx, row in ds_original.iterrows():
# Bind information to correct variables
pdb, chain, start, end = (row[0:4])
# Get the correct slice of data which will be edited
sliced = ((ds_residues['PDB_ID'] == pdb)
& (ds_residues['CHAIN_ID'] == chain)
& (ds_residues['RES_ID'] <= end)
& (ds_residues['RES_ID'] >= start))
#Now we set to 1 all the residue whose features are the one desired
ds_residues.loc[sliced, 'LIP'] = 1
ds_residues.loc[sliced, 'LIP_SCORE'] = 1
def down_sampling(df, number_of_samples, seed = 42):
noLIP_index = set(df[df['LIP'] == 0].index)
indexes = set(np.arange(0, np.shape(df)[0]))
sample = random.sample(noLIP_index, len(noLIP_index) - number_of_samples)
new_index = indexes.difference(sample)
df1 = df.iloc[list(new_index), :]
return df1
```
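The heart of `LIP_tag` is just an interval-membership test on (protein, chain, residue-range) triples. A minimal pure-Python sketch, with made-up residue data, behaves the same way:

```python
# Hypothetical LIP annotation: chain 'A' of protein '1abc', residues 10..12 are LIP.
annotation = {'pdb': '1abc', 'chain': 'A', 'start': 10, 'end': 12}

# [pdb_id, chain_id, res_id, LIP] rows standing in for the residues dataframe.
residues = [
    ['1abc', 'A', 9,  0],  # outside the range: stays 0
    ['1abc', 'A', 10, 0],  # inside: becomes 1
    ['1abc', 'A', 12, 0],  # inside: becomes 1
    ['1abc', 'B', 11, 0],  # wrong chain: stays 0
]

for r in residues:
    pdb_id, chain_id, res_id = r[0], r[1], r[2]
    if (pdb_id == annotation['pdb'] and chain_id == annotation['chain']
            and annotation['start'] <= res_id <= annotation['end']):
        r[3] = 1

print([r[3] for r in residues])
```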
### Importing libraries
```
# Import default libraries
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import time
import random
import warnings
# Import Biopython utils
from Bio.PDB import PDBList, is_aa, PDBIO
from Bio.PDB.PDBParser import PDBParser
from Bio.SeqUtils import IUPACData
from Bio.PDB.PDBIO import Select
```
### Importing original dataset (LIP tagged sequences)
```
# Read original dataset (lips_dataset)
ds_original = pd.read_csv('./datasets/lips_dataset.txt', sep='\t')
# Define new dataset
ds_original.head()
```
### Downloading proteins (automatically skips a protein if it has already been downloaded)
```
# Select all proteins (pdb column)
pdb_ids = ds_original.pdb.unique()
# Define pdb files dir
pdb_dir = './pdb_files'
# Define pdb file fetching class
pdbl = PDBList()
# Fetch every protein
for pdb_id in pdb_ids:
# Execute fetching of the protein (pdb file)
pdbl.retrieve_pdb_file(pdb_id, pdir=pdb_dir, file_format='pdb')
```
### Creating residues dataset
```
# Select all proteins (pdb column)
pdb_ids = ds_original.pdb.unique()
# Define pdb files dir
pdb_dir = './pdb_files'
# Define pdb file fetching class
pdbl = PDBList()
# Define a set containing (pdb_id, chain_id)
valid_chains = set([(row['pdb'], row['chain']) for idx, row in ds_original.iterrows()])
# New list for residues
ds_residues = list()
# Loop through every protein
for pdb_id in ds_original.pdb.unique():
# Get structure of the protein
structure = PDBParser(QUIET=True).get_structure(pdb_id, pdb_dir + '/pdb{}.ent'.format(pdb_id))
# We select only the 0-th model
model = structure[0]
# Loop through every model's chain
for chain in model:
# Skip if the chain is not valid
if (pdb_id, chain.id) not in valid_chains:
continue
for residue in chain:
# Do not take into account non-aminoacidic residues (e.g. water molecules)
if(not is_aa(residue)):
continue
# Add an entry to the residues list
ds_residues.append((pdb_id, model.id, chain.id, residue.id[1], residue.get_resname(), 0, 0))
# Turn list into dataframe
ds_residues = pd.DataFrame(ds_residues)
# Define dataset column names
ds_residues.columns = ['PDB_ID', 'MODEL_ID', 'CHAIN_ID', 'RES_ID', 'RES_NAME', 'LIP_SCORE', 'LIP']
# Show some info about the dataset
print("Numbers of proteins: {}".format(np.shape(ds_original)[0]))
print("Numbers of res: {}".format(np.shape(ds_residues)[0]))
# Show first rows
ds_residues.head()
```
### Tagging LIP residues
```
# Launch tagging algorithm (before this call, 0 residues are positively tagged)
LIP_tag(ds_original, ds_residues)
# Check that the number of residues positively LIP-tagged is higher than 0
assert any(ds_residues['LIP'] == 1)
# Show first positively tagged LIP residues
ds_residues.head(100)
```
### Check dataset balance
We check whether we have the same number of LIP and non-LIP tagged residues.
```
# Compute the number of LIP tagged residues
print('Number of LIP tagged residues: {}'.format(ds_residues[ds_residues.LIP == 1].shape[0]))
# Compute the number of non-LIP tagged residues
print('Number of non-LIP tagged residues: {}'.format(ds_residues[ds_residues.LIP == 0].shape[0]))
# Add plot
fig, ax = plt.subplots(1, 1)
# Add frequency plot
fig = plt.hist(ds_residues['LIP'])
```
```
import sys
import platform
print("Python version:\n", sys.version)
print ("Path to the python executable:\n", sys.executable)
```
# Biomedical Image Segmentation with U-Net
In this code example, we apply the U-Net architecture to segment brain tumors from raw MRI scans as shown below. With relatively little data we are able to train a U-Net model to accurately predict where tumors exist.
The Dice coefficient (the standard metric for the BraTS dataset used in the study) for our model is about 0.82-0.88. Menze et al. [reported](http://ieeexplore.ieee.org/document/6975210/) that expert neuroradiologists manually segmented these tumors with a cross-rater Dice score of 0.75-0.85, meaning that the model’s predictions are on par with what expert physicians have made.
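The Dice coefficient quoted above measures the overlap between a predicted segmentation mask and the ground truth. For flat binary masks it can be sketched as:

```python
def dice_coefficient(pred, truth):
    """2*|A intersect B| / (|A| + |B|) for flat binary masks (lists of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks overlap perfectly.
    return 2.0 * intersection / total if total else 1.0

print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ~ 0.667
```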
<img src="images/figure1.png">
Since its introduction two years ago, the [U-Net](https://arxiv.org/pdf/1505.04597.pdf) architecture has been used to create deep learning models for segmenting [nerves](https://github.com/jocicmarko/ultrasound-nerve-segmentation) in ultrasound images, [lungs](https://www.kaggle.com/c/data-science-bowl-2017#tutorial) in CT scans, and even [interference](https://github.com/jakeret/tf_unet) in radio telescopes.
## What is U-Net?
U-Net is designed like an [auto-encoder](https://en.wikipedia.org/wiki/Autoencoder). It has an encoding path (“contracting”) paired with a decoding path (“expanding”) which gives it the “U” shape. However, in contrast to the autoencoder, U-Net predicts a pixelwise segmentation map of the input image rather than classifying the input image as a whole. For each pixel in the original image, it asks the question: “To which class does this pixel belong?” This flexibility allows U-Net to predict different parts of the tumor simultaneously.
<img src="images/unet.png">
This module loads the data from `data.py`, creates a TensorFlow/Keras model from `model.py`, trains the model on the data, and then saves the best model.
```
import tensorflow as tf
import datetime
import time
import os
from IPython.display import Image
import numpy as np
import settings # Use the custom settings.py file for default parameters
# import onnxmltools
import sys; sys.argv=['']; del sys
# from model import load_model, get_callbacks, evaluate_model
from data import load_data
from model import unet
from argparser import args
if args.keras_api:
import keras as K
else:
from tensorflow import keras as K
print ("We are using Tensorflow version", tf.__version__,\
"with Intel(R) MKL", "enabled" if tf.pywrap_tensorflow.IsMklEnabled() else "disabled",)
print("Keras API version: {}".format(K.__version__))
onnx = False # Set whether we are exporting to ONNX model and using nGraph
```
For best CPU speed set the number of intra and inter threads to take advantage of multi-core systems.
See https://github.com/intel/mkl-dnn
```
# Set the multi-threading parameters for Tensorflow.
config = tf.ConfigProto(intra_op_parallelism_threads=args.num_threads,
inter_op_parallelism_threads=args.num_inter_threads)
sess = tf.Session(config=config)
K.backend.set_session(sess)
```
## Training Time!
The bulk of the training section can be broken down in 4 simple steps:
1. Load the training data
1. Define the model
3. Train the model on the data
4. Evaluate the best model
#### Step 1: Loading the data from the `HDF5` file
```
hdf5_filename = os.path.join(args.data_path, args.data_filename)
print ("Full path to the HDF5 file:", hdf5_filename)
imgs_train, msks_train, imgs_validation, msks_validation, imgs_testing, msks_testing = \
load_data(hdf5_filename, args.batch_size,[args.crop_dim, args.crop_dim])
np.random.seed(816)
```
#### Step 2: Define the model
```
print("-" * 30)
print("Creating and compiling model ...")
print("-" * 30)
unet_model = unet()
model = unet_model.create_model(imgs_train.shape, msks_train.shape)
model_filename, model_callbacks = unet_model.get_callbacks()
# # If there is a current saved file, then load weights and start from there.
# saved_model = os.path.join(args.output_path, args.inference_filename)
# if os.path.isfile(saved_model):
# model.load_weights(saved_model)
```
The code snippet below draws the model using Keras' built-in `plot_model`. Compare with the implementation of `model.py`
```
tf.keras.utils.plot_model(model,
to_file='images/model.png',
show_shapes=True,
show_layer_names=True,
rankdir='TB'
)
Image('images/model.png')
```
#### Step 3: Train the model on the data
```
print("-" * 30)
print("Fitting model with training data ...")
print("-" * 30)
print("Step 3, training the model started at {}".format(datetime.datetime.now()))
start_time = time.time()
if onnx:
for i in range(0,30):
print ("Processing batch:", i)
start_time = time.time()
model.train_on_batch(imgs_train[i*args.batch_size:args.batch_size*(i+1)-1], \
msks_train[i*args.batch_size:args.batch_size*(i+1)-1])
print ("Time for training on batch:", time.time() - start_time)
        # Convert the Keras model to ONNX and save it.
onnx_model = onnxmltools.convert_keras(model, target_opset=7)
onnxmltools.utils.save_model(onnx_model, 'output/unet_model_for_decathlon.onnx')
else:
history = model.fit(imgs_train, msks_train,
batch_size=args.batch_size,
epochs=args.epochs,
validation_data=(imgs_validation, msks_validation),
verbose=1, shuffle="batch",
callbacks=model_callbacks)
print("Total time elapsed for training = {} seconds".format(time.time() - start_time))
print("Training finished at {}".format(datetime.datetime.now()))
# Append training log
# with open("training.log","a+") as fp:
# fp.write("{}: {}\n".format(datetime.datetime.now(),
# history.history["val_dice_coef"]))
```
#### Step 4: Evaluate the best model
```
print("-" * 30)
print("Loading the best trained model ...")
print("-" * 30)
unet_model.evaluate_model(model_filename, imgs_testing, msks_testing)
```
## End: In this tutorial, you have learnt:
* What is the U-Net model
* Comparing training times - Tensorflow_MKLDNN vs Tensorflow (stock)
* How to tweak a series of environment variables to get better performance out of MKLDNN
* How to tweak a series of Tensorflow-related and neural-network specific parameters for better performance
`Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. SPDX-License-Identifier: EPL-2.0`
`Copyright (c) 2019 Intel Corporation`
# Calling RES with Python in SPARK
## Pre-Requisite
* Python 3.5 for Spark
## Initializing Python environment with ODM JAR files and ODM Model archive
* Create a Spark Session
* Initialize the Python environment
```
from io import StringIO
import requests
import json
import pandas as pd
#from pyspark.sql import SQLContext
import os
import os
cwd = os.getcwd()
cwd = "/home/spark/shared/user-libs/spark2"
print(cwd)
# Download Material for Rule Execution
!curl -o {cwd}/miniloan-xom.jar https://raw.githubusercontent.com/ODMDev/decisions-on-spark/master/data/miniloan/miniloan-xom.jar
!curl -o {cwd}/miniloan-ruleapp.jar https://raw.githubusercontent.com/ODMDev/decisions-on-spark/master/data/miniloan/miniloan-ruleapp.jar
# Download ODM Library
!curl -o {cwd}/j2ee_connector-1_5-fr.jar http://xxx.xxx.xxx.xxx:xxxxx/download/lib/ODM8920/j2ee_connector-1_5-fr.jar
!curl -o {cwd}/jrules-engine.jar http://xxx.xxx.xxx.xxx:xxxxx/download/lib/ODM8920/jrules-engine.jar
!curl -o {cwd}/jrules-res-execution.jar http://xxx.xxx.xxx.xxx:xxxxx/download/lib/ODM8920/jrules-res-execution-memory.jar
os.environ['PYSPARK_SUBMIT_ARGS'] = "--jars local:"+cwd+"/miniloan-ruleapp.jar,local:"+cwd+"/miniloan-xom.jar,local:"+cwd+"/jrules-engine.jar,local:"+cwd+"/j2ee_connector-1_5-fr.jar,local:"+cwd+"/jrules-res-execution.jar pyspark-shell"
#import pyspark # only run after findspark.init()
#from pyspark.sql import SparkSession
# Create a Spark Session
#sc = SparkSession.builder.getOrCreate()
sc
```
## Load Dataset
```
# Create a SParkSQL Context to load the data in a dataframe
from pyspark.sql import SQLContext
sql = SQLContext(sc)
new_decisions_pd = pd.read_csv("https://raw.githubusercontent.com/ODMDev/decisions-on-spark/master/data/miniloan/miniloan-decisions-defaultly-1K.csv")
request_df = sql.createDataFrame(new_decisions_pd)
request_df.printSchema()
request_df.show(10)
from pyspark.sql import Row
def CreateODMSession(sc):
if not hasattr(CreateODMSession, "fac"):
sc = SparkSession.builder.getOrCreate()
factoryConfig = sc._jvm.ilog.rules.res.session.IlrJ2SESessionFactory.createDefaultConfig()
xuConfig = factoryConfig.getXUConfig();
xuConfig.setLogAutoFlushEnabled(True);
xuConfig.getPersistenceConfig().setPersistenceType(sc._jvm.ilog.rules.res.session.config.IlrPersistenceType.MEMORY);
xuConfig.getManagedXOMPersistenceConfig().setPersistenceType(sc._jvm.ilog.rules.res.session.config.IlrPersistenceType.MEMORY);
CreateODMSession.fac=sc._jvm.ilog.rules.res.session.IlrJ2SESessionFactory(factoryConfig)
return CreateODMSession.fac
def execute (row):
sc = SparkSession.builder.getOrCreate()
factory=CreateODMSession(sc)
sessionRequest = factory.createRequest()
sessionRequest.setRulesetPath(sc._jvm.ilog.rules.res.model.IlrPath.parsePath("/miniloanruleapp/miniloanrules"))
# Ensure latest version of the ruleset is taken into account
sessionRequest.setForceUptodate(True)
# Set the input parameters for the execution of the rules
inputParameters = sc._jvm.java.util.HashMap()
borrower = sc._jvm.miniloan.Borrower(row.name, row.creditScore,row.income)
loan = sc._jvm.miniloan.Loan()
loan.setAmount(row.loanAmount)
loan.setDuration(row.monthDuration)
loan.setYearlyInterestRate(row.rate)
# Set parameters
inputParameters["loan"]=loan
inputParameters["borrower"]=borrower
sessionRequest.setInputParameters(inputParameters)
session = factory.createStatelessSession()
    # Perform ODM Execution
response = session.execute(sessionRequest)
col= response.getOutputParameters()
# for key in col:
# print (key, "corresponds to", col[key])
loanResult= response.getOutputParameters().get("loan")
return Row(isApproved=loanResult.isApproved(),firedRulesCount=col['ilog.rules.firedRulesCount'])
#execute("dd")
dfResult = request_df.rdd.map(execute).toDF()
#count= dfResult.count()
print("Execution finished")
#rddResult
# Count the nb of Loan approved
```
# We can query execution Results
```
dfResult.createOrReplaceTempView("loan")
sql = SQLContext(sc)
sql.sql("SELECT isApproved FROM loan").show()
```
# Basic Image Processing
#### I have taken an image 'sky.png' which is stored in the 'image-processing' directory, and initially I will find the three brightest points (not spots) in that image.
```
# Importing the required libraries
import numpy as np
import skimage.io
import skimage.color
import matplotlib.pyplot as plt
image = skimage.io.imread("sky.png") # Reading the image as 3D array
skimage.io.imshow(image)
# Converting to grayscale helps in analysis by reducing the dimensions from 3 for RGB to 2 for grayscale.
gray_image = skimage.color.rgb2gray(image)
skimage.io.imsave('sky_grayscale.png', arr = gray_image)
skimage.io.imshow(gray_image)
gray_image.shape
indices = np.argsort(-gray_image, axis = None) # getting an array of flat indices such that the corresponding
                                               # pixel values are sorted in decreasing order of brightness.
final = [np.unravel_index(indices[0], gray_image.shape)] # First element of sorted array indices is the brightest point i.e. P1
count = 1
for i in indices: # Loop to find P2 and P3
j, k = np.unravel_index(i, gray_image.shape) # this function returns the y and x coordinates in terms of pixel
flag = 0
    for v in final: # Loop to make sure that P1, P2 and P3 differ by more than 200 pixels in both coordinates
if abs(v[0]-j) > 200 and abs(v[1]-k) > 200:
flag += 1
if flag == len(final): # Verifying if all the coordinates inside final have been compared with (j, k)
final.append((j, k))
if len(final) == 3: # Exit the loop after finding the three brightest points
break
final
# coordinates inside final are in the form of (y, x) as opposed to the convention of (x, y). On fixing that
bright = [(i[1], i[0]) for i in final]
# In skimage module x coordinate increases as we move from left to right in an image and y coordinate increases on moving
#from top to bottom, that means origin (0, 0) is at top left corner. Consider this while trying to see these points in the image
bright
P1 = bright[0]
P2 = bright[1]
P3 = bright[2]
print('P1 is :', P1)
print('P2 is :', P2)
print('P3 is :', P3)
```
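The `argsort`/`unravel_index` bookkeeping above is easy to check on a tiny array before trusting it on the full image:

```python
import numpy as np

toy = np.array([[0.1, 0.9],
                [0.4, 0.2]])

# Negate so the brightest pixel comes first in the flat sort order.
indices = np.argsort(-toy, axis=None)
# Convert the first flat index back to (row, column) coordinates.
brightest = np.unravel_index(indices[0], toy.shape)
print(brightest)  # (0, 1): row 0, column 1 holds the maximum 0.9
```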
#### Now, cropping the image to a size of $512 \times 512$, around the brightest point. The cropped image is also in the 'image_processing' folder.
```
# We need to crop the image to a size of 512 X 512, with P1 at centre. This essentially means that x and y coordinates
# vary by 512/2 pixels around P1
512/2
# We can crop the image around P1 by slicing
crop_img = image[P1[1] - 256 : P1[1] + 256, P1[0] - 256 : P1[0] +256] #Varying y and x coordinates by 256 pixels from P1
skimage.io.imsave('crop_img.png', arr = crop_img)
skimage.io.imshow(crop_img)
```
#### Drawing *contour plot* and *3D Surface plot* for the cropped image.
```
# crop_img is of size 512 X 512 pixels. To draw contour plot and 3D surface plot, we will plot the 2D array of grayscaled
# crop_image as a function of x and y coordinates
X = np.arange(0, 512) # getting an array for x coordinates
Y = np.arange(0, 512) # getting an array for y coordinates
X, Y = np.meshgrid(X, Y) # making 2D array from X and Y
crop_img_gray = skimage.color.rgb2gray(crop_img) # Reason for grayscaling is to get a 2D array
```
###### Contour Plots
```
fig = plt.figure()
axes = fig.add_subplot(projection = 'rectilinear') # 2D contour plot (fig.gca(projection=...) was removed in Matplotlib 3.6)
axes.contour(X, Y, crop_img_gray)
plt.show()
fig2 = plt.figure()
axes = fig2.add_subplot(projection = '3d') # 3D contour plot
axes.contour(X, Y, crop_img_gray)
plt.show()
```
##### 3D surface plot
```
fig3 = plt.figure()
axes = fig3.add_subplot(projection = '3d')
axes.plot_surface(X, Y, crop_img_gray)
plt.show()
```
| github_jupyter |
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
```
NAME = ""
COLLABORATORS = ""
```
---
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Introduction to Folding](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.00-Introduction-to-Folding.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Low-Res Scoring and Fragments](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.02-Low-Res-Scoring-and-Fragments.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.01-Basic-Folding-Algorithm.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# Basic Folding Algorithm
Keywords: pose_from_sequence(), random move, scoring move, Metropolis, assign(), Pose()
```
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
init()
```
## Building the Pose
In this workshop, you will be folding a 10 residue protein by building a simple de novo folding algorithm. Start by initializing PyRosetta as usual.
Create a simple poly-alanine `pose` with 10 residues for testing your folding algorithm. Store the pose in a variable called "polyA."
```
# YOUR CODE HERE
raise NotImplementedError()
polyA.pdb_info().name("polyA")
```
__Question:__
Check the backbone dihedrals of a few residues (except the first and last) using the `.phi()` and `.psi()` methods in `Pose`. What are the values of $\phi$ and $\psi$ dihedrals? You should see ideal bond lengths and angles, but the dihedrals may not be as realistic.
```
# YOUR CODE HERE
raise NotImplementedError()
```
OPTIONAL:
We may want to visualize folding as it happens. Before starting with the folding protocol, instantiate a PyMOL mover and use a UNIQUE port number between 10,000 and 65,535. We will retain history in order to view the entire folding process by utilizing the `.keep_history()` method. Make sure it says `PyMOL <---> PyRosetta link started!` on its command line.
```
pmm = PyMOLMover()
pmm.keep_history(True)
```
Use the PyMOL mover to view the `polyA` `Pose`. You should see a long thread-like structure in PyMOL.
```
pmm.apply(polyA)
```
## Building A Basic *de Novo* Folding Algorithm
Now, write a program that implements a Monte Carlo algorithm to optimize the protein conformation. You can do this here in the notebook, or you may use a code editor to write a `.py` file and execute in a Python or iPython shell.
Our main program will include 100 iterations of making a random trial move, scoring the protein, and accepting/rejecting the move. Therefore, we can break this algorithm down into three smaller subroutines: **random, score, and decision.**
### Step 1: Random Move
For the **random** trial move, write a subroutine to choose one residue at random using `random.randint()` and then randomly perturb either the φ or ψ angles by a random number chosen from a Gaussian distribution. Use the Python built-in function `random.gauss()` from the `random` library with a mean of the current angle and a standard deviation of 25°. After changing the torsion angle, use `pmm.apply(polyA)` to update the structure in PyMOL.
```
import math
import random
def randTrial(your_pose):
# YOUR CODE HERE
raise NotImplementedError()
return your_pose
```
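Independent of PyRosetta, the random-move step can be sketched with plain Python on a simple list of angles. This is only a sketch of the described step, not the workshop's intended solution, and `rand_trial` is an illustrative name:

```python
import random

def rand_trial(angles, sigma=25.0):
    """Pick one residue at random and redraw its angle from a Gaussian
    centred on the current value (standard deviation sigma degrees)."""
    i = random.randint(0, len(angles) - 1)
    angles = list(angles)                       # perturb a copy, not the input
    angles[i] = random.gauss(angles[i], sigma)
    return angles

random.seed(0)
before = [180.0] * 10
after = rand_trial(before)
print([i for i in range(10) if after[i] != before[i]])   # index of the one perturbed angle
```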
### Step 2: Scoring Move
For the **scoring** step, we need to create a scoring function and make a subroutine that simply returns the numerical energy score of the pose.
```
sfxn = get_fa_scorefxn()
def score(your_pose):
# YOUR CODE HERE
raise NotImplementedError()
```
### Step 3: Accepting/Rejecting Move
For the **decision** step, we need to make a subroutine that either accepts or rejects the new conformation based on the Metropolis criterion. When $\Delta E > 0$, the Metropolis criterion accepts the move with probability $P = \exp( -\Delta E / kT )$; when $\Delta E \leq 0$, the move is always accepted ($P = 1$). Use $kT = 1$ Rosetta Energy Unit (REU).
```
def decision(before_pose, after_pose):
# YOUR CODE HERE
raise NotImplementedError()
```
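The Metropolis rule itself needs no PyRosetta; a minimal sketch on raw energy values (illustrative function name, not the workshop solution):

```python
import math
import random

def metropolis_accept(e_before, e_after, kT=1.0):
    """Always accept a move that lowers the energy; accept an uphill
    move with probability exp(-dE / kT)."""
    dE = e_after - e_before
    if dE <= 0:
        return True
    return random.random() < math.exp(-dE / kT)

print(metropolis_accept(10.0, 5.0))   # downhill: always True
random.seed(2)
rate = sum(metropolis_accept(0.0, 1.0) for _ in range(10000)) / 10000
print(rate)   # close to exp(-1) for dE = 1, kT = 1
```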
### Step 4: Execution
Now we can put these three subroutines together in our main program! Write a loop in the main program so that it performs 100 iterations of: making a random trial move, scoring the protein, and accepting/rejecting the move.
After each iteration of the search, output the current pose energy and the lowest energy ever observed. **The final output of this program should be the lowest energy conformation that is achieved at *any* point during the simulation.** Be sure to use `low_pose.assign(pose)` rather than `low_pose = pose`, since the latter will only copy a pointer to the original pose.
```
def basic_folding(your_pose):
"""Your basic folding algorithm that completes 100 Monte-Carlo iterations on a given pose"""
lowest_pose = Pose() # Create an empty pose for tracking the lowest energy pose.
# YOUR CODE HERE
raise NotImplementedError()
return lowest_pose
```
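The overall move/score/decide loop, with best-so-far tracking, can be sketched on a toy 1-D "energy" $E(x) = x^2$. This is a sketch of the loop structure only (keeping a copy of the best state plays the role of `low_pose.assign(pose)`); it is not a protein-folding implementation:

```python
import math
import random

def toy_mc(n_iter=200, kT=1.0, seed=4):
    """Monte Carlo on E(x) = x**2: random move, score, Metropolis
    decision, and tracking of the lowest energy ever seen."""
    random.seed(seed)
    x = 5.0
    best_x, best_e = x, x * x
    for _ in range(n_iter):
        trial = x + random.gauss(0.0, 0.5)          # random move
        dE = trial * trial - x * x                  # score the move
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            x = trial                               # accept
            if x * x < best_e:
                best_x, best_e = x, x * x           # keep a copy of the best
    return best_e

print(toy_mc() < 25.0)   # the best energy improved on the starting value 5**2
```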
Finally, output the last pose and the lowest-scoring pose observed and view them in PyMOL. Plot the energy and lowest-energy observed vs. cycle number. What are the energies of the initial, last, and lowest-scoring pose? Is your program working? Has it converged to a good solution?
```
basic_folding(polyA)
```
Here's an example of the PyMOL view:
```
from IPython.display import Image
Image('./Media/folding.gif',width='300')
```
### Exercise 1: Comparing to Alpha Helices
Using the program you wrote for Workshop #2, force the $A_{10}$ sequence into an ideal α-helix.
**Questions:** Does this helical structure have a lower score than that produced by your folding algorithm above? What does this mean about your sampling or discrimination?
### Exercise 2: Optimizing Algorithm
Since your program is a stochastic search algorithm, it may not produce an ideal structure consistently, so try running the simulation multiple times or with a different number of cycles (if necessary). Using a kT of 1, your program may need to make up to 500,000 iterations.
<!--NAVIGATION-->
< [Introduction to Folding](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.00-Introduction-to-Folding.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Low-Res Scoring and Fragments](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.02-Low-Res-Scoring-and-Fragments.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.01-Basic-Folding-Algorithm.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
| github_jupyter |
# Student Performance in E-Learning Environment
Importing packages
```
import numpy as np
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
```
Reading Data
```
data=pd.read_csv('msc2019b.csv')
data.head()
data.tail()
```
Pre-Processing
```
#Checking of Missing Values
data.isnull().sum()
data.columns
```
No Missing Values
```
#Binning
bins = [6.5,7.0,7.5,8.0,8.5,9.0,9.5,10.0]
group_names = ['>6.5','>7.0','>7.5','>8.0','>8.5','>9.0','>9.5']
data['grade_sep']= pd.cut(data['sem1_marks '],bins,labels=group_names)
data.head()
#Histogram for Binned Values
df = pd.DataFrame(data)
fig = plt.figure(figsize =(8,6))
plt.hist(df['grade_sep'],bins=7)
plt.show()
# OFFLINE EXAM - DISTRIBUTION PLOT
plt.figure(4, figsize=(6,10))
plt.subplot(211)
sn.distplot(df['sem1_marks '])
plt.subplot(212)
sn.distplot(df['sem2_marks '])
plt.show()
# ONLINE EXAM - DISTRIBUTION PLOT
plt.figure(2, figsize=(6,10))
plt.subplot(211)
sn.distplot(df['sem3_marks '])
plt.subplot(212)
sn.distplot(df['sem4_marks '])
plt.show()
```
From the above graphs of the offline and online exams, we can infer that:
1. Semester 1 scores are normally distributed, and semester 2 scores are approximately normally distributed,
whereas semester 3 scores are not normally distributed, and semester 4 scores are not either, though they are comparatively better than semester 3.
Therefore, there is the most randomness in semester 3, followed by semesters 4 and 2.
Randomness in scores:
max randomness > Semester 3 > Semester 4 > Semester 2 > Semester 1 > min randomness
So the online exams show a lot of randomness, whereas the offline exams give stable scores.
Possible reasons:
1. Cheating in online exams
2. Improper understanding or contact between student and staff
3. Uneven marking
4. Online grading
5. ....
Overall: a lack of transparency on both sides.
```
#BoxPlot for Each Semester
data = [df['sem1_marks '],df['sem2_marks '],df['sem3_marks '],df['sem4_marks ']]
#Figure Size
fig = plt.figure(figsize =(12, 8))
# Creating axes instance
ax = fig.add_axes([0, 0, 1, 1])
bp = ax.boxplot(data)
plt.show()
df.sample(frac=0.05)
df.info()
df.dtypes
df.describe()
df.corr()
#Correlation between each semester
df.iloc[5:9,5:9].corr()
sn.heatmap(df.iloc[5:9,5:9].corr(), linewidths=.8, annot=True)
#GENDER DESCRIPTION
#show count Gender
df['Gender'].value_counts()
#show Gender's unique
df['Gender'].unique()
#Gender show bar plot
sn.set(style='whitegrid')
ax=sn.barplot(x=df['Gender'].value_counts().index,y=df['Gender'].value_counts().values,palette="Reds_d",hue=['female','male'])
plt.legend(loc=8)
plt.xlabel('Gender')
plt.ylabel('Frequency')
plt.title('Show of Gender Bar Plot')
plt.show()
#DEPERMENT DESCRIPTION
plt.figure(figsize=(7,7))
sn.barplot(x=df['Department'].value_counts().index,
y=df['Department'].value_counts().values)
plt.xlabel('Department')
plt.ylabel('Frequency')
plt.title('Show of Department Bar Plot')
plt.show()
# MARKS - ALL GPA - DESCRIPTION
#Calculating Avg of offline exams(Sem 1 and Sem 2)
cols = ['sem1_marks ','sem2_marks ']
df['offline_avg'] = df[cols].astype(float).mean(axis=1)
#Calculating Avg of online exams(Sem 1 and Sem 2)
cols = ['sem3_marks ','sem4_marks ']
df['online_avg'] = df[cols].astype(float).mean(axis=1)
#Calculating Avg of online exams(Sem 1 and Sem 2)
cols = ['sem1_marks ','sem2_marks ','sem3_marks ','sem4_marks ']
df['avg'] = df[cols].astype(float).mean(axis=1)
df
#Counting student, Those who have online_avg > offline_avg
yes,no,l=[],[],[]
for i in range(len(df['Name'])):
if(df['online_avg'][i]>df['offline_avg'][i]):
yes.append(df['Name'][i])
l.append('YES')
else:
no.append(df['Name'][i])
l.append('NO')
#Adding to dataframe
df['Online_score_is_greater']= l
#Counting student, Those who have online_avg > offline_avg
df['Online_score_is_greater'].value_counts()
print('student, Those who have online_avg > offline_avg',len(yes))
print('student, Those who have online_avg < offline_avg',len(no))
#Gender show bar plot
sn.set(style='whitegrid')
ax=sn.barplot(x=df['Online_score_is_greater'].value_counts().index,y=df['Online_score_is_greater'].value_counts().values,palette="Blues_d",hue=['YES','NO'])
plt.legend(loc=8)
plt.xlabel('Average_Online_Score_is_Greater')
plt.ylabel('No. of Student')
plt.title('Student having Online Average Score more than offline Scores')
plt.show()
#Checking How much percentage improvement is done
per=[]
for i in range(len(df['Name'])):
per.append(abs(df['online_avg'][i]-df['offline_avg'][i])*10)
df['Percentge_difference_in_both_modes']=per
df.head()
cc=0
for i in range(len(df['Name'])):
if(df['Online_score_is_greater'][i]=='YES' and df['Percentge_difference_in_both_modes'][i]>=10):
cc=cc+1
print('The number of student, whose percentage difference in both mode is greater than 10% is ',cc)
print((cc/len(df['Name']))*100,'% of students scored at least 10% more marks in online mode.')
ax1 = df.plot(kind='scatter', x='sem1_marks ', y='sem1_marks ', color='r')
ax2 = df.plot(kind='scatter', x='sem2_marks ', y='sem1_marks ', color='g', ax=ax1)
ax3 = df.plot(kind='scatter', x='sem3_marks ', y='sem1_marks ', color='b', ax=ax1)
ax4 = df.plot(kind='scatter', x='sem4_marks ', y='sem1_marks ', color='y', ax=ax1)
print(ax1 == ax2 == ax3 == ax4)
df=pd.read_csv('msc2019b.csv')
#Converting: each float mark is multiplied by 100 and cast to int.
df['sem1_marks '] = (100*df['sem1_marks ']).astype(float).astype(int)
df['sem2_marks '] = (100*df['sem2_marks ']).astype(float).astype(int)
df['sem3_marks '] = (100*df['sem3_marks ']).astype(float).astype(int)
df['sem4_marks '] = (100*df['sem4_marks ']).astype(float).astype(int)
df.head()
l=list(df.columns)
l[27]
# Label Encoding
from sklearn import preprocessing
label_encoder = preprocessing.LabelEncoder()
df['Gender']= label_encoder.fit_transform(df['Gender'])
df['Department']= label_encoder.fit_transform(df['Department'])
df['Age']= label_encoder.fit_transform(df['Age'])
l=list(df.columns)
for i in range(9,28):
df[l[i]] = label_encoder.fit_transform(df[l[i]])
df.head()
df.drop(columns=['Roll number','Name'], inplace=True)
df
```
### MODEL SELECTION AND FITTING
```
#Feature
X = df.drop(['sem4_marks '], axis=1)
#Target
y = df['sem4_marks ']
l=list(X.iloc[1])
print(l)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
```
LINEAR REGRESSION
```
from sklearn.linear_model import LinearRegression
mlr = LinearRegression()
mlr.fit(x_train, y_train)
#Prediction of test set
y_pred_mlr= mlr.predict(x_test)
#Predicted values
print("Prediction for test set: {}".format(y_pred_mlr))
#Actual value and the predicted value
mlr_diff = pd.DataFrame({'Actual value': y_test, 'Predicted value': y_pred_mlr})
#Model Evaluation
from sklearn import metrics
meanAbErr = metrics.mean_absolute_error(y_test, y_pred_mlr)
meanSqErr = metrics.mean_squared_error(y_test, y_pred_mlr)
rootMeanSqErr = np.sqrt(metrics.mean_squared_error(y_test, y_pred_mlr))
Acc=mlr.score(X,y)*100
print('R squared: {:.2f}'.format(mlr.score(X,y)*100))
print('Mean Absolute Error:', meanAbErr)
print('Mean Square Error:', meanSqErr)
print('Root Mean Square Error:', rootMeanSqErr)
print('Accuracy:', round(Acc, 2), '%.')
```
Gradient Boosting Regressor
```
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
reg = GradientBoostingRegressor(random_state=0)
reg.fit(X_train, y_train)
reg.predict(X_test[1:2])
ac=reg.score(X_test, y_test)
acc=ac*100
print('Accuracy:', round(acc, 2), '%.')
```
Extra Trees Regressor
```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesRegressor
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
reg = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
ac=reg.score(X_test, y_test)
acc=ac*100
print('Accuracy:', round(acc, 2), '%.')
```
Random Forest Regressor Model
```
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(X, y, test_size = 0.25, random_state = 42)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(train_features, train_labels);
predictions = rf.predict(test_features)
errors = abs(predictions - test_labels)
print('Mean Absolute Error:', round(np.mean(errors), 2))
mape = 100 * (errors / test_labels)
accuracy = 100 - np.mean(mape)
print('Accuracy:', round(accuracy, 2), '%.')
```
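The "accuracy" reported above is 100 minus the mean absolute percentage error (MAPE). A small NumPy sketch of that metric (illustrative function name; note MAPE is undefined whenever a true label is zero):

```python
import numpy as np

def mape_accuracy(predictions, labels):
    """Accuracy as used above: 100 - mean absolute percentage error."""
    predictions = np.asarray(predictions, dtype=float)
    labels = np.asarray(labels, dtype=float)
    mape = 100.0 * np.mean(np.abs(predictions - labels) / np.abs(labels))
    return 100.0 - mape

print(mape_accuracy([9.0, 11.0], [10.0, 10.0]))   # -> 90.0
```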
### OUT OF ALL THE MODELS, THE RANDOM FOREST REGRESSOR HAS THE MAXIMUM ACCURACY OF 97.11%.
WHY A REGRESSOR AND NOT OTHER METHODS OR MODELS:
Here we are dealing with marks and the factors that affect them. There is certainly a relationship between them, and that relationship is numerical, so we consider regression models only.
Even if we tried classifier models, we would not get accuracy above 10-20%,
whereas a regressor achieves at least 50% accuracy, since a numerical relationship exists between the variables.
So the model above is fitted to predict the semester 4 marks from the previous semesters and some other factors.
Similarly, we can further predict the semester 5 marks using the same model.
Formula:
If semester 5 is online, we can fit the above model directly and get:
Sem5_marks = predicted_sem5_mark
If semester 5 is offline, then the result is:
Sem5_marks = ((sem1_mark x 3) + (predicted_sem5_mark)) / 4
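The weighting above is the author's heuristic, not a fitted model; as a sketch in code (illustrative function name and example values):

```python
def predict_sem5(sem1_mark, predicted_sem5_mark, online=True):
    """Online semesters take the model output directly; offline semesters
    blend the semester-1 mark with the model output in a 3:1 ratio."""
    if online:
        return predicted_sem5_mark
    return (sem1_mark * 3 + predicted_sem5_mark) / 4

print(predict_sem5(8.0, 9.0, online=False))   # -> 8.25
```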
| github_jupyter |
## Assignment 1 | Data Types
Add code cells as needed for your answers.
### Exercise 1: Manipulating Lists
Create a list containing the numbers 10, 20, and 30. Store your list as a variable named `a`. Then create a second list containing the numbers 30, 60, and 90. Call this `b`.
```
a = [10, 20, 30]
b = [30, 60, 90]
```
In the cells below, write Python expressions to create the following four outputs by combining `a` and `b` in creative ways:
1. [[10, 20, 30], [30, 60, 90]]
2. [10, 20, 30, 30, 60, 90]
3. [10, 20, 60, 90]
4. [20, 40, 60]
**Creating [[10, 20, 30], [30, 60, 90]]**
```
#trying it one way
[a] + [b]
#trying it another way
c = [a, b]
print(c)
#trying it a third way
e = [a]
e.extend([b])
print(e)
```
**Creating [10, 20, 30, 30, 60, 90]**
```
#This way defines variable d as a concatenation of lists a and b.
d = a + b
print(d)
#Each index here is past the end of the list, so insert appends: 30, 60, and 90 are added to the end of a.
a.insert(4, 30)
a.insert(5, 60)
a.insert(6, 90)
print(a)
```
**Creating [10, 20, 60, 90]**
```
#delete the items at indices 2 and 3 (need to specify 2:4 because non-inclusive of upper boundary)
del a[2:4]
a
```
**Creating [20, 40, 60]**
```
#delete the items at indices 0 and 2, then insert 40 at index 1
del a[0]
print(a)
del a[2]
print(a)
a.insert(1, 40)
print(a)
```
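The deletions above mutate `a` in place, so each target can only be produced once per run. As a side note (not required by the exercise), outputs 3 and 4 can also be produced non-destructively in one line each:

```python
a = [10, 20, 30]
b = [30, 60, 90]

print(a[:2] + b[1:])        # -> [10, 20, 60, 90]
print([x * 2 for x in a])   # -> [20, 40, 60]
```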
### Exercise 2. Working with Lists
Create a list that contains the sums of each of the lists in G.
`G = [[13, 9, 8], [14, 6, 12], [10, 13, 11], [7, 18, 9]]`
Your output should look like:
- `[30, 32, 34, 34]`
Hint: try computing the sum for just one list first.
```
G = [[13, 9, 8], [14, 6, 12], [10, 13, 11], [7, 18, 9]]
H = [sum(g) for g in G]  # a list comprehension; sum(G[0]), sum(G[1]), ... would build a tuple, not a list
print(H)
```
### Exercise 3: String Manipulation
Turn the string below into 'all good countrymen' using the minimum amount of code, using only the methods we've covered so far. A couple of lines of code should do the trick. Note: this requires string and list methods.
```
s = 'Now is the time for all good men to come to the aid of their country!'
#tell me the index of the 'a' at the beginning of 'all'
s.find('a')
#tell me the index for the 'n' at the end of 'men'
s.find('n')
#define variable t as the segment of string(s) that contains 'all good men'
t = s[20:32]
t
#modify variable t by replacing 'men' with 'countrymen'
t.replace('men', 'countrymen')
```
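Combining the steps above into a couple of lines, as the exercise asks (slice out `'all good men'` by finding `'all'` and the end of `'men'`, then replace):

```python
s = 'Now is the time for all good men to come to the aid of their country!'
result = s[s.find('all'):s.find('men') + 3].replace('men', 'countrymen')
print(result)   # -> all good countrymen
```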
### Exercise 4: String Manipulation and Type Conversion
Define a variable `a = "Sarah earns $96500 in a year"`. Then manipulate the value of `a` in order to print the following string: `Sarah earns $8041.67 monthly`
Start by doing it in several steps and then combine them one step at a time until you can do it in one line.
```
a = "Sarah earns $96500 in a year"
b = a.replace('$96500 in a year', '$8041.67 monthly')
b
```
### Exercise 5: Create and Query a Dictionary on State Demographics
Create two dictionaries, one for California and one for New York state, based on the data in the following table:
| States | Pop Density | Prop White | Prop Afr Amer | Prop Asian | Prop Other | Owners | Renters |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| CA | 239.1 | 0.57 | 0.06 | 0.13 | 0.22 | 7035371 | 5542127 |
| NY | 411.2 | 0.65 | 0.15 | 0.07 | 0.22 | 3897837 | 3419918 |
Each dictionary should have the following keys and value types: `name: (string)` , `population density: (float)`, `race (dict)`, `tenure: (dict)`.
1. Create one dictionary called CA and one called NY that contain dictionaries containing name, pop_density, race as a dictionary, and tenure for California and New York. Now combine these into a dictionary called "states", making it a dictionary of dictionaries, or a nested dictionary.
1. Check if Texas is in our state dictionary (we know it isn't but show us).
1. Print the White population in New York as a percentage
1. Assume there was a typo in the data, and update the White population fraction of NY to 0.64. Verify that it was updated by printing the percentage again.
1. Print the percentage of households that are renters in California, with two decimal places
```
#create a dictionary named ca_dict with name as a string, pop_density as a float, and dictionaries for race and tenure
#create a dictionary named ny_dict with name as a string, pop_density as a float, and dictionaries for race and tenure
#create a dictionary named states with key-value pairs where California:ca_dict and New York:ny_dict
ca_dict = {'name': 'California', 'pop_density': 239.1, 'race': {'prop_white': 0.57, 'prop_afr_amer': 0.06, 'prop_asian': 0.13, 'prop_other': 0.22}, 'tenure': {'owners': 7035371, 'renters': 5542127}}
ny_dict = {'name': 'New York', 'pop_density': 411.2, 'race': {'prop_white': 0.65, 'prop_afr_amer': 0.15, 'prop_asian': 0.07, 'prop_other': 0.22}, 'tenure': {'owners': 3897837, 'renters': 3419918}}
states = {'California': ca_dict, 'New York': ny_dict}
print(states)
#does not include Texas
print(states.keys())
#Ignore this, notes to self -- old code not using nested dictionaries to practice using zip command
#keys = ['name', 'pop_density', 'prop_white', 'prop_afr_amer', 'proper_asian', 'prop_other', 'owners', 'renters']
#ca_values = ['CA', 239.1, 0.57, 0.06, 0.13, 0.22, 7035371, 5542127]
#ca = dict(zip(keys, ca_values))
#ny_values = ['NY', 411.2, 0.65, 0.15, 0.07, 0.22, 3897837, 3419918]
#ny = dict(zip(keys, ny_values))
#states = {'ca_dict':ca, 'ny_dict':ny}
#print(states)
#rounding NY prop_white to a percentage
for value in states["New York"]["race"].keys():
states["New York"]['race'][value] = round(100 * states['New York']['race'][value],2)
print(states['New York']['race']['prop_white'])
#updating the White population fraction of NY to 0.64
states['New York']['race']['prop_white'] = 64.0
print(states['New York']['race']['prop_white'])
#calculated total number of CA households, then divided to find the percentage that are renters, rounded to two decimal places.
ca_households = states['California']['tenure']['renters'] + states['California']['tenure']['owners']
perc_renters = states['California']['tenure']['renters']/ca_households
print(perc_renters)
round(perc_renters, 2)
```
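The exercise asks for the renter share as a percentage with two decimal places; `round(perc_renters, 2)` rounds the raw fraction instead. An f-string with the `%` format does it directly (sketched here on the CA tenure numbers from the table):

```python
ca_tenure = {'owners': 7035371, 'renters': 5542127}
share = ca_tenure['renters'] / (ca_tenure['owners'] + ca_tenure['renters'])
print(f"{share:.2%}")   # -> 44.06%
```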
### Exercise 6: Working with Numpy Arrays
1. Create and print a 4 x 4 array named `a` with value 3 everywhere.
1. Create and print a 4 x 4 array named `b` with elements drawn from a uniform random distribution
1. Create and print array `c` by dividing a by b
1. Compute and print the min, mean, max, median, and 90th percentile values of `c`
1. Compute and print the sum of the second column in `c`
```
#created 4 x 4 array a (the extra brackets in np.array([[[[...]]]]) made it 4-D with shape (1, 1, 4, 4), not 4 x 4)
import numpy as np
a = np.full((4, 4), 3)
print(a)
#created array b
b = np.random.rand(4,4)
print(b)
#created array c by dividing a by b
c = a/b
print(c)
#calculated descriptive statistics (the interpolation= keyword of np.percentile was renamed method= in NumPy 1.22)
np.min(c), np.mean(c), np.max(c), np.median(c), np.percentile(c, 90)
print(c)
np.sum(c)
#Sum of the second column: select column index 1, then sum. np.sum(c, axis=0) returns all four column sums at once.
np.sum(c[:, 1])
```
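A quick illustration of the axis semantics used in the last cell: `axis=0` collapses the rows, giving one sum per column, so the second column's sum is element `[1]` of that result, the same value as summing the slice `c[:, 1]` directly:

```python
import numpy as np

c = np.arange(16).reshape(4, 4).astype(float)
col_sums = np.sum(c, axis=0)            # one sum per column
print(col_sums[1], np.sum(c[:, 1]))     # the two agree
```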
| github_jupyter |
##### Copyright 2020 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Circuits
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/circuits"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/circuits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/circuits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/circuits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
```
## Conceptual overview
The primary representation of quantum programs in Cirq is the `Circuit` class. A `Circuit` is a collection of `Moments`. A `Moment` is a collection of `Operations` that all act during the same abstract time slice. An `Operation` is some effect that operates on a specific subset of qubits; the most common type of `Operation` is a `GateOperation`.

Let's unpack this.
At the base of this construction is the notion of a qubit. In Cirq, qubits and other quantum objects are identified by instances of subclasses of the Qid base class. Different subclasses of Qid can be used for different purposes. For example, the qubits that Google’s Xmon devices use are often arranged on the vertices of a square grid. For this, the class GridQubit subclasses Qid. For example, we can create a 3 by 3 grid of qubits using
```
import cirq
qubits = [cirq.GridQubit(x, y) for x in range(3) for y in range(3)]
print(qubits[0])
```
The next level up is the notion of a `Gate`. A `Gate` represents a physical process that occurs on a `Qubit`. The important property of a `Gate` is that it can be applied to one or more qubits. This can be done via the `Gate.on` method itself or via `()`, and doing this turns the `Gate` into a `GateOperation`.
```
# This is a Pauli X gate. It is an object instance.
x_gate = cirq.X
# Applying it to the qubit at location (0, 0) (defined above)
# turns it into an operation.
x_op = x_gate(qubits[0])
print(x_op)
```
A `Moment` is simply a collection of operations, each of which operates on a different set of qubits, and which conceptually represents these operations as occurring during this abstract time slice. The `Moment` structure itself is not required to be related to the actual scheduling of the operations on a quantum computer, or via a simulator, though it can be. For example, here is a `Moment` in which **Pauli** `X` and a `CZ` gate operate on three qubits:
```
cz = cirq.CZ(qubits[0], qubits[1])
x = cirq.X(qubits[2])
moment = cirq.Moment([x, cz])
print(moment)
```
The above is not the only way one can construct moments, nor even the typical method, but illustrates that a `Moment` is just a collection of operations on disjoint sets of qubits.
Finally, at the top level a `Circuit` is an ordered series of `Moment` objects. The first `Moment` in this series contains the first `Operations that will be applied. Here, for example, is a simple circuit made up of two moments:
```
cz01 = cirq.CZ(qubits[0], qubits[1])
x2 = cirq.X(qubits[2])
cz12 = cirq.CZ(qubits[1], qubits[2])
moment0 = cirq.Moment([cz01, x2])
moment1 = cirq.Moment([cz12])
circuit = cirq.Circuit((moment0, moment1))
print(circuit)
```
Note that the above is one of the many ways to construct a `Circuit`, which illustrates the concept that a `Circuit` is an iterable of `Moment` objects.
## Constructing circuits
Constructing Circuits as a series of `Moment` objects, with each `Moment` being hand-crafted, is tedious. Instead, we provide a variety of different ways to create a `Circuit`.
One of the most useful ways to construct a `Circuit` is by appending onto the `Circuit` with the `Circuit.append` method.
```
from cirq.ops import CZ, H
q0, q1, q2 = [cirq.GridQubit(i, 0) for i in range(3)]
circuit = cirq.Circuit()
circuit.append([CZ(q0, q1), H(q2)])
print(circuit)
```
This appended a new moment to the circuit, which we can continue to do:
```
circuit.append([H(q0), CZ(q1, q2)])
print(circuit)
```
In these two examples, we appended full moments. What happens when we append all of these at once?
```
circuit = cirq.Circuit()
circuit.append([CZ(q0, q1), H(q2), H(q0), CZ(q1, q2)])
print(circuit)
```
We see that here we have again created two `Moment` objects. How did `Circuit` know how to do this? `Circuit`'s `Circuit.append` method (and its cousin, `Circuit.insert`) both take an argument called the `InsertStrategy`. By default, `InsertStrategy` is `InsertStrategy.NEW_THEN_INLINE`.
### InsertStrategies
`InsertStrategy` defines how `Operations` are placed in a `Circuit` when requested to be inserted at a given location. Here, a location is identified by the index of the `Moment` (in the `Circuit`) where the insertion is requested to be placed at (in the case of `Circuit.append`, this means inserting at the `Moment`, at an index one greater than the maximum moment index in the `Circuit`).
There are four such strategies: `InsertStrategy.EARLIEST`, `InsertStrategy.NEW`, `InsertStrategy.INLINE` and `InsertStrategy.NEW_THEN_INLINE`.
`InsertStrategy.EARLIEST` is defined as:
*Scans backward from the insert location until a moment with operations touching qubits affected by the operation to insert is found. The operation is added to the moment just after that location.*
For example, if we first create an `Operation` in a single moment, and then use `InsertStrategy.EARLIEST`, the `Operation` can slide back to this first `Moment` if there is space:
```
from cirq.circuits import InsertStrategy
circuit = cirq.Circuit()
circuit.append([CZ(q0, q1)])
circuit.append([H(q0), H(q2)], strategy=InsertStrategy.EARLIEST)
print(circuit)
```
After creating the first moment with a `CZ` gate, the second append uses the `InsertStrategy.EARLIEST` strategy. The `H` on `q0` cannot slide back, while the `H` on `q2` can and so ends up in the first `Moment`.
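The backward scan behind `InsertStrategy.EARLIEST` can be sketched independently of Cirq in plain Python. This is a conceptual model only (each moment modelled as a dict mapping qubit to operation), not Cirq's actual implementation:

```python
def earliest_insert(moments, op_qubits, op):
    """Scan backward from the end for the last moment touching any of the
    new operation's qubits, and place the operation in the moment just
    after it, appending a new moment only when none is free."""
    target = len(moments)
    while target > 0 and not (set(moments[target - 1]) & set(op_qubits)):
        target -= 1
    if target == len(moments):
        moments.append({})
    for q in op_qubits:
        moments[target][q] = op
    return moments

moments = [{0: 'CZ', 1: 'CZ'}]        # moment 0: CZ on qubits 0 and 1
earliest_insert(moments, [0], 'H')    # H(q0) conflicts, goes to a new moment
earliest_insert(moments, [2], 'H')    # H(q2) slides back into moment 0
print(moments)
```

Tracing it reproduces the circuit above: the `H` on `q2` joins the `CZ` moment, while the `H` on `q0` lands in a second moment.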
Contrast this with the `InsertStrategy.NEW` `InsertStrategy`:
*Every operation that is inserted is created in a new moment.*
```
circuit = cirq.Circuit()
circuit.append([H(q0), H(q1), H(q2)], strategy=InsertStrategy.NEW)
print(circuit)
```
Here every operator processed by the append ends up in a new moment. `InsertStrategy.NEW` is most useful when you are inserting a single operation and do not want it to interfere with other `Moments`.
Another strategy is `InsertStrategy.INLINE`:
*Attempts to add the operation to insert into the moment just before the desired insert location. But, if there’s already an existing operation affecting any of the qubits touched by the operation to insert, a new moment is created instead.*
```
circuit = cirq.Circuit()
circuit.append([CZ(q1, q2)])
circuit.append([CZ(q1, q2)])
circuit.append([H(q0), H(q1), H(q2)], strategy=InsertStrategy.INLINE)
print(circuit)
```
After two initial `CZ` between the second and third qubit, we try to insert three `H` `Operations`. We see that the `H` on the first qubit is inserted into the previous `Moment`, but the `H` on the second and third qubits cannot be inserted into the previous `Moment`, so a new `Moment` is created.
Finally, we turn to the default strategy:
*Creates a new moment at the desired insert location for the first operation, but then switches to inserting operations according to `InsertStrategy.INLINE`.*
```
circuit = cirq.Circuit()
circuit.append([H(q0)])
circuit.append([CZ(q1,q2), H(q0)], strategy=InsertStrategy.NEW_THEN_INLINE)
print(circuit)
```
The first append creates a single moment with an `H` on the first qubit. Then, the append with the `InsertStrategy.NEW_THEN_INLINE` strategy begins by inserting the `CZ` in a new `Moment` (the `InsertStrategy.NEW` part of `InsertStrategy.NEW_THEN_INLINE`). Subsequent appending is done with `InsertStrategy.INLINE`, so the next `H` on the first qubit is appended into the just-created `Moment`.
### Patterns for arguments to append and insert
In the above examples, we used a series of `Circuit.append` calls with a list of different `Operations` added to the circuit. However, the argument where we have supplied a list can also take more than just list values. For instance:
```
def my_layer():
yield CZ(q0, q1)
yield [H(q) for q in (q0, q1, q2)]
yield [CZ(q1, q2)]
yield [H(q0), [CZ(q1, q2)]]
circuit = cirq.Circuit()
circuit.append(my_layer())
for x in my_layer():
print(x)
print(circuit)
```
Recall that Python functions with a `yield` are generators. Generators are functions that act as iterators. In the above example, we see that we can iterate over `my_layer()`. Each `yield` produces what was yielded, and here these are:
* `Operations`,
* lists of `Operations`,
* or lists mixing `Operations` with nested lists of `Operations`.
When we pass an iterator to the `append` method, `Circuit` is able to flatten all of these and pass them as one giant list to `Circuit.append` (this also works for `Circuit.insert`).
The above idea uses the concept of `OP_TREE`. An `OP_TREE` is not a class, but a *contract*. The basic idea is that, if the input can be iteratively flattened into a list of operations, then the input is an `OP_TREE`.
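The flattening idea behind the `OP_TREE` contract can be sketched in plain Python. This is only an illustration of the concept, using strings as stand-in operations; it is not cirq's actual implementation:

```python
def flatten_op_tree(tree):
    """Recursively flatten a nested structure into a flat list of leaf 'operations'.

    Anything iterable (except a string) is treated as a branch; everything
    else is treated as a leaf operation.
    """
    if isinstance(tree, str) or not hasattr(tree, '__iter__'):
        return [tree]
    ops = []
    for item in tree:
        ops.extend(flatten_op_tree(item))
    return ops

# Mirrors the structure yielded by my_layer() above, with strings as stand-in ops
nested = ["CZ(q0,q1)", ["H(q0)", "H(q1)", "H(q2)"], ["CZ(q1,q2)"], ["H(q0)", ["CZ(q1,q2)"]]]
print(flatten_op_tree(nested))
```

Passing an `OP_TREE` to `Circuit.append` behaves as if the input had first been flattened into such a list.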
A very nice pattern emerges from this structure: define generators for sub-circuits, which can vary by size or `Operation` parameters.
Another useful method to construct a `Circuit` fully formed from an `OP_TREE` is to pass the `OP_TREE` into `Circuit` when initializing it:
```
circuit = cirq.Circuit(H(q0), H(q1))
print(circuit)
```
### Slicing and iterating over circuits
Circuits can be iterated over and sliced. When they are iterated, each item in the iteration is a moment:
```
circuit = cirq.Circuit(H(q0), CZ(q0, q1))
for moment in circuit:
print(moment)
```
Slicing a `Circuit`, on the other hand, produces a new `Circuit` with only the moments corresponding to the slice:
```
circuit = cirq.Circuit(H(q0), CZ(q0, q1), H(q1), CZ(q0, q1))
print(circuit[1:3])
```
Especially useful is dropping the last moment (which often contains just measurements): `circuit[:-1]`, or reversing a circuit: `circuit[::-1]`.
### Related
- [Transform circuits](transform.ipynb) - features related to circuit optimization and compilation
- [Devices](devices.ipynb) - validate circuits against device constraints
- [Import/export circuits](interop.ipynb) - features to serialize/deserialize circuits into/from different formats

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/115) 03:42*
For a numerical variable, you take the sample mean and make inferences about the population mean and differences in means. For a categorical variable, you take proportions of frequencies, and you may summarize them in a contingency table. Studies that report percentages (XX% support vs. XX% oppose same-sex marriage) are likely dealing with categorical variables.
In this blog we're going to observe one categorical variable, first with two levels and then with more than two levels. Then we're going to compare two categorical variables, again with two levels and then with more than two.
<!--TEASER_END-->
## Sampling Distribution

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/115) 05:10*
Recall that a sampling distribution is what you get when you take all possible samples (or infinitely many) of a particular size, compute the summary statistic from each sample, and make a distribution out of those statistics.
So for example we observe a categorical variable: smoker vs. non-smoker. Because we don't know the population proportion and size, we make an estimate for each country: take a sample of size 1000 per country and calculate the proportion. The proportions form a sampling distribution, and the average of the proportions approximates the proportion for the whole population. So you see, **we start with a categorical variable, but by observing one level and computing its sample statistic, we get a numerical variable.**

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/115) 07:20*
The CLT for a proportion works the same way as it does for a mean. From the CLT, we want to know shape, center, and spread. The CLT requires random sampling/assignment, and the center of the sampling distribution equals the population proportion. The spread is calculated from the proportion and its complement divided by the sample size: $SE = \sqrt{p(1-p)/n}$. The sample-size condition is similar to the binomial requirement, incorporating the sample size, the success proportion, and the failure proportion. If the population proportion p is unknown, use the point estimate.
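As a quick sketch of the center and spread described above, using the numbers from the flowering-plant example that follows (p = 0.9, n = 200); the helper names here are ours, not from the course:

```python
import math

def proportion_se(p, n):
    """Standard error of a sample proportion under the CLT: sqrt(p(1-p)/n)."""
    return math.sqrt(p * (1 - p) / n)

def success_failure_ok(p, n, threshold=10):
    """Check the np >= 10 and n(1-p) >= 10 condition."""
    return n * p >= threshold and n * (1 - p) >= threshold

p, n = 0.9, 200
print(round(proportion_se(p, n), 4))   # ≈ 0.0212, the sd used in pnorm below
print(success_failure_ok(p, n))        # True: 180 expected successes, 20 expected failures
```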

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/115) 11:01*
Take a look at this example. We want the probability that at least 95% of 200 randomly selected plants are flowering plants. This is the kind of probability we usually calculate with the *pnorm* function in R. First check whether the conditions for the CLT are satisfied, and also whether the binomial conditions hold for our sample size. If both are satisfied, we can shade the distribution and calculate the probability in R. Remember that we're working with *at least*, not an exact value, because in a continuous distribution the probability of any exact value is zero.
```
pnorm(0.95,mean=0.9,sd=0.0212,lower.tail=F)
```
We can also do this with the binomial distribution. To convert the 95% cutoff into a count, multiply the sample size by the cutoff proportion:
```
n = 200
p = 0.95
n*p
```
So using just the binomial distribution, we sum from 190 to 200, because we're interested in the probability of getting at least 190 successes:
P(190) + P(191) + ... + P(200),
```
sum(dbinom(190:200,200,0.90))
```
It's not exactly the same as what we previously calculated, but nevertheless it looks similar.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/115) 15:40*
So what if the success-failure (np > 10 && n(1-p) > 10) conditions are not met?
The center of the sampling distribution will still be at the population proportion. Notice the boundaries at 0 and 1; this is intuitive, as there is no proportion greater than 100% or less than 0%. In the example, one proportion is closer to zero and another is closer to one, and each sampling distribution is centered around its proportion.
The spread can be calculated using standard error formula. We have proportion, proportion complements and sample size.
But the shape is what differs. When the proportion is close to zero, like 0.20 in the example, the natural boundary at 0 means the distribution can't extend below it, which produces a long tail towards 1 (right skew). The same goes for a proportion near 0.8, which gets a long tail towards 0 (left skew). So depending on the proportion, we can have a skewed distribution.
Intuitively, using the previous example, a random sample of 50 with p = 0.9 gives us a left-skewed distribution (only 5 expected failures).
# Confidence Interval

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 01:38*
Don't focus too much on this question; it just acts as a basis for the next research question. We know that a controlled experiment is the better study design, dividing subjects into treatment and control groups, so (b) is the better choice. The question was asked of 670 people, and we focus on the proportion.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 02:17*
So we want to estimate the proportion of population, as parameter of interest. The point estimate is what we can calculate using our sample.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 09:59*
Before, we calculated the standard error with the population proportion. But we also know that it's almost never available. So what we can do is compute the standard error of the point estimate using the sample proportion.
So how do you calculate the confidence interval? Use 95% confidence level.
First check the conditions
* Independence. We know that 670 is less than 10% population, and GSS is sampling randomly. So we know that one sampled of American has good intuition about experimental design will be independent of one another.
* Sample size/skewness. For the sample size, we could use the success-failure formula, but we can actually eyeball it from the example: we have 571 successes and 99 failures, and both are greater than 10. So the success-failure condition is met, and we know our sampling distribution will be nearly normal.
```
p = 0.85
n = 670
CL = 0.95
SE = sqrt(p*(1-p)/n)
z_star = round(qnorm((1-CL)/2,lower.tail=F),digits=2)
ME = z_star * SE
c(p-ME, p+ME)
```
So based on this data, we can interpret confidence interval as:
* We are 95% confident that 83% to 87% of all Americans have good intuition about experimental design.
* 95% of random samples of 670 Americans will yield confidence interval that will capture true proportion of Americans that have good intuition about experimental design.
## Required sample size of desired ME

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 08:41*
We previously learned about the required sample size for a point estimate of a mean. For a categorical variable, we instead manipulate the parameters in the standard error. Since this is the same example, we use the same proportion. Specifying the desired ME and solving for n, we get the result. Since this is a minimum requirement, 4898.04 is rounded up to 4899: sample size is a discrete count, and you can't survey 0.04 of a person.
```
#Required sample size proportion for desired ME
p = 0.85
z_star = 1.96
ME = 0.01
z_star**2*p*(1-p)/ME**2
```
If we have $\hat{p}$, we can use the value to put into our calculations. If we don't have it, we use 0.5. This is picked for two reasons
* 50:50 is best estimate (prior) for one categorical variable with two levels.
* 0.5 will give us the largest possible sample size.
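A small sketch of the sample-size calculation above, $n = z^{*2}\, p(1-p)/ME^2$ rounded up, comparing $\hat{p} = 0.85$ with the conservative 0.5 (the helper name is ours):

```python
import math

def required_n(p, z_star, me):
    """Minimum sample size for a desired margin of error, rounded up."""
    return math.ceil(z_star**2 * p * (1 - p) / me**2)

print(required_n(0.85, 1.96, 0.01))  # 4899, matching the R calculation above
print(required_n(0.50, 1.96, 0.01))  # 9604 -- the conservative (largest) choice
```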

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 09:27*
The hypothesis-testing framework applies in a way very similar to what we did for the mean.
First we set the null and alternative hypotheses. Remember that we state them in terms of the population parameter, just like in CI, because both aim to infer the population parameter. The data you have will either reject or fail to reject the null hypothesis.
We set our point estimate to the sample proportion.
We check the conditions. Similar to the mean, we want less than 10% of the population and random sampling/assignment. The difference is that where the mean requires n larger than 30, the proportion requires more than 10 successes and more than 10 failures. These are expected counts, as in the binomial, so p is used instead of $\hat{p}$.
Draw the sampling distribution and shade the p-value area (always). Calculate the Z statistic; again, we use the null proportion in our SE (always, when available).
Finally we make a decision: if the p-value is smaller than the significance level, reject the null hypothesis; otherwise, fail to reject it.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 04:33*
So, notice the difference between CI and HT. In a CI, you calculate using the sample proportion. In HT, you're given the null value, a hypothesized true proportion of the population, so you use that instead. We always use the true proportion whenever possible: in the CI case the true proportion is unknown, so we use the point estimate; in HT, because a true proportion is given (as the null value), we compute the failures, successes, and SE based on the null value.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 07:00*
Here in the example we want to use hypothesis test to know whether the majority of the Americans believe in evolution. Majority will be whichever proportion that are greater than 50%. We want to test it using our given data.
Since the question is about the alternative hypothesis, we can infer the null hypothesis, and we have proportion estimate of 0.6. Then we check the conditions.
1. Independence. 1983 less than 10% population. Whether Americans in sampled believes in population is independent of one another.
2. Sample size/skew. 1983 * 0.5 = 991.5 > 10. We don't have to calculate the complement, because p = 0.5 applies to both successes and failures. The success-failure condition is met, so we know the sampling distribution will be nearly normal.
After we validate the condition, we now proceed to the next step.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/119) 09:24*
```
#Hypothesis testing, one categorical variable, given null value(p)
p = 0.5
p_hat = 0.6
n = 1983
SL = 0.05
SE = sqrt(p*(1-p)/n)
z_star = round(qnorm(SL,lower.tail=F),digits=2) # one-sided critical value; SL, not an undefined CL
pnorm(p_hat,mean=p,sd=SE,lower.tail=p_hat < p)
```
The p-value is practically zero, thus we reject the null hypothesis. There is almost a 0% chance of observing a random sample of 1983 Americans in which 60% or more believe in evolution, if in fact only 50% of Americans do. So the data provide convincing evidence that a majority of all Americans believe in evolution.
### Summary
When defining the population proportion, use p; for the sample proportion, use $\hat{p}$. Plug the population proportion into the standard error formula; since it is almost never known, use the sample proportion instead.
For a proportion, the CLT states that the sampling distribution will be nearly normal, centered at the true population proportion, with standard error $\sqrt{p(1-p)/n}$, as long as:
* Observations in the sample are independent of one another.
* There are at least 10 expected successes and 10 expected failures in the observations.
For a confidence interval, we use the sample proportion (if we already knew the true population proportion, it would be pointless to build an interval to capture it). For hypothesis testing, we have a hypothesized true proportion and incorporate it into the standard error calculation. For a numerical variable, the standard error doesn't incorporate the mean at all, only the standard deviation, so there is no such discrepancy between confidence intervals and hypothesis tests.
When calculating the required sample size for a particular margin of error, if the sample proportion is unknown, we use 0.5. This has two advantages: first, 50:50 is the fairest prior for a categorical variable with two levels; second, 0.5 gives the largest possible sample size.
# Estimating difference between two proportions (confidence interval)
In this section we want to compare the proportions of two groups on a categorical variable.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 04:05*
Here we have samples from two populations, one from a Gallup survey and the other from Coursera. The proportion of success is the proportion of citizens who answered yes, there should be a law banning all handgun possession except for police officers. We get different proportions for the US and Coursera. It could be that an issue felt strongly in the US is less so on Coursera, which consists of international students.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 04:19*
Given the question, we distinguish between the parameter of interest and the point estimate.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 05:55*
Again we're calculating a confidence interval, so we calculate the point estimate of the difference and the standard error of the difference. The standard error will be bigger, since it includes the variability of both p1 and p2. Mind that we use $\hat{p}$ because the population parameters are unknown; later, in HT, we'll replace it based on the null value.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 08:00*
Again, the conditions must be checked for both groups.
* **Independence**. Within groups, observations must be independent (random sampling/assignment; if sampling without replacement, less than 10% of the population). Between groups, they must be independent as well (non-paired).
* **Sample size/skew**. Each group must satisfy the success-failure condition (at least 10 successes and 10 failures). Remember that we use the sample proportions since the population proportions are unknown.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 10:48*
So, again we continue the example. Check the conditions.
* Each sample is less than 10% of its respective population. The Gallup survey does a good job of random sampling, **but not Coursera: their survey is a voluntary survey**. So we can say the sampled Americans are independent, while the sampled Courserians may not be.
* Using subtraction on the table, we have 257 successes and 771 failures in the US, and 59 successes and 24 failures among Courserians. Because all of these are more than 10, we can say the **sampling distributions of both proportions are nearly normal**.
So let's put it into the equation.
```
#Observe one level in categorical variable, of categorical of two levels.
#1 = Coursera, 2 = US
n_1 = 83
p_1 = 0.71
n_2 = 1028
p_2 = 0.25
CL = 0.95
SE = sqrt( (p_1*(1-p_1)/n_1)+(p_2*(1-p_2)/n_2) )
z_star = round(qnorm((1-CL)/2, lower.tail=F),digits=2)
ME = z_star*SE
c((p_1-p_2)-ME, (p_1-p_2)+ME)
```
Since the difference in proportions is Coursera − US, we can say that **we are 95% confident that the proportion of Courserians who believe there should be a law banning gun possession is 36% to 56% higher than that of Americans**. Even if we change the order, we get the same result with the sign flipped: the US would be that much lower than Coursera, which is an equivalent statement.
Should we expect a significant difference when we do a hypothesis test? Of course! We know the Coursera proportion is 36% to 56% higher than the US one, so the difference would be significant (compared to the null hypothesis, whose null value is a 0% difference). Equivalently, 0% is not in the (36%, 56%) interval.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 02:31*
In this example we perform a hypothesis test on the success proportion (yes, bullied) between parents who are male and female (each child represented by one parent). The null hypothesis is that there is no difference, while the alternative states there is a difference. Remember that hypothesis testing is always about the true proportion. In a CI, you use the observed proportion (if you already knew the true proportion, you shouldn't calculate an interval to capture it, because you'd already know it exactly).
For a CI we use the observed proportions, but it's a little more difficult for HT: we don't know the exact value that proportion 1 and proportion 2 are both supposed to equal. So what do we do? We make one up. The idea is that if the proportions are equal, they should equal the proportion obtained by pooling both groups into one population (female and male are the two levels of one categorical variable). So what we get is
$$\hat{p}_{pool} = \frac{\text{successes}_1 + \text{successes}_2}{n_1 + n_2}$$
calculating p pool we get,
```
np_1 = 34
np_2 = 61
n_1 = 90
n_2 = 122
(np_1+np_2)/(n_1+n_2)
```
So wherever $\hat{p}$ appears in the hypothesis-testing calculation, we replace it with the pooled proportion.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 07:20*
Calculating p-pool, we know that p-pool is closer to the female than male.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 08:36*
So what's different from before (the mean)? For a mean, the mean itself never enters the standard error formula; only the standard deviation does. But for a proportion, the proportion goes directly into the standard error, which is why the null value changes the SE here.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 11:01*
After we calculate the pooled proportion, we check the conditions for hypothesis testing.
* **Independence**. Within groups, both the female and male samples are less than 10% of their populations. Between groups, the data are not paired (if they were, there would have to be an equal number of females and males). Therefore we can conclude that sampled males are independent of each other, sampled females as well, and we expect males and females to be independent of one another.
* **Sample size/skew**. We plug each sample size into the new pooled proportion and validate the success and failure conditions for both females and males. If all four values are at least 10, we can assume the sampling distribution of the difference in proportions is nearly normal.
Next, we proceed to calculate hypothesis testing.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/123) 12:25*
So you may notice something different: **the pooled proportion is not the value you use as the null value!** It only represents the common proportion under the assumption that females and males have equal proportions. Since the pooled proportion is the same for both, the difference is still zero, hence the null value is zero. Calculating everything,
```
#1 = Male, 2 = Female
n_1 = 90
p_1 = 0.38
n_2 = 122
p_2 = 0.5
p_pool = 0.45
null = 0
SE = sqrt((p_pool*(1-p_pool)/n_1) + (p_pool*(1-p_pool)/n_2))
pe = p_1 - p_2
pnorm(pe,mean=null,sd=SE, lower.tail=pe < null) * 2
SE
```
Based on the p-value and a 5% significance level, we fail to reject the null hypothesis: **the data do not provide convincing evidence of a difference between males and females with respect to the likelihood of reporting their kids being bullied**.
### Summary
Calculating the standard error when comparing two proportions differs between a confidence interval (or a hypothesis test with a null value other than zero) and a hypothesis test with a null value of zero. For the former, we combine the standard errors of both sample proportions. For the latter, since the null hypothesis assumes the proportions are equal, we use the pooled proportion: the combined successes divided by the combined sample sizes. The reason for this discrepancy is that, having assumed the proportions of the two levels are equal, we have to use a common proportion that fits both levels.
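The whole pooled two-proportion test can be sketched in Python using only the standard library (the normal CDF via `math.erf`). We use the exact counts from the bullying example (34 of 90 males, 61 of 122 females), so the numbers differ slightly from the rounded values in the R snippet:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, Phi(x), via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_proportion_test(successes1, n1, successes2, n2):
    """Two-sided z-test of H0: p1 - p2 = 0 using the pooled proportion."""
    p1, p2 = successes1 / n1, successes2 / n2
    p_pool = (successes1 + successes2) / (n1 + n2)
    # Pooled SE replaces each p-hat in the SE formula with p_pool
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * normal_cdf(-abs(z))
    return z, p_value

z, p_value = two_proportion_test(34, 90, 61, 122)
print(round(z, 2), round(p_value, 3))
```

The p-value comes out around 0.08, above the 5% significance level, matching the "fail to reject" conclusion above.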
> **REFERENCES**:
> Dr. Mine Çetinkaya-Rundel, [Cousera](https://class.coursera.org/statistics-003/lecture)
# Project 3: Implement SLAM
---
## Project Overview
In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!
SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem.
Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`.
> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world
You can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:
```
mu = matrix([[Px0],
[Py0],
[Px1],
[Py1],
[Lx0],
[Ly0],
[Lx1],
[Ly1]])
```
You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector.
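The interlaced layout can be captured with two small index helpers (hypothetical helpers for illustration; the project does not require them):

```python
def pose_index(i):
    """Row of Px_i in mu; Py_i is the row immediately after."""
    return 2 * i

def landmark_index(j, N):
    """Row of Lx_j in mu, which comes after all N poses; Ly_j is the row after."""
    return 2 * N + 2 * j

# With N = 2 poses and 2 landmarks, mu has 2*2 + 2*2 = 8 rows:
N = 2
print(pose_index(1))        # 2 -> rows 2,3 hold (Px1, Py1)
print(landmark_index(0, N)) # 4 -> rows 4,5 hold (Lx0, Ly0)
```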
## Generating an environment
In a real SLAM problem, you may be given a map that contains information about landmark locations, and in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.
---
## Create the world
Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds!
`data` holds the sensors measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.
#### Helper functions
You will be working with the `robot` class that may look familiar from the first notebook,
In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.
```
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteratation
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
```
### A note on `make_data`
The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:
1. Instantiating a robot (using the robot class)
2. Creating a grid world with landmarks in it
**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**
The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track the robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for later comparison.
In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:
```
measurement = data[i][0]
motion = data[i][1]
```
```
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
```
Try changing the value of `time_step`, you should see that the list of measurements varies based on what in the world the robot sees after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The motion of the robot always is a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam.
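The access pattern can be sketched with a hypothetical miniature `data` list of the same shape (the landmark indices and distances here are made up for illustration):

```python
# Each entry: (list of [landmark_index, dx_to_landmark, dy_to_landmark], [dx, dy] motion)
data = [
    ([[0, 3.2, -1.1], [3, -0.5, 4.0]], [1.0, 2.0]),  # time step 0: two landmarks sensed
    ([],                               [1.0, 2.0]),  # time step 1: nothing in range
]

for i in range(len(data)):
    measurement = data[i][0]
    motion = data[i][1]
    print('t =', i, '| landmarks seen:', len(measurement), '| motion:', motion)
```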
## Initialize Constraints
One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.
<img src='images/motion_constraint.png' width=50% height=50% />
In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.
<img src='images/constraints2D.png' width=50% height=50% />
You may also choose to create two of each omega and xi (one for x and one for y positions).
### TODO: Write a function that initializes omega and xi
Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.
*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*
```
def initialize_constraints(N, num_landmarks, world_size):
''' This function takes in a number of time steps N, number of landmarks, and a world_size,
and returns initialized constraint matrices, omega and xi.'''
## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
rows = 2*N + 2*num_landmarks
cols = 2*N + 2*num_landmarks
## TODO: Define the constraint matrix, Omega, with two initial "strength" values
## for the initial x, y location of our robot
omega = np.zeros((rows, cols))
omega[0][0] = 1
omega[1][1] = 1
## TODO: Define the constraint *vector*, xi
## you can assume that the robot starts out in the middle of the world with 100% confidence
xi = np.zeros((rows, 1))
xi[0][0] = world_size / 2
xi[1][0] = world_size / 2
return omega, xi
```
### Test as you go
It's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.
Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.
**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final `slam` function.
This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.
```
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)
```
---
## SLAM inputs
In addition to `data`, your slam function will also take in:
* N - The number of time steps that a robot will be moving and sensing
* num_landmarks - The number of landmarks in the world
* world_size - The size (w/h) of your world
* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`
* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`
#### A note on noise
Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.
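As a concrete illustration of this addition scheme, here is a minimal 1D sketch with two poses and a single motion constraint; all of the values used (`dx`, `motion_noise`, the anchored start pose) are made up for the example:

```python
import numpy as np

# Hypothetical 1D example: two poses x0, x1 and one motion constraint
# x1 - x0 = dx, weighted by 1/motion_noise. All values are illustrative.
motion_noise = 2.0
strength = 1.0 / motion_noise
dx = 5.0                        # robot moved +5 between x0 and x1

omega = np.zeros((2, 2))
xi = np.zeros((2, 1))

# Anchor the initial pose: x0 = 10 with full confidence
omega[0][0] += 1.0
xi[0][0] += 10.0

# Motion update: add/subtract 1/noise in omega, dx/noise in xi
omega[0][0] += strength
omega[0][1] += -strength
omega[1][0] += -strength
omega[1][1] += strength
xi[0][0] += -dx * strength
xi[1][0] += dx * strength

mu = np.dot(np.linalg.inv(omega), xi)
print(mu.ravel())               # -> approximately [10. 15.]
```

The anchoring row is what makes omega invertible here; without at least one absolute constraint, a motion constraint alone only pins down the *difference* between poses.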
### TODO: Implement Graph SLAM
Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation!
#### Updating with motion and measurements
With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$
**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**
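One practical note on that formula: the code below computes $\mu$ with an explicit matrix inverse, which is fine at this scale, but `np.linalg.solve` is usually the more numerically stable way to solve $\Omega \mu = \xi$. A tiny sketch with an assumed 2x2 system:

```python
import numpy as np

# Illustrative 2x2 system; the values are arbitrary, not from the project data.
omega = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])
xi = np.array([[4.0],
               [1.0]])

mu_via_inverse = np.dot(np.linalg.inv(omega), xi)  # as done in this notebook
mu_via_solve = np.linalg.solve(omega, xi)          # preferred for stability

print(np.allclose(mu_via_inverse, mu_via_solve))   # -> True
```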
```
## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):
    ## TODO: Use your initialization to create constraint matrices, omega and xi
omega, xi = initialize_constraints(N, num_landmarks, world_size)
## TODO: Iterate through each time step in the data
## get all the motion and measurement data as you iterate
for i in range(len(data)):
measurements, motion = data[i][0], data[i][1]
## TODO: update the constraint matrix/vector to account for all *measurements*
## this should be a series of additions that take into account the measurement noise
for measure in measurements:
idx_landmark = measure[0]
            x, y = measure[1], measure[2]
# Update according to x value
omega[2*i, 2*i] += 1 / measurement_noise
omega[2*i, 2*N + 2*idx_landmark] += -1 / measurement_noise
omega[2*N + 2*idx_landmark, 2*i] += -1 / measurement_noise
omega[2*N + 2*idx_landmark, 2*N + 2*idx_landmark] += 1 / measurement_noise
xi[2*i, 0] += -x / measurement_noise
xi[2*N + 2*idx_landmark, 0] += x / measurement_noise
# Update according to y value
omega[2*i + 1, 2*i + 1] += 1 / measurement_noise
omega[2*i + 1, 2*N + 2*idx_landmark + 1] += -1 / measurement_noise
omega[2*N + 2*idx_landmark + 1, 2*i + 1] += -1 / measurement_noise
omega[2*N + 2*idx_landmark + 1, 2*N + 2*idx_landmark + 1] += 1 / measurement_noise
xi[2*i + 1, 0] += -y / measurement_noise
xi[2*N + 2*idx_landmark + 1, 0] += y / measurement_noise
## TODO: update the constraint matrix/vector to account for all *motion* and motion noise
dx = motion[0]
dy = motion[1]
# Update according to dx value
omega[2*i, 2*i] += 1 / motion_noise
omega[2*i, 2*i + 2] += -1 / motion_noise
omega[2*i + 2, 2*i] += -1 / motion_noise
omega[2*i + 2, 2*i + 2] += 1 / motion_noise
xi[2*i, 0] += -dx / motion_noise
xi[2*i + 2, 0] += dx / motion_noise
# Update according to dy value
omega[2*i + 1, 2*i + 1] += 1 / motion_noise
omega[2*i + 1, 2*i + 3] += -1 / motion_noise
omega[2*i + 3, 2*i + 1] += -1 / motion_noise
omega[2*i + 3, 2*i + 3] += 1 / motion_noise
xi[2*i + 1, 0] += -dy / motion_noise
xi[2*i + 3, 0] += dy / motion_noise
## TODO: After iterating through all the data
## Compute the best estimate of poses and landmark positions
## using the formula, omega_inverse * Xi
mu = np.dot(np.linalg.inv(omega), xi)
return mu
```
## Helper functions
To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmark locations and returns those as their own, separate lists.
Then, we define a function that nicely prints out these lists; we will call both of these in the next step.
```
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
# create a list of poses
poses = []
for i in range(N):
poses.append((mu[2*i].item(), mu[2*i+1].item()))
# create a list of landmarks
landmarks = []
    for i in range(num_landmarks):  # note: num_landmarks is read from the global scope
landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))
# return completed lists
return poses, landmarks
def print_all(poses, landmarks):
print('\n')
print('Estimated Poses:')
for i in range(len(poses)):
print('['+', '.join('%.3f'%p for p in poses[i])+']')
print('\n')
print('Estimated Landmarks:')
for i in range(len(landmarks)):
print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
```
## Run SLAM
Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!
### What to Expect
The `data` that is generated is random, but you did specify the number of time steps, `N`, that the robot was expected to move, and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.
With these values in mind, you should expect to see a result that displays two lists:
1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.
2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length.
#### Landmark Locations
If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).
```
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
# print out the resulting landmarks and poses
if(mu is not None):
# get the lists of poses and landmarks
# and print them out
poses, landmarks = get_poses_landmarks(mu, N)
print_all(poses, landmarks)
```
## Visualize the constructed world
Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data!
**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**
```
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
# print out the last pose
print('Last pose: ', poses[-1])
# display the last position of the robot *and* the landmark positions
display_world(int(world_size), poses[-1], landmarks)
```
### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?
You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters.
**Answer**: The true final pose is x=74.31763, y=59.50400, and the final pose estimated by slam is x=74.65836045502598, y=62.241297930455204, so the true and estimated final poses are quite close. Prediction accuracy is governed by the motion and measurement noise parameters: both were chosen greater than 1, so the update confidence (`1.0/noise`) is low. Increasing the noise parameters further would push the estimated final position even farther from the true one. Increasing N, on the other hand, should reduce the uncertainty, since the robot gets more opportunities to observe reference points.
## Testing
To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close to or exactly** identical to the given results. If there are minor discrepancies it could be a matter of floating point accuracy or in the calculation of the inverse matrix.
### Submit your project
If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!
```
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, 
-18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 
10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Uncomment the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)
```
# Question repository
A list of open questions and possibly ambiguous stuff encountered throughout the material.
TODO: Tag exam-related ones appropriately, to differentiate them from (exclusively) curiosity-related ones.
**Note:** An alternative design would consist of adding a questions section to every notebook, tagging it appropriately using IPython metadata, and then using something like a Python/shell script to print all open questions in a centralized way. However...
## 2. Approximate retrieval
* Why perform first step of hashing if we only have a small number of features (e.g. 100)? If many features, why not just do a PCA first?
- might be because we want the shingle representation (0s and 1s) for the nice properties that Jaccard similarity offers us
## 3. Classification
* When transitioning from the first SVM formulation (with slack variables) to the second one, aren't we loosening any constraints by fixing $\xi$?
* (tentative) It seems we're not, since we're taking multiple cases into consideration and merging them together into a single formulation using max.
* Slide 04:18: Is the first (primal) SVM formulation a (ii)-type one (since it has a minimization and its constraint as separate equation), or is it not eligible for this categorization?
* Slide 06:15: How do we go from step 1 to 2? Isn't the $\lambda \| w \|_2^2$ term outside the sum?
- yes it is, but the sum has a convenient $\frac{1}{T}$ in front of it, so we're safe to add the regularization term into the sum.
* Why do some SVM OCP implementations *always* regularize, even when the model was not updated at that stage?
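For the record, the identity used in the $\frac{1}{T}$ answer above can be spelled out (writing $\ell_t(w)$ for the per-example loss, a notation not used in the slides):

\begin{equation}
\min_w \; \lambda \|w\|_2^2 + \frac{1}{T} \sum_{t=1}^{T} \ell_t(w) \;=\; \min_w \; \frac{1}{T} \sum_{t=1}^{T} \left( \lambda \|w\|_2^2 + \ell_t(w) \right),
\end{equation}

which holds because $\lambda \|w\|_2^2$ does not depend on $t$, so distributing it over the $T$ terms and weighting by $\frac{1}{T}$ leaves it unchanged.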
## 4. Non-linear classification
* How exactly is the Lagrangian dual reformulation step (SVMs) different from the first time we reformulated the SVM problem statement to get rid of the slack variables?
- it's different because we changed the objective! We no longer have $\min_w$ or $\min_{w, \xi}$, it's now a maximization over the Lagrange coefficients: $\max_\alpha$; it's not a *reformulation*, but an *equivalent problem*
## 5. Active learning
* When doing active learning based on uncertainty sampling, how exactly do we know when we can safely infer some labels?
## 6. Clustering
* Homework 5 solution, 2.2: Why is:
\begin{equation}
\operatorname{Var}_{\hat{x}_i \sim q}\left [ \frac{1}{m} \sum_{i=1}^{m} \frac{d(\hat{x}_i; \mu)}{q(\hat{x}_i)} \right ] = \frac{1}{m^2} \sum_{i=1}^m \operatorname{Var}_{x_i \sim q} \left[ \frac{d(x_i; \mu)}{q(x_i)} \right]
\end{equation}
* And why do we still have the $i$ subscript in the variance formulation? Can't we just write $x \sim q$?
* Have to discuss this with friends!
## 7. Bandits
## 8. Exam-specific (and/or for review session on Jan 20)
* Exam 2014 Problem 6 (Submodular functions)
- solved by Syd in The Notes. Yay!
# Model Checking
After running an MCMC simulation, `sample` returns a `MultiTrace` object containing the samples for all the stochastic and deterministic random variables. The final step in Bayesian computation is model checking, in order to ensure that inferences derived from your sample are valid. There are two components to model checking:
1. Convergence diagnostics
2. Goodness of fit
Convergence diagnostics are intended to detect lack of convergence in the Markov chain Monte Carlo sample; it is used to ensure that you have not halted your sampling too early. However, a converged model is not guaranteed to be a good model. The second component of model checking, goodness of fit, is used to check the internal validity of the model, by comparing predictions from the model to the data used to fit the model.
## Convergence Diagnostics
Valid inferences from sequences of MCMC samples are based on the
assumption that the samples are derived from the true posterior
distribution of interest. Theory guarantees this condition as the number
of iterations approaches infinity. It is important, therefore, to
determine the **minimum number of samples** required to ensure a reasonable
approximation to the target posterior density. Unfortunately, no
universal threshold exists across all problems, so convergence must be
assessed independently each time MCMC estimation is performed. The
procedures for verifying convergence are collectively known as
*convergence diagnostics*.
One approach to analyzing convergence is **analytical**, whereby the
variance of the sample at different sections of the chain are compared
to that of the limiting distribution. These methods use distance metrics
to analyze convergence, or place theoretical bounds on the sample
variance, and though they are promising, they are generally difficult to
use and are not prominent in the MCMC literature. More common is a
**statistical** approach to assessing convergence. With this approach,
rather than considering the properties of the theoretical target
distribution, only the statistical properties of the observed chain are
analyzed. Reliance on the sample alone restricts such convergence
criteria to **heuristics**. As a result, convergence cannot be guaranteed.
Although evidence for lack of convergence using statistical convergence
diagnostics will correctly imply lack of convergence in the chain, the
absence of such evidence will not *guarantee* convergence in the chain.
Nevertheless, negative results for one or more criteria may provide some
measure of assurance to users that their sample will provide valid
inferences.
For most simple models, convergence will occur quickly, sometimes within
the first several hundred iterations, after which all remaining
samples of the chain may be used to calculate posterior quantities. For
more complex models, convergence requires a significantly longer burn-in
period; sometimes orders of magnitude more samples are needed.
Frequently, lack of convergence will be caused by **poor mixing**.
Recall that *mixing* refers to the degree to which the Markov
chain explores the support of the posterior distribution. Poor mixing
may stem from inappropriate proposals (if one is using the
Metropolis-Hastings sampler) or from attempting to estimate models with
highly correlated variables.
```
%matplotlib inline
import numpy as np
import seaborn as sns; sns.set_context('notebook')
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
from pymc3 import Normal, Binomial, sample, Model
from pymc3.math import invlogit
# Samples for each dose level
n = 5 * np.ones(4, dtype=int)
# Log-dose
dose = np.array([-.86, -.3, -.05, .73])
deaths = np.array([0, 1, 3, 5])
with Model() as bioassay_model:
# Logit-linear model parameters
alpha = Normal('alpha', 0, sd=100)
beta = Normal('beta', 0, sd=100)
# Calculate probabilities of death
theta = invlogit(alpha + beta * dose)
# Data likelihood
obs_deaths = Binomial('obs_deaths', n=n, p=theta, observed=deaths)
with bioassay_model:
bioassay_trace = sample(1000)
from pymc3 import traceplot
traceplot(bioassay_trace, varnames=['alpha'])
```
### Informal Methods
The most straightforward approach for assessing convergence is based on
simply **plotting and inspecting traces and histograms** of the observed
MCMC sample. If the trace of values for each of the stochastics exhibits
asymptotic behavior over the last $m$ iterations, this may be
satisfactory evidence for convergence.
```
traceplot(bioassay_trace, varnames=['beta'])
```
A similar approach involves
plotting a histogram for every set of $k$ iterations (perhaps 50-100)
beyond some burn in threshold $n$; if the histograms are not visibly
different among the sample intervals, this may be considered some evidence for
convergence. Note that such diagnostics should be carried out for each
stochastic estimated by the MCMC algorithm, because convergent behavior
by one variable does not imply evidence for convergence for other
variables in the analysis.
```
import matplotlib.pyplot as plt
beta_trace = bioassay_trace['beta']
fig, axes = plt.subplots(2, 5, figsize=(14,6))
axes = axes.ravel()
for i in range(10):
axes[i].hist(beta_trace[100*i:100*(i+1)])
plt.tight_layout()
```
An extension of this approach can be taken
when multiple parallel chains are run, rather than just a single, long
chain. In this case, the final values of $c$ chains run for $n$
iterations are plotted in a histogram; just as above, this is repeated
every $k$ iterations thereafter, and the histograms of the endpoints are
plotted again and compared to the previous histogram. This is repeated
until consecutive histograms are indistinguishable.
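A minimal sketch of this endpoint-histogram procedure, using synthetic AR(1) chains as stand-ins for real MCMC output (all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
c, n, k = 8, 1000, 200   # chains, iterations per chain, checkpoint spacing

# Synthetic stand-in chains: AR(1) processes started at dispersed values
# and drifting toward a common equilibrium around 0.
chains = np.zeros((c, n))
chains[:, 0] = rng.normal(10.0, 1.0, size=c)
for t in range(1, n):
    chains[:, t] = 0.95 * chains[:, t - 1] + rng.normal(0.0, 1.0, size=c)

# Histogram the c endpoint values at every k-th iteration; once consecutive
# histograms stop changing, that is (weak) evidence of convergence.
for t in range(k, n + 1, k):
    counts, _ = np.histogram(chains[:, t - 1], bins=5)
    print(f"iteration {t:4d}: endpoint histogram counts = {counts}")
```

With only 8 chains the histograms are crude; in practice you would want more parallel chains (or longer intervals) before trusting the comparison.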
Another *ad hoc* method for detecting lack of convergence is to examine
the traces of several MCMC chains initialized with different starting
values. Overlaying these traces on the same set of axes should (if
convergence has occurred) show each chain tending toward the same
equilibrium value, with approximately the same variance. Recall that the
tendency for some Markov chains to converge to the true (unknown) value
from diverse initial values is called *ergodicity*. This property is
guaranteed by the reversible chains constructed using MCMC, and should
be observable using this technique. Again, however, this approach is
only a heuristic method, and cannot always detect lack of convergence,
even though chains may appear ergodic.
```
with bioassay_model:
bioassay_trace = sample(1000, chains=2, start=[{'alpha':0.5}, {'alpha':5}])
bioassay_trace.get_values('alpha', chains=0)[0]
plt.plot(bioassay_trace.get_values('alpha', chains=0)[:200], 'r--')
plt.plot(bioassay_trace.get_values('alpha', chains=1)[:200], 'k--')
```
A principal reason that evidence from informal techniques cannot
guarantee convergence is a phenomenon called ***metastability***. Chains may
appear to have converged to the true equilibrium value, displaying
excellent qualities by any of the methods described above. However,
after some period of stability around this value, the chain may suddenly
move to another region of the parameter space. This period
of metastability can sometimes be very long, and therefore escape
detection by these convergence diagnostics. Unfortunately, there is no
statistical technique available for detecting metastability.
### Formal Methods
Along with the *ad hoc* techniques described above, a number of more
formal methods exist which are prevalent in the literature. These are
considered more formal because they are based on existing statistical
methods, such as time series analysis.
PyMC currently includes three formal convergence diagnostic methods. The
first, proposed by [Geweke (1992)](http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011446), is a time-series approach that
compares the mean and variance of segments from the beginning and end of
a single chain.
$$z = \frac{\bar{\theta}_a - \bar{\theta}_b}{\sqrt{S_a(0) + S_b(0)}}$$
where $a$ is the early interval and $b$ the late interval, and $S_i(0)$ is the spectral density estimate at zero frequency for chain segment $i$. If the
z-scores (theoretically distributed as standard normal variates) of
these two segments are similar, it can provide evidence for convergence.
PyMC calculates z-scores of the difference between various initial
segments along the chain, and the last 50% of the remaining chain. If
the chain has converged, the majority of points should fall within 2
standard deviations of zero.
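That z-score can be sketched directly in NumPy; note that this toy version replaces the spectral density estimates $S_a(0)$ and $S_b(0)$ with plain variance-of-the-mean terms, which is only reasonable for nearly uncorrelated draws (the synthetic trace below is iid by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, size=5000)  # stand-in for a converged chain

a = trace[:500]      # early segment (first 10%)
b = trace[-2500:]    # late segment (last 50%)

# Simplified Geweke-style z-score: iid approximation of S_a(0), S_b(0)
z = (a.mean() - b.mean()) / np.sqrt(a.var() / a.size + b.var() / b.size)
print(z)             # should fall well within +/- 2 for a well-mixed chain
```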
In PyMC, diagnostic z-scores can be obtained by calling the `geweke` function. It
accepts either (1) a single trace, (2) a Node or Stochastic object, or
(3) an entire Model object:
```
from pymc3 import geweke
with bioassay_model:
tr = sample(2000, tune=1000)
z = geweke(tr, intervals=15)
plt.scatter(*z[0]['alpha'].T)
plt.hlines([-1,1], 0, 1000, linestyles='dotted')
plt.xlim(0, 1000)
```
The arguments expected are the following:
- `x` : The trace of a variable.
- `first` : The fraction of series at the beginning of the trace.
- `last` : The fraction of series at the end to be compared with the section at the beginning.
- `intervals` : The number of segments.
Plotting the output displays the scores in series, making it easy to
see departures from the standard normal assumption.
A second convergence diagnostic provided by PyMC is the Gelman-Rubin
statistic [Gelman and Rubin (1992)](http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011136). This diagnostic uses multiple chains to
check for lack of convergence, and is based on the notion that if
multiple chains have converged, by definition they should appear very
similar to one another; if not, one or more of the chains has failed to
converge.
The Gelman-Rubin diagnostic uses an analysis of variance approach to
assessing convergence. That is, it calculates both the between-chain
variance (B) and within-chain variance (W), and assesses whether they
are different enough to worry about convergence. Assuming $m$ chains,
each of length $n$, quantities are calculated by:
$$\begin{align}B &= \frac{n}{m-1} \sum_{j=1}^m (\bar{\theta}_{.j} - \bar{\theta}_{..})^2 \\
W &= \frac{1}{m} \sum_{j=1}^m \left[ \frac{1}{n-1} \sum_{i=1}^n (\theta_{ij} - \bar{\theta}_{.j})^2 \right]
\end{align}$$
for each scalar estimand $\theta$. Using these values, an estimate of
the marginal posterior variance of $\theta$ can be calculated:
$$\hat{\text{Var}}(\theta | y) = \frac{n-1}{n} W + \frac{1}{n} B$$
Assuming $\theta$ was initialized to arbitrary starting points in each
chain, this quantity will overestimate the true marginal posterior
variance. At the same time, $W$ will tend to underestimate the
within-chain variance early in the sampling run. However, in the limit
as $n \rightarrow
\infty$, both quantities will converge to the true variance of $\theta$.
In light of this, the Gelman-Rubin statistic monitors convergence using
the ratio:
$$\hat{R} = \sqrt{\frac{\hat{\text{Var}}(\theta | y)}{W}}$$
This is called the potential scale reduction, since it is an estimate of
the potential reduction in the scale of $\theta$ as the number of
simulations tends to infinity. In practice, we look for values of
$\hat{R}$ close to one (say, less than 1.1) to be confident that a
particular estimand has converged. In PyMC, the function
`gelman_rubin` will calculate $\hat{R}$ for each stochastic node in
the passed model:
```
from pymc3 import gelman_rubin
gelman_rubin(bioassay_trace)
```
For the best results, each chain should be initialized to highly
dispersed starting values for each stochastic node.
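The formulas above translate almost directly into NumPy. The following is a sketch for illustration, not PyMC's implementation:

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Potential scale reduction R-hat for an (m, n) array of m chains."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    grand_mean = chain_means.mean()
    # between-chain variance B and within-chain variance W, as defined above
    B = n / (m - 1) * np.sum((chain_means - grand_mean) ** 2)
    W = np.mean(chains.var(axis=1, ddof=1))
    # estimated marginal posterior variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(1)
good = rng.normal(0, 1, size=(4, 1000))    # four chains targeting the same posterior
bad = good + np.arange(4)[:, None] * 5     # chains stuck in different regions
print(gelman_rubin_rhat(good), gelman_rubin_rhat(bad))
```

The well-mixed chains give $\hat{R} \approx 1$, while the separated chains give a value far above the 1.1 rule of thumb.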
By default, when calling the `forestplot` function using nodes with
multiple chains, the $\hat{R}$ values will be plotted alongside the
posterior intervals.
```
from pymc3 import forestplot
forestplot(bioassay_trace)
```
## Autocorrelation
In general, samples drawn from MCMC algorithms will be autocorrelated. This is not a big deal, other than the fact that autocorrelated chains may require longer sampling in order to adequately characterize posterior quantities of interest. The calculation of autocorrelation is performed for each lag $i=1,2,\ldots,k$ (the correlation at lag 0 is, of course, 1) by:
$$\hat{\rho}_i = 1 - \frac{V_i}{2\hat{\text{Var}}(\theta | y)}$$
where $\hat{\text{Var}}(\theta | y)$ is the same estimated variance as calculated for the Gelman-Rubin statistic, and $V_i$ is the variogram at lag $i$ for $\theta$:
$$\text{V}_i = \frac{1}{m(n-i)}\sum_{j=1}^m \sum_{k=i+1}^n (\theta_{jk} - \theta_{j(k-i)})^2$$
This autocorrelation can be visualized using the `autocorrplot` function in PyMC3:
```
from pymc3 import autocorrplot
autocorrplot(tr);
```
### Effective sample size
The effective sample size is estimated using the partial sum:
$$\hat{n}_{eff} = \frac{mn}{1 + 2\sum_{i=1}^T \hat{\rho}_i}$$
where $T$ is the first odd integer such that $\hat{\rho}_{T+1} + \hat{\rho}_{T+2}$ is negative.
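Combined with the variogram-based autocorrelation estimate above, the estimator can be sketched as follows (an illustration, not PyMC's implementation; the stopping rule here truncates at the first negative pair of autocorrelations):

```python
import numpy as np

def effective_sample_size(chains):
    """Variogram-based effective sample size for an (m, n) array of chains."""
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()          # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)        # between-chain variance
    var_hat = (n - 1) / n * W + B / n              # marginal posterior variance

    def rho(lag):
        # autocorrelation at this lag, via the variogram
        V = np.mean((chains[:, lag:] - chains[:, :-lag]) ** 2)
        return 1.0 - V / (2 * var_hat)

    rho_sum, i = 0.0, 1
    while i + 1 < n:
        pair = rho(i) + rho(i + 1)
        if pair < 0:                               # truncate at the first negative pair
            break
        rho_sum += pair
        i += 2
    return m * n / (1 + 2 * rho_sum)

rng = np.random.default_rng(0)
iid = rng.normal(size=(2, 2000))                   # nearly independent draws
ar = np.zeros((2, 2000))                           # strongly autocorrelated chains
eps = rng.normal(size=(2, 2000))
for t in range(1, 2000):
    ar[:, t] = 0.9 * ar[:, t - 1] + eps[:, t]

ess_iid = effective_sample_size(iid)
ess_ar = effective_sample_size(ar)
print(ess_iid, ess_ar)
```

For the nearly independent chains the estimate is close to the total sample count $mn$, while the AR(1) chains yield a much smaller effective sample size.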
The issue here is related to the fact that we are **estimating** the effective sample size from the fit output. Values of $n_{eff} / n_{iter} < 0.001$ indicate a biased estimator, resulting in an overestimate of the true effective sample size.
```
from pymc3 import effective_n
effective_n(bioassay_trace)
```
Both low $n_{eff}$ and high $\hat{R}$ indicate **poor mixing**.
It is tempting to want to **thin** the chain to eliminate the autocorrelation (*e.g.* taking every 20th sample from the traces above), but this is a waste of time. Since thinning deliberately throws out the majority of the samples, no efficiency is gained; you ultimately require more samples to achieve a particular desired sample size.
## Diagnostics for Gradient-based Samplers
Hamiltonian Monte Carlo is a powerful and efficient MCMC sampler when set up appropriately. However, this typically requires careful tuning of the sampler parameters, such as tree depth, leapfrog step size and target acceptance rate. Fortunately, the NUTS algorithm takes care of some of this for us. Nevertheless, tuning must be carefully monitored for failures that frequently arise. This is particularly the case when fitting challenging models, such as those with high curvature or heavy tails.
Fortunately, however, gradient-based sampling provides the ability to diagnose these pathologies. PyMC makes several diagnostic statistics available as attributes of the `MultiTrace` object returned by the `sample` function.
```
bioassay_trace.stat_names
```
- `mean_tree_accept`: The mean acceptance probability for the tree that generated this sample. The mean of these values across all samples but the burn-in should be approximately `target_accept` (the default for this is 0.8).
- `diverging`: Whether the trajectory for this sample diverged. If there are many diverging samples, this usually indicates that a region of the posterior has high curvature. Reparametrization can often help, but you can also try to increase `target_accept` to something like 0.9 or 0.95.
- `energy`: The energy at the point in phase-space where the sample was accepted. This can be used to identify posteriors with problematically long tails. See below for an example.
- `energy_error`: The difference in energy between the start and the end of the trajectory. For a perfect integrator this would always be zero.
- `max_energy_error`: The maximum difference in energy along the whole trajectory.
- `depth`: The depth of the tree that was used to generate this sample
- `tree_size`: The number of leaves of the sampling tree, when the sample was accepted. This is usually a bit less than $2 ^ \text{depth}$. If the tree size is large, the sampler is using a lot of leapfrog steps to find the next sample. This can for example happen if there are strong correlations in the posterior, if the posterior has long tails, if there are regions of high curvature ("funnels"), or if the variance estimates in the mass matrix are inaccurate. Reparametrization of the model or estimating the posterior variances from past samples might help.
- `tune`: This is `True`, if step size adaptation was turned on when this sample was generated.
- `step_size`: The step size used for this sample.
- `step_size_bar`: The current best known step-size. After the tuning samples, the step size is set to this value. This should converge during tuning.
If the name of the statistic does not clash with the name of one of the variables, we can use indexing to get the values. The values for the chains will be concatenated.
We can see that the step sizes converged after the 2000 tuning samples for both chains to about the same value. The first 3000 values are from chain 1, the second 3000 from chain 2.
```
with bioassay_model:
trace = sample(1000, tune=2000, init=None, chains=2, discard_tuned_samples=False)
plt.plot(trace['step_size_bar'])
```
The `get_sampler_stats` method provides more control over which values should be returned, and it also works if the name of the statistic is the same as the name of one of the variables. We can use the `chains` option, to control values from which chain should be returned, or we can set `combine=False` to get the values for the individual chains:
The `NUTS` step method has a maximum tree depth parameter so that infinite loops (which can occur for non-identified models) are avoided. When the maximum tree depth is reached (the default value is 10), the trajectory is stopped. However, complex (but identifiable) models can saturate this threshold, which reduces sampling efficiency.
The `MultiTrace` stores the tree depth for each iteration, so inspecting these traces can reveal saturation if it is occurring.
```
sizes1, sizes2 = trace.get_sampler_stats('depth', combine=False)
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, sharey=True)
ax1.plot(sizes1)
ax2.plot(sizes2)
```
We can also check the acceptance for the trees that generated this sample. The mean of these values across all samples (except the tuning stage) is expected to be the same as `target_accept`, which is 0.8 by default.
```
accept = trace.get_sampler_stats('mean_tree_accept', burn=1000)
sns.distplot(accept, kde=False)
```
### Divergent transitions
Recall that simulating Hamiltonian dynamics via a symplectic integrator uses a discrete approximation of a continuous function. This is only a reasonable approximation when the step sizes of the integrator are suitably small. A divergent transition may indicate that the approximation is poor.
If there are too many divergent transitions, then samples are not being drawn from the full posterior, and inferences based on the resulting sample will be biased
If there are diverging transitions, PyMC3 will issue warnings indicating how many were discovered. We can obtain the indices of them from the trace.
```
trace['diverging'].nonzero()
```
### Bayesian Fraction of Missing Information
The Bayesian fraction of missing information (BFMI) is a measure of how hard it is to
sample level sets of the posterior at each iteration. Specifically, it quantifies how well momentum resampling matches the marginal energy distribution. A small value indicates that the adaptation phase of the sampler was unsuccessful, and invoking the central limit theorem may not be valid. It indicates whether the sampler is able to adequately explore the posterior distribution.
Though there is not an established rule of thumb for an adequate threshold, values close to one are optimal. Reparameterizing the model is sometimes helpful for improving this statistic.
```
from pymc3 import bfmi
bfmi(trace)
```
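A common estimator of the BFMI is the mean squared change in energy between successive iterations divided by the variance of the marginal energy. Here is a hedged NumPy sketch using synthetic energy traces:

```python
import numpy as np

def bfmi_estimate(energy):
    """E-BFMI sketch: mean squared energy change over marginal energy variance."""
    return np.square(np.diff(energy)).mean() / np.var(energy)

rng = np.random.default_rng(3)
# well-behaved case: the energy is refreshed substantially at each iteration
good_energy = rng.normal(0, 1, 2000)
# pathological case: the energy changes only slightly between iterations,
# so momentum resampling fails to match the marginal energy distribution
sticky_energy = np.cumsum(rng.normal(0, 0.05, 2000))
print(bfmi_estimate(good_energy), bfmi_estimate(sticky_energy))
```

The slowly drifting energy trace yields a BFMI near zero, the signature of the pathology described above.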
Another way of diagnosing this phenomenon is by comparing the overall distribution of
energy levels with the *change* of energy between successive samples. Ideally, they should be very similar.
If the distribution of energy transitions is narrow relative to the marginal energy distribution, this is a sign of inefficient sampling, as many transitions are required to completely explore the posterior. On the other hand, if the energy transition distribution is similar to that of the marginal energy, this is evidence of efficient sampling, resulting in near-independent samples from the posterior.
```
energy = trace['energy']
energy_diff = np.diff(energy)
sns.distplot(energy - energy.mean(), label='energy')
sns.distplot(energy_diff, label='energy diff')
plt.legend()
```
If the overall distribution of energy levels has longer tails, the efficiency of the sampler will deteriorate quickly.
## Goodness of Fit
Checking for model convergence is only the first step in the evaluation
of MCMC model outputs. It is possible for an entirely unsuitable model
to converge, so additional steps are needed to ensure that the estimated
model adequately fits the data. One intuitive way of evaluating model
fit is to compare model predictions with the observations used to fit
the model. In other words, the fitted model can be used to simulate
data, and the distribution of the simulated data should resemble the
distribution of the actual data.
Fortunately, simulating data from the model is a natural component of
the Bayesian modelling framework. Recall, from the discussion on
imputation of missing data, the posterior predictive distribution:
$$p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta$$
Here, $\tilde{y}$ represents some hypothetical new data that would be
expected, taking into account the posterior uncertainty in the model
parameters.
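In practice this integral is approximated by Monte Carlo: draw $\theta$ from the posterior samples, then draw $\tilde{y}$ from the likelihood evaluated at that $\theta$. Below is a hedged NumPy sketch for a binomial likelihood with a logit-linear dose effect; the dose levels and the "posterior draws" are made-up stand-ins, not values taken from an actual trace:

```python
import numpy as np

rng = np.random.default_rng(7)
doses = np.array([-0.86, -0.30, -0.05, 0.73])   # assumed log-dose levels
n_trials = np.array([5, 5, 5, 5])               # animals per dose group

# stand-ins for posterior draws of the intercept and slope;
# in practice these would come from the MCMC trace
alpha_draws = rng.normal(0.8, 0.5, 500)
beta_draws = rng.normal(7.7, 2.0, 500)

# one simulated dataset per posterior draw: theta -> p -> y_tilde
logits = alpha_draws[:, None] + beta_draws[:, None] * doses
p = 1.0 / (1.0 + np.exp(-logits))
deaths_sim = rng.binomial(n_trials, p)          # shape (500, 4)
print(deaths_sim.shape)
```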
Sampling from the posterior predictive distribution is easy
in PyMC. The `sample_ppc` function draws posterior predictive checks from all of the data likelihoods. Consider the `gelman_bioassay` example,
where deaths are modeled as a binomial random variable for which
the probability of death is a logit-linear function of the dose of a
particular drug.
The posterior predictive distribution of deaths uses the same functional
form as the data likelihood, in this case a binomial stochastic. Here is
the corresponding sample from the posterior predictive distribution (we typically need very few samples relative to the MCMC sample):
```
from pymc3 import sample_ppc
with bioassay_model:
deaths_sim = sample_ppc(bioassay_trace, samples=500)
```
The degree to which simulated data correspond to observations can be evaluated in at least two ways. First, these quantities can simply be compared visually. This allows for a qualitative comparison of model-based replicates and observations. If there is poor fit, the true value of the data may appear in the tails of the histogram of replicated data, while a good fit will tend to show the true data in high-probability regions of the posterior predictive distribution. The Matplot package in PyMC provides an easy way of producing such plots, via the `gof_plot` function.
```
fig, axes = plt.subplots(1, 4, figsize=(14, 4))
for obs, sim, ax in zip(deaths, deaths_sim['obs_deaths'].T, axes):
ax.hist(sim, bins=range(7))
ax.plot(obs+0.5, 1, 'ro')
```
## Exercise: Meta-analysis of beta blocker effectiveness
Carlin (1992) considers a Bayesian approach to meta-analysis, and includes the following examples of 22 trials of beta-blockers to prevent mortality after myocardial infarction.
In a random effects meta-analysis we assume the true effect (on a log-odds scale) $d_i$ in a trial $i$
is drawn from some population distribution. Let $r^C_i$ denote number of events in the control group in trial $i$,
and $r^T_i$ denote events under active treatment in trial $i$. Our model is:
$$\begin{aligned}
r^C_i &\sim \text{Binomial}\left(p^C_i, n^C_i\right) \\
r^T_i &\sim \text{Binomial}\left(p^T_i, n^T_i\right) \\
\text{logit}\left(p^C_i\right) &= \mu_i \\
\text{logit}\left(p^T_i\right) &= \mu_i + \delta_i \\
\delta_i &\sim \text{Normal}(d, t) \\
\mu_i &\sim \text{Normal}(m, s)
\end{aligned}$$
We want to make inferences about the population effect $d$, and the predictive distribution for the effect $\delta_{\text{new}}$ in a new trial. Build a model to estimate these quantities in PyMC, and (1) use convergence diagnostics to check for convergence and (2) use posterior predictive checks to assess goodness-of-fit.
Here are the data:
```
r_t_obs = [3, 7, 5, 102, 28, 4, 98, 60, 25, 138, 64, 45, 9, 57,
25, 33, 28, 8, 6, 32, 27, 22]
n_t_obs = [38, 114, 69, 1533, 355, 59, 945, 632, 278,1916, 873, 263,
291, 858, 154, 207, 251, 151, 174, 209, 391, 680]
r_c_obs = [3, 14, 11, 127, 27, 6, 152, 48, 37, 188, 52, 47, 16, 45,
31, 38, 12, 6, 3, 40, 43, 39]
n_c_obs = [39, 116, 93, 1520, 365, 52, 939, 471, 282, 1921, 583, 266,
293, 883, 147, 213, 122, 154, 134, 218, 364, 674]
N = len(n_c_obs)
# Write your answer here
```
## References
Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science. A Review Journal of the Institute of Mathematical Statistics, 457–472.
Geweke, J., Berger, J. O., & Dawid, A. P. (1992). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. In Bayesian Statistics 4.
Brooks, S. P., Catchpole, E. A., & Morgan, B. J. T. (2000). Bayesian Animal Survival Estimation. Statistical Science. A Review Journal of the Institute of Mathematical Statistics, 15(4), 357–376. doi:10.1214/ss/1177010123
Gelman, A., Meng, X., & Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies with discussion. Statistica Sinica, 6, 733–807.
Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv.org.
# Chapter 16. Logistic Regression Assignment
```
import matplotlib.pyplot as plt
import os
from typing import List, Tuple
import csv
from scratch.linear_algebra import Vector, get_column
```
## 1. Dataset
### 1.1 Downloading the Dataset
```
import requests
data = requests.get("https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data")
dataset_path = os.path.join('data', 'wdbc.data')
with open(dataset_path, "w") as f:
f.write(data.text)
```
### 1.2 Parsing the Data
```
def parse_cancer_row(row: List[str]) -> Tuple[Vector, int]:
measurements = [float(value) for value in row[2:]]
label = row[1]
label = 1 if label == 'M' else 0
return measurements, label
```
### 1.3 Reading the Data
Wisconsin Breast Cancer Diagnostic dataset
https://www.kaggle.com/uciml/breast-cancer-wisconsin-data
```
X_cancer : List[Vector] = []
y_cancer : List[int] = []
with open(dataset_path) as f:
reader = csv.reader(f)
for row in reader:
x, y = parse_cancer_row(row)
X_cancer.append(x)
y_cancer.append(y)
print(X_cancer[0])
print(y_cancer[0])
```
#### 1.4 Data Column Names
```
columns = [
"radius_mean", "texture_mean", "perimeter_mean", "area_mean", "smoothness_mean",
"compactness_mean", "concavity_mean", "points_mean", "symmetry_mean", "dimension_mean",
"radius_se", "texture_se", "perimeter_se", "area_se", "smoothness_se",
"compactness_se", "concavity_se", "points_se", "symmetry_se", "dimension_se",
"radius_worst", "texture_worst", "perimeter_worst", "area_worst", "smoothness_worst",
"compactness_worst", "concavity_worst", "points_worst", "symmetry_worst", "dimension_worst",
]
```
## 2. Data Exploration
### 2.1 Checking the Class Proportions
```
from collections import defaultdict
label_type = defaultdict(int)
for y in y_cancer:
label = 'M' if y == 1 else 'B'
label_type[label] += 1
plt.figure(figsize=(8,4))
plt.subplot(1, 2, 1)
plt.bar(label_type.keys(),
label_type.values(),
0.5,
facecolor="#2E495E",
edgecolor=(0, 0, 0)) # Black edges for each bar
plt.xlabel("Diagnosis")
plt.ylabel("# of diagnosis")
plt.title("Cancer diagnosis")
plt.subplot(1, 2, 2)
pies = plt.pie(label_type.values(),
labels=label_type.keys(),
startangle=90)
plt.legend()
plt.show()
```
### 2.2 Histograms of Each Feature
```
def histogram(ax, col : int):
n, bins, patches = ax.hist(get_column(X_cancer, col),
8,
facecolor="#2E495E",
edgecolor=(0, 0, 0))
ax.set_title(columns[col], fontsize=8)
from matplotlib import pyplot as plt
num_rows = 6
num_cols = 5
fig, ax = plt.subplots(num_rows, num_cols, figsize=(num_cols*4, num_rows*4))
for row in range(num_rows):
for col in range(num_cols):
histogram(ax[row][col], num_cols * row + col)
plt.show()
```
### 2.3 Scatter Plots of Feature Pairs
```
from typing import Dict
points_by_diagnosis: Dict[str, List[Vector]] = defaultdict(list)
for i, x in enumerate(X_cancer):
y = y_cancer[i]
label = 'M' if y == 1 else 'B'
points_by_diagnosis[label].append(x)
start = 0
end = start + 10
pairs = [(i, j) for i in range(start, end) for j in range(i+1, end) if i < j]
print(pairs)
marks = ['+', '.']
from matplotlib import pyplot as plt
num_rows = 9
num_cols = 5
fig, ax = plt.subplots(num_rows, num_cols, figsize=(num_cols*3, num_rows*3))
for row in range(num_rows):
for col in range(num_cols):
i, j = pairs[num_cols * row + col]
ax[row][col].set_title(f"{columns[i]} vs {columns[j]}", fontsize=8)
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
for mark, (diagnosis, points) in zip(marks, points_by_diagnosis.items()):
xs = [point[i] for point in points]
ys = [point[j] for point in points]
ax[row][col].scatter(xs, ys, marker=mark, label=diagnosis)
ax[-1][-1].legend(loc='lower right', prop={'size': 6})
plt.show()
```
## 3. Data Preprocessing
### 3.1 Splitting the Dataset
#### Add a constant input of 1 to the input data for the intercept term
```
X_cancer = [[1.0] + row for row in X_cancer]
import random
from scratch.machine_learning import train_test_split
random.seed(12)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, 0.25)
print('train dataset :', len(X_train))
print('test dataset :', len(X_test))
```
### 3.2 Data Standardization (Q1)
Write a `normalization()` function that standardizes the test data using the mean and standard deviation of the training data.
```
from scratch.working_with_data import scale, rescale
def normalization(data: List[Vector],
means : Vector = None,
stdevs : Vector = None) -> List[Vector]:
# your code
dim = len(data[0])
    if means is None or stdevs is None:
means, stdevs = scale(data)
# Make a copy of each vector
rescaled = [v[:] for v in data]
for v in rescaled:
for i in range(dim):
if stdevs[i] > 0:
v[i] = (v[i] - means[i]) / stdevs[i]
return rescaled, means, stdevs
X_train_normed, X_train_means, X_train_stdevs = normalization(X_train)
X_test_normed, _, _ = normalization(X_test, X_train_means, X_train_stdevs)
```
## 4. Logistic Regression
### 4.1 The Logistic Function (Q2)
Implement the logistic function and its derivative.
```
import math
# your code
# Logistic function
def logistic(x: float) -> float:
return 1.0/(1 + math.exp(-x))
# Derivative of the logistic function
def logistic_prime(x: float) -> float:
y = logistic(x)
return y * (1 - y)
```
### 4.2 Loss Function (Q3)
Implement the loss function, defined as the negative log-likelihood (NLL) of the Bernoulli distribution.
```
from scratch.linear_algebra import Vector, dot
from typing import List
# your code
# Negative log-likelihood of a single example
def _negative_log_likelihood(x: Vector, y: float, beta: Vector) -> float:
if y == 1:
return -math.log(logistic(dot(x, beta)))
else:
return -math.log(1 - logistic(dot(x, beta)))
# Sum the NLL over the whole dataset
def negative_log_likelihood(xs: List[Vector],
ys: List[float],
beta: Vector) -> float:
return sum(_negative_log_likelihood(x, y, beta)
for x, y in zip(xs, ys))
```
### 4.3 Gradient of the Loss Function (Q4)
Implement the gradient of the NLL.
```
from scratch.linear_algebra import vector_sum
# your code
def _negative_log_partial_j(x: Vector, y: float, beta: Vector, j: int) -> float:
return -(y - logistic(dot(x, beta))) * x[j]
def _negative_log_gradient(x: Vector, y: float, beta: Vector) -> Vector:
return [_negative_log_partial_j(x, y, beta, j) for j in range(len(beta))]
def negative_log_gradient(xs: List[Vector],
ys: List[float],
beta: Vector) -> Vector:
return vector_sum([_negative_log_gradient(x, y, beta)
for x, y in zip(xs, ys)])
```
### 4.4 Model Training (Q5)
Implement training of the logistic regression model with gradient descent.
```
import random
import tqdm
import IPython.display as display
from scratch.linear_algebra import vector_mean
from scratch.gradient_descent import gradient_step
def minibatches(xs: List[Vector],
                ys: List[float],
                batch_size=20):
    # Yield successive (batch_xs, batch_ys) pairs; a plain `return`
    # inside the loop would hand back only the first batch.
    for start in range(0, len(xs), batch_size):
        yield xs[start:start+batch_size], ys[start:start+batch_size]
def logistic_regression(xs: List[Vector],
                        ys: List[float],
                        learning_rate: float = 0.001,
                        num_steps: int = 1000,
                        batch_size: int = 1) -> Vector:
    # your code
    # Initialization
    beta = [random.random() for _ in range(len(xs[0]))]
    history = []
    with tqdm.trange(num_steps) as t:
        for epoch in t:
            # take one gradient step per mini-batch rather than on the full dataset
            for batch_xs, batch_ys in minibatches(xs, ys, batch_size=20):
                gradient = negative_log_gradient(batch_xs, batch_ys, beta)
                beta = gradient_step(beta, gradient, -learning_rate)
loss = negative_log_likelihood(xs, ys, beta)
t.set_description(f"loss: {loss:.3f} beta: {beta}")
history.append(loss)
if epoch and epoch % 100 == 0:
display.clear_output(wait=True)
plt.plot(history)
plt.show()
return beta
beta = logistic_regression(X_train_normed, y_train)
```
#### Checking 𝜷
```
plt.plot(beta)
plt.show()
```
### 4.5 Model Testing (Q6)
Make predictions on the test data and compute TP, FP, FN, TN.
```
# your code
true_positives = false_positives = true_negatives = false_negatives = 0
for x_i, y_i in zip(X_test_normed, y_test):
prediction = logistic(dot(beta, x_i))
    if y_i == 1 and prediction >= 0.5:    # TP: malignant and we predict malignant
        true_positives += 1
    elif y_i == 1:                        # FN: malignant and we predict benign
        false_negatives += 1
    elif prediction >= 0.5:               # FP: benign and we predict malignant
        false_positives += 1
    else:                                 # TN: benign and we predict benign
true_negatives += 1
TP = true_positives
FN = false_negatives
FP = false_positives
TN = true_negatives
confusion_matrix = [[TP, FP], [FN, TN]]
```
### 4.6 Model Performance
```
from scratch.machine_learning import accuracy, precision, recall, f1_score
print(confusion_matrix)
print("accuracy :", accuracy(TP, FP, FN, TN))
print("precision :", precision(TP, FP, FN, TN))
print("recall :", recall(TP, FP, FN, TN))
print("f1_score :", f1_score(TP, FP, FN, TN))
predictions = [logistic(dot(beta, x)) for x in X_test_normed]
plt.scatter(predictions, y_test, marker='+')
plt.xlabel("predicted probability")
plt.ylabel("actual outcome")
plt.title("Logistic Regression Predicted vs. Actual")
plt.show()
```
# Programming with Python
## Episode 2 - Analyzing Data from Multiple Files
Teaching: 20 min,
Exercises: 20 min
## Objectives
- Use a library function to get a list of filenames that match a wildcard pattern.
- How can I do the same operations on many different files?
- Write a for loop to process multiple files.
We now have almost everything we need to process all our data files. The only thing that's missing is a library with a rather unpleasant name, `glob`; let's import it.
```
import glob
```
The `glob` library contains a function, also called `glob`, that finds files and directories whose names match a pattern. We provide those patterns as strings which may include *wildcards*:
- the wildcard character `*` matches zero or more of any character
- the wildcard character `?` matches a single character.
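The same wildcard semantics can be tried out on plain strings with the standard-library `fnmatch` module, which `glob` uses internally for matching:

```python
import fnmatch

names = ['inflammation-01.csv', 'inflammation-12.csv', 'small-01.csv', 'notes.txt']
print(fnmatch.filter(names, 'inflammation*.csv'))    # * matches zero or more characters
print(fnmatch.filter(names, 'inflammation-0?.csv'))  # ? matches exactly one character
```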
We can use this to get the names of all the CSV files in the `data` directory:
```
print(glob.glob('data/inflammation*.csv'))
```
The wildcard character means you don't have to type the full name, which is handy if you only remember part of it:
```
print(glob.glob('data/inflamm*.csv'))
filelist = glob.glob('data/inflamm*.csv')
print (filelist)
```
`glob.glob`’s result is a list of matching filenames and directory paths (in arbitrary order).
This means we can loop over it to do something with each filename in turn. In our case, the “something” we want to do is generate a set of plots for each file in our complete inflammation dataset.
If we want to start by analyzing just the first three files in alphabetical order, we can use the built-in `sorted` function to generate a new sorted list from the `glob.glob` output:
```
import numpy
import matplotlib.pyplot
filenames = sorted(glob.glob('data/inflammation*.csv'))
filenames = filenames[0:3]
print(filenames)
```
```
import numpy
import matplotlib.pyplot
import glob
filenames = sorted(glob.glob('data/inflammation*.csv'))
filenames = filenames[0:3]
for filename in filenames:
    # print(filename)
    data = numpy.loadtxt(fname=filename, delimiter=',')
    print(filename, 'total sum', numpy.sum(data))
    print(filename, 'sum per day', numpy.sum(data, axis=0))
    print(data)
#print('data after the loop is finished:\n', data)
#numpy.sum(data)
```
and now we can loop over each filename in turn, using the code from an earlier episode to produce a set of plots for each.
```
for f in filenames:
print(f)
data = numpy.loadtxt(fname=f, delimiter=',')
fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(numpy.mean(data, axis=0))
axes2.set_ylabel('max')
axes2.plot(numpy.max(data, axis=0))
axes3.set_ylabel('min')
axes3.plot(numpy.min(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
```
Sure enough, the maxima of the first two data sets show exactly the same ramp as the first, and their minima show the same staircase structure; a different situation has been revealed in the third dataset, where the maxima are a bit less regular, but the minima are consistently zero - probably indicating an issue in our data.
## Exercises
#### Plotting differences between data files
Plot the difference between the average of the first dataset and the average of the second dataset, i.e., the difference between the leftmost plot of the first two figures.
```
import glob
import numpy as np
import matplotlib.pyplot as plt
# Grab all the filenames and sort them
filenames = sorted(glob.glob('data/inflammation*.csv'))
print(filenames)
# load the first 2 data files
data0 = np.loadtxt(fname=filenames[0], delimiter=',')
data1 = np.loadtxt(fname=filenames[1], delimiter=',')
# now compute the means - np.mean()
data0_mean = np.mean(data0,axis = 0)
data1_mean = np.mean(data1,axis = 0)
print(data0_mean)
print(data1_mean)
data_mean_diff = data1_mean - data0_mean
print(data_mean_diff)
# create a figure
#either this way with subplots or below with plt.plot
fig = plt.figure(figsize=(10.0, 3.0))
subplot1 = fig.add_subplot(1,1,1,)
subplot1.plot(data_mean_diff)
# and plot the difference between the meansof the first and second data set - plt.plt()
#plt.plot (data_mean_diff)
# add a label to Y axis
plt.ylabel('Difference in average')
# and show it
plt.show()
```
#### Generate Composite Statistics
Use each of the files once, to generate a dataset containing values averaged over all patients:
```
import glob
import numpy as np
import matplotlib.pyplot as plt
filenames = glob.glob('data/inflammation*.csv')
composite_data = np.zeros((60,40))
for f in filenames:
    # read each file (np.loadtxt) and add it to the composite_data array
    composite_data += np.loadtxt(fname=f, delimiter=',')
# and rescale it by the number of files to get the average
composite_data /= len(filenames)
print(composite_data)
```
and now plot the stats for the composite data using `matplotlib`
```
fig = plt.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(np.mean(composite_data, axis=0))
axes2.set_ylabel('max')
axes2.plot(np.max(composite_data, axis=0))
axes3.set_ylabel('min')
axes3.plot(np.min(composite_data, axis=0))
fig.tight_layout()
plt.show()
```
## Key Points
Use `glob.glob(pattern)` to create a list of files whose names match a pattern.
Use `*` in a pattern to match zero or more characters, and `?` to match any single character.
### Save, and version control your changes
- save your work: `File -> Save`
- add all your changes to your local repository: `Terminal -> git add .`
- commit your updates a new Git version: `Terminal -> git commit -m "End of Episode 2"`
- push your latest commits to GitHub: `Terminal -> git push`
<font face="Verdana, cursive, sans-serif" >
<center><H1>Variable Selection with Information Value <br>and Application of Weight of Evidence</H1></center>
<center><H2><font color='darkred'>How to quickly identify potential variable for classification model </font></H2></center>
<p>This documentation is powered by <b>Jupyter Notebooks</b>. To learn more about how to code SAS in the Jupyter Notebooks environment, please refer to <a href="https://github.com/sassoftware/sas_kernel">SAS Kernel for Jupyter</a>. Please note that you are NOT required to have Python or Jupyter Notebooks in order to utilize these <b>SAS</b> macros.</p>
<p>Variable selection, or feature selection, is one of the most crucial parts of the data analytics process. Ideally, we would like the model to be <i>parsimonious</i>. That is, we should retain only a few relevant and predictive variables in the model. Generally speaking, variable selection also includes variable elimination.</p>
<p>Benefits of having a parsimonious model are:</p>
<ol>
<li>Clean and easy to understand model </li>
<li>A model that runs fast</li>
<li>Reduce overfitting and improve prediction performance</li>
<li>Less effort on unnecessary data collecting and pre-processing</li>
</ol>
<p>The following lists are my go-to guidelines that you may find useful. However, it is important to note that the lists are non-exhaustive, and you are recommended to further your study with <b><i>A Practical Guideline to Dimension Reduction, Patel (2016)</i></b></p>
<ol>
<li> Eliminate variables with lots of missing/invalid values. For example, if a variable is missing/invalid more than 50% of the time, you may consider removing it from your dataset, or not including it in the modeling process. I personally would like to avoid imputing missing values, hence my threshold is usually set as high as 85% or more. Check out
<a href="https://nbviewer.jupyter.org/github/swatakit/SAS-Tools/blob/master/Missing%20Reports%20Notebooks.ipynb">sasmacro</a> and
<a href="https://nbviewer.jupyter.org/github/swatakit/Python-Tools/blob/master/Missing%20Reports.ipynb">python</a> that will do just that
</li>
<li> Eliminate variables that is not relevant to problem statement. This would require domain knowledge of the subject.</li>
<li> Eliminate variables that heavily loaded on 1 class. For example if a variable COUNTRY is 99% loaded on THAILAND, this variable should not be in the model as it would not provide any information</li>
<li> Eliminate highly correlated variables. Using domain knowledge or a Spearman/Pearson correlation matrix, we can easily identify pairwise correlated variables. Keep only the ones deemed most relevant to the problem statement.</li>
<li>Apply dimensionality reduction techniques such as principal component analysis or factor analysis</li>
<li>Apply sequential selection strategy such as forward/backward/stepwise selection</li>
<li>Apply regularized regression techniques. Based on the input lambda parameter, Ridge regression will penalise large coefficients, while Lasso regression can drop a variable completely by setting its coefficient to zero</li>
<li>Identify potentially useful variables by using statistical tools such as chi-square criteria, <b>Weight of Evidence/Information Values</b>, impurity, information gain or variable importance</li>
</ol>
<p>Specifically for this post, I will focus on the Weight of Evidence/Information Value calculation. For classification modeling, where the target variable is binary (Yes vs. No, Fraud vs. Non-Fraud, Positive vs. Negative, 1 vs. 0), WOE/IV is a very simple tool that can help you shortlist potential variables for a classification model</p>
<p><b>Step-by-Step Calculation</b></p>
<ol>
<li>For a continuous variable, split the data into a number of bins (for a categorical variable, skip this step)</li>
<li>Calculate the number of events and non-events in each bin</li>
<li>Calculate the % of events and % of non-events in each bin.</li>
<li>Calculate WOE by taking the natural log of the ratio of % of non-events to % of events</li>
<img src="./images/woe.png" width=200px height=8px>
<li>Calculate IV by taking the summation of (% non-events - % events)*WOE over all bins</li>
<img src="./images/iv.png" width=300px height=10px>
</ol>
Reference: <a href="www.listendata.com">listendata </a>
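For readers outside SAS, the step-by-step calculation above can be sketched in pandas. This is illustrative only; the toy column names `sex` and `survived` are assumptions, and zero-count bins get WOE=0, mirroring the convention used in the macros later in this post:

```python
import numpy as np
import pandas as pd

def woe_iv(df, bin_col, target_col):
    """WOE per bin = ln(% non-events / % events); IV = sum((%NE - %E) * WOE)."""
    grp = df.groupby(bin_col)[target_col].agg(events="sum", total="count")
    grp["non_events"] = grp["total"] - grp["events"]
    grp["pct_events"] = grp["events"] / grp["events"].sum()
    grp["pct_non_events"] = grp["non_events"] / grp["non_events"].sum()
    # zero-count bins get WOE=0, as in the SAS macros below
    valid = (grp["pct_events"] > 0) & (grp["pct_non_events"] > 0)
    grp["woe"] = np.where(valid, np.log(grp["pct_non_events"] / grp["pct_events"]), 0.0)
    grp["iv"] = (grp["pct_non_events"] - grp["pct_events"]) * grp["woe"]
    return grp, grp["iv"].sum()

# toy data: 'sex' as the (already binned) variable, 'survived' as the binary target
toy = pd.DataFrame({"sex": ["f"] * 10 + ["m"] * 10,
                    "survived": [1] * 8 + [0] * 2 + [1] * 2 + [0] * 8})
table, iv = woe_iv(toy, "sex", "survived")
```

Here each bin's WOE compares its share of non-events against its share of events, and the variable's IV is simply the sum of the per-bin IV contributions.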
<img src="./images/iv_bins_9.png" >
An illustrated calculation in Excel is provided <a href="https://github.com/swatakit/SAS-Tools/blob/master/WOE%20and%20IV%20Example.xlsx"> here </a>. As shown in the provided Excel, the calculated IV value is also subject to the bin sizes. However, choosing the optimal bin sizes is beyond the scope of this post.
As a rule of thumb, IV and predictiveness are related as follows:
<img src="./images/iv_rthb.png" >
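As a sketch, the cut-offs commonly quoted for this rule of thumb in credit scoring can be coded as follows (treat these boundaries as the conventional ones; the table in the image above may differ slightly):

```python
def iv_strength(iv):
    """Common rule-of-thumb interpretation of an Information Value."""
    if iv < 0.02:
        return "not useful"
    if iv < 0.1:
        return "weak"
    if iv < 0.3:
        return "medium"
    if iv < 0.5:
        return "strong"
    return "suspicious (too good to be true)"
```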
It is important to note that IV is merely a tool to suggest that a variable *may be* predictive; it does not necessarily mean that the variable will *stay* in the model. IV also has one very intriguing application: not only can we use IV to shortlist potential variables, we can also transform a raw variable into a <b>WOE-transformed variable</b> and use it as an input to a model.
<img src="./images/woe_trans.PNG" >
As shown in the above picture, a raw value that is >401 can be recoded as -0.2123.<br>
<br><b>Benefits of WOE transformation</b>
<ol>
<li>Variable Reduction</li>
<li>Deal with missing values</li>
<li>Deal with extreme values</li>
<li>Linearize the variables</li>
<li>Increase predictive accuracy in logistic modelling</li>
</ol>
Reference: Sharma(2011), SSRN Electronic Journal,
<br>*Evidence in Favor of Weight of Evidence and Binning Transformations for Predictive Modelling.*
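A minimal sketch of the WOE transformation itself, assuming a bin-to-WOE lookup has already been computed on training data. The bin edges and WOE values here are hypothetical, chosen only to echo the ">401 recoded as -0.2123" example above:

```python
import pandas as pd

# hypothetical training-time WOE table: bin index -> WOE value
bins = [0, 200, 401, float("inf")]
woe_by_bin = {0: 0.51, 1: -0.07, 2: -0.2123}

raw = pd.Series([150, 390, 750])
bin_idx = pd.cut(raw, bins=bins, labels=False)  # which bin each raw value falls in
woe_transformed = bin_idx.map(woe_by_bin)       # recode raw value -> WOE
```

At scoring time the same bin edges and WOE values must be reused, since they were estimated on the training data.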
<p>In this post, I demonstrate the WOE/IV calculation in 2 parts</p>
<ul>
<li><b>Part-1: Basic</b> - A simple macro to calculate IV for a numeric variable and a categorical variable</li>
<li><b>Part-2: Advanced</b> - By utilising the power of the <code>DO_OVER</code> macro, published by <a href="http://www2.sas.com/proceedings/sugi31/040-31.pdf">Ted Clay</a>, <br>I demonstrate how to fully automate the WOE/IV calculation</li>
</ul>
<font face="Verdana, cursive, sans-serif" >
<H2><font color='darkred'>Part-1: Basic - A simple calculation of WOE/IV </font></H2>
<font face="Verdana, cursive, sans-serif" >
<b>First, let's execute Ted Clay's macros </b>
```
OPTION NOSOURCE NONOTES;
%LET LOC_SASMACRO=C:\sasmacro;
%INCLUDE "&LOC_SASMACRO.\NUMLIST.SAS";
%INCLUDE "&LOC_SASMACRO.\ARRAY.SAS";
%INCLUDE "&LOC_SASMACRO.\DO_OVER.SAS";
```
<font face="Verdana, cursive, sans-serif" >
<b>let's use TITANIC as an example</b>
```
******************************
Use TITANIC dataset as example
*******************************;
%INCLUDE "&LOC_SASMACRO.\DATA_TITANIC.SAS";
```
<font face="Verdana, cursive, sans-serif" >
<b>Set target variable and dataset to calculate WOE/IV</b>
```
%LET TARGET4MODEL=SURVIVED;
%LET DSBASE=TITANIC;
*If your DSBASE is huge, you may consider keeping only TARGET4MODEL and some variables,in order to speed up the process;
DATA TARGET/*(KEEP=&TARGET4MODEL. <varlist>)*/;
SET &DSBASE.;
RUN;
```
<font face="Verdana, cursive, sans-serif" >
<H3>Macro to calculate WOE/IV for CHARACTER</H3>
The following macro is the exact calculation shown in Excel, only implemented in MACRO/SQL.<br>
SQL Logic is credited to <a href="https://www.youtube.com/watch?v=BI5sZn5KhMY"> Analytic University </a>
```
%GLOBAL CNT;
%LET CNT=0;
*To keep list of var and iv;
DATA SOURCE_IV;
INFILE DATALINES dsd DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. ;
INPUT IV VARNAME$ ;
DATALINES ;
;
RUN;
*To keep list of var and woe;
DATA SOURCE_WOE;
INFILE DATALINES dsd DELIMITER='|' MISSOVER ;
INFORMAT BIN VARNAME $100. ;
INPUT BIN NONRESP RESP TOT_NONRESP TOT_RESP VARNAME$ PCT_NONRESP PCT_RESP WOE IV ;
DATALINES ;
;
RUN;
%MACRO DOWOEIV_CHAR(TARGETVAR);
%PUT TARGETVAR=&TARGETVAR.;
%LET CNT=%EVAL(&CNT.+1);
%PUT &CNT.;
PROC SQL;
CREATE TABLE TARGET_1 AS
(
SELECT &TARGET4MODEL.
,&TARGETVAR.
,SUM(&TARGET4MODEL.=0) AS TOTAL_NONRESP
,SUM(&TARGET4MODEL.=1) AS TOTAL_RESP
FROM TARGET
)
;
CREATE TABLE WOE AS
(
SELECT &TARGETVAR.
,SUM(&TARGET4MODEL.=0) AS NONRESP /*NONRESP*/
,SUM(&TARGET4MODEL.=1) AS RESP /*RESP*/
,MEAN(TOTAL_NONRESP) AS TOT_NONRESP/*TRICK, ONLY USE MEAN FUNCTION TO GET TOTAL OF R AND NR*/
,MEAN(TOTAL_RESP) AS TOT_RESP
FROM TARGET_1
GROUP BY &TARGETVAR.
)
;
RUN;QUIT;
DATA _WOE_&CNT.(DROP=&TARGETVAR.);
INFORMAT BIN VARNAME $100.;
SET WOE;
BIN=&TARGETVAR.;
PCT_NONRESP=NONRESP/TOT_NONRESP;
PCT_RESP=RESP/TOT_RESP;
WOE=0;
IF PCT_RESP>0 THEN WOE=LOG(PCT_NONRESP/PCT_RESP);
IV=(PCT_NONRESP-PCT_RESP)*WOE;
VARNAME = "&TARGETVAR.";
RUN;
PROC MEANS DATA=_WOE_&CNT. NOPRINT; VAR IV; OUTPUT OUT=_IV_&CNT. SUM=IV; RUN;
DATA _IV_&CNT.(DROP= _TYPE_ _FREQ_);
SET _IV_&CNT.;
INFORMAT VARNAME $100.;
VARNAME = "&TARGETVAR.";
RUN;
PROC APPEND DATA=_IV_&CNT. BASE=SOURCE_IV FORCE; RUN;
PROC APPEND DATA=_WOE_&CNT. BASE=SOURCE_WOE FORCE; RUN;
PROC DELETE DATA=TARGET_1 WOE _WOE_&CNT. _IV_&CNT. ; RUN;
%MEND;
%DOWOEIV_CHAR(SEX);
%DOWOEIV_CHAR(PCLASS);
*The following is an IV for each variable;
PROC PRINT DATA=SOURCE_IV;RUN;
*The following is an WOE by each BIN, of each variable. As illustrated here, 'female' raw value can be replaced with -1.52988;
PROC PRINT DATA=SOURCE_WOE;RUN;
```
<font face="Verdana, cursive, sans-serif" >
<H3>Macro to calculate WOE/IV for NUMERIC</H3>
<br>For a numeric variable, the variable needs to be discretized before proceeding with a similar calculation.
Let's take AGE as an example; let's bin AGE with 2 types of formats
- Format 1: Using __PROC RANK__ to bin into 3 groups, based on numeric values
- Format 2: Using __PROC FORMAT__ to bin into groups, based on some defined rules
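For readers following along outside SAS, the two binning styles can be approximated in pandas (a sketch: `PROC RANK GROUPS=3` corresponds to quantile binning, and the `PROC FORMAT` ranges to fixed cut-points; the sample ages are made up):

```python
import pandas as pd

age = pd.Series([2, 16, 22, 29, 34, 40, 58, 63, 71, None])

# Format 1: rank-based binning into 3 roughly equal-sized groups (PROC RANK GROUPS=3)
rank_bin = pd.qcut(age, q=3, labels=False)

# Format 2: rule-based binning (PROC FORMAT), with missing values handled explicitly
rule_bin = pd.cut(age, bins=[0, 18, 35, 60, float("inf")], right=False,
                  labels=["0_(<18)", "A_[18,35)", "B_[35,60)", "C_[60>>)"])
rule_bin = rule_bin.cat.add_categories("Z_MISSING").fillna("Z_MISSING")
```

Note that, as in the SAS OTHER= clause, the missing value ends up in its own `Z_MISSING` bin rather than being dropped.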
```
%GLOBAL CNT;
%LET CNT=0;
*To keep list of var and iv;
DATA SOURCE_IV;
INFILE DATALINES DSD DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. ;
INPUT IV VARNAME$ BUCKET;
DATALINES ;
;
RUN;
*To keep list of var and WOE;
DATA SOURCE_WOE;
INFILE DATALINES DSD DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. ;
INPUT BIN MIN MAX NONRESP RESP TOT_NONRESP TOT_RESP VARNAME$ PCT_NONRESP PCT_RESP WOE IV BUCKET;
DATALINES ;
;
RUN;
*To keep list of var and iv;
DATA SOURCE_IVFMT;
INFILE DATALINES DSD DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. BUCKET $32. ;
INPUT IV VARNAME$ BUCKET$;
DATALINES ;
;
RUN;
*To keep list of var and WOE;
DATA SOURCE_WOEFMT;
INFILE DATALINES dsd DELIMITER='|' MISSOVER ;
INFORMAT BIN VARNAME $100. BUCKET $32. ;
INPUT BIN NONRESP RESP TOT_NONRESP TOT_RESP VARNAME$ PCT_NONRESP PCT_RESP WOE IV BUCKET;
DATALINES ;
;
RUN;
%MACRO DOWOEIV_NUM(TARGETVAR,SBIN);
%PUT SBIN=&SBIN.;
%PUT TARGETVAR=&TARGETVAR.;
%LET CNT=%EVAL(&CNT.+1);
%PUT &CNT.;
DATA TEMP;
SET TARGET(KEEP=&TARGET4MODEL. &TARGETVAR.);
RUN;
PROC RANK DATA=TEMP GROUP=&SBIN. OUT=TARGET_BIN&SBIN.;
VAR &TARGETVAR.;
RANKS &TARGETVAR._R;
RUN;
PROC SQL;
CREATE TABLE TARGET_BIN&SBIN._1 AS
(
SELECT &TARGET4MODEL.
,&TARGETVAR.
,&TARGETVAR._R
,SUM(&TARGET4MODEL.=0) AS TOTAL_NONRESP
,SUM(&TARGET4MODEL.=1) AS TOTAL_RESP
FROM TARGET_BIN&SBIN.
)
;
CREATE TABLE WOE AS
(
SELECT &TARGETVAR._R
,MIN(&TARGETVAR.) AS MIN
,MAX(&TARGETVAR.) AS MAX
,SUM(&TARGET4MODEL.=0) AS NONRESP /*NONRESP*/
,SUM(&TARGET4MODEL.=1) AS RESP /*RESP*/
,MEAN(TOTAL_NONRESP) AS TOT_NONRESP
,MEAN(TOTAL_RESP) AS TOT_RESP
FROM TARGET_BIN&SBIN._1
GROUP BY &TARGETVAR._R
)
;
RUN;QUIT;
DATA _WOE_&SBIN._&CNT.(RENAME=(&TARGETVAR._R=BIN));
SET WOE;
INFORMAT VARNAME $100.;
PCT_NONRESP=NONRESP/TOT_NONRESP;
PCT_RESP=RESP/TOT_RESP;
WOE=0;
IF PCT_RESP>0 THEN WOE=LOG(PCT_NONRESP/PCT_RESP);
IV=(PCT_NONRESP-PCT_RESP)*WOE;
VARNAME = "&TARGETVAR.";
BUCKET=&SBIN.;
RUN;
PROC MEANS DATA=_WOE_&SBIN._&CNT. NOPRINT; VAR IV; OUTPUT OUT=_IV_&SBIN._&CNT. SUM=IV; RUN;
DATA _IV_&SBIN._&CNT.(DROP= _TYPE_ _FREQ_);
SET _IV_&SBIN._&CNT.;
INFORMAT VARNAME $100.;
VARNAME = "&TARGETVAR.";
BUCKET=&SBIN.;
RUN;
PROC APPEND DATA=_IV_&SBIN._&CNT. BASE=SOURCE_IV FORCE; RUN;
PROC APPEND DATA=_WOE_&SBIN._&CNT. BASE=SOURCE_WOE FORCE; RUN;
PROC DELETE DATA=TARGET_BIN&SBIN._1 TARGET_BIN&SBIN. WOE
_IV_&SBIN._&CNT. _WOE_&SBIN._&CNT. TEMP; RUN;
%MEND;
%MACRO DOWOEIV_FMT(TARGETVAR,FMT);
%PUT SBIN=&FMT.;
%PUT TARGETVAR=&TARGETVAR.;
%LET CNT=%EVAL(&CNT.+1);
%PUT &CNT.;
DATA TEMP;
SET TARGET(KEEP=&TARGET4MODEL. &TARGETVAR.);
&TARGETVAR._BIN=PUT(&TARGETVAR. ,&FMT.);
RUN;
PROC SQL;
CREATE TABLE TARGET_1 AS
(
SELECT &TARGET4MODEL.
,&TARGETVAR._BIN
,SUM(&TARGET4MODEL.=0) AS TOTAL_NONRESP
,SUM(&TARGET4MODEL.=1) AS TOTAL_RESP
FROM TEMP
)
;
CREATE TABLE WOE AS
(
SELECT &TARGETVAR._BIN
,SUM(&TARGET4MODEL.=0) AS NONRESP /*NONRESP*/
,SUM(&TARGET4MODEL.=1) AS RESP /*RESP*/
,MEAN(TOTAL_NONRESP) AS TOT_NONRESP/*TRICK, ONLY USE MEAN FUNCTION TO GET TOTAL OF R AND NR*/
,MEAN(TOTAL_RESP) AS TOT_RESP
FROM TARGET_1
GROUP BY &TARGETVAR._BIN
)
;
RUN;QUIT;
DATA _WOE_&CNT.(DROP= &TARGETVAR._BIN);
INFORMAT BIN VARNAME $100. BUCKET $32.;
SET WOE;
PCT_NONRESP=NONRESP/TOT_NONRESP;
PCT_RESP=RESP/TOT_RESP;
WOE=0;
IF PCT_RESP>0 THEN WOE=LOG(PCT_NONRESP/PCT_RESP);
IV=(PCT_NONRESP-PCT_RESP)*WOE;
VARNAME = "&TARGETVAR.";
BUCKET="&FMT.";
BIN=&TARGETVAR._BIN;
RUN;
PROC MEANS DATA=_WOE_&CNT. NOPRINT; VAR IV; OUTPUT OUT=_IV_&CNT. SUM=IV; RUN;
DATA _IV_&CNT.(DROP= _TYPE_ _FREQ_);
SET _IV_&CNT.;
INFORMAT BUCKET $32.;
INFORMAT VARNAME $100.;
VARNAME = "&TARGETVAR.";
BUCKET="&FMT.";
RUN;
PROC APPEND DATA=_IV_&CNT. BASE=SOURCE_IVFMT FORCE; RUN;
PROC APPEND DATA=_WOE_&CNT. BASE=SOURCE_WOEFMT FORCE; RUN;
PROC DELETE DATA=TARGET_1 WOE _WOE_&CNT. _IV_&CNT. TEMP ; RUN;
%MEND;
PROC FORMAT;
VALUE AGE_BIN
0-<18='0_(<18)'
18-<35='A_[18,35)'
35-<60='B_[35,60)'
60-HIGH='C_[60>>)'
OTHER='Z_MISSING'
;
RUN;
%DOWOEIV_NUM(AGE,3);
%DOWOEIV_FMT(AGE,AGE_BIN.);
*The following is an IV for each variable;
PROC PRINT DATA=SOURCE_IV;RUN;
PROC PRINT DATA=SOURCE_IVFMT;RUN;
*The following is an WOE by each BIN, of each variable.;
PROC PRINT DATA=SOURCE_WOE;RUN;
PROC PRINT DATA=SOURCE_WOEFMT;RUN;
```
<font face="Verdana, cursive, sans-serif" >
<H2><font color='darkred'>Part-2: Advanced - Automate the calculation of WOE/IV </font></H2>
<p></p>
<p>Now that we have an understanding of how <code>DOWOEIV_CHAR()</code>, <code>DOWOEIV_NUM()</code> and <code>DOWOEIV_FMT()</code> work, we take things a little bit further by looping through all variables in a dataset. The following are the strategies carried out by the macros</p>
<ul>
<li>Calculate the %populated of each variable, and exclude variables with %populated less than 75% </li>
<li>If a variable is of type DATE or DATETIME, or is some kind of Citizen ID, Customer ID or Name, exclude it </li>
<li>Calculate basic stats for all remaining variables, and carefully identify how many BINS we may have for each variable </li>
<li>For the remaining variables</li>
<ul>
<li>For character and flags, calculate WOE/IV with <code>DOWOEIV_CHAR()</code></li>
<li>For numeric, calculate WOE/IV with <code>DOWOEIV_NUM()</code> and <code>DOWOEIV_FMT()</code>. <code>DOWOEIV_FMT()</code> is optional for some variables</li>
</ul>
</ul>
Once we have identified which variables to calculate IV for with which macros, we proceed with the automation
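The routing strategy above can be sketched in Python terms (a sketch only; the thresholds, the `max_levels` rule for low-cardinality numerics, and the ID-like column list are assumptions standing in for the EDA-driven choices below):

```python
import pandas as pd

def route_variables(df, target, pct_populated_min=75.0, max_levels=10,
                    id_like=("PassengerId", "Name", "Ticket")):
    """Split candidate columns into character-like and numeric lists for WOE/IV."""
    char_vars, num_vars = [], []
    for col in df.columns:
        if col == target or col in id_like:
            continue  # never compute IV on the target or on ID-like columns
        pct_ok = 100.0 * df[col].notna().mean()
        if pct_ok < pct_populated_min:
            continue  # too many missing/invalid values
        if df[col].dtype == object or df[col].nunique() <= max_levels:
            char_vars.append(col)  # low-cardinality numerics treated as categorical
        else:
            num_vars.append(col)
    return char_vars, num_vars
```

This mirrors the SAS flow: drop sparsely populated and ID-like variables first, then send character-like variables to the CHAR macro and the rest to the NUM/FMT macros.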
<font face="Verdana, cursive, sans-serif" >
<b>Define macros and missing/invalid values patterns</b>
```
**********************************************
Defined formats for missing/invalids values.
***********************************************;
PROC FORMAT;
VALUE NM_MISS
.= '0'
99999999= '0'
OTHER = '1'
;
VALUE $CH_MISS
'',' ','.','-','*'= '0'
'N/A','n/a','NA','N.A','-NA-','na','n.a.','n.a' = '0'
'NULL','null','NONE','--NONE--' = '0'
'unknown','UNKNOWN','Z_ERROR','Z_MISSING'= '0'
'99999999','X','TESTUSER','U','C9999'= '0'
'email@domain.com'= '0'
OTHER = '1'
;
VALUE $NM_MISSLABEL
'0'="MISS/INVALID"
'1'="POPULATED"
;
RUN;
%INCLUDE "&LOC_SASMACRO.\MISSING_REPORTS.SAS";
%INCLUDE "&LOC_SASMACRO.\QUICKSTATS.SAS";
```
<font face="Verdana, cursive, sans-serif" >
<b>Take a look at some sample</b>
<br>It is already obvious in this step that __NAME, PASSENGERID, and TICKET__ are likely to be dropped from the IV calculation
```
PROC PRINT DATA=TITANIC(OBS=5);RUN;
%MISSING_REPORT(DSNAME=TITANIC,
FMT_MISSNUM=NM_MISS.,
FMT_MISSCHAR=$CH_MISS.);
```
<font face="Verdana, cursive, sans-serif" >
The missing report shows that <b>CABIN</b> is likely to be dropped
```
PROC PRINT DATA=MSREPORT_TITANIC ; RUN;
%QUICKSTATS(DSNAME=TITANIC,
REPORTNAME=TITANIC,
NLIMIT=20);
```
<font face="Verdana, cursive, sans-serif" >
<b>Quickstats shows that for numeric variables (TYPE=1)</b>
<ul>
<li>AGE and FARE can be discretized into bins=3,5,10</li>
<li>PARCH, PCLASS and SIBSP, instead of being discretized into bins=3,5,10, are better discretized into bins=NLEVELS of themselves </li>
</ul>
```
PROC PRINT DATA=TITANIC_QSTATS_NM;RUN;
PROC PRINT DATA=TITANIC_QSTATS_CH;RUN;
```
<font face="Verdana, cursive, sans-serif" >
<b>Filter out variables that are not needed or did not pass the criteria for IV calculation</b>
```
DATA VARLIST_CHAR VARLIST_NUM;
SET MSREPORT_TITANIC;
*too much missing, drop it;
IF P_OK<75 THEN DELETE;
*Dont include target for IV calculation;
IF VAR IN ('Survived' ) THEN DELETE;
*this is passenger id,name, ticket code which has no meaning, drop it;
IF VAR IN ('PassengerId' 'Name' 'Ticket') THEN DELETE;
IF TYPE=2 OR VAR IN ('Parch', 'Pclass','SibSp') THEN OUTPUT VARLIST_CHAR;
ELSE OUTPUT VARLIST_NUM;
RUN;
PROC PRINT DATA=VARLIST_CHAR;RUN;
PROC PRINT DATA=VARLIST_NUM;RUN;
```
<font face="Verdana, cursive, sans-serif" >
<H3>Macro to AUTOMATE WOE/IV for CHARACTER</H3>
```
%ARRAY(VARLIST, DATA=VARLIST_CHAR, VAR=VAR);
%LET VARLIST = %DO_OVER(VARLIST,PHRASE=?);;
DATA TARGET(KEEP=&TARGET4MODEL. &VARLIST.);
SET &DSBASE.;
RUN;
PROC DELETE DATA=&TARGET4MODEL._WOE_CHAR
&TARGET4MODEL._IV_CHAR
SOURCE_IV SOURCE_WOE ; RUN;
DATA SOURCE_IV;
INFILE DATALINES dsd DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. ;
INPUT IV VARNAME$ ;
DATALINES ;
;
RUN;
DATA SOURCE_WOE;
INFILE DATALINES dsd delimiter='|' MISSOVER ;
INFORMAT BIN VARNAME $100. ;
INPUT BIN NONRESP RESP TOT_NONRESP TOT_RESP VARNAME$ PCT_NONRESP PCT_RESP WOE IV ;
DATALINES ;
;
RUN;
%GLOBAL CNT;
%LET CNT=0;
%DO_OVER(VALUES=&VARLIST.,MACRO=DOWOEIV_CHAR);
DATA &TARGET4MODEL._WOE_CHAR;
SET SOURCE_WOE;
RUN;
DATA &TARGET4MODEL._IV_CHAR;
SET SOURCE_IV;
RUN;
PROC SORT DATA=&TARGET4MODEL._IV_CHAR;BY DESCENDING IV ; ;RUN;
PROC DELETE DATA=SOURCE_WOE SOURCE_IV ; RUN;
%CLEAN_DSLABEL(WORK,&TARGET4MODEL._WOE_CHAR);
%CLEAN_DSLABEL(WORK,&TARGET4MODEL._IV_CHAR);
%PUT IV CHARS ENDED..;
PROC PRINT DATA=&TARGET4MODEL._IV_CHAR;RUN;
PROC PRINT DATA=&TARGET4MODEL._WOE_CHAR;RUN;
```
<font face="Verdana, cursive, sans-serif" >
<H3>Macro to AUTOMATE WOE/IV for NUMERIC</H3>
```
*For each numvar, determine how many bins we can have for autocal. You can set a different threshold for NLevels ;
DATA BINS;
SET TITANIC_QSTATS_NM;
WHERE TYPE=1;
IF NLevels>=3 THEN R3=1; ELSE R3=0;
IF NLevels>=50 THEN R5=1; ELSE R5=0; *set higher criteria for bin5,10;
IF NLevels>=100 THEN R10=1; ELSE R10=0; *set higher criteria for bin5,10;
*Set some customised bin for some numvar;
INFORMAT FMT $50.;
FMT='NA';
IF NAME IN ('Age') THEN FMT='AGE_BIN.';
RENAME NAME=VAR;
RUN;
%QUICK_LEFTJOIN(VARLIST_NUM,BINS,VAR,OUT=VARNUMFMT(KEEP=VAR R3 R5 R10 P_OK FMT));
```
<font face="Verdana, cursive, sans-serif" >
<b>A dictionary for numeric variable IV calculation</b>
After the above EDA, we set the R3-R10 and FMT flags for IV calculation. Note that you can change the thresholds, as well as have more bins, with minor modification
```
PROC PRINT DATA=VARNUMFMT;RUN;
%ARRAY(VARLIST, DATA=VARNUMFMT, VAR=VAR);
%LET VARLIST = %DO_OVER(VARLIST,PHRASE=?);;
DATA TARGET(KEEP=&TARGET4MODEL. &VARLIST.);
SET &DSBASE.;
RUN;
PROC DELETE DATA=&TARGET4MODEL._WOE
&TARGET4MODEL._IV
SOURCE_IV SOURCE_WOE
SOURCE_IVFMT SOURCE_WOEFMT; RUN;
DATA SOURCE_IV;
INFILE DATALINES DSD DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. ;
INPUT IV VARNAME$ BUCKET;
DATALINES ;
;
RUN;
DATA SOURCE_WOE;
INFILE DATALINES DSD DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. ;
INPUT BIN MIN MAX NONRESP RESP TOT_NONRESP TOT_RESP VARNAME$ PCT_NONRESP PCT_RESP WOE IV BUCKET;
DATALINES ;
;
RUN;
DATA SOURCE_IVFMT;
INFILE DATALINES DSD DELIMITER='|' MISSOVER ;
INFORMAT VARNAME $100. BUCKET $32. ;
INPUT IV VARNAME$ BUCKET$;
DATALINES ;
;
RUN;
DATA SOURCE_WOEFMT;
INFILE DATALINES DSD DELIMITER='|' MISSOVER ;
INFORMAT BIN VARNAME $100. BUCKET $32. ;
INPUT BIN NONRESP RESP TOT_NONRESP TOT_RESP VARNAME$ PCT_NONRESP PCT_RESP WOE IV BUCKET;
DATALINES ;
;
RUN;
%GLOBAL CNT;
%LET CNT=0;
%MACRO LOOP_DOWOEIV(TARGETVAR);
DATA TEMP;
SET VARNUMFMT;
IF VAR="&TARGETVAR." THEN
DO;
CALL SYMPUTX('R3',R3);
CALL SYMPUTX('R5',R5);
CALL SYMPUTX('R10',R10);
CALL SYMPUTX('FMT',FMT);
END;
RUN;
%IF &R3. %THEN %DOWOEIV_NUM(&TARGETVAR.,3);
%IF &R5. %THEN %DOWOEIV_NUM(&TARGETVAR.,5);
%IF &R10. %THEN %DOWOEIV_NUM(&TARGETVAR.,10);
%IF &FMT. NE NA %THEN %DOWOEIV_FMT(&TARGETVAR.,&FMT.);
%LET R3=0;
%LET R5=0;
%LET R10=0;
%LET FMT=NA;
%MEND;
%DO_OVER(VALUES=&VARLIST.,MACRO=LOOP_DOWOEIV);
DATA &TARGET4MODEL._WOE;
SET SOURCE_WOE;
RUN;
DATA &TARGET4MODEL._IV;
SET SOURCE_IV;
RUN;
%CLEAN_DSLABEL(WORK,&TARGET4MODEL._WOE);
%CLEAN_DSLABEL(WORK,&TARGET4MODEL._IV);
*Clean up redundant;
PROC SORT DATA=&TARGET4MODEL._IV; BY VARNAME DESCENDING IV BUCKET;RUN;
PROC SORT DATA=&TARGET4MODEL._IV NODUPKEY;BY VARNAME IV;RUN;
*consolidate FMT win BIN;
DATA &TARGET4MODEL._IV;
SET &TARGET4MODEL._IV;
INFORMAT BUCKET_TMP $32.;
BUCKET_TMP=PUT(COMPRESS(BUCKET),32.);
DROP BUCKET;
RENAME BUCKET_TMP=BUCKET;
RUN;
DATA &TARGET4MODEL._IV;
SET &TARGET4MODEL._IV SOURCE_IVFMT;
RUN;
PROC SORT DATA=&TARGET4MODEL._IV; BY VARNAME DESCENDING IV; RUN;
DATA &TARGET4MODEL._WOE;
INFORMAT BUCKET_TMP $32. BIN_TMP $100.;
SET &TARGET4MODEL._WOE;
BUCKET_TMP=PUT(COMPRESS(BUCKET),32.);
DROP BUCKET;
RENAME BUCKET_TMP=BUCKET;
BIN_TMP=PUT(COMPRESS(BIN),100.);
DROP BIN;
RENAME BIN_TMP=BIN;
RUN;
DATA &TARGET4MODEL._WOE;
SET &TARGET4MODEL._WOE SOURCE_WOEFMT;
RUN;
PROC SORT DATA=&TARGET4MODEL._WOE; BY VARNAME BUCKET BIN; RUN;
PROC DELETE DATA=SOURCE_WOE SOURCE_IV TARGET SOURCE_IVFMT SOURCE_WOEFMT; RUN;
%PUT IV-NUM ENDED..;
PROC PRINT DATA=&TARGET4MODEL._IV;RUN;
PROC PRINT DATA=&TARGET4MODEL._WOE;RUN;
```
<font face="Verdana, cursive, sans-serif" >
<H3>Conclusion</H3>
<br>
With WOE/IV techniques, missing/invalid value problems can be alleviated. The above WOE table also serves as a dictionary for WOE transformation. We can see from the above table that missing/invalid values are allocated to one of the bins. With missing values taken care of, we have more choices for variable selection.
<br>Let's take <b>AGE</b> as an example
<ol>
<li>Recode raw AGE to AGE_WOE. For example, if AGE with bin=3 is selected, a missing value will be recoded as 0.40378, AGE between [0.42-22] will be recoded as -0.16791, and so on. AGE_WOE is to be treated as a numeric variable</li>
<li>Recode raw AGE to AGE_BIN, as shown in the sample below with <code>PUT()</code>. AGE_BIN is to be treated as a categorical variable</li>
</ol>
```
DATA TITANIC_BIN;
SET TITANIC;
AGE_BIN = PUT(AGE,AGE_BIN.);
RUN;
PROC PRINT DATA=TITANIC_BIN(OBS=3);
VAR AGE AGE_BIN;
RUN;
```
## Importing Basic Tools
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load the data
```
df = pd.read_csv('../data/data_pp.csv')
df.head()
```
### Assess the baseline
```
df['billboard'].value_counts(normalize=True)
# dropping id, album_id, date_month, date_day and decade
df_fit = df.drop(['date_day','id','album_id','date_month','decade'],axis=1) # as mentioned in the report
df_fit.head()
```
## Creating the Feature and Target Matrices
```
## Preparing data
X = df_fit.drop(['billboard'],axis=1)
y = df_fit['billboard']
```
## Check the Cross-Validation and Train Test Splits
```
from sklearn.model_selection import StratifiedKFold
def plot_cv_indices(cv, X, y, group, ax, n_splits, lw=10):
"""Create a sample plot for indices of a cross-validation object."""
# Generate the training/testing visualizations for each CV split
for ii, (tr, tt) in enumerate(cv.split(X=X, y=y, groups=group)):
# Fill in indices with the training/test groups
indices = np.array([np.nan] * len(X))
indices[tt] = 1
indices[tr] = 0
# Visualize the results
ax.scatter(range(len(indices)), [ii + .5] * len(indices),
c=indices, marker='_', lw=lw, cmap=plt.cm.coolwarm,
vmin=-.2, vmax=1.2)
# Plot the data classes and groups at the end
ax.scatter(range(len(X)), [ii + 1.5] * len(X),
c=y, marker='_', lw=lw, cmap=plt.cm.Paired)
# Formatting
yticklabels = list(range(n_splits)) + ['class']
ax.set(yticks=np.arange(n_splits+1) + .5, yticklabels=yticklabels,
xlabel='Sample index', ylabel="CV iteration",
ylim=[n_splits+1.1, -.1], xlim=[0, 100])
ax.set_title('{}'.format(type(cv).__name__), fontsize=15)
return ax
groups = [1]*len(X)
fig, ax = plt.subplots()
cv = StratifiedKFold(n_splits=5,shuffle=True)
ax = plot_cv_indices(cv, X, y, groups, ax, n_splits=5)
plt.savefig('../figures/CV_split.png')
```
## Primary Pipeline, the successive methods were just commented/uncommented
```
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import make_scorer
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
def ML_pipeline_GridSearchCV(X,y,random_state,n_folds):
# create a test set based on groups
X_other, X_test, y_other, y_test = train_test_split(X, y, test_size=0.2, random_state = random_state, stratify=y)
# create the cross validator
kf = StratifiedKFold(n_splits=n_folds,shuffle=True,random_state=random_state)
# create the pipeline: preprocessor + supervised ML method
#pipe_SVC = make_pipeline(SVC())
#pipe_RandomForest = make_pipeline(RandomForestClassifier(n_estimators=100,random_state=random_state))
#pipe_LogisticRegression = make_pipeline(LogisticRegression(solver='saga',max_iter=10000,random_state=random_state))
#pipe_KNN = make_pipeline(KNeighborsClassifier(n_jobs=-1))
#pipe_LDA = make_pipeline(LDA())
#pipe_adaboost = make_pipeline(AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=2),n_estimators=100,random_state=random_state))
#pipe_xgboost = make_pipeline(XGBClassifier(n_estimators=100, objective='binary:logistic', learning_rate=0.3, colsample_bytree=0.9, subsample=0.66,
# silent=False, nthread=1))
# the parameter(s) we want to tune
#param_grid_SVC = {'svc__C': np.logspace(-3,4,num=8),'svc__gamma': np.logspace(-3,4,num=8)}
#param_grid_randomForest = {'randomforestclassifier__max_depth': np.linspace(2,10,num=9,dtype=int),'randomforestclassifier__min_samples_split': np.linspace(2,10,num=9,dtype=int)}
#param_grid_logisticregression = {'logisticregression__penalty': ['l1','l2'],'logisticregression__C': np.logspace(-4, 4, 20)}
#param_grid_knn = {'kneighborsclassifier__n_neighbors': np.linspace(1,100,20,dtype=int),'kneighborsclassifier__weights': ['uniform','distance'], 'kneighborsclassifier__metric': ['euclidean','manhattan']}
#param_grid_LDA = {'lineardiscriminantanalysis__solver': ['eigen','lsqr','svd']}
#param_grid_adaboost = {'adaboostclassifier__learning_rate': np.logspace(-3,4,num=8), 'adaboostclassifier__algorithm': ['SAMME', 'SAMME.R']}
#param_grid_xgboost = {"xgbclassifier__max_depth" : [ 3, 4, 5, 6, 8, 10, 12, 15],
# "xgbclassifier__min_child_weight" : [ 1, 3, 5, 7 ],
# "xgbclassifier__gamma" : [ 0.0, 0.1, 0.2 , 0.3, 0.4 ]}
# pick ONE pipeline and its matching param grid from above; random forest as the active choice:
pipe = make_pipeline(RandomForestClassifier(n_estimators=100,random_state=random_state))
param_grid = {'randomforestclassifier__max_depth': np.linspace(2,10,num=9,dtype=int),'randomforestclassifier__min_samples_split': np.linspace(2,10,num=9,dtype=int)}
# prepare gridsearch
grid = GridSearchCV(pipe, param_grid=param_grid,scoring = make_scorer(accuracy_score),
cv=kf, return_train_score = True,iid=False, verbose=3, n_jobs=-1)
# do kfold CV on _other
grid.fit(X_other, y_other)
return grid, grid.score(X_test, y_test)
```
## Now call each method and save them for analysis
```
## SVC-rbf didn't converge for 5 hours
## KNN didn't converge after 12 hours
## Random Forest saved
## logistic regression saved
## LDA saved
## QDA saved
## AdaBoost saved
import pickle
test_scores = []
for i in range(5):
grid, test_score = ML_pipeline_GridSearchCV(X,y,i*42,5)
print(grid.best_params_)
print('best CV score:',grid.best_score_)
print('test score:',test_score)
fname = '../results/grid'+str(i)+'.lasso'
file = open(fname, 'wb')
pickle.dump(grid,file)
file.close()
test_scores.append(test_score)
print('test accuracy:',np.around(np.mean(test_scores),2),'+/-',np.around(np.std(test_scores),2))
```
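Since each grid is pickled under `../results/`, a later analysis notebook can reload it with `pickle.load`. A minimal round-trip sketch, using an in-memory buffer and a stand-in dict so it is self-contained (real code would open the saved file in `'rb'` mode instead):

```python
import io
import pickle

# stand-in for a fitted GridSearchCV object
results = {"best_params": {"max_depth": 5}, "test_score": 0.87}

buf = io.BytesIO()
pickle.dump(results, buf)    # same dump call used in the loop above
buf.seek(0)
reloaded = pickle.load(buf)  # later: grid = pickle.load(open(fname, 'rb'))
```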
## Some LDA and QDA just for completeness
```
## Non Parametric Classification
def ML_pipeline_nonParametric(X,y,random_state):
X_other, X_test, y_other, y_test = train_test_split(X, y, test_size=0.2, random_state = random_state, stratify=y)
clf = QDA() # Replace with LDA
clf.fit(X_other,y_other)
return clf, clf.score(X_test,y_test)
# splitter for _other
```
### Now run the algorithms from this cell and save them
```
test_scores = []
for i in range(5):
clf, test_score = ML_pipeline_nonParametric(X,y,i*42)
fname = '../results/grid'+str(i)+'.qda'
file = open(fname, 'wb')
pickle.dump(clf,file)
file.close()
test_scores.append(test_score)
print('test accuracy:',np.around(np.mean(test_scores),2),'+/-',np.around(np.std(test_scores),2))
```
```
%pylab inline
import os
import glob
import pandas as pd
import re
from collections import OrderedDict
import seaborn as sns
sns.set_context('paper', font_scale=2)
sns.set_style('white')
def clean_tx(tx):
return re.sub(r'\.[0-9]+', '', tx)
root_dir = '/staging/as/skchoudh/re-ribo-analysis/hg38/SRP010679/ribocop_results_Feb2019_longest/'
def get_uorf(df):
return df.loc[df.ORF_type=='uORF',]
def get_translating(df):
return df.loc[df.status=='translating']
def get_tx_from_orfid(orfid):
splitted = orfid.split('_')
if len(splitted) == 4:
txid = splitted[0]
elif len(splitted)==5:
txid = splitted[0] + '_' + splitted[1]
else:
raise RuntimeError('Found:{}'.format(splitted))
return txid
annotation = pd.read_table('/staging/as/skchoudh/ribocop_hg38_feb2019_annotation_candidate_orfs.tsv').set_index('ORF_ID')
annotation
orf_df = OrderedDict()
for f in glob.glob('{}/*_translating_ORFs.tsv'.format(root_dir)):
filename = os.path.basename(f).replace('_translating_ORFs.tsv', '')
orf_df[filename] = pd.read_table(f)
orf_df_annotated = OrderedDict()
for key, df in orf_df.items():
df = df[df.ORF_type=='annotated'].sort_values(by=['transcript_id', 'ORF_ID', 'gene_id'])
orf_df_annotated[key] = df
print(df.shape)
def eval_profile(profile):
profile = eval(profile)
return sum(profile)
orf_profiles_sum = OrderedDict()
for key, df in orf_df_annotated.items():
df['profile_sum'] = df.profile.apply(eval_profile)
orf_profiles_sum[key] = df[['transcript_id', 'profile_sum']]
for key,df in orf_profiles_sum.items():
df.to_csv('/staging/as/skchoudh/SRP010679_tx_counts/{}.tsv'.format(key), sep='\t', header=None, index=False)
def get_start_end_coordinates(orf_id):
#orf_id = row['ORF_ID']
splitted = orf_id.split('_')
stop = splitted[-2]
start = splitted[-3]
return pd.Series([int(start), int(stop)])
```
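As a quick sanity check of the ID-parsing helpers, here they are reproduced with hypothetical ORF IDs of the form `<transcript>_<start>_<stop>_<strand>` (the 5-part case covers transcript IDs that themselves contain an underscore, e.g. `NR_` accessions):

```python
import pandas as pd

def get_tx_from_orfid(orfid):
    # transcript IDs may themselves contain one underscore (e.g. NR_046018)
    parts = orfid.split('_')
    if len(parts) == 4:
        return parts[0]
    if len(parts) == 5:
        return parts[0] + '_' + parts[1]
    raise RuntimeError('Found:{}'.format(parts))

def get_start_end_coordinates(orf_id):
    # start and stop sit just before the trailing strand field
    parts = orf_id.split('_')
    return pd.Series([int(parts[-3]), int(parts[-2])])

# hypothetical ORF IDs
tx1 = get_tx_from_orfid('ENST00000335137_100_652_+')
tx2 = get_tx_from_orfid('NR_046018_30_210_-')
start, stop = get_start_end_coordinates('ENST00000335137_100_652_+')
```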
# Approach
We need the transcript-level counts to process in riborex. We take the simple approach of first grouping
everything at the gene level. We only care about the protein_coding genes and the annotated ones.
```
selected_df = orf_df[filename]
selected_df[['start', 'stop']] =selected_df['ORF_ID'].apply(get_start_end_coordinates)
selected_df_grouped = selected_df.groupby(['gene_id', 'transcript_id'])
for key, group_df in selected_df_grouped:
profile_series_sum = pd.Series([])
index_series_sum = pd.Series([])
for index, row in group_df.iterrows():
profile = eval(row['profile'])
profile_series = pd.Series(profile, index=range(int(row['start']), int(row['stop'])+1 ))
index_series = pd.Series([1]*(row['stop']-row['start']+1), index=range(int(row['start']), int(row['stop'])+1 ))
profile_series_sum = profile_series_sum.add(profile_series, fill_value=0)  # fill_value avoids NaN where indices do not overlap
index_series_sum = index_series_sum.add(index_series, fill_value=0)
break
orf_id_range_index = {}
for orf_id, row in annotation.iterrows():
intervals_string = row['coordinate']
intervals = intervals_string.split(',')
interval_range_index = []
for interval in intervals:
start, end = interval.split('-')
interval_range_index = interval_range_index + list(range(int(start), int(end)+1))
orf_id_range_index[orf_id] = interval_range_index
metadata = pd.read_table('/staging/as/skchoudh/SRP010679_tx_counts/metadata_ribo.tsv')
metadata
metadata = metadata.loc[metadata.treatment.isin(['vehicle', 'pp242']),]
metadata
vehicle1 = orf_df['SRX118286']
vehicle2 = orf_df['SRX118292']
treatment1 = orf_df['SRX118290']
treatment2 = orf_df['SRX118296']
vehicle1 = get_uorf(vehicle1)
vehicle2 = get_uorf(vehicle2)
treatment1 = get_uorf(treatment1)
treatment2 = get_uorf(treatment2)
vehicle1['vehicle1_profile_sum'] = vehicle1.profile.apply(eval_profile)
vehicle2['vehicle2_profile_sum'] = vehicle2.profile.apply(eval_profile)
treatment1['treatment1_profile_sum'] = treatment1.profile.apply(eval_profile)
treatment2['treatment2_profile_sum'] = treatment2.profile.apply(eval_profile)
vehicle1_sum_df = vehicle1[['ORF_ID', 'vehicle1_profile_sum']]
vehicle2_sum_df = vehicle2[['ORF_ID', 'vehicle2_profile_sum']]
treatment1_sum_df = treatment1[['ORF_ID', 'treatment1_profile_sum']]
treatment2_sum_df = treatment2[['ORF_ID', 'treatment2_profile_sum']]
vehicle1_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_differential_analysis/SRX118286.tsv', header=False, index=False, sep='\t')
vehicle2_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_differential_analysis/SRX118292.tsv', header=False, index=False, sep='\t')
treatment1_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_differential_analysis/SRX118290.tsv', header=False, index=False, sep='\t')
treatment2_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_differential_analysis/SRX118296.tsv', header=False, index=False, sep='\t')
vehicle1 = orf_df['SRX118286']
vehicle2 = orf_df['SRX118292']
treatment1 = orf_df['SRX118290']
treatment2 = orf_df['SRX118296']
vehicle1 = get_uorf(vehicle1)
vehicle2 = get_uorf(vehicle2)
treatment1 = get_uorf(treatment1)
treatment2 = get_uorf(treatment2)
vehicle1 = get_translating(vehicle1)
vehicle2 = get_translating(vehicle2)
treatment1 = get_translating(treatment1)
treatment2 = get_translating(treatment2)
vehicle1['vehicle1_profile_sum'] = vehicle1.profile.apply(eval_profile)
vehicle2['vehicle2_profile_sum'] = vehicle2.profile.apply(eval_profile)
treatment1['treatment1_profile_sum'] = treatment1.profile.apply(eval_profile)
treatment2['treatment2_profile_sum'] = treatment2.profile.apply(eval_profile)
vehicle1_sum_df = vehicle1[['ORF_ID', 'vehicle1_profile_sum']]
vehicle2_sum_df = vehicle2[['ORF_ID', 'vehicle2_profile_sum']]
treatment1_sum_df = treatment1[['ORF_ID', 'treatment1_profile_sum']]
treatment2_sum_df = treatment2[['ORF_ID', 'treatment2_profile_sum']]
vehicle_both = list(sorted(set(vehicle1.ORF_ID).intersection(set(vehicle2.ORF_ID))))
treatment_both = list(sorted(set(treatment1.ORF_ID).intersection(set(treatment2.ORF_ID))))
combined_list = list(sorted(set(vehicle_both).intersection(treatment_both)))
vehicle1_sum_df = vehicle1_sum_df[vehicle1_sum_df.ORF_ID.isin(combined_list)].set_index('ORF_ID')
vehicle2_sum_df = vehicle2_sum_df[vehicle2_sum_df.ORF_ID.isin(combined_list)].set_index('ORF_ID')
treatment1_sum_df = treatment1_sum_df[treatment1_sum_df.ORF_ID.isin(combined_list)].set_index('ORF_ID')
treatment2_sum_df = treatment2_sum_df[treatment2_sum_df.ORF_ID.isin(combined_list)].set_index('ORF_ID')
vehicle1_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_translating_only_differential_analysis/SRX118286.tsv', header=False, index=True, sep='\t')
vehicle2_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_translating_only_differential_analysis/SRX118292.tsv', header=False, index=True, sep='\t')
treatment1_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_translating_only_differential_analysis/SRX118290.tsv', header=False, index=True, sep='\t')
treatment2_sum_df.to_csv('/staging/as/skchoudh/SRP010679_uORF_translating_only_differential_analysis/SRX118296.tsv', header=False, index=True, sep='\t')
vehicle_tx_list = [clean_tx(get_tx_from_orfid(x)) for x in vehicle_both]
treatment_tx_list = [clean_tx(get_tx_from_orfid(x)) for x in treatment_both]
cds_pp242_vs_vehicle_sig_down = pd.read_table('/staging/as/skchoudh/SRP010679_tx_differential_analysis/ribo_pp242_vs_vehicle.sig.down.tsv')
cds_pp242_vs_vehicle_sig_up = pd.read_table('/staging/as/skchoudh/SRP010679_tx_differential_analysis/ribo_pp242_vs_vehicle.sig.up.tsv')
cds_pp242_vs_vehicle_sig_up
cds_pp242_vs_vehicle_sig = pd.read_table('/staging/as/skchoudh/SRP010679_tx_differential_analysis/ribo_pp242_vs_vehicle.sig.tsv').reset_index().rename(columns={'index': 'txid', 'log2FoldChange': 'log2FC_CDS'})
cds_pp242_vs_vehicle_sig = cds_pp242_vs_vehicle_sig[['txid', 'log2FC_CDS']].sort_values(by=['txid']).set_index('txid')
cds_pp242_vs_vehicle_sig
uorf_pp242_vs_vehicle = pd.read_table('/staging/as/skchoudh/SRP010679_uORF_translating_only_differential_analysis/ribo_pp242_vs_vehicle.tsv').reset_index()
uorf_pp242_vs_vehicle['txid'] = uorf_pp242_vs_vehicle['index'].apply(get_tx_from_orfid)
uorf_pp242_vs_vehicle = uorf_pp242_vs_vehicle.sort_values(by='txid')
uorf_tx_level_fc = uorf_pp242_vs_vehicle[['log2FoldChange', 'txid']].groupby('txid').max().reset_index().sort_values(by=['txid']).rename(columns={'log2FoldChange': 'log2FC_uorf'}).set_index('txid')
uorf_cds_fc_df = uorf_tx_level_fc.join(cds_pp242_vs_vehicle_sig, how='inner')
fig, ax = plt.subplots(figsize=(8,8))
ax.scatter(uorf_cds_fc_df.log2FC_CDS, uorf_cds_fc_df.log2FC_uorf)
ax.set_xlabel('log2FC_CDS')
ax.set_ylabel('log2FC_uORF')
fig.tight_layout()
ax = sns.lmplot(x='log2FC_CDS', y= 'log2FC_uorf', data=uorf_cds_fc_df)
ax.set(xlabel='log2FC_CDS', ylabel = 'log2FC_uORF')
plt.savefig('uORF_CDS_log2FC.pdf')
```
# uORF-vs-TE Final results
```
annotation = pd.read_table('/staging/as/skchoudh/ribocop_hg38_feb2019_annotation_longest_candidate_orfs.tsv').set_index('ORF_ID')
t2g = annotation[['transcript_id', 'gene_id']].drop_duplicates().set_index('transcript_id')
diff_uorf_sig = pd.read_table('/home/cmb-panasas2/wenzhenl/github/ribocop-results/real/foldchange.txt')
diff_uorf_sig['transcript_id'] = diff_uorf_sig.ORF_ID.apply(get_tx_from_orfid)
diff_uorf_sig['gene_id'] = t2g.loc[diff_uorf_sig['transcript_id'], 'gene_id'].values
diff_uorf_sig['transcript_id'] = diff_uorf_sig['transcript_id'].apply(clean_tx)
diff_uorf_sig['gene_id'] = diff_uorf_sig['gene_id'].apply(clean_tx)
diff_uorf_sig.to_csv('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/foldchange_annotated.tsv', index=False, sep='\t')
diff_T_sig = pd.read_table('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/diff_T_sig.txt', sep=' ').reset_index().rename(columns={'index': 'ORF_ID'})
diff_T_sig['transcript_id'] = diff_T_sig.ORF_ID.apply(get_tx_from_orfid)
diff_T_sig['gene_id'] = t2g.loc[diff_T_sig['transcript_id'], 'gene_id'].values
diff_T_sig['transcript_id'] = diff_T_sig['transcript_id'].apply(clean_tx)
diff_T_sig['gene_id'] = diff_T_sig['gene_id'].apply(clean_tx)
diff_T_sig.to_csv('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/diff_T_sig_annotated.tsv', index=False, sep='\t')
diff_U_sig = pd.read_table('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/diff_U_sig.txt', sep=' ').reset_index().rename(columns={'index': 'ORF_ID'})
diff_U_sig['transcript_id'] = diff_U_sig.ORF_ID.apply(get_tx_from_orfid)
diff_U_sig['gene_id'] = t2g.loc[diff_U_sig['transcript_id'], 'gene_id'].values
diff_U_sig['transcript_id'] = diff_U_sig['transcript_id'].apply(clean_tx)
diff_U_sig['gene_id'] = diff_U_sig['gene_id'].apply(clean_tx)
diff_U_sig.to_csv('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/diff_U_sig_annotated.tsv', index=False, sep='\t')
with open('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/diff_U_sig_genelist.txt', 'w') as fh:
for gene in diff_U_sig.gene_id.unique():
fh.write('{}\n'.format(gene))
with open('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/diff_T_sig_genelist.txt', 'w') as fh:
for gene in diff_T_sig.gene_id.unique():
fh.write('{}\n'.format(gene))
with open('/home/cmb-panasas2/skchoudh/github_projects/ribocop-results/real/foldchange_genelist.txt', 'w') as fh:
for gene in diff_uorf_sig.gene_id.unique():
fh.write('{}\n'.format(gene))
```
# Nearest Centroid Classification with MInMaxScaler & PowerTransformer
This code template is for a classification task using a simple NearestCentroid classifier, with the feature-rescaling technique MinMaxScaler and the feature-transformation technique PowerTransformer combined in a pipeline.
### Required Packages
```
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder,MinMaxScaler, PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features = []
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and the target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models in the sklearn library don't handle string categories or null values, we have to explicitly remove or replace them. The snippet below contains functions that remove null values if any exist, and convert string classes in the dataset by encoding them as integer classes.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and df.dtype in ["float64", "int64"]:
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique()) <= 2:
        return df
    else:
        un_EncodedT = np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df = LabelEncoder().fit_transform(df)
        EncodedT = [xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT, EncodedT))
        return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn perform poorly on, the minority class, although it is often performance on the minority class that matters most.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
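As a sketch of what `RandomOverSampler` does under the hood, the duplication idea can be written in a few lines of pure Python (a toy illustration with made-up data, not the library's actual implementation):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=123):
    """Duplicate minority-class samples at random until classes are balanced.

    Minimal pure-Python sketch of random oversampling; the real
    RandomOverSampler handles array/DataFrame inputs and sampling strategies.
    """
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_res, y_res = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)  # resample an existing minority example
            X_res.append(X[i])
            y_res.append(y[i])
    return X_res, y_res

X_toy = [[0], [1], [2], [3], [4], [5]]
y_toy = [0, 0, 0, 0, 1, 1]
X_res, y_res = random_oversample(X_toy, y_toy)
print(Counter(y_res))  # Counter({0: 4, 1: 4})
```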
### Data Scaling
Used sklearn.preprocessing.MinMaxScaler
This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html)
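Concretely, the scaler applies `x_scaled = (x - min) / (max - min)`, optionally rescaled to a target range. A minimal hand-rolled sketch for a single feature (illustrative only; the real MinMaxScaler learns min/max on the training set and reuses them at transform time):

```python
def min_max_scale(column, feature_range=(0, 1)):
    """Scale one feature to feature_range, mirroring MinMaxScaler's formula:
    X_std = (X - X.min()) / (X.max() - X.min()); X_scaled = X_std * (hi - lo) + lo
    """
    lo, hi = feature_range
    cmin, cmax = min(column), max(column)
    return [(x - cmin) / (cmax - cmin) * (hi - lo) + lo for x in column]

print(min_max_scale([10, 15, 20]))                        # [0.0, 0.5, 1.0]
print(min_max_scale([0, 5, 10], feature_range=(-1, 1)))   # [-1.0, 0.0, 1.0]
```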
### Feature Transformation
Used sklearn.preprocessing.PowerTransformer
Apply a power transform featurewise to make data more Gaussian-like.
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
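For intuition, the Box-Cox member of this family (for positive data and a fixed λ) is (x^λ − 1)/λ, which reduces to log x as λ → 0. A small sketch with a hand-picked λ (PowerTransformer itself estimates λ by maximum likelihood and defaults to the Yeo-Johnson variant, which also handles non-positive values):

```python
import math

def box_cox(x, lam):
    """Box-Cox transform for positive x and a fixed lambda.

    Illustrative only: PowerTransformer chooses lambda automatically and
    standardizes the output; here lambda is supplied by hand.
    """
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

# lam = 0 is the log transform; lam = 1 only shifts the data
print(box_cox(math.e, 0))  # 1.0
print(box_cox(5.0, 1))     # 4.0
```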
### Model
The NearestCentroid classifier is a simple algorithm that represents each class by the centroid of its members. In effect, this makes it similar to the label updating phase of the KMeans algorithm. It also has no parameters to choose, making it a good baseline classifier. It does, however, suffer on non-convex classes, as well as when classes have drastically different variances, as equal variance in all dimensions is assumed.
#### Tuning Parameter
> **metric** : The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter. The centroids for the samples corresponding to each class is the point from which the sum of the distances of all samples that belong to that particular class are minimized. If the “manhattan” metric is provided, this centroid is the median and for all other metrics, the centroid is now set to be the mean.
> **shrink_threshold**: Threshold for shrinking centroids to remove features.
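The centroid idea itself fits in a few lines; a pure-Python sketch on toy 2-D points (hypothetical data, not the dataset loaded above): each class is represented by the mean of its members, and a new point is assigned to the class with the nearest centroid.

```python
def fit_centroids(X, y):
    """Compute the per-class mean (centroid) of the training points."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(coord) / len(pts) for coord in zip(*pts)]
    return centroids

def predict(centroids, point):
    """Assign `point` to the class whose centroid is closest."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab], point))

X = [[0, 0], [0, 2], [4, 4], [4, 6]]
y = [0, 0, 1, 1]
cents = fit_centroids(X, y)   # {0: [0.0, 1.0], 1: [4.0, 5.0]}
print(predict(cents, [1, 1])) # 0
print(predict(cents, [3, 5])) # 1
```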
```
# Build Model here
model = make_pipeline(MinMaxScaler(),PowerTransformer(),NearestCentroid())
model.fit(x_train, y_train)
```
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
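The subset-accuracy criterion can be illustrated with a toy sketch (hypothetical label sets, not output of the model above): a multi-label prediction counts as correct only when the entire label set matches.

```python
# Two of the three predicted label sets match exactly, so subset accuracy is 2/3,
# even though the second prediction got one of its two labels right.
y_true = [{'a', 'b'}, {'a'}, {'b', 'c'}]
y_pred = [{'a', 'b'}, {'a', 'c'}, {'b', 'c'}]

subset_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(subset_acc)  # 2/3 exact matches
```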
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A Classification report is used to measure the quality of predictions from a classification algorithm. How many predictions are True, how many are False.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
- f1-score:- harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
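These quantities can be computed by hand from the raw counts of one class; a toy sketch with made-up counts (not output of the model above):

```python
# Hypothetical counts for one class: true positives, false positives, false negatives
tp, fp, fn = 8, 2, 4

precision = tp / (tp + fp)                        # accuracy of positive predictions
recall = tp / (tp + fn)                           # fraction of positives identified
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.8 0.667 0.727
```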
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Snehaan Bhawal , Github: [Profile](https://github.com/Sbhawal)
# Finger 1
- Analyze the distribution of applications to job postings by:
  - date
  - day of the month
  - day of the week
  - etc.
- Some questions that could be posed:
  - Is a similar volume of applications received on every day of the week?
  - Are there variations by month of the year, or even within a month?
  - Etc.
- Find out:
  - which work area (nombre_area) concentrates the largest number of job offers
  - and, within that area, which zone (nombre_zona) has the most demand?
```
%matplotlib inline
from datetime import datetime
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('default') # making the plots a bit prettier
plt.rcParams['figure.figsize'] = (15, 5)
```
## Analysis of applicants
```
DATA_PATH = 'datos_navent_fiuba/'
postulantes_educacion = pd.read_csv('./datos_navent_fiuba/fiuba_1_postulantes_educacion.csv')
postulantes_educacion.head()
postulantes_genero_y_edad = pd.read_csv('./datos_navent_fiuba/fiuba_2_postulantes_genero_y_edad.csv')
postulantes_genero_y_edad.head()
postulantes = pd.merge(postulantes_educacion, postulantes_genero_y_edad, on='idpostulante')
postulantes.head()
```
## Analysis of applications
```
postulaciones = pd.read_csv('./datos_navent_fiuba/fiuba_4_postulaciones.csv')
postulaciones['fechapostulacion'] = pd.to_datetime(postulaciones['fechapostulacion'])
postulaciones['aniopostulacion'] = postulaciones['fechapostulacion'].dt.year
postulaciones['mespostulacion'] = postulaciones['fechapostulacion'].dt.month
postulaciones['diasemanapostulacion'] = postulaciones['fechapostulacion'].dt.weekday_name
postulaciones['diapostulacion'] = postulaciones['fechapostulacion'].dt.date
postulaciones.head()
postulaciones['monthname'] = postulaciones['fechapostulacion'].dt.strftime('%b')
postulaciones.groupby('monthname').count()['idpostulante'].plot(kind='bar', figsize=(14,4), rot=0);
ax=plt.gca();
ax.invert_xaxis();
ax.set_ylabel('Cantidad de postulaciones');
ax.set_xlabel('Mes');
ax.set_title('Cantidad de postulaciones totales por mes');
```
## Applications per day in January
```
postulaciones_mes1 = postulaciones.loc[postulaciones['mespostulacion'] == 1, :]
postulaciones_mes1 = postulaciones_mes1.groupby('diapostulacion').count()
postulaciones_mes1.insert(0, 'diadelmes', range(1, 1 + len(postulaciones_mes1)))
postulaciones_mes1 = postulaciones_mes1.set_index('diadelmes')
#postulaciones_mes1 = postulaciones_mes1.reset_index()
print ("Cantidad de postulantes en mes de enero:", postulaciones_mes1['idpostulante'].sum())
postulaciones_mes1['idpostulante'].plot(figsize=(14,4));
ax=plt.gca()
ax.set_ylabel('Cantidad de postulaciones')
ax.set_xlabel('Dias del mes');
ax.set_title('Cantidad de postulaciones por dia en el mes de enero')
plt.xticks(np.arange(1, len(postulaciones_mes1), 1.0));
```
## Applications per day in February
```
postulaciones_mes2 = postulaciones.loc[postulaciones['mespostulacion'] == 2, :]
postulaciones_mes2 = postulaciones_mes2.groupby('diapostulacion').count()
postulaciones_mes2.insert(0, 'diadelmes', range(1, 1 + len(postulaciones_mes2)))
postulaciones_mes2 = postulaciones_mes2.set_index('diadelmes')
#postulaciones_mes2 = postulaciones_mes2.reset_index()
print("Cantidad de postulantes en mes de febrero:", postulaciones_mes2['idpostulante'].sum())
postulaciones_mes2['idpostulante'].plot(figsize=(14,4));
ax=plt.gca()
ax.set_ylabel('Cantidad de postulaciones')
ax.set_xlabel('Dias del mes');
ax.set_title('Cantidad de postulaciones por dia en el mes de febrero')
plt.xticks(np.arange(1, len(postulaciones_mes2), 1.0));
```
## Average applications per day of the month
```
postulaciones_mes1 = postulaciones_mes1[['idpostulante']]
postulaciones_mes1 = postulaciones_mes1.rename(columns={'idpostulante': 'count'})
postulaciones_mes1['mes'] = 1
postulaciones_mes2 = postulaciones_mes2[['idpostulante']]
postulaciones_mes2 = postulaciones_mes2.rename(columns={'idpostulante': 'count'})
postulaciones_mes2['mes'] = 2
postulacionespormes = pd.concat([postulaciones_mes1, postulaciones_mes2]).groupby('diadelmes')
postulacionespormes = postulacionespormes.agg({'count': 'mean'})
print("Postulaciones promedio mensual:", postulacionespormes['count'].sum())
postulacionespormes.plot(figsize=(14,4));
ax=plt.gca()
ax.set_ylabel('Cantidad de postulaciones')
ax.set_xlabel('Dias del mes');
ax.set_title('Cantidad de postulaciones promedio por dia del mes')
plt.xticks(np.arange(1, len(postulacionespormes), 1.0));
```
## Applications by date
```
postulaciones.groupby('diasemanapostulacion')['idpostulante'].count()
postulaciones.groupby('diapostulacion').count()['idpostulante'].plot(figsize=(14,4));
ax=plt.gca()
ax.set_ylabel('Cantidad de postulaciones')
ax.set_xlabel('Tiempo (dia)');
ax.set_title('Cantidad de postulaciones en funcion del tiempo');
```
## Total applications per day of the week
```
postulaciones_totales_por_dia = postulaciones.groupby('diasemanapostulacion').count()['idpostulante']
postulaciones_totales_por_dia.plot(kind='bar', figsize=(14,4));
ax=plt.gca();
ax.set_ylabel('Cantidad de postulaciones');
ax.set_xlabel('Dia de la semana');
ax.set_title('Cantidad de postulaciones acumuladas por dia de la semana');
```
## Analysis of job offers
```
avisos_online = pd.read_csv('./datos_navent_fiuba/fiuba_5_avisos_online.csv')
avisos_online.head()
avisos_online_detalle = pd.read_csv('./datos_navent_fiuba/fiuba_6_avisos_detalle.csv')
avisos_online_detalle.head()
cant_avisos_por_area = avisos_online_detalle.groupby('nombre_area').count()['idaviso'].sort_values(ascending=False)
cant_avisos_por_area.head()
cant_avisos_por_area.head().plot(kind='barh', figsize=(14,4));
ax=plt.gca();
ax.invert_yaxis();
ax.set_ylabel('Areas');
ax.set_xlabel('Cantidad de avisos');
ax.set_title('Cantidad de avisos por area');
avisos_ventas = avisos_online_detalle[avisos_online_detalle['nombre_area'] == 'Ventas']
avisos_ventas.head()
avisos_ventas_por_zone = avisos_ventas.groupby('nombre_zona').count()['idaviso'].sort_values(ascending=False)
avisos_ventas_por_zone
avisos_ventas_por_zone.head().plot(kind='bar', figsize=(14,4), rot=0);
ax=plt.gca();
ax.set_ylabel('Zonas');
ax.set_xlabel('Cantidad de avisos');
ax.set_title('Cantidad de avisos del area ventas por zona');
```
# Import ScikitLearn, Pandas and Numpy
```
import sklearn
import pandas as pd
import numpy as np
```
# 1. Read the Dataset using Pandas
```
train_data = pd.read_csv("data/image_train_data/image_train_data.csv")
test_data = pd.read_csv("data/image_test_data/image_test_data.csv")
train_data
```
# 2. Exploratory Data Analysis
```
train_data.head()
train_data.info()
train_data.describe()
import seaborn as sns
fig = sns.countplot(x='label', data=train_data)
```
# 3. Data Preprocessing
The "deep_features" and "image_array" columns have their lists represented as strings, so they need to be converted to lists of floats and integers, respectively, before use.
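As a standalone sketch of the parsing step, the bracketed-number strings can be tokenized like this (assuming, as the notebook does, that values are separated by spaces or commas; `parse_bracketed_floats` is an illustrative helper, not part of the original code):

```python
def parse_bracketed_floats(s):
    """Parse a string such as '[0.5 1.25 3.0]' or '[0.5, 1.25, 3.0]' into floats.

    Treating commas as spaces and then calling split() avoids the empty tokens
    that a replace(' ', ',').split(',') approach produces when separators are ', '.
    """
    tokens = s.strip('[]').replace(',', ' ').split()
    return [float(t) for t in tokens]

print(parse_bracketed_floats('[0.5 1.25 3.0]'))  # [0.5, 1.25, 3.0]
print(parse_bracketed_floats('[1, 2, 3]'))       # [1.0, 2.0, 3.0]
```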
```
def filtering_image_array(row):
y = row.loc['deep_features']
y = y.replace('[', '').replace(']','').replace(" ", ",").split(',')
y = list(map(float, y))
row.loc['deep_features'] = y
x = row.loc['image_array']
x = x.replace('[', '').replace(']','').replace(" ", ",").split(',')
x = list(map(int, x))
row.loc['image_array'] = x
return row
train_data = train_data.apply(filtering_image_array, axis=1)
test_data = test_data.apply(filtering_image_array, axis=1)
train_data.head()
im = train_data.loc[0]['image_array']
im[0:10]
```
Plot images by index
```
import matplotlib.pyplot as plt
def plot_images_index(ids, data):
for idx in ids:
row = data.loc[idx]
im = np.asarray(row.loc['image_array'])
im.resize(32,32,3)
plt.figure(figsize=(1, 1))
plt.imshow(im)
plt.show()
def plot_images_id(ids, data):
for idx in ids:
row = data.loc[data['id'] == idx]
im = np.asarray(row.loc[:, 'image_array'].tolist())
im = np.resize(im, (32,32,3))
plt.figure(figsize=(1, 1))
plt.imshow(im)
plt.show()
x = [24, 33, 36, 70, 90]
plot_images_id(x, train_data)
x = [0, 1, 2, 3, 4]
plot_images_index(x, train_data)
```
Apply LabelEncoder() on the label column of both training and test datasets.
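For reference, `LabelEncoder` simply maps the sorted unique labels to 0..n_classes−1; a pure-Python sketch of that mapping (toy labels, not the dataset's):

```python
def fit_label_encoder(labels):
    """Map sorted unique labels to 0..n-1, mimicking sklearn's LabelEncoder."""
    classes = sorted(set(labels))
    return {lab: i for i, lab in enumerate(classes)}

mapping = fit_label_encoder(['dog', 'cat', 'bird', 'dog'])
print(mapping)                                # {'bird': 0, 'cat': 1, 'dog': 2}
print([mapping[l] for l in ['cat', 'dog']])   # [1, 2]
```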
```
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit(train_data['label'])
def encodin_labels(row):
encoded_label = encoder.transform([row['label']])
row['encoded_label'] = encoded_label[0]
return row
train_data = train_data.apply(encodin_labels, axis=1)
train_data.head()
def encodin_labels(row):
encoded_label = encoder.transform([row['label']])
row['encoded_label'] = encoded_label[0]
return row
test_data = test_data.apply(encodin_labels, axis=1)
```
# 4. Logistic Regression Pipeline
```
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
pipeline_simple = Pipeline(
steps=[
('StandardScaler', StandardScaler()),
('Logistic_Regression', LogisticRegression(multi_class='multinomial', max_iter=1000))
],
verbose=True
)
pipeline_complex = Pipeline(
steps=[
('StandardScaler', StandardScaler()),
('Logistic_Regression', LogisticRegression(multi_class='multinomial', max_iter=1000))
],
verbose=True
)
from sklearn import set_config
set_config(display='diagram')
```
# 5. Pipeline Training
```
features_simple = 'image_array'
label = 'encoded_label'
X_train_simple = np.array(train_data[features_simple].tolist())
y_train = np.array(train_data[label].tolist())
pipeline_simple.fit(X_train_simple, y_train)
features_complex = 'deep_features'
X_train_complex = np.array(train_data[features_complex].tolist())
y_train = np.array(train_data[label].tolist())
pipeline_complex.fit(X_train_complex, y_train)
```
## PLot Learning Curves Pipeline
```
from sklearn.model_selection import learning_curve
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : integer, cross-validation generator, optional
If an integer is passed, it is the number of folds (defaults to 3).
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
"""
estimator.verbose = False
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
estimator.verbose = True
return plt
```
# 6. Metrics
```
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
X_test = np.array(test_data[features_simple].tolist())
y_pred_simple = pipeline_simple.predict(X_test)
y_true = np.array(test_data[label].tolist())
print('Classification report:\n\n{}'.format(
classification_report(test_data[label], y_pred_simple))
)
print('Accuracy Score:', accuracy_score(y_true, y_pred_simple))
X_test = np.array(test_data[features_complex].tolist())
y_pred_complex = pipeline_complex.predict(X_test)
y_true = np.array(test_data[label].tolist())
print('Classification report:\n\n{}'.format(
classification_report(test_data[label], y_pred_complex))
)
print('Accuracy Score:', accuracy_score(y_true, y_pred_complex))
fig = plot_learning_curve(pipeline_simple, 'Simple Pipeline Learning Curves', X_train_simple, y_train)
fig = plot_learning_curve(pipeline_complex, 'Complex Pipeline Learning Curves', X_train_complex, y_train)
```
# Assignment
## Task 2: Create category-specific image retrieval models
```
bird_data = train_data.query('label == "bird"')
bird_data = bird_data.reset_index(drop=True)
bird_data.head()
cat_data = train_data.query('label == "cat"')
cat_data = cat_data.reset_index(drop=True)
cat_data.head()
dog_data = train_data.query('label == "dog"')
dog_data = dog_data.reset_index(drop=True)
dog_data.head()
automobile_data = train_data.query('label == "automobile"')
automobile_data = automobile_data.reset_index(drop=True)
automobile_data.head()
from sklearn.neighbors import NearestNeighbors
bird_model = Pipeline(
steps=[
('Nearest Neighbor', NearestNeighbors(metric='euclidean', algorithm='brute'))
],
verbose=True
)
cat_model = Pipeline(
steps=[
('Nearest Neighbor', NearestNeighbors(metric='euclidean', algorithm='brute'))
],
verbose=True
)
dog_model = Pipeline(
steps=[
('Nearest Neighbor', NearestNeighbors(metric='euclidean', algorithm='brute'))
],
verbose=True
)
automobile_model = Pipeline(
steps=[
('Nearest Neighbor', NearestNeighbors(metric='euclidean', algorithm='brute'))
],
verbose=True
)
X_train_bird = np.array(bird_data[features_complex].tolist())
bird_model.fit(X_train_bird)
X_train_cat = np.array(cat_data[features_complex].tolist())
cat_model.fit(X_train_cat)
X_train_dog = np.array(dog_data[features_complex].tolist())
dog_model.fit(X_train_dog)
X_train_automobile = np.array(automobile_data[features_complex].tolist())
automobile_model.fit(X_train_automobile)
```
## 3. A simple example of nearest-neighbors classification:
```
def nearest_image(sample, type_data, pipeline):
distances, indices = pipeline['Nearest Neighbor'].kneighbors(np.array(sample[features_complex]).reshape(1, -1))
neighbors = pd.DataFrame({'distance':distances[0].tolist(), 'index':indices[0].tolist()})
left_join = pd.merge(neighbors, type_data, how='left', left_on='index', right_index=True)
plot_images_id([left_join.loc[0, 'id']], type_data)
return left_join
sample = test_data.loc[0]
sample_list = [sample.id]
plot_images_id(sample_list, test_data)
table = nearest_image(sample, cat_data, cat_model)
table['distance'].mean()
sample = test_data.loc[0]
sample_list = [sample.id]
plot_images_id(sample_list, test_data)
table = nearest_image(sample, dog_data, dog_model)
table['distance'].mean()
```
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms, models
from torch.autograd import Variable
data_dir = 'Cat_Dog_data'
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
test_data
train_data
train_loader = torch.utils.data.DataLoader(train_data,batch_size=128,shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data,batch_size=128)
model = models.densenet121(pretrained=True)
print(model)
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([('fc1',nn.Linear(1024,500)),
('relu',nn.ReLU()),
('fc2',nn.Linear(500,2)),
('output',nn.LogSoftmax(dim=1))]))
model.classifier = classifier
torch.cuda.is_available()
import time
#for cuda in [True, False]:
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(),lr=0.003)
# if cuda:
# model.cuda()
# else:
# model.cpu()
for ii, (inputs, labels) in enumerate(train_loader):
inputs, labels = Variable(inputs), Variable(labels)
# if cuda:
# inputs, labels = inputs.cuda(), labels.cuda()
# else:
# inputs, labels = inputs.cpu(), labels.cpu()
start = time.time()
outputs = model.forward(inputs)
loss = criterion(outputs,labels)
loss.backward()
optimizer.step()
if ii==1:
break
print(f"Time for one batch: {time.time()-start:.3f} seconds")
```
### Full Model
```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
model.to(device);
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
    for inputs, labels in train_loader:
        steps += 1
        # Move input and label tensors to the default device
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        logps = model.forward(inputs)
        loss = criterion(logps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if steps % print_every == 0:
            test_loss = 0
            accuracy = 0
            model.eval()
            with torch.no_grad():
                for inputs, labels in test_loader:
                    inputs, labels = inputs.to(device), labels.to(device)
                    logps = model.forward(inputs)
                    batch_loss = criterion(logps, labels)
                    test_loss += batch_loss.item()
                    # Calculate accuracy
                    ps = torch.exp(logps)
                    top_p, top_class = ps.topk(1, dim=1)
                    equals = top_class == labels.view(*top_class.shape)
                    accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
            print(f"Epoch {epoch+1}/{epochs}.. "
                  f"Train loss: {running_loss/print_every:.3f}.. "
                  f"Test loss: {test_loss/len(test_loader):.3f}.. "
                  f"Test accuracy: {accuracy/len(test_loader):.3f}")
            running_loss = 0
            model.train()
```
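The accuracy bookkeeping above (`ps.topk(1, dim=1)` followed by an equality check against the labels) reduces to taking the arg-max of each probability row and comparing it to the true class. A framework-free sketch of that computation, on made-up probability rows:

```python
# Framework-free sketch of the top-1 accuracy bookkeeping used above.
# The probability rows and labels here are made-up illustration data.

def top1_accuracy(prob_rows, labels):
    """Fraction of rows whose arg-max index equals the label."""
    correct = 0
    for probs, label in zip(prob_rows, labels):
        predicted = max(range(len(probs)), key=probs.__getitem__)
        if predicted == label:
            correct += 1
    return correct / len(labels)

probs = [
    [0.1, 0.9],   # predicts class 1
    [0.8, 0.2],   # predicts class 0
    [0.4, 0.6],   # predicts class 1
    [0.7, 0.3],   # predicts class 0
]
labels = [1, 0, 0, 0]  # the third row is a miss

print(top1_accuracy(probs, labels))  # 0.75
```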
```
# OPTIONAL: Load the "autoreload" extension so that code can change
%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
%autoreload 2
# Adding project root and custom functions
from os import listdir, getcwd
from os.path import isfile, join
import sys
from pathlib import Path
def find_root_dir():
# find config.py
path = Path(getcwd()).parent
onlyfiles = [f for f in listdir(path) if isfile(join(path, f))]
while "config.py" not in onlyfiles:
path = path.parent
onlyfiles = [f for f in listdir(path) if isfile(join(path, f))]
return path
ROOT_DIR = str(find_root_dir())
sys.path.append(ROOT_DIR)
from config import get_project_root
# custom functs
from src.features import build_features
from src.visualization.visualize import plot_confusion_matrix
import pandas as pd
import json
import matplotlib.pyplot as plt
import numpy as np
import random
from collections import Counter
from nltk import ngrams
from nltk.tokenize import TweetTokenizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import Binarizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
from sklearn.utils.multiclass import unique_labels
```
## Prepare data
```
ROOT_PATH = get_project_root()
D1_PATH = ROOT_PATH / "data/raw/EmotionPush/emotionpush.augmented.json"
df = build_features.to_df(D1_PATH)
# simplify emotion labels
df = df[df.emotion2.isin(["neutral", "joy", "sadness", "anger"])]
```
## Feature transform (onehot)
```
tokenizer = TweetTokenizer(reduce_len=True, preserve_case=False)
X = df.utterance2
freq = CountVectorizer(tokenizer=tokenizer.tokenize, stop_words='english', lowercase=True)
X = freq.fit_transform(X)
onehot = Binarizer()
X = onehot.fit_transform(X.toarray())
```
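`CountVectorizer` followed by `Binarizer` amounts to building a vocabulary, counting token occurrences per document, and then clamping every positive count to 1. A dependency-free sketch of that pipeline on toy sentences (the sentences and vocabulary are illustrative, not from the EmotionPush data):

```python
# Dependency-free sketch of CountVectorizer + Binarizer:
# build a vocabulary, count tokens per document, clamp counts to 0/1.
docs = ["happy happy joy", "sad day", "joy and joy"]

# Vocabulary: token -> column index, in first-seen order
vocab = {}
for doc in docs:
    for tok in doc.split():
        vocab.setdefault(tok, len(vocab))

def one_hot(doc):
    counts = [0] * len(vocab)
    for tok in doc.split():
        counts[vocab[tok]] += 1
    return [1 if c > 0 else 0 for c in counts]  # the Binarizer step

X = [one_hot(d) for d in docs]
print(vocab)  # {'happy': 0, 'joy': 1, 'sad': 2, 'day': 3, 'and': 4}
print(X[0])   # [1, 1, 0, 0, 0] -- "happy" occurs twice but is clamped to 1
```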
## Stratified train test split
```
# X as is from above
y = df.emotion2
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0, stratify=y)
```
## Fit Model & predict
```
clf = LinearSVC(random_state=0)
clf.fit(X_train, y_train)
# predict
y_pred = clf.predict(X_test)
```
## Eval predictions
```
np.set_printoptions(precision=2)
plot_confusion_matrix(y_test, y_pred,
title='Confusion matrix, without normalization')
plt.show()
# Plot normalized confusion matrix
plot_confusion_matrix(y_test, y_pred, normalize=True,
title='Normalized confusion matrix')
def myscorer(y_test, y_pred):
# macro average (unweighted mean of per-class precision, recall, f1) over anger, joy, sadness
classes_names = ["anger", "joy", "neutral", "sadness"]
angersum = len([ x for x in y_pred if x == "anger"])
joysum = len([ x for x in y_pred if x == "joy"])
sadsum = len([ x for x in y_pred if x == "sadness"])
angertot = len([ x for x in y_test if x == "anger"])
joytot = len([ x for x in y_test if x == "joy"])
sadtot = len([ x for x in y_test if x == "sadness"])
prec = precision_score(y_test, y_pred, average=None)
recall = recall_score(y_test, y_pred, average=None)
f1 = f1_score(y_test, y_pred, average=None)
print("Anger\t\t -> # pred: {}/{},\t prec: {:.3f}, recall: {:.3f}, f1: {:.3f}".format(
angersum, angertot, prec[0], recall[0], f1[0]))
print("Joy\t\t -> # pred: {}/{},\t prec: {:.3f}, recall: {:.3f}, f1: {:.3f}".format(
joysum, joytot, prec[1], recall[1], f1[1]))
print("Sadness\t\t -> # pred: {}/{},\t prec: {:.3f}, recall: {:.3f}, f1: {:.3f}".format(
sadsum, sadtot, prec[3], recall[3], f1[3]))
print("Macro\t\t -> # pred: {}/{},\t prec: {:.3f}, recall: {:.3f}, f1: {:.3f}".format(
angersum+joysum+sadsum, angertot+joytot+sadtot, (prec[0]+prec[1]+prec[3])/3,
(recall[0]+recall[1]+recall[3])/3,
(f1[0]+f1[1]+f1[3])/3 ))
myscorer(y_test, y_pred)
```
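The averaging in `myscorer` is an unweighted mean of the per-class scores. A pure-Python sketch of per-class precision/recall and their macro mean, on toy labels (illustration data only):

```python
# Pure-Python sketch of per-class precision/recall and their macro mean,
# mirroring what myscorer does with sklearn's average=None outputs.
# Toy labels for illustration only.
y_true = ["anger", "joy", "joy", "sadness", "anger", "joy"]
y_pred = ["anger", "joy", "anger", "sadness", "anger", "joy"]

def prec_recall(cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t == cls)
    n_pred = sum(1 for p in y_pred if p == cls)
    n_true = sum(1 for t in y_true if t == cls)
    return (tp / n_pred if n_pred else 0.0,
            tp / n_true if n_true else 0.0)

classes = ["anger", "joy", "sadness"]
scores = {c: prec_recall(c) for c in classes}
macro_prec = sum(p for p, _ in scores.values()) / len(classes)
macro_rec = sum(r for _, r in scores.values()) / len(classes)
print(scores["anger"])  # 2 TP out of 3 predicted angers, 2 of 2 true angers
```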
```
import pandas as pd
import re
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import os
import glob, ntpath
def file_modify(filename, a=None):
with open(filename, 'r+') as file:
content = file.read()
for ch in ['"']:
if ch in content:
content = content.replace(ch,'')
with open(filename, 'w') as file:
file.write(content)
df = pd.read_csv("output/functions.csv")
cols = ['TYPE', 'ASSOC. WITH']
df = df[cols]
df = df[df['ASSOC. WITH'].notnull()]
df.reset_index(drop=True, inplace=True)
a = df['ASSOC. WITH'].unique()
df = pd.DataFrame(a,columns=['Dialog'])
df['Dialog'] = " DIALOG=(" + df['Dialog'] + ")"
df['Version'] = 'VER=0001'
df['Reports'] = 'REPORTS=(SUMMARY,PROCESSES)'
df.to_csv("1.txt", index=False, header=False)
file_modify("1.txt")
text = "C:/Users/TapajyotiDeb/Downloads/mrdp/MRDPADSC.TXT"
with open(text,'r') as file:
content = file.read()
lines = content.split("DIALOG...: ")
lines = [x for x in lines if x]
dialogname = []
schema = []
onmap = []
inmap = []
oumap = []
with open(text,'r') as file:
content = file.readlines()
for line in content:
if "DIALOG...:" in line:
try:
name = re.search('DIALOG...: (.*) SCHEMA', line).group(1)
dialogname.append(name.strip())
except:
dialogname.append('')
if "SCHEMA...:" in line:
try:
name = re.search('SCHEMA...: (.*) ONLINE', line).group(1)
schema.append(name.strip())
except:
schema.append('')
if "ONLINE MAP:" in line:
try:
name = re.search('ONLINE MAP: (.*) INPUT', line).group(1)
onmap.append(name.strip())
except:
onmap.append('')
if "INPUT MAP.:" in line:
try:
name = re.search('INPUT MAP.: (.*) OUTPUT', line).group(1)
inmap.append(name.strip())
except:
inmap.append('')
if "OUTPUT MAP" in line:
try:
name = re.search('OUTPUT MAP:(.*)', line).group(1)
oumap.append(name.strip())
except:
oumap.append('')
df1 = pd.DataFrame(list(zip(dialogname, schema, onmap, inmap, oumap)), columns=['DIALOG', 'SCHEMA', 'ONLINE MAP', 'INPUT MAP', 'OUTPUT MAP'])
df1.to_csv("output/adsc_dialog.csv", index=False)
outname = ntpath.basename(text).split(".")[0]
newdf =df1[['ONLINE MAP']]
newdf.replace('', np.nan, inplace=True)
newdf.dropna(inplace=True)
newdf.reset_index(drop=True, inplace=True)
newdf = " MAP=" + newdf + ",VERSION=0001"
newdf.to_csv(outname+"MAPS.txt", index=False, header=False)
file_modify(outname+"MAPS.txt")
with open(text,'r') as file:
content = file.read()
lines1 = content.split("DIALOG...: ")
lines1 = [x for x in lines1 if x]
diname1 = []
diname2 = []
diname3 = []
for i in range(len(lines1)):
dialog = lines1[i].split("\n")
dname = dialog[0].split()[0]
for line1 in dialog:
if "PREMAP PROCESS:" in line1:
try:
diname1.append(dname)
except:
diname1.append("")
for line1 in dialog:
if "INCLUDED MODULE:" in line1:
try:
diname2.append(dname)
except:
diname2.append("")
for line1 in dialog:
if "RESPONSE PROCESS:" in line1:
try:
diname3.append(dname)
except:
diname3.append("")
pre = pd.DataFrame(diname1, columns=['DIALOG'])
inc = pd.DataFrame(diname2, columns=['DIALOG'])
rsp = pd.DataFrame(diname3, columns=['DIALOG'])
df2 = pd.DataFrame()
for i in range(len(lines)):
rec= []
vers= []
rtype = []
dialog = lines[i].split("\n")
dname = dialog[0].split()[0]
for line in dialog:
if "RECORD...:" in line:
name = re.search('RECORD...: (.*)', line).group(1)
rec.append(name.split()[0])
vers.append(name.split()[2])
rtype.append(name.split()[3])
df21 = pd.DataFrame(list(zip(rec, vers, rtype)), columns=['RECORD NAME', 'VERSION', 'RECORD TYPE'])
df21.insert(0,'DIALOG','')
df21['DIALOG'] = dname
df2 = pd.concat([df2, df21])
df2.to_csv("output/adsc_records.csv", index=False)
df_cbl = " "*12 + "COPY IDMS " + df2[['RECORD NAME']]
df_cbl.drop_duplicates(inplace=True)
df_cbl.to_csv("adsc_records.cbl", index=False, header=False)
module = []
mod_name = []
vers = []
execute = []
event = []
desc = []
for line in content.splitlines():  # content is a single string here, so split it into lines
try:
name1 = re.search("EXECUTE ON EDIT ERRORS:(.*) CONTROL", line).group(1)
except:
pass
try:
name2 = re.search("CONTROL KEY/EVENT:(.*)", line).group(1)
except:
pass
if "PREMAP PROCESS:" in line:
try:
v = re.search("VERS.:(.*)", line).group(1)
name = re.search('PREMAP PROCESS:(.*) VERS', line).group(1)
module.append("PREMAP PROCESS")
mod_name.append(name.strip())
vers.append(v.strip())
continue
except:
mod_name.append("")
vers.append("")
continue
if "INCLUDED MODULE:" in line:
try:
v = re.search("VERS.:(.*)", line).group(1)
name = re.search('INCLUDED MODULE:(.*) VERS', line).group(1)
module.append("INCLUDED MODULE")
mod_name.append(name.strip())
vers.append(v.strip())
continue
except:
mod_name.append("")
vers.append("")
continue
if "MODULE DESCRIPTION:" in line:
try:
name = re.search("MODULE DESCRIPTION:(.*)", line).group(1)
desc.append(name)
continue
except:
desc.append("")
continue
df31 = pd.DataFrame(list(zip(module, mod_name, vers)), columns=['MODULE', 'NAME', 'VERSION'])
df31
with open(text,'r') as file:
content = file.read()
lines1 = content.split("DIALOG...: ")
lines1 = [x for x in lines1 if x]
diname = []
for i in range(len(lines1)):
dialog = lines1[i].split("\n")
dname = dialog[0].split()[0]
for line1 in dialog:
if "PREMAP PROCESS:" in line1:
try:
diname.append(dname)
except:
diname.append("")
if "INCLUDED MODULE:" in line1:
try:
diname.append(dname)
except:
diname.append("")
aa = pd.DataFrame(diname, columns=['DIALOG'])
df31 = pd.concat([aa, df31], axis=1)
df31
module = []
mod_name = []
vers = []
execute = []
event = []
desc = []
for line in content.splitlines():  # content is a single string here, so split it into lines
if "RESPONSE PROCESS:" in line:
try:
v = re.search("VERS.:(.*)", line).group(1)
name = re.search('RESPONSE PROCESS:(.*) VERS', line).group(1)
module.append("RESPONSE PROCESS")
mod_name.append(name.strip())
vers.append(v.strip())
continue
except:
mod_name.append("")
vers.append("")
continue
if "EXECUTE ON EDIT ERRORS" in line:
try:
name1 = re.search("EXECUTE ON EDIT ERRORS:(.*) CONTROL", line).group(1)
name2 = re.search("CONTROL KEY/EVENT:(.*)", line).group(1)
execute.append(name1.strip())
event.append(name2.strip())
continue
except:
execute.append("")
event.append("")
continue
if "MODULE DESCRIPTION:" in line:
try:
name = re.search("MODULE DESCRIPTION:(.*)", line).group(1)
desc.append(name)
continue
except:
desc.append("")
continue
with open(text,'r') as file:
content = file.readlines()
counter = 0
for line in content:
counter += 1
if "RESPONSE PROCESS:" in line:
break
s = ''
for line in content[counter-1:]:
s += line
lines = s.split("RESPONSE PROCESS: ")
code = []
for line in lines[1:]:
if "CODE:" in line:
name = re.search("CODE: (.*)\n",line).group(1)
code.append(name)
elif "CODE:" not in line:
code.append("")
df32 = pd.DataFrame(list(zip(module, mod_name, vers, execute, event, code)), columns=['MODULE', 'NAME', 'VERSION', 'EXECUTE ON EDIT ERRORS', 'CONTROL KEY/EVENT', 'CODE'])
df32
df32 = pd.concat([rsp, df32], axis=1)
df32
a = pd.concat([df31, df32])
a.reset_index(drop=True, inplace=True)
df3 = a.replace(np.nan, '', regex=True)
df3.to_csv("output2/processes.csv", index=False)
with open(text,'r') as file:
content = file.readlines()
counter = 0
for line in content:
counter += 1
if "RESPONSE PROCESS:" in line:
break
s = ''
for line in content[counter-1:]:
s += line
lines = s.split("RESPONSE PROCESS: ")
def code_logic(lines):
code = []
for i in range(len(lines)-1):
a = lines[i:]
b = a[1].split("\n")
counter = 0
for item in b:
counter += 1
if "DATE LAST UPDATED:" in item:
break
c = b[counter:]
if c == []:
code.append("")
continue
buf = ''
count = 0
for item in range(len(c)):
try:
if ((c[item].split()[0] == '0') and ('000' in c[item].split()[1])) or ('000' in c[item].split()[0]):
if c[item].split()[1] == '00000100':
count += 1
if count > 1:
break
buf += c[item] + "\n"
except:
continue
code.append(buf)
return code
logic = code_logic(lines)
df41 = df32[['MODULE', 'NAME']]
df42 = pd.DataFrame(logic, columns=['PROCESS/MODULE CODE'])
df43 = pd.concat([rsp, df41, df42], axis=1)
df43
counter = 0
for line in content:
counter += 1
if "PREMAP PROCESS:" in line:
break
s = ''
for line in content[counter-1:]:
s += line
lines = s.split("PREMAP PROCESS: ")
logic = code_logic(lines)
df411 = df31[df31['MODULE'] == 'PREMAP PROCESS']
df411 = df411[['MODULE', 'NAME']].reset_index(drop=True)
df421 = pd.DataFrame(logic, columns=['PROCESS/MODULE CODE'])
df44 = pd.concat([pre, df411, df421], axis=1)
df44
counter = 0
for line in content:
counter += 1
if "INCLUDED MODULE:" in line:
break
s = ''
for line in content[counter-1:]:
s += line
lines = s.split("INCLUDED MODULE: ")
logic = code_logic(lines)
df412 = df31[df31['MODULE'] == 'INCLUDED MODULE']
df412 = df412[['MODULE', 'NAME']].reset_index(drop=True)
df422 = pd.DataFrame(logic, columns=['PROCESS/MODULE CODE'])
df45 = pd.concat([inc, df412, df422], axis=1)
df45
df4 = pd.concat([df43, df44, df45])
df4.to_csv("output2/module_code.csv", index=False)
with open(text,'r') as file:
content = file.read()
lines = content.split("DIALOG...: ")
lines = [x for x in lines if x]
diname = []
for i in range(len(lines)):
dialog = lines[i].split("\n")
dname = dialog[0].split()[0]
for line in dialog:
if "RESPONSE PROCESS:" in line:
try:
diname.append(dname)
continue
except:
diname.append("")
aa = pd.DataFrame(diname, columns=['DIALOG'])
aa
```
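The extraction loop above leans on `re.search` with patterns like `'DIALOG...: (.*) SCHEMA'`, where the unescaped dots happen to match the literal dots in the report layout. A small self-contained sketch of the same field extraction, on a made-up report line mimicking that layout:

```python
import re

# Sketch of the fixed-format field extraction used above.
# The report line below is made up to mimic the IDMS report layout.
line = "DIALOG...: MYDLG001     SCHEMA...: EMPSCHM      ONLINE MAP: MAP001"

# Lazy capture up to the next labelled field, then strip padding spaces.
dialog = re.search(r"DIALOG\.\.\.: (.*?) +SCHEMA", line).group(1).strip()
schema = re.search(r"SCHEMA\.\.\.: (.*?) +ONLINE", line).group(1).strip()
print(dialog, schema)  # MYDLG001 EMPSCHM
```

Escaping the dots (`\.`) makes the intent explicit; the original patterns work only because `.` also matches a literal dot.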
```
import numpy as np
from luwiji.knn import illustration
```
# Avoid the Data Leakage
```
illustration.data_leakage
```
Data leakage means test-set information leaking in during training. It most often happens with:
- imputation
  - we impute using information from the whole dataset, when it should come from the train data only
  - solution: fit_transform on train, transform on test
- scaling
  - we scale using information from the whole dataset, when it should come from the train data only
  - solution: fit_transform on train, transform on test
- k-fold on the train-test split
  - k-fold makes us touch test data that should stay hidden
  - it is like being allowed to retake an exam: sooner or later we learn the questions
Tips to avoid data leakage:
- Split the data before doing imputation or scaling
- Never fit anything on data other than the train set, then store the fitted statistics
- Use a train-validation-test split
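The fit-on-train, transform-on-test rule can be sketched without any library: compute the scaling statistics from the training split only, then reuse those exact statistics on the test split (toy numbers for illustration):

```python
# Leakage-free min-max scaling: statistics come from the TRAIN split only.
train = [2.0, 4.0, 6.0]
test = [3.0, 8.0]   # 8.0 lies outside the train range -- that is fine

lo, hi = min(train), max(train)      # "fit" on train only

def transform(xs):                   # reuse the train statistics everywhere
    return [(x - lo) / (hi - lo) for x in xs]

print(transform(train))  # [0.0, 0.5, 1.0]
print(transform(test))   # [0.25, 1.5] -- test values may fall outside [0, 1]
```

This mirrors `scaler.fit_transform(x_train)` / `scaler.transform(x_test)`: had we fit on all the data, the test split would have influenced `lo` and `hi`.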
# Train - validation - test split
Based on the previous concepts, a better split can be designed as:
- Train set: for training
- Validation set: for k-fold and model tuning
- Test set: for scoring
By analogy:
- Homework: study the material
- Retakeable quiz: we keep retaking it until we find the best strategy for the exam
- Exam: use the best strategy; its score is the measure of our performance
```
illustration.train_val_test
```
## Data Preparation
```
import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv("data/titanic.csv", index_col="PassengerId")
df.drop(columns=["Name", "Ticket", "Age", "Cabin"], inplace=True)
X = df.drop(columns="Survived")
y = df.Survived
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
```
## Preprocessing
```
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
numerical_pipeline = Pipeline([
("imputer", SimpleImputer(strategy="mean")),
("scaler", MinMaxScaler())
])
categorical_pipeline = Pipeline([
("imputer", SimpleImputer(strategy="most_frequent")),
("onehot", OneHotEncoder(handle_unknown="ignore"))
])
from sklearn.compose import ColumnTransformer
X_train.head()
preprocessor = ColumnTransformer([
("numeric", numerical_pipeline, ["SibSp", "Parch", "Fare"]),
("categoric", categorical_pipeline, ["Pclass", "Sex", "Embarked"])
])
```
## Training pipeline
```
from sklearn.neighbors import KNeighborsClassifier
pipeline = Pipeline([
("prep", preprocessor),
("algo", KNeighborsClassifier())
])
pipeline.fit(X_train,y_train)
```
## GridSearchCV
```
from sklearn.model_selection import GridSearchCV
parameter = {
"algo__n_neighbors": range(1,51,2),
"algo__weights": ["uniform", "distance"],
"algo__p": [1, 2]
}
model = GridSearchCV(pipeline, parameter, cv=3, n_jobs=-1, verbose=1)
model.fit(X_train, y_train)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
pipeline.get_params()  # check the parameters used in the pipeline; parameters of other pipeline steps can be added too
model.best_params_  # the best "quiz" result, i.e. the best parameter combination
pd.DataFrame(model.cv_results_)  # log of the search process
```
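The `algo__n_neighbors`-style keys address parameters of the named pipeline step, and `GridSearchCV` tries every combination in the grid; the number of fits is the product of the option counts times the number of folds. A quick sketch of the expansion, using this notebook's grid sizes:

```python
from itertools import product

# Sketch of how GridSearchCV expands a parameter grid.
parameter = {
    "algo__n_neighbors": list(range(1, 51, 2)),  # 25 values
    "algo__weights": ["uniform", "distance"],    # 2 values
    "algo__p": [1, 2],                           # 2 values
}

combos = list(product(*parameter.values()))
print(len(combos))       # 100 candidate settings
print(len(combos) * 3)   # 300 fits with cv=3
```

This is why the verbose output reports fitting 3 folds for each of 100 candidates.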
# Summary
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
# Import data and drop columns
df = pd.read_csv("data/titanic.csv", index_col="PassengerId")
df.drop(columns=["Name", "Ticket", "Age", "Cabin"], inplace=True)
# Dataset Splitting
X = df.drop(columns="Survived")
y = df.Survived
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Preprocessor
numerical_pipeline = Pipeline([
("imputer", SimpleImputer(strategy="mean")),
("scaler", MinMaxScaler())
])
categorical_pipeline = Pipeline([
("imputer", SimpleImputer(strategy="most_frequent")),
("onehot", OneHotEncoder())
])
preprocessor = ColumnTransformer([
("numeric", numerical_pipeline, ["SibSp", "Parch", "Fare"]),
("categoric", categorical_pipeline, ["Pclass", "Sex", "Embarked"])
])
# Pipeline
pipeline = Pipeline([
("prep", preprocessor),
("algo", KNeighborsClassifier())
])
# Parameter Tuning
parameter = {
"algo__n_neighbors": np.arange(1, 51, 2),
"algo__weights": ["uniform", "distance"],
"algo__p": [1, 2]
}
model = GridSearchCV(pipeline, parameter, cv=3, n_jobs=-1, verbose=1)
model.fit(X_train, y_train)
# Evaluation
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
```
# Prediction
```
illustration.jack_and_rose
X.iloc[:1]
data = [
[1, "female", 1, 1, 80, "S"], # Rose
[3, "male", 0, 0, 5, "S"] # Jack
]
X_pred = pd.DataFrame(data, columns=X.columns, index=["Rose", "Jack"])
X_pred
model.predict(X_pred)
X_pred ["pred"] = model.predict(X_pred)
X_pred
```
# Save Model
```
from jcopml.utils import save_model
save_model(model, "knn_titanic.pkl")
```
# Load Model
```
from jcopml.utils import load_model
model_hasil_load = load_model("model/knn_titanic.pkl")
model_hasil_load.predict(X_pred)
```
# More efficient saving
```
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.autograd import Variable
```
## Load data
```
train_dataset = datasets.MNIST(root='D:/dev/data/mnist', train=True, transform=transforms.ToTensor(), download=True)
print(len(train_dataset))
print(train_dataset[0][0].size(), train_dataset[0][1])
test_dataset = datasets.MNIST(root='D:/dev/data/mnist', train=False, transform=transforms.ToTensor(), download=True)
print(len(test_dataset))
```
## Display MNIST
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
print(train_dataset[0][0].numpy().shape)
show_img = train_dataset[0][0].numpy().reshape(28,28)
plt.imshow(show_img, cmap='gray')
# Label
print(train_dataset[0][1].numpy())
```
## Make dataset iterable
```
len(train_dataset)
batch_size = 100
n_iters = 3000
n_epochs = int(n_iters / (len(train_dataset)/batch_size))
print(n_epochs)
```
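With 60,000 MNIST training images and a batch size of 100, each epoch is 600 iterations, so 3,000 iterations correspond to 5 epochs. The arithmetic above, spelled out:

```python
# Iterations -> epochs arithmetic for the settings above.
n_train = 60000     # MNIST training set size
batch_size = 100
n_iters = 3000

iters_per_epoch = n_train / batch_size      # 600 batches per epoch
n_epochs = int(n_iters / iters_per_epoch)
print(iters_per_epoch, n_epochs)  # 600.0 5
```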
### Create iterable object : Training dataset
```
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
```
### Check iterability
```
import collections.abc
isinstance(train_loader, collections.abc.Iterable)  # collections.Iterable was removed in Python 3.10
```
### Create iterable object : Test dataset
```
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
```
### Check iterability
```
isinstance(test_loader, collections.abc.Iterable)
```
## Build model
```
class LogisticRegressionModel(nn.Module):
    def __init__(self, input_dims, output_dims):
        super(LogisticRegressionModel, self).__init__()
        self.linear = nn.Linear(in_features=input_dims, out_features=output_dims)

    def forward(self, x):
        return self.linear(x)
```
## Instantiate the model
```
input_dims = 28*28
output_dims = 10
logistic_reg = LogisticRegressionModel(input_dims, output_dims)
```
## Instantiate loss class
```
criterion = nn.CrossEntropyLoss()
```
## model.parameters() explained
```
print(logistic_reg.parameters())
print(len(list(logistic_reg.parameters())))
print(list(logistic_reg.parameters())[0].size())
print(list(logistic_reg.parameters())[1].size())
```
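The two parameter tensors printed above are the weight matrix and the bias of the single linear layer, so their sizes follow directly from the layer shape. A quick check of the expected counts, assuming the 784-in, 10-out layer defined above:

```python
# Expected parameter count for nn.Linear(784, 10):
in_features, out_features = 28 * 28, 10

weight_params = out_features * in_features   # weight matrix: 10 x 784
bias_params = out_features                   # one bias per output unit
total = weight_params + bias_params
print(weight_params, bias_params, total)  # 7840 10 7850
```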
## Optimizer
```
learning_rate = 0.001
optimizer = torch.optim.SGD(logistic_reg.parameters(), lr=learning_rate)
```
## Training phase
```
iteration = 0
for epoch in range(n_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 28*28))
        labels = Variable(labels)
        optimizer.zero_grad()
        outputs = logistic_reg(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        iteration += 1
        if iteration % 500 == 0:
            correct = 0
            total = 0
            for images, labels in test_loader:
                images = Variable(images.view(-1, 28*28))
                labels = Variable(labels)
                outputs = logistic_reg(images)
                predicted = torch.argmax(outputs, dim=1)
                total += labels.size()[0]
                correct += (predicted == labels).sum()
            accuracy = 100 * correct / total
            print('Epoch: {}, Iteration: {}, Loss: {}, Accuracy: {}%'.format(epoch, iteration, loss, accuracy))
```
## Show test images
```
for images, labels in test_loader:
    images = Variable(images.view(-1, 28*28))
    labels = Variable(labels)
    plt.imshow(images[0].view(28, 28).numpy(), cmap='gray')
    plt.title(labels[0].numpy())
    break
```
# IEEE MEGA PROJECT
**Team Name: BetaTech**
**Team Leader: Mollika Garg**
**Email Id: mollika.garg@gmail.com**
**Team Member: Shreya Sharma**
**Email Id: shreyasharma.1510001@gmail.com**
**Team Member: Koushiki Chakrabarti**
**Email Id: koushikichakrabarti@gmail.com**
### PROJECT DETAILS
**Domain: Machine Learning**
**Project Name: Tackling Dengue Cases**
### PROJECT DESCRIPTION
Predict dengue cases from climate data and identify potential dengue hotspots by detecting stagnant-water areas in satellite imagery: ML algorithms predict the number of dengue cases from climate factors, while thresholding techniques applied to satellite data flag stagnant-water hotspots.
```
##IMPORTS
# used for manipulating directory paths
import os
# used to analyze data
import pandas as pd
# scientific and vector computation for python
import numpy as np
# for image visualisation
from matplotlib import pyplot as plt
# encode target labels
from sklearn.preprocessing import LabelEncoder
# performs the task of Standardization
from sklearn.preprocessing import StandardScaler
# to find the error
from sklearn.metrics import mean_absolute_error
# used for training SVM
from sklearn.svm import SVR
# used for training KNN
from sklearn.neighbors import KNeighborsRegressor
# used for training Random Forest
from sklearn.ensemble import RandomForestRegressor
```
### READING DATA
```
## Read Data
malaria_features = pd.read_excel("C:\\Users\\molli\\OneDrive\\Desktop\\Data Set\\Malaria_Data.xlsx")
malaria_labels= malaria_features["No. of cases"]
malaria_features=malaria_features.drop(labels="No. of cases",axis=1)
## Displaying head of the data
malaria_features.head()
## Encoding labels
lmap={"Jan":0,"Feb":1,"Mar":2,"Apr":3,"May":4,"Jun":5, "Jul":6, "Aug":7, "Sep":8, "Oct":9, "Nov":10, "Dec":11}
malaria_features["Month "]=malaria_features["Month "].map(lmap)
## Dropping the feature 'City'
malaria_features=malaria_features.drop("City", axis=1)
## Displaying head of the data
malaria_features.head()
## storing feature values in X and labels in Y
X=malaria_features.values
Y=malaria_labels.values
## Splitting the data into training and testing sets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X,Y,test_size = 0.2,random_state = 0)
print(y_test)
## scaling the data
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
```
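The `lmap` dictionary turns the month column into ordinal integers via `Series.map`. A minimal stand-alone sketch of the same mapping, on toy column values (no pandas required):

```python
# Stand-alone sketch of the month label-encoding done with Series.map above.
lmap = {"Jan": 0, "Feb": 1, "Mar": 2, "Apr": 3, "May": 4, "Jun": 5,
        "Jul": 6, "Aug": 7, "Sep": 8, "Oct": 9, "Nov": 10, "Dec": 11}

months = ["Mar", "Jul", "Dec"]          # toy column values
encoded = [lmap[m] for m in months]
print(encoded)  # [2, 6, 11]
```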
### Training the Data and Choosing the Best Hyperparameters
#### 1) K Nearest Neighbours
```
knn = KNeighborsRegressor(n_neighbors=4)
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
print(mean_absolute_error(y_test, y_pred))
```
#### 2) Random Forest
```
rf = RandomForestRegressor(n_estimators=200)
rf.fit(x_train, y_train)
y_pred1 = rf.predict(x_test)
print(mean_absolute_error(y_test, y_pred1))
```
#### 3) Support Vector Machine
```
clf = SVR(C=3000, tol=1e-3)
clf.fit(x_train, y_train)
y_pred2 = clf.predict(x_test)
print(mean_absolute_error(y_test, y_pred2))
```
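Mean absolute error, the metric used for all three models above, is just the average of the absolute residuals. Spelled out on toy numbers:

```python
# Mean absolute error computed by hand (toy values).
y_true = [10, 0, 5, 20]
y_hat = [12, 1, 5, 15]

mae = sum(abs(t - p) for t, p in zip(y_true, y_hat)) / len(y_true)
print(mae)  # (2 + 1 + 0 + 5) / 4 = 2.0
```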
### Comparing Predicted and Test Values for the SVM Model
```
## printing the SVM predictions and the test values
print(y_pred2)
print(y_test)
## In case of negative values, converting them to 0 to obtain better accuracy
for i in range(0, len(y_pred2)):
    if y_pred2[i] < 0:
        y_pred2[i] = 0
print(mean_absolute_error(y_test, y_pred2))
```
### Conclusion
We tried different types of models: KNN, Random Forest, and SVM. In the end we found that the model trained with KNN gave the best testing result, with a mean absolute error of 16.7.
```
from os import listdir
from numpy import array
from keras.preprocessing.text import Tokenizer, one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.utils import to_categorical
from keras.layers import Embedding, TimeDistributed, RepeatVector, LSTM, concatenate , Input, Reshape, Dense, Flatten
from keras.preprocessing.image import array_to_img, img_to_array, load_img
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
import numpy as np
# Load the images and preprocess them for inception-resnet
images = []
all_filenames = listdir('images/')
all_filenames.sort()
for filename in all_filenames:
images.append(img_to_array(load_img('images/'+filename, target_size=(299, 299))))
images = np.array(images, dtype=float)
images = preprocess_input(images)
# Run the images through inception-resnet and extract the features without the classification layer
IR2 = InceptionResNetV2(weights='imagenet', include_top=False)
features = IR2.predict(images)
# We will cap each input sequence to 100 tokens
max_caption_len = 100
# Initialize the function that will create our vocabulary
tokenizer = Tokenizer(filters='', split=" ", lower=False)
# Read a document and return a string
def load_doc(filename):
file = open(filename, 'r')
text = file.read()
file.close()
return text
# Load all the HTML files
X = []
all_filenames = listdir('html/')
all_filenames.sort()
for filename in all_filenames:
X.append(load_doc('html/'+filename))
# Create the vocabulary from the html files
tokenizer.fit_on_texts(X)
# Add +1 to leave space for empty words
vocab_size = len(tokenizer.word_index) + 1
# Translate each word in text file to the matching vocabulary index
sequences = tokenizer.texts_to_sequences(X)
# The longest HTML file
max_length = max(len(s) for s in sequences)
# Initialize our final inputs to the model
X, y, image_data = list(), list(), list()
for img_no, seq in enumerate(sequences):
for i in range(1, len(seq)):
# Add the entire sequence to the input and only keep the next word for the output
in_seq, out_seq = seq[:i], seq[i]
# If the sentence is shorter than max_length, fill it up with empty words
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
# Map the output to one-hot encoding
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
# Add the image corresponding to the HTML file
image_data.append(features[img_no])
# Cut the input sentence to 100 tokens, and add it to the input data
X.append(in_seq[-100:])
y.append(out_seq)
X, y, image_data = np.array(X), np.array(y), np.array(image_data)
# Create the encoder
image_features = Input(shape=(8, 8, 1536,))
image_flat = Flatten()(image_features)
image_flat = Dense(128, activation='relu')(image_flat)
ir2_out = RepeatVector(max_caption_len)(image_flat)
language_input = Input(shape=(max_caption_len,))
language_model = Embedding(vocab_size, 200, input_length=max_caption_len)(language_input)
language_model = LSTM(256, return_sequences=True)(language_model)
language_model = LSTM(256, return_sequences=True)(language_model)
language_model = TimeDistributed(Dense(128, activation='relu'))(language_model)
# Create the decoder
decoder = concatenate([ir2_out, language_model])
decoder = LSTM(512, return_sequences=False)(decoder)
decoder_output = Dense(vocab_size, activation='softmax')(decoder)
# Compile the model
model = Model(inputs=[image_features, language_input], outputs=decoder_output)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
# Train the neural network
model.fit([image_data, X], y, batch_size=64, shuffle=False, epochs=2)
# map an integer to a word
def word_for_id(integer, tokenizer):
for word, index in tokenizer.word_index.items():
if index == integer:
return word
return None
# generate a description for an image
def generate_desc(model, tokenizer, photo, max_length):
# seed the generation process
in_text = 'START'
# iterate over the whole length of the sequence
for i in range(900):
# integer encode input sequence
sequence = tokenizer.texts_to_sequences([in_text])[0][-100:]
# pad input
sequence = pad_sequences([sequence], maxlen=max_length)
# predict next word
yhat = model.predict([photo,sequence], verbose=0)
# convert probability to integer
yhat = np.argmax(yhat)
# map integer to word
word = word_for_id(yhat, tokenizer)
# stop if we cannot map the word
if word is None:
break
# append as input for generating the next word
in_text += ' ' + word
# Print the prediction
print(' ' + word, end='')
# stop if we predict the end of the sequence
if word == 'END':
break
return
# Load an image, preprocess it for IR2, extract features and generate the HTML
test_image = img_to_array(load_img('images/87.jpg', target_size=(299, 299)))
test_image = np.array(test_image, dtype=float)
test_image = preprocess_input(test_image)
test_features = IR2.predict(np.array([test_image]))
generate_desc(model, tokenizer, np.array(test_features), 100)
# Load an image, preprocess it for IR2, extract features and generate the HTML
test_image = img_to_array(load_img('images/86.jpg', target_size=(299, 299)))
test_image = np.array(test_image, dtype=float)
test_image = preprocess_input(test_image)
test_features = IR2.predict(np.array([test_image]))
generate_desc(model, tokenizer, np.array(test_features), 100)
```
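The training-pair construction above turns each token sequence into (prefix, next-token) pairs, left-pads the prefix, and keeps only the last tokens of the window. A compact sketch of the same windowing on a toy token-id sequence (no Keras needed; the lengths are shrunk stand-ins for `max_length` and the 100-token cap):

```python
# Sketch of the (prefix, next-token) pair construction used above,
# with left-padding and a window cap, on a toy token-id sequence.
seq = [5, 2, 7, 9]
max_length = 4   # toy stand-in for the longest-file length
window = 3       # toy stand-in for the 100-token cap

pairs = []
for i in range(1, len(seq)):
    in_seq, out_tok = seq[:i], seq[i]
    padded = [0] * (max_length - len(in_seq)) + in_seq  # left-pad with 0
    pairs.append((padded[-window:], out_tok))           # keep the last tokens

print(pairs)
# [([0, 0, 5], 2), ([0, 5, 2], 7), ([5, 2, 7], 9)]
```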
<a href="https://colab.research.google.com/github/yukinaga/bert_nlp/blob/main/section_2/01_pytorch_basic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# PyTorch Basics
To learn the basics of PyTorch, we train a neural network to recognize handwritten digits.
## Loading the Data
We load the handwritten-digit data with `torchvision.datasets` and set up a **DataLoader**.
Using a DataLoader greatly simplifies reading the data and implementing the mini-batch method.
Besides MNIST, `torchvision.datasets` provides many other datasets:
https://pytorch.org/docs/stable/torchvision/datasets.html
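Conceptually, a DataLoader with `shuffle=True` shuffles the indices once per epoch and then yields fixed-size batches. A minimal pure-Python sketch of that behaviour (no torch required; the helper name is mine):

```python
import random

def minibatches(data, batch_size, shuffle=True, seed=0):
    """Yield successive mini-batches of `data`, mimicking what a DataLoader does."""
    indices = list(range(len(data)))
    if shuffle:
        random.Random(seed).shuffle(indices)  # shuffle once per pass
    for start in range(0, len(indices), batch_size):
        yield [data[i] for i in indices[start:start + batch_size]]

samples = list(range(10))
batches = list(minibatches(samples, batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2] — the last batch is smaller
```

Every sample appears exactly once per epoch; only the order and grouping change between epochs.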
```
import torch
from torchvision.datasets import MNIST  # handwritten-digit image data
from torchvision import transforms
from torch.utils.data import DataLoader

# Fetch the training data
mnist_train = MNIST("./data",
                    train=True, download=True,  # training split
                    transform=transforms.ToTensor())
# Fetch the test data
mnist_test = MNIST("./data",  # storage location
                   train=False, download=True,  # test split
                   transform=transforms.ToTensor())
print("Training samples:", len(mnist_train), "Test samples:", len(mnist_test))

# Configure the DataLoaders
img_size = 28
batch_size = 256
train_loader = DataLoader(mnist_train,  # training dataset
                          batch_size=batch_size,  # batch size
                          shuffle=True)  # draw samples in random order
test_loader = DataLoader(mnist_test,  # test dataset
                         batch_size=batch_size,
                         shuffle=False)
```
The handwritten-digit images are 28×28 pixels.
## Building the Model
This time we build the model as a class that inherits from `nn.Module`.
Calling `.cuda()` moves the model's computation onto the GPU.
```
import torch.nn as nn
import torch.nn.functional as F  # activation functions, etc.

class Net(nn.Module):  # define Net, inheriting from nn.Module
    def __init__(self):  # initializer (constructor)
        super().__init__()
        self.fc1 = nn.Linear(img_size*img_size, 1024)  # fully connected layer: inputs = all pixels (img_size*img_size), 1024 hidden units
        self.fc2 = nn.Linear(1024, 512)  # 1024 in, 512 out
        self.fc3 = nn.Linear(512, 10)    # 512 in, 10 out (one per digit)

    def forward(self, x):  # forward pass
        x = x.view(-1, img_size*img_size)  # reshape: batch size inferred (-1), img_size*img_size inputs
        x = F.relu(self.fc1(x))  # fc1 followed by ReLU
        x = F.relu(self.fc2(x))  # fc2 followed by ReLU
        x = self.fc3(x)          # final fully connected layer
        return x

net = Net()
# net.cuda()  # enable for GPU
print(net)  # show the model
net.parameters
```
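As a quick sanity check, the number of trainable parameters in this three-layer MLP can be computed by hand from the layer sizes above:

```python
img_size = 28
# (inputs, outputs) of each nn.Linear layer in the model
layers = [(img_size * img_size, 1024), (1024, 512), (512, 10)]
n_params = sum(n_in * n_out + n_out for n_in, n_out in layers)  # weights + biases
print(n_params)  # 1333770
```

This matches what `sum(p.numel() for p in net.parameters())` would report.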
### Pulling a Sample out of the Dataset
```
import matplotlib.pyplot as plt

train1 = next(iter(train_loader))   # first mini-batch
train1_x = train1[0][0]             # first image in the batch
train1_y = train1[1][0].reshape(1)  # its label, as a length-1 tensor
# print(train1_x)
print('train1_x.shape', train1_x.shape)
plt.imshow(train1_x.reshape((28, 28)), cmap='gray')
print('train1_y', train1_y)
print('train1_y', train1_y.shape)
```
### Testing the Untrained Model
```
loss_fnc = nn.CrossEntropyLoss()  # defined here too, since it is not set up until the training section below
pred1_y = net(train1_x)
print('pred1_y:', pred1_y)
print('pred1_y.argmax(1):', pred1_y.argmax(1))
loss = loss_fnc(pred1_y, train1_y)
print('loss:', loss.item())
```
## Training
We train the model.
Using the DataLoader, we draw mini-batches for both training and evaluation.
Since each epoch runs many mini-batches through the model, this implements the mini-batch method.
Training takes time, so check that GPU is selected under Edit → Notebook settings → Hardware accelerator.
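`nn.CrossEntropyLoss`, used below, combines log-softmax with the negative log-likelihood of the target class; for a single sample it reduces to the following (a stdlib sketch over raw logits):

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy of one sample: log-sum-exp of the logits minus the target logit."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_z - logits[target]  # -log p(target)

print(round(cross_entropy([2.0, 1.0, 0.1], 0), 3))  # 0.417
```

When the target class already has the largest logit, as here, the loss is small; a confident wrong prediction drives it up.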
```
from torch import optim  # optimization algorithms

# Cross-entropy loss
loss_fnc = nn.CrossEntropyLoss()  # choose the loss function

# SGD
optimizer = optim.SGD(net.parameters(), lr=0.01)  # set up the optimizer; lr is the learning rate

# Loss logs
record_loss_train = []  # training-loss history
record_loss_test = []   # test-loss history

# Training loop
for i in range(10):  # train for 10 epochs
    net.train()  # switch to training mode
    loss_train = 0  # accumulated training loss for this epoch
    for j, (x, t) in enumerate(train_loader):  # draw a mini-batch (x, t)
        # x, t = x.cuda(), t.cuda()  # enable for GPU
        y = net(x)  # feed x through the model
        loss = loss_fnc(y, t)  # compute the loss
        loss_train += loss.item()  # accumulate; .item() extracts the scalar from the tensor
        optimizer.zero_grad()  # reset the gradients
        loss.backward()  # backpropagate (computes the gradients only)
        optimizer.step()  # update the parameters
    loss_train /= j+1  # average over the number of mini-batches
    record_loss_train.append(loss_train)

    net.eval()  # evaluation mode
    loss_test = 0  # accumulated test loss
    for j, (x, t) in enumerate(test_loader):  # draw a mini-batch (x, t)
        # x, t = x.cuda(), t.cuda()
        y = net(x)  # forward pass
        loss = loss_fnc(y, t)  # compute the loss
        loss_test += loss.item()  # accumulate
    loss_test /= j+1
    record_loss_test.append(loss_test)

    if i%1 == 0:
        print("Epoch:", i, "Loss_Train:", loss_train, "Loss_Test:", loss_test)
```
## Loss Curves
We plot how the loss evolves on the training and test data.
```
import matplotlib.pyplot as plt
plt.plot(range(len(record_loss_train)), record_loss_train, label="Train")
plt.plot(range(len(record_loss_test)), record_loss_test, label="Test")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Error")
plt.show()
```
## Accuracy
To gauge the model's performance, we measure the accuracy on the test data.
```
correct = 0  # number of correct predictions
total = 0    # number of samples seen
for i, (x, t) in enumerate(test_loader):
    # x, t = x.cuda(), t.cuda()  # enable for GPU
    x = x.view(-1, img_size*img_size)
    y = net(x)
    correct += (y.argmax(1) == t).sum().item()  # count correct predictions in this batch
    total += len(x)
print("Accuracy:", str(correct/total*100) + "%")
```
```
import pathlib
import warnings
from typing import List, Dict
import gym
import gym_corridor # noqa: F401
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from IPython.display import HTML
from lcs import Perception
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
from myst_nb import glue
from tabulate import tabulate
from src.basic_rl import run_q_learning_alternating, run_r_learning_alternating, qlearning, rlearning
from src.bayes_estimation import bayes_estimate
from src.commons import NUM_EXPERIMENTS
from src.decorators import repeat, get_from_cache_or_run
from src.diminishing_reward import common_metrics
from src.observation_wrappers import CorridorObservationWrapper
from src.payoff_landscape import get_all_state_action, plot_payoff_landscape
from src.runner import run_experiments_alternating
from src.utils import build_cache_dir_path, build_plots_dir_path
from src.visualization import PLOT_DPI, diminishing_reward_colors
plt.ioff() # turn off interactive plotting
plt.style.use('../../../src/phd.mplstyle')
root_dir = pathlib.Path().cwd().parent.parent.parent
cwd_dir = pathlib.Path().cwd()
plot_dir = build_plots_dir_path(root_dir) / cwd_dir.name
cache_dir = build_cache_dir_path(root_dir) / cwd_dir.name
def extract_specific_index(runs, env_idx):
    """Selects run metrics for a certain environment, i.e. Corridor 40"""
    return [run[env_idx] for run in runs]

def average_experiment_runs(run_df: pd.DataFrame) -> pd.DataFrame:
    return run_df.groupby(['agent', 'trial', 'phase']).mean().reset_index(level='phase')
def plot_pop_and_rho(df, trials, plot_filename=None):
    colors = diminishing_reward_colors()
    expl_df = df[df['phase'] == 'exploit']
    fig, axs = plt.subplots(2, 1, figsize=(18, 16), sharex=True)
    xmax = trials / 2

    # Steps in trial plot
    for alg in ['ACS2', 'AACS2_v1', 'AACS2_v2', 'Q-Learning', 'R-Learning']:
        alg_df = expl_df.loc[alg]
        idx = pd.Index(name='exploit trial', data=np.arange(1, len(alg_df) + 1))
        alg_df.set_index(idx, inplace=True)
        alg_df['steps_in_trial'].rolling(window=250).mean().plot(ax=axs[0], label=alg, linewidth=2, color=colors[alg])

    axs[0].set_xlim(0, xmax)
    axs[0].set_xlabel("Exploit trial")
    axs[0].xaxis.set_major_locator(MultipleLocator(500))
    axs[0].xaxis.set_minor_locator(MultipleLocator(100))
    axs[0].xaxis.set_major_formatter(FormatStrFormatter('%1.0f'))
    axs[0].xaxis.set_tick_params(which='major', size=10, width=2, direction='in')
    axs[0].xaxis.set_tick_params(which='minor', size=5, width=1, direction='in')
    axs[0].set_ylabel("Number of steps")
    axs[0].set_yscale('log')
    axs[0].set_title('Steps in trial')
    axs[0].legend(loc='upper right', frameon=False)

    # Rho plot
    for alg in ['AACS2_v1', 'AACS2_v2', 'R-Learning']:
        alg_df = expl_df.loc[alg]
        idx = pd.Index(name='exploit trial', data=np.arange(1, len(alg_df) + 1))
        alg_df.set_index(idx, inplace=True)
        alg_df['rho'].rolling(window=1).mean().plot(ax=axs[1], label=alg, linewidth=2, color=colors[alg])

    axs[1].set_xlim(0, xmax)
    axs[1].set_xlabel("Exploit trial")
    axs[1].xaxis.set_major_locator(MultipleLocator(500))
    axs[1].xaxis.set_minor_locator(MultipleLocator(100))
    axs[1].xaxis.set_major_formatter(FormatStrFormatter('%1.0f'))
    axs[1].xaxis.set_tick_params(which='major', size=10, width=2, direction='in')
    axs[1].xaxis.set_tick_params(which='minor', size=5, width=1, direction='in')
    axs[1].set_ylabel(r"$\mathregular{\rho}$")
    axs[1].yaxis.set_major_locator(MultipleLocator(25))
    axs[1].yaxis.set_minor_locator(MultipleLocator(5))
    axs[1].yaxis.set_tick_params(which='major', size=10, width=2, direction='in')
    axs[1].yaxis.set_tick_params(which='minor', size=5, width=1, direction='in')
    axs[1].set_ylim(0, 100)
    axs[1].set_title(r'Estimated average $\mathregular{\rho}$')

    if plot_filename:
        fig.savefig(plot_filename, dpi=PLOT_DPI, bbox_inches='tight')

    return fig
# Params
trials = 10_000
USE_RAY = True
learning_rate = 0.8
discount_factor = 0.95
epsilon = 0.2
zeta = 0.0001
glue('51-e1-trials', trials, display=False)
```
# Experiment 1 - Straight Corridor
The following section describes the differences observed between running the ACS2 with the standard discounted reward distribution and the two proposed modifications. In all cases the experiments were performed in an explore-exploit manner for a total of {glue:}`51-e1-trials` trials, with the mode alternating every trial. Additionally, for better reference and benchmarking, basic implementations of the Q-Learning and R-Learning algorithms were also introduced and run with the same parameter settings as ACS2 and AACS2.
The most important question was whether the new reward distribution proposition still allows the agent to update the classifiers' parameters well enough to exploit the environment. To illustrate this, the number of steps to the final location, the estimated average reward during learning, and the reward payoff landscape across all possible state-action pairs were plotted for the [](section-topics-environments-corridor) of size $n=20$ - Figure {numref}`{number} <51-corridor-payoff-fig>`.
To ensure that the modification worked as expected, statistical inference on the obtained results was performed on scaled versions of the problem. Each experiment is averaged over {glue:}`num_experiments` independent runs.
```
def corridor20_env_provider():
    import gym_corridor  # noqa: F401
    return CorridorObservationWrapper(gym.make('corridor-20-v0'))

def corridor40_env_provider():
    import gym_corridor  # noqa: F401
    return CorridorObservationWrapper(gym.make('corridor-40-v0'))

def corridor100_env_provider():
    import gym_corridor  # noqa: F401
    return CorridorObservationWrapper(gym.make('corridor-100-v0'))
# Set ACS2/AACS2 configuration parameter dictionary
basic_cfg = {
    'perception_bits': 1,
    'possible_actions': 2,
    'do_ga': False,
    'beta': learning_rate,
    'epsilon': epsilon,
    'gamma': discount_factor,
    'zeta': zeta,
    'user_metrics_collector_fcn': common_metrics,
    'biased_exploration_prob': 0,
    'metrics_trial_freq': 1
}
def run_multiple_qlearning(env_provider):
    corridor_env = env_provider()
    init_Q = np.zeros((corridor_env.observation_space.n, corridor_env.action_space.n))
    return run_q_learning_alternating(NUM_EXPERIMENTS, trials, corridor_env, epsilon, learning_rate, discount_factor,
                                      init_Q, perception_to_state_mapper=lambda p: int(p[0]))

def run_multiple_rlearning(env_provider):
    corridor_env = env_provider()
    init_R = np.zeros((corridor_env.observation_space.n, corridor_env.action_space.n))
    return run_r_learning_alternating(NUM_EXPERIMENTS, trials, corridor_env, epsilon, learning_rate, zeta, init_R,
                                      perception_to_state_mapper=lambda p: int(p[0]))
@get_from_cache_or_run(cache_path=f'{cache_dir}/corridor/acs2.dill')
@repeat(num_times=NUM_EXPERIMENTS, use_ray=USE_RAY)
def run_acs2():
    corridor20 = run_experiments_alternating(corridor20_env_provider, trials, basic_cfg)
    corridor40 = run_experiments_alternating(corridor40_env_provider, trials, basic_cfg)
    corridor100 = run_experiments_alternating(corridor100_env_provider, trials, basic_cfg)
    return corridor20, corridor40, corridor100

@get_from_cache_or_run(cache_path=f'{cache_dir}/corridor/qlearning.dill')
def run_qlearning():
    corridor20 = run_multiple_qlearning(corridor20_env_provider)
    corridor40 = run_multiple_qlearning(corridor40_env_provider)
    corridor100 = run_multiple_qlearning(corridor100_env_provider)
    return corridor20, corridor40, corridor100

@get_from_cache_or_run(cache_path=f'{cache_dir}/corridor/rlearning.dill')
def run_rlearning():
    corridor20 = run_multiple_rlearning(corridor20_env_provider)
    corridor40 = run_multiple_rlearning(corridor40_env_provider)
    corridor100 = run_multiple_rlearning(corridor100_env_provider)
    return corridor20, corridor40, corridor100
# run computations
acs2_runs_details = run_acs2()
q_learning_runs = run_qlearning()
r_learning_runs = run_rlearning()
# average runs and create aggregated metrics data frame
corridor20_acs2_metrics = pd.concat([m_df for _, _, _, m_df in extract_specific_index(acs2_runs_details, 0)])
corridor20_qlearning_metrics = pd.DataFrame(q_learning_runs[0])
corridor20_rlearning_metrics = pd.DataFrame(r_learning_runs[0])
corridor40_acs2_metrics = pd.concat([m_df for _, _, _, m_df in extract_specific_index(acs2_runs_details, 1)])
corridor40_qlearning_metrics = pd.DataFrame(q_learning_runs[1])
corridor40_rlearning_metrics = pd.DataFrame(r_learning_runs[1])
corridor100_acs2_metrics = pd.concat([m_df for _, _, _, m_df in extract_specific_index(acs2_runs_details, 2)])
corridor100_qlearning_metrics = pd.DataFrame(q_learning_runs[2])
corridor100_rlearning_metrics = pd.DataFrame(r_learning_runs[2])
agg_df = pd.concat([
average_experiment_runs(corridor20_acs2_metrics),
average_experiment_runs(corridor20_qlearning_metrics),
average_experiment_runs(corridor20_rlearning_metrics)]
)
# payoff landscape
def calculate_state_action_payoffs(state_actions: List, pop_acs2, pop_aacs2v1, pop_aacs2v2, Q, R) -> Dict:
    payoffs = {}
    for sa in state_actions:
        p = Perception((sa.state,))

        # ACS2
        acs2_match_set = pop_acs2.form_match_set(p)
        acs2_action_set = acs2_match_set.form_action_set(sa.action)
        # AACS2_v1
        aacs2v1_match_set = pop_aacs2v1.form_match_set(p)
        aacs2v1_action_set = aacs2v1_match_set.form_action_set(sa.action)
        # AACS2_v2
        aacs2v2_match_set = pop_aacs2v2.form_match_set(p)
        aacs2v2_action_set = aacs2v2_match_set.form_action_set(sa.action)

        # Check if all states are covered
        for alg, action_set in zip(['ACS2', 'AACS2_v1', 'AACS2_v2'],
                                   [acs2_action_set, aacs2v1_action_set, aacs2v2_action_set]):
            if len(action_set) == 0:
                warnings.warn(f"No {alg} classifiers for perception: {p}, action: {sa.action}")

        payoffs[sa] = {
            'ACS2': np.mean(list(map(lambda cl: cl.r, acs2_action_set))),
            'AACS2_v1': np.mean(list(map(lambda cl: cl.r, aacs2v1_action_set))),
            'AACS2_v2': np.mean(list(map(lambda cl: cl.r, aacs2v2_action_set))),
            'Q-Learning': Q[int(sa.state), sa.action],
            'R-Learning': R[int(sa.state), sa.action]
        }

    return payoffs
# Take first of each algorithm population pass for presenting payoff landscape
corridor_env = corridor20_env_provider()
state_action = get_all_state_action(corridor_env.unwrapped._state_action())
pop_acs2, pop_aacs2v1, pop_aacs2v2, _ = extract_specific_index(acs2_runs_details, 0)[0]
@get_from_cache_or_run(cache_path=f'{cache_dir}/corridor/qlearning-single.dill')
def run_single_qlearning():
    Q_init = np.zeros((corridor_env.observation_space.n, corridor_env.action_space.n))
    Q, _ = qlearning(corridor_env, trials, Q_init, epsilon, learning_rate, discount_factor, perception_to_state_mapper=lambda p: int(p[0]))
    return Q

@get_from_cache_or_run(cache_path=f'{cache_dir}/corridor/rlearning-single.dill')
def run_single_rlearning():
    R_init = np.zeros((corridor_env.observation_space.n, corridor_env.action_space.n))
    R, rho, _ = rlearning(corridor_env, trials, R_init, epsilon, learning_rate, zeta, perception_to_state_mapper=lambda p: int(p[0]))
    return R, rho
Q = run_single_qlearning()
R, rho = run_single_rlearning()
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    payoffs = calculate_state_action_payoffs(state_action, pop_acs2, pop_aacs2v1, pop_aacs2v2, Q, R)
corridor_performance_fig = plot_pop_and_rho(agg_df, trials=trials, plot_filename=f'{plot_dir}/corridor-performance.png')
corridor_payoff_fig = plot_payoff_landscape(payoffs, rho=rho, rho_text_location={'x': 18, 'y': 250}, plot_filename=f'{plot_dir}/corridor-payoff-landscape.png')
glue('51-corridor-fig', corridor_performance_fig, display=False)
glue('51-corridor-payoff-fig',corridor_payoff_fig , display=False)
```
## Results
**Parameters**
$\beta=0.8$, $\gamma=0.95$, $\epsilon=0.2$, $\theta_r = 0.9$, $\theta_i=0.1$, $m_u=0$, $\chi=0$, $\zeta=0.0001$.
````{tabbed} Performance
```{glue:figure} 51-corridor-fig
:name: "51-corridor-fig"
Performance in Corridor-20 environment. Plots averaged over {glue:}`num_experiments` independent runs. Number of steps in exploit trials is averaged over 250 last data points.
```
````
````{tabbed} Payoff Landscape
```{glue:figure} 51-corridor-payoff-fig
:name: "51-corridor-payoff-fig"
Payoff Landscape for Corridor-20 environment. Payoff values were obtained after {glue:}`51-e1-trials` trials. For the Q-Learning and R-Learning, the same learning parameters were applied. The ACS2 and Q-Learning generate exactly the same payoffs for each state-action pair.
```
````
## Statistical verification
To statistically assess the population size, the posterior data distribution was modelled using {glue:}`num_experiments` metric values collected in the last trial and then sampled with 100,000 draws.
```
def build_models(dfs: Dict[str, pd.DataFrame], field: str, query_condition: str):
    results = {}
    for name, df in dfs.items():
        data_arr = df.query(query_condition)[field].to_numpy()
        bayes_model = bayes_estimate(data_arr)
        results[name] = (bayes_model['mu'], bayes_model['std'])
    return results
experiments_data = {
'corridor20_acs2': corridor20_acs2_metrics.query('agent == "ACS2"'),
'corridor20_aacs2v1': corridor20_acs2_metrics.query('agent == "AACS2_v1"'),
'corridor20_aacs2v2': corridor20_acs2_metrics.query('agent == "AACS2_v2"'),
'corridor20_qlearning': pd.DataFrame(q_learning_runs[0]),
'corridor20_rlearning': pd.DataFrame(r_learning_runs[0]),
'corridor40_acs2': corridor40_acs2_metrics.query('agent == "ACS2"'),
'corridor40_aacs2v1': corridor40_acs2_metrics.query('agent == "AACS2_v1"'),
'corridor40_aacs2v2': corridor40_acs2_metrics.query('agent == "AACS2_v2"'),
'corridor40_qlearning': pd.DataFrame(q_learning_runs[1]),
'corridor40_rlearning': pd.DataFrame(r_learning_runs[1]),
'corridor100_acs2': corridor100_acs2_metrics.query('agent == "ACS2"'),
'corridor100_aacs2v1': corridor100_acs2_metrics.query('agent == "AACS2_v1"'),
'corridor100_aacs2v2': corridor100_acs2_metrics.query('agent == "AACS2_v2"'),
'corridor100_qlearning': pd.DataFrame(q_learning_runs[2]),
'corridor100_rlearning': pd.DataFrame(r_learning_runs[2]),
}
@get_from_cache_or_run(cache_path=f'{cache_dir}/corridor/bayes/steps.dill')
def build_steps_models(dfs: Dict[str, pd.DataFrame]):
    return build_models(dfs, field='steps_in_trial', query_condition=f'trial == {trials - 1}')

@get_from_cache_or_run(cache_path=f'{cache_dir}/corridor/bayes/rho.dill')
def build_rho_models(dfs: Dict[str, pd.DataFrame]):
    # keep only the agents that estimate an average reward (rho)
    filtered_dfs = {k: v for k, v in dfs.items()
                    if any(k.endswith(suffix) for suffix in ('aacs2v1', 'aacs2v2', 'rlearning'))}
    return build_models(filtered_dfs, field='rho', query_condition=f'trial == {trials - 1}')
steps_models = build_steps_models(experiments_data)
rho_models = build_rho_models(experiments_data)
def print_bayes_table(name_prefix, steps_models, rho_models):
    print_row = lambda r: f'{round(r[0].mean(), 2)} ± {round(r[0].std(), 2)}'

    rho_data = [print_row(v) for name, v in rho_models.items() if name.startswith(name_prefix)]

    bayes_table_data = [
        ['steps in last trial'] + [print_row(v) for name, v in steps_models.items() if name.startswith(name_prefix)],
        ['average reward per step', '-', rho_data[0], rho_data[1], '-', rho_data[2]]
    ]

    table = tabulate(bayes_table_data,
                     headers=['', 'ACS2', 'AACS2v1', 'AACS2v2', 'Q-Learning', 'R-Learning'],
                     tablefmt="html", stralign='center')
    return HTML(table)
# add glue outputs
glue('51-corridor20-bayes', print_bayes_table('corridor20', steps_models, rho_models), display=False)
glue('51-corridor40-bayes', print_bayes_table('corridor40', steps_models, rho_models), display=False)
glue('51-corridor100-bayes', print_bayes_table('corridor100', steps_models, rho_models), display=False)
```
```{tabbed} Corridor 20
{glue:}`51-corridor20-bayes`
```
```{tabbed} Corridor 40
{glue:}`51-corridor40-bayes`
```
```{tabbed} Corridor 100
{glue:}`51-corridor100-bayes`
```
## Observations
The average number of steps can be calculated as $\frac{\sum_{i=0}^{n} i}{n-1}$, where $n$ is the number of distinct Corridor states. For the tested environment this gives approximately $11.05$, so the average reward per step estimation should be close to $1000 / 11.05 \approx 90.5$, which corresponds to Figure {numref}`{number} <51-corridor-fig>`.
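The arithmetic can be checked directly (a quick sketch for $n=20$):

```python
n = 20  # number of distinct Corridor states
avg_steps = sum(range(n + 1)) / (n - 1)  # (0 + 1 + ... + n) / (n - 1)
avg_reward = 1000 / avg_steps            # reward of 1000 for reaching the goal
print(round(avg_steps, 2), round(avg_reward, 1))  # 11.05 90.5
```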
The same Figure demonstrates that all investigated agents learned the environments. The anticipatory classifier systems obtained an optimal number of steps after the same number of exploit trials, which is about 200. In addition, the AACS2-v2 updates the $\rho$ value more aggressively in earlier phases, but the estimate converges near the optimal reward per step.
For the payoff-landscape in Figure {numref}`{number} <51-corridor-payoff-fig>`, all allowed state--action pairs were identified in the environment (38 in this case). The final population of learning classifiers was established after 100 trials and was the same size. Both Q-table and R-learning tables were populated using the same parameters and number of trials.
The relative distance between adjacent state-action pairs can be divided into three groups. The first one relates to the discounted reward agents (ACS2, Q-Learning). Both generate almost a similar reward payoff for each state--action. Later, there is the R-Learning algorithm, which estimates the $\rho$ value and separates states evenly. Furthermore, two AACS2 agents are performing very similarly. The $\rho$ value calculated by the R-Learning algorithm is lower than the average estimation by the AACS2 algorithm.
Scaled problem instances revealed interesting properties:
- the Q-Learning algorithm was not capable of executing the optimal number of steps in the environments with $n=40$ and $n=100$,
- for the most challenging problem of $n=100$, the AACS2 modifications yielded better performance than ACS2,
- all algorithms with an undiscounted reward criterion managed to estimate the average reward $\rho$.
# Modeling and Simulation in Python
Chapter 13
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Code from previous chapters
`make_system`, `plot_results`, and `calc_total_infected` are unchanged.
```
def make_system(beta, gamma):
    """Make a system object for the SIR model.

    beta: contact rate (per day)
    gamma: recovery rate (per day)

    returns: System object
    """
    init = State(S=89, I=1, R=0)
    init /= np.sum(init)

    t0 = 0
    t_end = 7 * 14

    return System(init=init, t0=t0, t_end=t_end,
                  beta=beta, gamma=gamma)
def plot_results(S, I, R):
    """Plot the results of a SIR model.

    S: TimeSeries
    I: TimeSeries
    R: TimeSeries
    """
    plot(S, '--', label='Susceptible')
    plot(I, '-', label='Infected')
    plot(R, ':', label='Recovered')
    decorate(xlabel='Time (days)',
             ylabel='Fraction of population')

def calc_total_infected(results):
    """Fraction of population infected during the simulation.

    results: DataFrame with columns S, I, R

    returns: fraction of population
    """
    return get_first_value(results.S) - get_last_value(results.S)
```
Here's an updated version of `run_simulation` that uses `unpack`.
```
def run_simulation(system, update_func):
    """Runs a simulation of the system.

    system: System object
    update_func: function that updates state

    returns: TimeFrame
    """
    unpack(system)

    frame = TimeFrame(columns=init.index)
    frame.row[t0] = init

    for t in linrange(t0, t_end):
        frame.row[t+1] = update_func(frame.row[t], t, system)

    return frame
```
**Exercise:** Write a version of `update_func` that uses `unpack`.
```
# Original
def update_func(state, t, system):
    """Update the SIR model.

    state: State (s, i, r)
    t: time
    system: System object

    returns: State (sir)
    """
    s, i, r = state

    infected = system.beta * i * s
    recovered = system.gamma * i

    s -= infected
    i += infected - recovered
    r += recovered

    return State(S=s, I=i, R=r)

# Version using `unpack`
def update_func(state, t, system):
    """Update the SIR model.

    state: State (s, i, r)
    t: time
    system: System object

    returns: State (sir)
    """
    unpack(system)
    s, i, r = state

    infected = beta * i * s
    recovered = gamma * i

    s -= infected
    i += infected - recovered
    r += recovered

    return State(S=s, I=i, R=r)
```
Test the updated code with this example.
```
system = make_system(0.333, 0.25)
results = run_simulation(system, update_func)
results.head()
plot_results(results.S, results.I, results.R)
```
### Sweeping beta
Make a range of values for `beta`, with constant `gamma`.
```
beta_array = linspace(0.1, 1.1, 11)
gamma = 0.25
```
Run the simulation once for each value of `beta` and print total infections.
```
for beta in beta_array:
    system = make_system(beta, gamma)
    results = run_simulation(system, update_func)
    print(system.beta, calc_total_infected(results))
```
Wrap that loop in a function and return a `SweepSeries` object.
```
def sweep_beta(beta_array, gamma):
    """Sweep a range of values for beta.

    beta_array: array of beta values
    gamma: recovery rate

    returns: SweepSeries that maps from beta to total infected
    """
    sweep = SweepSeries()
    for beta in beta_array:
        system = make_system(beta, gamma)
        results = run_simulation(system, update_func)
        sweep[system.beta] = calc_total_infected(results)
    return sweep
```
Sweep `beta` and plot the results.
```
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected')
savefig('figs/chap06-fig01.pdf')
```
### Sweeping gamma
Using the same array of values for `beta`
```
beta_array
```
And now an array of values for `gamma`
```
gamma_array = [0.2, 0.4, 0.6, 0.8]
```
For each value of `gamma`, sweep `beta` and plot the results.
```
for gamma in gamma_array:
    infected_sweep = sweep_beta(beta_array, gamma)
    label = 'γ = ' + str(gamma)
    plot(infected_sweep, label=label)

decorate(xlabel='Contacts per day (beta)',
         ylabel='Fraction infected',
         loc='upper left')

savefig('figs/chap06-fig02.pdf')
```
**Exercise:** Suppose the infectious period for the Freshman Plague is known to be 2 days on average, and suppose during one particularly bad year, 40% of the class is infected at some point. Estimate the time between contacts.
```
beta_array = linspace(0.4, 0.5, 100)
gamma = 0.5
infected_sweep = sweep_beta(beta_array, gamma)
# Solution goes here
```
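One way to finish the exercise (a sketch only, with the chapter's update equations reimplemented standalone so it runs without `modsim`; the helper name `total_infected` is mine): sweep `beta` over the suggested range and pick the value whose total infected fraction lands closest to 40%. The estimated time between contacts is then `1/beta` days.

```python
def total_infected(beta, gamma, s0=89/90, i0=1/90, t_end=98):
    """Discrete-time SIR update from this chapter; returns the fraction ever infected."""
    s, i = s0, i0
    for _ in range(t_end):
        infected = beta * i * s
        recovered = gamma * i
        s -= infected
        i += infected - recovered
    return s0 - s

gamma = 0.5  # infectious period of 2 days -> gamma = 1/2
betas = [0.4 + 0.001 * k for k in range(101)]
fractions = [total_infected(b, gamma) for b in betas]
# pick the beta whose final size is closest to 40% of the class
best = min(zip(betas, fractions), key=lambda bf: abs(bf[1] - 0.4))
print(best)  # the time between contacts is then 1/best[0] days
```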
```
!pip install geopy
!pip install folium
!pip install geocoder
print("Installed!")
```
<h3>Use geopy library to get the latitude and longitude values of Toronto City</h3>
```
import folium
import requests
import json
import matplotlib.cm as cm
import matplotlib.colors as colors
import pandas as pd
import numpy as np
import geocoder
from geopy.geocoders import Nominatim
from pandas.io.json import json_normalize
from sklearn.cluster import KMeans
from geopy.geocoders import Nominatim
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
print("Imported!")
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
df_list = pd.read_html(url)[0]
# clean dataframe
df_list = df_list[df_list.Borough!='Not assigned']
df_list = df_list[df_list.Borough!=0]
df_list.reset_index(drop = True, inplace = True)
for i in range(df_list.shape[0]):
    if df_list.iloc[i, 2] == 'Not assigned':
        df_list.iloc[i, 2] = df_list.iloc[i, 1]  # use the borough name for unassigned neighbourhoods
df = df_list.groupby(['Postal Code','Borough'])['Neighbourhood'].apply(', '.join).reset_index()
def get_latilong(postal_code):
    lati_long_coords = None
    while lati_long_coords is None:
        g = geocoder.arcgis('{}, Toronto, Ontario'.format(postal_code))
        lati_long_coords = g.latlng
    return lati_long_coords
# Retrieving Postal Code Co-ordinates
postal_codes = df['Postal Code']
coords = [ get_latilong(postal_code) for postal_code in postal_codes.tolist() ]
# Adding Columns Latitude & Longitude
df_coords = pd.DataFrame(coords, columns=['Latitude', 'Longitude'])
df['Latitude'] = df_coords['Latitude']
df['Longitude'] = df_coords['Longitude']
df.head()
address = 'Scarborough, Toronto'
geolocator = Nominatim(user_agent="toronto_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Scarborough are {}, {}.'.format(latitude, longitude))
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighbourhood in zip(df['Latitude'], df['Longitude'], df['Borough'], df['Neighbourhood']):
    label = '{}, {}'.format(neighbourhood, borough)
    label = folium.Popup(label, parse_html=True)
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        popup=label,
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7,
        parse_html=False).add_to(map_toronto)
map_toronto
scarborough_data = df[df['Borough'] == 'Scarborough'].reset_index(drop=True)
scarborough_data.head()
address = 'Scarborough, Toronto'
geolocator = Nominatim(user_agent="toronto_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Scarborough are {}, {}.'.format(latitude, longitude))
# create map of Scarborough using latitude and longitude values
map_scarborough = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighbourhood in zip(scarborough_data['Latitude'], scarborough_data['Longitude'], scarborough_data['Borough'], scarborough_data['Neighbourhood']):
    label = '{}, {}'.format(neighbourhood, borough)
    label = folium.Popup(label, parse_html=True)
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        popup=label,
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7,
        parse_html=False).add_to(map_scarborough)
map_scarborough
```
<b>Define Foursquare Credentials and Version</b>
```
CLIENT_ID = 'EM1DPOY2TSWUIQJJQ42LCCT3XY4SLA0KCOWZU0DHQTZR3DOT' # your Foursquare ID
CLIENT_SECRET = 'EBUWTCYITSH2GLCPIKEPC1WY1GT4GY4Y5DDYRZP5ACLEDROY' # your Foursquare Secret
VERSION = '20201229' # Foursquare API version
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
```
<b>Let's explore the first neighborhood in our dataframe</b>
```
scarborough_data.loc[0, 'Neighbourhood']
```
<b>Get the neighborhood's latitude and longitude values</b>
```
neighborhood_latitude = scarborough_data.loc[0, 'Latitude'] # neighborhood latitude value
neighborhood_longitude = scarborough_data.loc[0, 'Longitude'] # neighborhood longitude value
neighborhood_name = scarborough_data.loc[0, 'Neighbourhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
```
<b>Now, let's get the top 100 venues that are in Malvern / Rouge within a radius of 500 meters</b>
```
LIMIT = 100
radius = 500
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
results = requests.get(url).json()
results
# function that extracts the category of the venue
def get_category_type(row):
    try:
        categories_list = row['categories']
    except:
        categories_list = row['venue.categories']

    if len(categories_list) == 0:
        return None
    else:
        return categories_list[0]['name']
```
<b>Clean the JSON and structure it into a pandas dataframe</b>
```
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0]))
```
<b>Function to repeat the same process for all the neighborhoods in Scarborough</b>
```
def getNearbyVenues(names, latitudes, longitudes, radius=500):
    venues_list = []
    for name, lat, lng in zip(names, latitudes, longitudes):
        print(name)

        # create the API request URL
        url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
            CLIENT_ID,
            CLIENT_SECRET,
            VERSION,
            lat,
            lng,
            radius,
            LIMIT)

        # make the GET request
        results = requests.get(url).json()["response"]['groups'][0]['items']

        # return only relevant information for each nearby venue
        venues_list.append([(
            name,
            lat,
            lng,
            v['venue']['name'],
            v['venue']['location']['lat'],
            v['venue']['location']['lng'],
            v['venue']['categories'][0]['name']) for v in results])

    nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
    nearby_venues.columns = ['Neighbourhood',
                             'Neighbourhood Latitude',
                             'Neighbourhood Longitude',
                             'Venue',
                             'Venue Latitude',
                             'Venue Longitude',
                             'Venue Category']

    return nearby_venues
scarborough_venues = getNearbyVenues(names=scarborough_data['Neighbourhood'],
latitudes=scarborough_data['Latitude'],
longitudes=scarborough_data['Longitude']
)
print(scarborough_venues.shape)
scarborough_venues.head()
```
<b>No. of venues returned for each neighborhood
```
scarborough_venues.groupby('Neighbourhood').count()
```
<b>Unique categories curated from all the returned venues
```
print('There are {} uniques categories.'.format(len(scarborough_venues['Venue Category'].unique())))
```
<h2>Analyze Each Neighborhood
</h2>
```
# one hot encoding
scarborough_onehot = pd.get_dummies(scarborough_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
scarborough_onehot['Neighbourhood'] = scarborough_venues['Neighbourhood']
# move neighborhood column to the first column
fixed_columns = [scarborough_onehot.columns[-1]] + list(scarborough_onehot.columns[:-1])
scarborough_onehot = scarborough_onehot[fixed_columns]
scarborough_onehot.head()
scarborough_onehot.shape
```
<b>Group rows by neighborhood, taking the mean of the frequency of occurrence of each category
```
scarborough_grouped = scarborough_onehot.groupby('Neighbourhood').mean().reset_index()
scarborough_grouped
scarborough_grouped.shape
```
<b>Print each neighborhood along with the top 5 most common venues
```
num_top_venues = 5
for hood in scarborough_grouped['Neighbourhood']:
print("----"+hood+"----")
temp = scarborough_grouped[scarborough_grouped['Neighbourhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
```
<b>Put this into a pandas dataframe
```
#function to sort the venues in descending order
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
#create the new dataframe and display the top 10 venues for each neighborhood
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighbourhoods_venues_sorted = pd.DataFrame(columns=columns)
neighbourhoods_venues_sorted['Neighbourhood'] = scarborough_grouped['Neighbourhood']
for ind in np.arange(scarborough_grouped.shape[0]):
neighbourhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(scarborough_grouped.iloc[ind, :], num_top_venues)
neighbourhoods_venues_sorted.head()
```
<h2>Cluster Neighborhoods
</h2>
```
#Run k-means to cluster the neighborhood into 5 clusters
# set number of clusters
kclusters = 5
scarborough_grouped_clustering = scarborough_grouped.drop('Neighbourhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(scarborough_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
#create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood
neighbourhoods_venues_sorted
# add clustering labels
neighbourhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
scarborough_merged = scarborough_data
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
scarborough_merged = scarborough_merged.join(neighbourhoods_venues_sorted.set_index('Neighbourhood'), on='Neighbourhood')
scarborough_merged.head() # check the last columns!
#visualize the resulting clusters
import numpy as np
import matplotlib.cm as cm
import matplotlib.colors as colors  # needed below for colors.rgb2hex
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(scarborough_merged['Latitude'], scarborough_merged['Longitude'], scarborough_merged['Neighbourhood'], scarborough_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster)],
fill=True,
fill_color=rainbow[int(cluster)],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
```
<b>Examine Clusters
<b>Cluster 1
```
scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 0, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]]
```
<b>Cluster 2
```
scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 1, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]]
```
<b>Cluster 3
```
scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 2, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]]
```
<b>Cluster 4
```
scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 3, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]]
```
<b>Cluster 5
```
scarborough_merged.loc[scarborough_merged['Cluster Labels'] == 4, scarborough_merged.columns[[1] + list(range(5, scarborough_merged.shape[1]))]]
```
# Writing Your Own Graph Algorithms
The analytical engine in GraphScope derives from [GRAPE](https://dl.acm.org/doi/10.1145/3282488), a graph processing system proposed on SIGMOD-2017. GRAPE differs from prior systems in its ability to parallelize sequential graph algorithms as a whole. In GRAPE, sequential algorithms can be easily **plugged into** with only minor changes and get parallelized to handle large graphs efficiently.
In this tutorial, we will show how to define and run your own algorithm in PIE and Pregel models.
Sounds like fun? Excellent, here we go!
## Writing algorithm in PIE model
GraphScope enables users to write algorithms in the [PIE](https://dl.acm.org/doi/10.1145/3282488) programming model in a pure Python mode. First of all, you should import the **graphscope** package and the **pie** decorator.
```
import graphscope
from graphscope.framework.app import AppAssets
from graphscope.analytical.udf.decorators import pie
```
We use the single source shortest path ([SSSP](https://en.wikipedia.org/wiki/Shortest_path_problem)) algorithm as an example. To implement the PIE model, you just need to **fill in this class**
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
pass
@staticmethod
def PEval(frag, context):
pass
@staticmethod
def IncEval(frag, context):
pass
```
The **pie** decorator takes two params named `vd_type` and `md_type`, which represent the vertex data type and message type respectively.
You may specify types for your own algorithms; available values are `int`, `double`, and `string`.
In our **SSSP** case, we compute the shortest distance to the source for all nodes, so we use the `double` value for both `vd_type` and `md_type`.
`Init`, `PEval`, and `IncEval` all take **frag** and **context** as parameters. You can use these two parameters to access the fragment data and intermediate results. For detailed usage, please refer to the [Cython SDK API](https://graphscope.io/docs/reference/cython_sdk.html).
### Fulfill Init Function
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
nodes = frag.nodes(v_label_id)
context.init_value(
nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate
)
context.register_sync_buffer(v_label_id, MessageStrategy.kSyncOnOuterVertex)
@staticmethod
def PEval(frag, context):
pass
@staticmethod
def IncEval(frag, context):
pass
```
The `Init` function is responsible for 1) setting the initial value for each node; 2) defining the strategy of message passing; and 3) specifying the aggregator for handling received messages in each round.
Note that the algorithm you defined will run on a property graph, so we should get the number of vertex labels first by `v_label_num = frag.vertex_label_num()`. Then we can traverse all nodes with the same label
and set the initial value by `nodes = frag.nodes(v_label_id)` and `context.init_value(nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate)`.
Since we are computing the shortest path between the source node and all other nodes, we use `PIEAggregateType.kMinAggregate` as the aggregator for message aggregation, which means it will
perform a `min` operation over all received messages. Other available aggregators are `kMaxAggregate`, `kSumAggregate`, `kProductAggregate`, and `kOverwriteAggregate`.
At the end of the `Init` function, we register the sync buffer for each node with `MessageStrategy.kSyncOnOuterVertex`, which tells the engine how to pass messages.
### Fulfill PEval Function
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
nodes = frag.nodes(v_label_id)
context.init_value(
nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate
)
context.register_sync_buffer(v_label_id, MessageStrategy.kSyncOnOuterVertex)
@staticmethod
def PEval(frag, context):
src = int(context.get_config(b"src"))
graphscope.declare(graphscope.Vertex, source)
native_source = False
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
if frag.get_inner_node(v_label_id, src, source):
native_source = True
break
if native_source:
context.set_node_value(source, 0)
else:
return
e_label_num = frag.edge_label_num()
for e_label_id in range(e_label_num):
edges = frag.get_outgoing_edges(source, e_label_id)
for e in edges:
dst = e.neighbor()
distv = e.get_int(2)
if context.get_node_value(dst) > distv:
context.set_node_value(dst, distv)
@staticmethod
def IncEval(frag, context):
pass
```
In `PEval` of **SSSP**, it gets the queried source node by `context.get_config(b"src")`.
`PEval` checks whether each fragment contains the source node by `frag.get_inner_node(v_label_id, src, source)`. Note that the `get_inner_node` method needs a `source` parameter of type `Vertex`, which you can declare by `graphscope.declare(graphscope.Vertex, source)`
If a fragment contains the source node, it traverses the outgoing edges of the source with `frag.get_outgoing_edges(source, e_label_id)`. For each neighboring vertex, it computes the distance from the source and updates the value if it is less than the initial value.
### Fulfill IncEval Function
```
@pie(vd_type="double", md_type="double")
class SSSP_PIE(AppAssets):
@staticmethod
def Init(frag, context):
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
nodes = frag.nodes(v_label_id)
context.init_value(
nodes, v_label_id, 1000000000.0, PIEAggregateType.kMinAggregate
)
context.register_sync_buffer(v_label_id, MessageStrategy.kSyncOnOuterVertex)
@staticmethod
def PEval(frag, context):
src = int(context.get_config(b"src"))
graphscope.declare(graphscope.Vertex, source)
native_source = False
v_label_num = frag.vertex_label_num()
for v_label_id in range(v_label_num):
if frag.get_inner_node(v_label_id, src, source):
native_source = True
break
if native_source:
context.set_node_value(source, 0)
else:
return
e_label_num = frag.edge_label_num()
for e_label_id in range(e_label_num):
edges = frag.get_outgoing_edges(source, e_label_id)
for e in edges:
dst = e.neighbor()
distv = e.get_int(2)
if context.get_node_value(dst) > distv:
context.set_node_value(dst, distv)
@staticmethod
def IncEval(frag, context):
v_label_num = frag.vertex_label_num()
e_label_num = frag.edge_label_num()
for v_label_id in range(v_label_num):
iv = frag.inner_nodes(v_label_id)
for v in iv:
v_dist = context.get_node_value(v)
for e_label_id in range(e_label_num):
es = frag.get_outgoing_edges(v, e_label_id)
for e in es:
u = e.neighbor()
u_dist = v_dist + e.get_int(2)
if context.get_node_value(u) > u_dist:
context.set_node_value(u, u_dist)
```
The only difference between `IncEval` and `PEval` of the **SSSP** algorithm is that `IncEval` is invoked
on every fragment, rather than only on the fragment containing the source node. A fragment repeats `IncEval` until no more messages are received. When all fragments have finished computing, the algorithm terminates.
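The fixed-point behaviour of these rounds can be sketched outside GraphScope as a plain-Python relaxation loop. This is an illustrative sketch only, not the engine's actual execution model, and the edge list is made up for the example:

```python
# Plain-Python sketch of the PEval/IncEval fixed point: keep relaxing edges
# (processing "messages") until a full round produces no value change.
INF = 1000000000.0

def relax_until_converged(edges, num_nodes, source):
    # edges: (u, v, weight) tuples of an undirected graph
    dist = [INF] * num_nodes
    dist[source] = 0.0
    changed = True
    while changed:                          # one iteration ~ one IncEval round
        changed = False
        for u, v, w in edges:
            for a, b in ((u, v), (v, u)):
                if dist[a] + w < dist[b]:
                    dist[b] = dist[a] + w   # a shorter path found: a "message"
                    changed = True
    return dist

dists = relax_until_converged([(0, 1, 2), (1, 2, 3), (0, 2, 10)], 3, 0)
```

On this toy graph the loop converges after two rounds, mirroring how fragments stop once no new messages are generated.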
### Run Your Algorithm on a Graph
First, let's establish a session and load a graph for testing.
```
from graphscope.framework.loader import Loader
# the location of the property graph for testing
property_dir = '/home/jovyan/datasets/property'
graphscope.set_option(show_log=True)
k8s_volumes = {
"data": {
"type": "hostPath",
"field": {
"path": "/testingdata",
"type": "Directory"
},
"mounts": {
"mountPath": "/home/jovyan/datasets",
"readOnly": True
}
}
}
sess = graphscope.session(k8s_volumes=k8s_volumes)
graph = sess.g(directed=False)
graph = graph.add_vertices("/home/jovyan/datasets/property/p2p-31_property_v_0", label="person")
graph = graph.add_edges("/home/jovyan/datasets/property/p2p-31_property_e_0", label="knows")
```
Then initialize your algorithm and query the shortest path from vertex `6` over the graph.
```
sssp = SSSP_PIE()
ctx = sssp(graph, src=6)
```
Running this cell, your algorithm should evaluate successfully. The results are stored in vineyard on the distributed machines. Let's fetch and check the results.
```
r1 = (
ctx.to_dataframe({"node": "v:person.id", "r": "r:person"})
.sort_values(by=["node"])
.to_numpy(dtype=float)
)
r1
```
### Dump and Reload Your Algorithm
You can dump and save your defined algorithm for future use.
```
import os
# specify the path you want to dump
dump_path = os.path.expanduser("~/Workspace/sssp_pie.gar")
# dump
SSSP_PIE.to_gar(dump_path)
```
Now, you can find a package named `sssp_pie.gar` in your `~/Workspace`. Reload this algorithm with the following code.
```
from graphscope.framework.app import load_app
# specify the path of the dumped algorithm
dump_path = os.path.expanduser("~/Workspace/sssp_pie.gar")
sssp2 = load_app("SSSP_PIE", dump_path)
```
### Write Algorithm in Pregel Model
In addition to the sub-graph based PIE model, GraphScope supports the vertex-centric Pregel model. To define a Pregel algorithm, you should import the **pregel** decorator and fulfil the functions defined on vertices.
```
import graphscope
from graphscope.framework.app import AppAssets
from graphscope.analytical.udf.decorators import pregel
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
pass
@staticmethod
def Compute(messages, v, context):
pass
```
The **pregel** decorator has two parameters named `vd_type` and `md_type`, which represent the vertex data type and message type respectively.
You can specify the types for your algorithm; options are `int`, `double`, and `string`. For **SSSP**, we set both to `double`.
Since the Pregel model is defined on vertices, the `Init` and `Compute` functions have a parameter `v` to access the vertex data. See more details in the [Cython SDK API](https://graphscope.io/docs/reference/cython_sdk.html).
### Fulfill Init Function
```
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
v.set_value(1000000000.0)
@staticmethod
def Compute(messages, v, context):
pass
```
The `Init` function sets the initial value for each node by `v.set_value(1000000000.0)`.
### Fulfill Compute Function
```
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
v.set_value(1000000000.0)
@staticmethod
def Compute(messages, v, context):
src_id = context.get_config(b"src")
cur_dist = v.value()
new_dist = 1000000000.0
if v.id() == src_id:
new_dist = 0
for message in messages:
new_dist = min(message, new_dist)
if new_dist < cur_dist:
v.set_value(new_dist)
for e_label_id in range(context.edge_label_num()):
edges = v.outgoing_edges(e_label_id)
for e in edges:
v.send(e.vertex(), new_dist + e.get_int(2))
v.vote_to_halt()
```
The `Compute` function for **SSSP** computes the new distance for each node in the following steps:
1) Initialize the new distance with the value 1000000000.
2) If the vertex is the source node, set its distance to 0.
3) Compute the `min` over the received messages, and update the vertex value if it is less than the current value.
These steps repeat until no more new messages (shorter distances) are generated.
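As a sanity check on these steps, here is a hypothetical miniature superstep loop in plain Python. It is not the GraphScope runtime; the adjacency list, the source choice, and the wake-on-message halting rule are illustrative assumptions tracing the same logic on a 3-node path with source `0`:

```python
# Toy superstep simulation of the Compute logic above (not the GraphScope API).
INF = 1000000000.0
adj = {0: [(1, 2.0)], 1: [(0, 2.0), (2, 3.0)], 2: [(1, 3.0)]}  # undirected path
value = {v: INF for v in adj}
inbox = {v: [] for v in adj}
active = set(adj)                # every vertex runs Compute in superstep 0
while active:
    outbox = {v: [] for v in adj}
    for v in active:
        new_dist = 0.0 if v == 0 else INF       # steps 1 and 2: source is 0
        for m in inbox[v]:                      # step 3: min over messages
            new_dist = min(new_dist, m)
        if new_dist < value[v]:
            value[v] = new_dist
            for u, w in adj[v]:
                outbox[u].append(new_dist + w)  # send updated distances
    inbox = outbox
    # a vertex votes to halt; it wakes up only when it receives a message
    active = {v for v in adj if inbox[v]}
```

With the path 0 -(2)- 1 -(3)- 2 this converges to distances 0, 2, and 5, after which every inbox is empty and all vertices stay halted.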
### Optional Combiner
Optionally, we can define a combiner to reduce the message communication overhead.
```
@pregel(vd_type="double", md_type="double")
class SSSP_Pregel(AppAssets):
@staticmethod
def Init(v, context):
v.set_value(1000000000.0)
@staticmethod
def Compute(messages, v, context):
src_id = context.get_config(b"src")
cur_dist = v.value()
new_dist = 1000000000.0
if v.id() == src_id:
new_dist = 0
for message in messages:
new_dist = min(message, new_dist)
if new_dist < cur_dist:
v.set_value(new_dist)
for e_label_id in range(context.edge_label_num()):
edges = v.outgoing_edges(e_label_id)
for e in edges:
v.send(e.vertex(), new_dist + e.get_int(2))
v.vote_to_halt()
@staticmethod
def Combine(messages):
ret = 1000000000.0
for m in messages:
ret = min(ret, m)
return ret
```
### Run Your Pregel Algorithm on Graph.
Next, let's run your Pregel algorithm on the graph, and check the results.
```
sssp_pregel = SSSP_Pregel()
ctx = sssp_pregel(graph, src=6)
r2 = (
ctx.to_dataframe({"node": "v:person.id", "r": "r:person"})
.sort_values(by=["node"])
.to_numpy(dtype=float)
)
r2
```
It is important to release resources when they are no longer used.
```
sess.close()
```
### Aggregator in Pregel
Pregel aggregators are a mechanism for global communication, monitoring, and counting. Each vertex can provide a value to an aggregator in superstep `S`; the system combines these
values using a reduction operator, and the resulting value is made available to all vertices in superstep `S+1`. GraphScope provides a number of predefined aggregators for Pregel algorithms, such as `min`, `max`, or `sum` operations on data types.
Here is an example of using a built-in aggregator; more details can be found in the [Cython SDK API](https://graphscope.io/docs/reference/cython_sdk.html)
```
@pregel(vd_type="double", md_type="double")
class Aggregators_Pregel_Test(AppAssets):
@staticmethod
def Init(v, context):
# int
context.register_aggregator(
b"int_sum_aggregator", PregelAggregatorType.kInt64SumAggregator
)
context.register_aggregator(
b"int_max_aggregator", PregelAggregatorType.kInt64MaxAggregator
)
context.register_aggregator(
b"int_min_aggregator", PregelAggregatorType.kInt64MinAggregator
)
# double
context.register_aggregator(
b"double_product_aggregator", PregelAggregatorType.kDoubleProductAggregator
)
context.register_aggregator(
b"double_overwrite_aggregator",
PregelAggregatorType.kDoubleOverwriteAggregator,
)
# bool
context.register_aggregator(
b"bool_and_aggregator", PregelAggregatorType.kBoolAndAggregator
)
context.register_aggregator(
b"bool_or_aggregator", PregelAggregatorType.kBoolOrAggregator
)
context.register_aggregator(
b"bool_overwrite_aggregator", PregelAggregatorType.kBoolOverwriteAggregator
)
# text
context.register_aggregator(
b"text_append_aggregator", PregelAggregatorType.kTextAppendAggregator
)
@staticmethod
def Compute(messages, v, context):
if context.superstep() == 0:
context.aggregate(b"int_sum_aggregator", 1)
context.aggregate(b"int_max_aggregator", int(v.id()))
context.aggregate(b"int_min_aggregator", int(v.id()))
context.aggregate(b"double_product_aggregator", 1.0)
context.aggregate(b"double_overwrite_aggregator", 1.0)
context.aggregate(b"bool_and_aggregator", True)
context.aggregate(b"bool_or_aggregator", False)
context.aggregate(b"bool_overwrite_aggregator", True)
context.aggregate(b"text_append_aggregator", v.id() + b",")
else:
if v.id() == b"1":
assert context.get_aggregated_value(b"int_sum_aggregator") == 62586
assert context.get_aggregated_value(b"int_max_aggregator") == 62586
assert context.get_aggregated_value(b"int_min_aggregator") == 1
assert context.get_aggregated_value(b"double_product_aggregator") == 1.0
assert (
context.get_aggregated_value(b"double_overwrite_aggregator") == 1.0
)
assert context.get_aggregated_value(b"bool_and_aggregator") == True
assert context.get_aggregated_value(b"bool_or_aggregator") == False
assert (
context.get_aggregated_value(b"bool_overwrite_aggregator") == True
)
context.get_aggregated_value(b"text_append_aggregator")
v.vote_to_halt()
```
# SuStaIn tutorial using simulated data
Written by Alex Young in April 2020, updated in April 2021. Please email alexandra.young@kcl.ac.uk with any questions.
This tutorial demonstrates how to run Subtype and Stage Inference (SuStaIn) using simulated data. SuStaIn is an unsupervised learning algorithm that identifies subgroups of individuals with distinct biomarker progression patterns. See [Young et al. Nature Communications 2018](https://doi.org/10.1038/s41467-018-05892-0) for more details.
SuStaIn is a generalisable algorithm in which you can choose how to model the progression of biomarkers within a subtype. In this tutorial I use the linear z-score model I used in [Young et al. 2018](https://doi.org/10.1038/s41467-018-05892-0), but it is possible to use other models of biomarker evolution, such as the event-based model.
For a demonstration of the 'event-based'/mixture modelling version of SuStaIn (i.e. MixtureSustain), see simrun.py, available in the /sim subdirectory of the pySuStaIn package.
## Installing SuStaIn and setting it up to run in a notebook
To get SuStaIn up and running first you need to install the package. I'm using Anaconda and had some conflicts with existing packages, so I had to create a new environment and set it up to be accessible from a Jupyter notebook. For me the whole set-up process looked like this...
Step 1: Create a new environment "sustain_env" in anaconda that uses python 3.7 and activate the environment ready to install pySuStaIn.
```console
conda create --name sustain_env python=3.7
conda activate sustain_env
```
Step 2: Install pySuStaIn within the environment. The first line installs the kde-ebm (not needed for this tutorial), and the second line installs pySuStaIn. Using the -e option enables you to edit the code (most users won't need to do this, so they can remove the -e).
```console
pip install -e ./awkde
pip install -e .
```
Step 3: To get the new environment to run inside a notebook I had to install ipykernel and use that to add the environment as a new kernel.
```console
conda install ipykernel
python3.7 -m ipykernel install --user --name sustain_env
```
When running the notebook you then need to make sure you're using the new kernel (it should appear as sustain_env in the top right hand corner, or you need to select it using Kernel>Change kernel>sustain_env).
## Importing packages
```
# import the python packages needed to generate simulated data for the tutorial
import os
import shutil
import numpy as np
import matplotlib.pyplot as plt
import pickle
from pathlib import Path
import sklearn.model_selection
import pandas as pd
import pylab
import sys
import pySuStaIn
sys.path.insert(0,'../sim/')
from simfuncs import generate_random_Zscore_sustain_model, generate_data_Zscore_sustain
```
## The linear z-score model
The linear z-score model I use in this tutorial describes a subtype progression pattern as the linear evolution of biomarkers between different z-scores. Figure 1 below shows an example of what this model looks like for a single subtype. The model is indexed by a set of discrete stages. Each stage corresponds to a biomarker reaching a new z-score from the set of z-scores for each biomarker, Z_vals. Each biomarker starts with a minimum value of 0 at stage 0 and reaches a maximum of Z_max at the final stage of the progression. The number of stages is determined by the number of biomarkers and z-scores in Z_vals. The SuStaIn algorithm identifies subgroups of individuals and their progression patterns - for the linear z-score model the progression pattern would be the ordering of the different biomarker z-scores in Z_vals - gt_sequence below.
You can play around generating different sequences and altering the settings of the linear z-score model using the code below.
```
N = 5 # number of biomarkers
SuStaInLabels = []
for i in range(N):
SuStaInLabels.append( 'Biomarker '+str(i)) # labels of biomarkers for plotting
Z_vals = np.array([[1,2,3]]*N) # Z-scores for each biomarker
Z_max = np.array([5]*N) # maximum z-score
# To demonstrate how to set different biomarkers to have different z-scores,
# set biomarker 0 to have z-scores of 1 and 2 only and a maximum of 3
# to do this change the corresponding row of Z_vals to read 1 2 0
# and change the corresponding row of Z_max to 3
Z_vals[np.array(0),np.array(2)] = 0
Z_max[np.array(0)] = 3
# and set biomarker 2 to have a z-score of 1 only and a maximum of 2
# to do this change the corresponding row of Z_vals to read 1 0 0
# and change the corresponding row of Z_max to 2
Z_vals[np.array(2),np.array([1,2])] = 0
Z_max[np.array(2)] = 2
# generate a random sequence for the linear z-score model
gt_sequence = generate_random_Zscore_sustain_model(Z_vals,
1)
# ignore this part, it's only necessary so that the generate_data_sustain function
# can be used in this demo setting
gt_stages = np.array([0])
gt_subtypes = np.array([0])
# this code generates data from z-score sustain
# - here i've just output the z-score model itself rather than any datapoints
_, _, gt_stage_value = generate_data_Zscore_sustain(gt_subtypes,
gt_stages,
gt_sequence,
Z_vals,
Z_max)
# ignore this part, just calculates some parameters of sustain to output below
stage_zscore = np.array([y for x in Z_vals.T for y in x])
stage_zscore = stage_zscore.reshape(1,len(stage_zscore))
IX_select = stage_zscore>0
stage_zscore = stage_zscore[IX_select]
stage_zscore = stage_zscore.reshape(1,len(stage_zscore))
num_zscores = Z_vals.shape[1]
IX_vals = np.array([[x for x in range(N)]] * num_zscores).T
stage_biomarker_index = np.array([y for x in IX_vals.T for y in x])
stage_biomarker_index = stage_biomarker_index.reshape(1,len(stage_biomarker_index))
stage_biomarker_index = stage_biomarker_index[IX_select]
stage_biomarker_index = stage_biomarker_index.reshape(1,len(stage_biomarker_index))
# print out some of the values and plot a picture of the model
print('Simulated sequence:',(gt_sequence.astype(int).flatten()))
print('At the beginning of the progression (stage 0) the biomarkers have scores of 0')
print('At the stages:',1+np.arange(np.array(stage_zscore).shape[1]))
print('the biomarkers:',stage_biomarker_index[:,gt_sequence.astype(int).flatten()].flatten())
print('reach z-scores of:',stage_zscore[:,gt_sequence.astype(int).flatten()].flatten())
print('At the end of the progression (stage',np.array(stage_zscore).shape[1]+2,') the biomarkers reach scores of:',Z_max)
print('The z-score model assumes individuals belong to some unknown stage of this progression,')
print('with gaussian noise with a standard deviation of 1 for each biomarker')
temp_stages = np.array(range(np.array(stage_zscore).shape[1]+2))
for b in range(N):
ax = plt.plot(temp_stages, gt_stage_value[b,:,:])
_ = plt.xlabel('SuStaIn stage')
_ = plt.ylabel('Z-score')
_ = plt.legend(SuStaInLabels)
_ = plt.title('Figure 1')
```
## Important note on the linear z-score model
It's natural to think of the progression pattern in Figure 1 as linear in time but this isn't necessarily the case. For example, the time between stages 2 and 3 may be much longer than between stages 8 and 9. This means that the shape of the trajectories may look quite different if indexed by time (although the general order in which the biomarkers progress to different z-scores would remain the same). The linear z-score model simply describes the patient snapshots you would expect to see in a cross-sectional dataset for any particular subtype at a particular stage. The subtypes and stages are considered as hidden variables, which the SuStaIn algorithm identifies directly from the data.
## Generating simulated data
This section of code generates simulated data for the tutorial. Any variables labelled as 'gt' (ground truth) are for generating the simulated data only and would typically not be known beforehand in a real dataset.
You can ignore many of the settings below for now, but in particular
- N_S_gt is the number of subtypes to simulate
- gt_f is the proportion of individuals belonging to each subtype
- gt_sequence is the order in which each biomarker approaches each z-score for each subtype
- gt_subtypes is the subtype of each individual
- gt_stages is the stage of each individual along the progression pattern of their subtype
You can alter these to get a feel for how SuStaIn works on different simulated datasets.
```
N = 5 # number of biomarkers
M = 500 # number of observations ( e.g. subjects )
M_control = 100 # number of these that are control subjects
N_S_gt = 2 # number of ground truth subtypes
SuStaInLabels = []
for i in range(N):
SuStaInLabels.append( 'Biomarker '+str(i)) # labels of biomarkers for plotting
Z_vals = np.array([[1,2,3]]*N) # Z-scores for each biomarker
Z_max = np.array([5]*N) # maximum z-score
# ground truth proportion of individuals belonging to each subtype
gt_f = [1+0.5*x for x in range(N_S_gt)]
gt_f = [x/sum(gt_f) for x in gt_f][::-1]
# ground truth sequence for each subtype
gt_sequence = generate_random_Zscore_sustain_model(Z_vals,
N_S_gt)
# simulate subtypes and stages for individuals, including a control population at stage 0
N_k = np.sum(Z_vals>0)+1
gt_subtypes = np.random.choice(range(N_S_gt), M, replace=True, p=gt_f)
gt_stages_control = np.zeros((M_control,1))
gt_stages = np.concatenate((gt_stages_control,
np.ceil(np.random.rand(M-M_control,1)*N_k)),
axis=0)
# generate simulated data
data, gt_data_denoised, gt_stage_value = generate_data_Zscore_sustain(gt_subtypes,
gt_stages,
gt_sequence,
Z_vals,
Z_max)
# ignore this part, just calculates some parameters of sustain to output below
stage_zscore = np.array([y for x in Z_vals.T for y in x])
stage_zscore = stage_zscore.reshape(1,len(stage_zscore))
IX_select = stage_zscore>0
stage_zscore = stage_zscore[IX_select]
stage_zscore = stage_zscore.reshape(1,len(stage_zscore))
num_zscores = Z_vals.shape[1]
IX_vals = np.array([[x for x in range(N)]] * num_zscores).T
stage_biomarker_index = np.array([y for x in IX_vals.T for y in x])
stage_biomarker_index = stage_biomarker_index.reshape(1,len(stage_biomarker_index))
stage_biomarker_index = stage_biomarker_index[IX_select]
stage_biomarker_index = stage_biomarker_index.reshape(1,len(stage_biomarker_index))
for s in range (N_S_gt):
# print out the parameters
print('For subtype',s,'(',gt_f[s]*100,'% of individuals)')
print('Simulated sequence:',(gt_sequence[s,:].astype(int).flatten()))
print('At the beginning of the progression (stage 0) the biomarkers have scores of 0')
print('At the stages:',1+np.arange(np.array(stage_zscore).shape[1]))
print('the biomarkers:',stage_biomarker_index[:,gt_sequence[s,:].astype(int).flatten()].flatten())
print('reach z-scores of:',stage_zscore[:,gt_sequence[s,:].astype(int).flatten()].flatten())
print('At the end of the progression (stage',np.array(stage_zscore).shape[1]+2,') the biomarkers reach scores of:',Z_max)
print('')
```
## The SuStaIn algorithm
SuStaIn identifies subtypes with distinct progression patterns from cross-sectional data. The algorithm proceeds hierarchically, first fitting a single subtype to the data, then two, then three, etc., up to a maximum number of subtypes chosen by the user. The fitting of the nth subtype model works by splitting each of the previous n-1 clusters into two and then using this as an initialisation to fit the n subtype model. For each of the n subtype models, SuStaIn uses MCMC sampling to estimate the uncertainty in the subtype progression patterns and the proportion of individuals that belong to each subtype. The optimal number of subtypes is selected by using cross-validation to compute the cross-validation information criterion (CVIC).
## Preparing data for SuStaIn
The data for SuStaIn needs to be z-scored relative to a control population, such that the control population has a mean of 0 and a standard deviation of 1. To do this, simply subtract the mean of the control population from your data and divide by the standard deviation of the control population. Double-check that, after applying this transformation, the control population alone does indeed have a mean of 0 and a standard deviation of 1. The data further needs to be transformed to increase in z-score with disease progression: if any of your biomarkers decrease with disease progression, you need to multiply the data for those biomarkers by -1.
I'd suggest the following workflow for getting your data ready to run SuStaIn on.
1. Regress out the effects of covariates. Learn the effects of covariates in a control population and use this model to regress out the effect of covariates for all the subjects. Learning the model in the control population will avoid regressing out disease effects, which you want to keep in your dataset.
2. Calculate the mean and standard deviation of each biomarker in your control dataset, mean_control and std_control.
3. Z-score your data by taking (data-mean_control)/std_control.
4. Identify any biomarkers that decrease with disease progression; these will have mean_data < mean_control. Multiply the data for these biomarkers by -1.
Steps 2-4 are illustrated in the section below but will have little effect on the simulated data because it is generated as z-scores already.
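Step 1 isn't illustrated below because the simulated data has no covariates. As a rough sketch of the idea (the variable names `covariates`, `biomarkers` and `is_control` are hypothetical stand-ins, not part of pySuStaIn), you could fit a linear covariate model on controls and subtract its prediction from everyone:

```
import numpy as np

rng = np.random.default_rng(0)
# hypothetical example: 100 subjects, 2 covariates (e.g. age, sex), 3 biomarkers
covariates = rng.normal(size=(100, 2))
effects = np.array([[0.5, 0.2, 0.0],
                    [0.1, 0.3, 0.4]])           # made-up covariate effects
biomarkers = rng.normal(size=(100, 3)) + covariates @ effects
is_control = np.arange(100) < 40                # first 40 subjects are controls

# step 1: learn the covariate model (with intercept) on controls only,
# so that disease effects in the patients are not regressed out
X_control = np.column_stack([np.ones(is_control.sum()), covariates[is_control]])
beta, *_ = np.linalg.lstsq(X_control, biomarkers[is_control], rcond=None)

# apply the control-derived model to regress covariates out of everyone's data
X_all = np.column_stack([np.ones(len(covariates)), covariates])
residuals = biomarkers - X_all @ beta
```

The residuals then go through steps 2-4 in place of the raw biomarker values.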
```
# extract data for control subjects
data_control = data[np.tile(gt_stages,(1,N))==0].reshape(M_control,N)
# compute the mean and standard deviation of the control population
mean_control = np.mean(data_control,axis=0)
std_control = np.std(data_control,axis=0)
# z-score the data
data = (data-mean_control)/std_control
data_control = (data_control-mean_control)/std_control
# multiply data for decreasing biomarkers by -1
IS_decreasing = np.mean(data,axis=0)<np.mean(data_control,axis=0)
data[np.tile(IS_decreasing,(M,1))] = -1*data[np.tile(IS_decreasing,(M,1))]
data_control[np.tile(IS_decreasing,(M_control,1))] = -1*data_control[np.tile(IS_decreasing,(M_control,1))]
# Check that the mean of the control population is 0
print('Mean of controls is ',np.mean(data_control,axis=0))
# Check that the standard deviation of the control population is 1
print('Standard deviation of controls is ',np.std(data_control,axis=0))
# Check that the mean of the whole dataset is positive
print('Mean of whole dataset is ',np.mean(data,axis=0))
# Check that the standard deviation of the whole dataset is greater than 1
print('Standard deviation of whole dataset is ',np.std(data,axis=0))
```
## Choosing the settings
The SuStaIn algorithm requires the following inputs.
### data
The data you want to run SuStaIn on, of size M subjects by N biomarkers. This needs to be z-scored in the way described in the previous section.
### Z_vals
This is the set of z-scores you want to include for each biomarker. The more z-scores you use the longer the SuStaIn algorithm will take to run. Z_vals has size N biomarkers by Z z-scores. If you have more z-scores for some biomarkers than others you can simply leave zeros at the end of biomarker rows with fewer z-scores.
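For example, with three biomarkers where the third only has events at z-scores of 1 and 2, the shorter row is padded with a zero:

```
import numpy as np

Z_vals = np.array([[1, 2, 3],   # biomarker 0: events at z-scores 1, 2 and 3
                   [1, 2, 3],   # biomarker 1: events at z-scores 1, 2 and 3
                   [1, 2, 0]])  # biomarker 2: only two events, row padded with 0

n_events = np.sum(Z_vals > 0)   # total number of z-score events across biomarkers
```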
### Z_max
The maximum z-score reached at the end of the progression, with size N biomarkers by 1. I'd suggest choosing a value around the 95th percentile of your data but you can experiment with different values. I typically choose an integer for interpretability but you don't have to.
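Assuming `data` is your M subjects by N biomarkers matrix of z-scores, one way to pick Z_max along these lines (shown here on stand-in data) is:

```
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=(500, 3))  # stand-in for your z-scored data

# 95th percentile of each biomarker, rounded up to an integer for interpretability
Z_max = np.ceil(np.percentile(data, 95, axis=0))
```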
### SuStaInLabels
The names of the biomarkers you are using, for plotting purposes.
### N_startpoints
The number of startpoints to use when fitting the subtypes hierarchically. I'd suggest using 25.
### N_S_max
The maximum number of subtypes to fit. I'd suggest starting with a lower number - maybe three - and then increasing that if you're getting a significantly better fit with the maximum number of subtypes. You can judge this roughly from the MCMC plot. To properly evaluate the optimal number of subtypes you need to run cross-validation.
### N_iterations_MCMC
The number of iterations for the MCMC sampling of the uncertainty in the progression pattern. I'd recommend using 1x10^5 or 1x10^6.
### output_folder
Choose an output folder for the results.
### dataset_name
Name the results files outputted by SuStaIn.
### use_parellel_startpoints
Boolean for whether or not to parallelize the startpoints.
### Additional note
There are approximate and exact versions of the computation of the data likelihood for the linear z-score model. Currently the python version only supports the approximate version. If you want to use the exact version please see the Matlab version at https://github.com/ucl-pond/SuStaInMatlab.
```
# Input the settings for z-score SuStaIn
# To make the tutorial run faster I've set
# N_startpoints = 10 and N_iterations_MCMC = int(1e4)
# I recommend using N_startpoints = 25 and
# N_iterations_MCMC = int(1e5) or int(1e6) in general though
N_startpoints = 10
N_S_max = N_S_gt+1
N_iterations_MCMC = int(1e4)
output_folder = os.path.join(os.getcwd(), 'sim')
dataset_name = 'sim'
sustain_input = pySuStaIn.ZscoreSustain(data,
Z_vals,
Z_max,
SuStaInLabels,
N_startpoints,
N_S_max,
N_iterations_MCMC,
output_folder,
dataset_name,
False)
```
## Deleting previous SuStaIn results if necessary
This code snippet deletes any previous SuStaIn results. By default the SuStaIn code checks for previous results to avoid re-running the algorithm unnecessarily, so you'll need to run this section each time you generate a new simulated dataset that you want to fit the SuStaIn model to. If you don't want to overwrite your previous results you can choose a new dataset_name and/or output_folder.
```
if os.path.exists(output_folder):
shutil.rmtree(output_folder)
# output_folder = os.path.join(os.getcwd(), 'sim2')
# dataset_name = 'sim2'
# sustain_input = ZscoreSustain(data,
# Z_vals,
# Z_max,
# SuStaInLabels,
# N_startpoints,
# N_S_max,
# N_iterations_MCMC,
# output_folder,
# dataset_name,
# False)
```
## Create folder for results if it doesn't exist already
```
if not os.path.isdir(output_folder):
os.mkdir(output_folder)
```
## Running the SuStaIn algorithm
We're finally ready to run the SuStaIn algorithm! The main outputs are samples_sequence and samples_f. samples_sequence gives MCMC samples of the ordering of the biomarker z-scores for each n subtype model. samples_f gives MCMC samples of the proportion of individuals that belong to each subtype for each n subtype model. These can be found in the outputted files.
```
# runs the sustain algorithm with the inputs set in sustain_input above
samples_sequence, \
samples_f, \
ml_subtype, \
prob_ml_subtype, \
ml_stage, \
prob_ml_stage, \
prob_subtype_stage = sustain_input.run_sustain_algorithm()
# Just added this to demonstrate what happens if you re-run the SuStaIn algorithm
# with the same dataset name and folder as previously
# The code recognises the files are there already rather than re-running SuStaIn
# This is useful if you want to increase the number of subtypes without
# starting right from the beginning again
samples_sequence, \
samples_f, \
ml_subtype, \
prob_ml_subtype, \
ml_stage, \
prob_ml_stage, \
prob_subtype_stage = sustain_input.run_sustain_algorithm()
```
## Comparison with ground truth
Figure 3 shows the expected progression patterns for the simulated data in the form of positional variance diagrams so it's easier to compare the output of SuStaIn with the ground truth from the simulated data. In a positional variance diagram each entry tells you the probability each biomarker has reached each z-score at each SuStaIn stage. Here, z-scores of 1 are shown in red, 2 in magenta and 3 in blue. I've plotted the positional variance diagrams in Figure 3 without any uncertainty. You'd expect the results from the simulated dataset to have a similar progression pattern on average to those in Figure 3, but with uncertainty due to the simulated noise. Figure 4 shows the output of SuStaIn for the ground truth number of subtypes.
```
# Output a figure showing the ground truth
temp_gt_sequence = gt_sequence.reshape((gt_sequence.shape[0],gt_sequence.shape[1],1))
temp_gt_f = np.asarray(gt_f).reshape(len(gt_f),1)
pySuStaIn.ZscoreSustain._plot_sustain_model(sustain_input,temp_gt_sequence,temp_gt_f,M)
_ = plt.suptitle('Figure 3: Ground truth progression pattern')
# The code below opens the results for the ground truth number of subtypes
# and plots the output
s = N_S_gt-1
pickle_filename_s = output_folder + '/pickle_files/' + dataset_name + '_subtype' + str(s) + '.pickle'
pickle_filepath = Path(pickle_filename_s)
pickle_file = open(pickle_filename_s, 'rb')
loaded_variables = pickle.load(pickle_file)
samples_sequence = loaded_variables["samples_sequence"]
samples_f = loaded_variables["samples_f"]
pickle_file.close()
pySuStaIn.ZscoreSustain._plot_sustain_model(sustain_input,samples_sequence,samples_f,M)
_ = plt.suptitle('Figure 4: SuStaIn output')
```
## Assessing the output
Now we've run the SuStaIn algorithm, we need to assess the output and decide whether to change any of the settings.
### MCMC trace
The first thing to look at is the MCMC trace (Figure 5 below). It should be periodic, i.e. with a structure that regularly repeats itself, rather than having long flat sections where it gets stuck at a particular likelihood. If this isn't the case SuStaIn is not working well on your data. There's some troubleshooting you can try:
- Check that your data is z-scored correctly
- Check that the choice of z-scores (Z_vals and Z_max) are sensible
- Check that your data looks normally distributed in your control population
-- if not, try an alternative version of SuStaIn that might be more suitable for your data, e.g. SuStaIn using a KDE event-based model for non-normally distributed data
- Increase the number of startpoints (N_startpoints)
- Increase the number of MCMC samples (N_iterations_MCMC)
### Histograms of model likelihood
The next thing to look at are histograms of the model likelihood (Figure 6 below). Whilst the number of subtypes should be determined through cross-validation, these histograms can give a reasonable indication of the number of subtypes in your dataset, which will enable you to decide whether to fit more subtypes and what to set as the maximum number of subtypes for the cross-validation. When there's a large overlap between the histograms of the model likelihood as you increase the subtypes it means that the likelihood isn't improving very much when you increase the number of subtypes, which means you've probably gone past the optimal number of subtypes. You want to fit enough subtypes so that at least one model is too complex (has too many subtypes) for your data so that you can be sure you've chosen the optimal number of subtypes. If you're not seeing overlapping histograms you need to increase N_S_max and run SuStaIn again from your previous setting of N_S_max.
### Positional variance diagrams
If the end stages of the positional variance diagrams look very blurry with no clear predominant progression pattern, it usually means that there aren't many individuals that fit well with the end stages of the progression. If this is the case you might want to consider removing some biomarker z-scores and re-running SuStaIn.
```
# go through each subtypes model and plot MCMC samples of the likelihood
for s in range(N_S_max):
pickle_filename_s = output_folder + '/pickle_files/' + dataset_name + '_subtype' + str(s) + '.pickle'
pickle_filepath = Path(pickle_filename_s)
pickle_file = open(pickle_filename_s, 'rb')
loaded_variables = pickle.load(pickle_file)
samples_likelihood = loaded_variables["samples_likelihood"]
pickle_file.close()
_ = plt.figure(0)
_ = plt.plot(range(N_iterations_MCMC), samples_likelihood, label="subtype" + str(s))
_ = plt.figure(1)
_ = plt.hist(samples_likelihood, label="subtype" + str(s))
_ = plt.figure(0)
_ = plt.legend(loc='upper right')
_ = plt.xlabel('MCMC samples')
_ = plt.ylabel('Log likelihood')
_ = plt.title('Figure 5: MCMC trace')
_ = plt.figure(1)
_ = plt.legend(loc='upper right')
_ = plt.xlabel('Log likelihood')
_ = plt.ylabel('Number of samples')
_ = plt.title('Figure 6: Histograms of model likelihood')
```
## Cross-validation
To determine the optimal number of subtypes it's necessary to perform cross-validation and compute the cross-validation information criterion (CVIC).
## Stratified cross-validation
It's a good idea to use stratified training and test sets so you have similar numbers of cases and controls in each fold, as shown below.
```
# identify a control population
index_control = np.reshape(gt_stages,(M))==0
# label cases and controls to perform stratified cross-validation
labels = 1 * np.ones(data.shape[0], dtype=int)
labels[index_control] = 0
# choose the number of folds - here I've used three for speed but I typically recommend 10
N_folds = 3
# generate stratified cross-validation training and test set splits
cv = sklearn.model_selection.StratifiedKFold(n_splits=N_folds,
shuffle=True)
cv_it = cv.split(data, labels)
test_idxs = []
for train, test in cv_it:
test_idxs.append(test)
test_idxs = np.array(test_idxs)
```
## Performing cross-validation
Next you need to run the cross-validation on your training folds and validate on the test folds. The code below does this sequentially for all folds. It's also possible to specify a specific fold if you wanted to run each fold of the cross-validation separately in parallel (e.g. on the cluster).
```
# perform cross-validation and output the cross-validation information criterion and
# log-likelihood on the test set for each subtypes model and fold combination
CVIC, loglike_matrix = sustain_input.cross_validate_sustain_model(test_idxs)
# Just added this to demonstrate what happens if you re-run the cross-validation
# with the same dataset name and folder as previously
# The code recognises the files are there already rather than re-running SuStaIn
# This is useful if you want to increase the number of subtypes without
# starting right from the beginning again
CVIC, loglike_matrix = sustain_input.cross_validate_sustain_model(test_idxs)
```
## Choosing the optimal number of subtypes
The optimal number of subtypes is chosen using the CVIC, shown in Figure 7 below. The CVIC is an information criterion (like the AIC/BIC/WAIC) that balances model complexity with model accuracy, with a lower CVIC indicating a better balance between the two. Generally speaking, the model with the lowest CVIC is the best. However, you do sometimes get a very small improvement (less than ~6) in the CVIC with a more complex model, in which case I would tend to favour the less complex (i.e. fewer subtypes) model.
Another useful metric to look at is the log-likelihood of each subtypes model on the test set, shown in Figure 8. A better model should show a consistent improvement in the test set log-likelihood across folds.
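As a sketch, the selection rule described above (take the lowest CVIC, but fall back to a simpler model when the improvement over it is below roughly 6) could be written as:

```
import numpy as np

CVIC = np.array([250.0, 200.0, 196.0])  # made-up values for the 1-, 2- and 3-subtype models

best = int(np.argmin(CVIC))             # start from the model with the lowest CVIC...
# ...but prefer a simpler model when the improvement over it is small (< ~6)
while best > 0 and CVIC[best - 1] - CVIC[best] < 6:
    best -= 1
N_S_opt = best + 1                      # index 0 is the 1-subtype model
print('Optimal number of subtypes:', N_S_opt)
```

Here the 4-point improvement from two to three subtypes is below the ~6 threshold, so the 2-subtype model is preferred.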
```
# go through each subtypes model and plot the log-likelihood on the test set and the CVIC
print("CVIC for each subtype model: " + str(CVIC))
print("Average test set log-likelihood for each subtype model: " + str(np.mean(loglike_matrix, 0)))
_ = plt.figure(1)
_ = plt.plot(np.arange(N_S_max,dtype=int),CVIC)
_ = plt.xticks(np.arange(N_S_max,dtype=int))
_ = plt.ylabel('CVIC')
_ = plt.xlabel('Subtypes model')
_ = plt.title('Figure 7: CVIC')
_ = plt.figure(0)
df_loglike = pd.DataFrame(data = loglike_matrix, columns = ["s_" + str(i) for i in range(sustain_input.N_S_max)])
df_loglike.boxplot(grid=False)
for i in range(sustain_input.N_S_max):
y = df_loglike[["s_" + str(i)]]
x = np.random.normal(1+i, 0.04, size=len(y)) # Add some random "jitter" to the x-axis
pylab.plot(x, y, 'r.', alpha=0.2)
_ = plt.ylabel('Log likelihood')
_ = plt.xlabel('Subtypes model')
_ = plt.title('Figure 8: Test set log-likelihood across folds')
```
## Cross-validated positional variance diagrams
Another useful output of the cross-validation that you can look at are positional variance diagrams averaged across cross-validation folds. These give you an idea of the variability in the progression patterns across different training datasets.
```
#this part estimates cross-validated positional variance diagrams
for i in range(N_S_max):
sustain_input.combine_cross_validated_sequences(i+1, N_folds)
# Output a figure showing the ground truth
temp_gt_sequence = gt_sequence.reshape((gt_sequence.shape[0],gt_sequence.shape[1],1))
temp_gt_f = np.asarray(gt_f).reshape(len(gt_f),1)
pySuStaIn.ZscoreSustain._plot_sustain_model(sustain_input,temp_gt_sequence,temp_gt_f,M)
_ = plt.suptitle('Figure 9: Ground truth progression pattern')
# The code below opens the results for the ground truth number of subtypes
# and plots the output
s = N_S_gt-1
pickle_filename_s = output_folder + '/pickle_files/' + dataset_name + '_subtype' + str(s) + '.pickle'
pickle_filepath = Path(pickle_filename_s)
pickle_file = open(pickle_filename_s, 'rb')
loaded_variables = pickle.load(pickle_file)
samples_sequence = loaded_variables["samples_sequence"]
samples_f = loaded_variables["samples_f"]
pickle_file.close()
pySuStaIn.ZscoreSustain._plot_sustain_model(sustain_input,samples_sequence,samples_f,M)
_ = plt.suptitle('Figure 10: SuStaIn output')
sustain_input.combine_cross_validated_sequences(N_S_gt, N_folds)
_ = plt.suptitle('Figure 11: Cross-validated SuStaIn output')
```
## Subtyping and staging
Once you've used the CVIC to choose the optimal number of subtypes, you can use the SuStaIn output from that subtypes model to subtype and stage individuals in your dataset. This will already have been output by the SuStaIn algorithm; you can extract the outputs using the code below. The outputs are:
### ml_subtype and prob_ml_subtype
The maximum likelihood subtype and the probability of that subtype for each individual.
### ml_stage and prob_ml_stage
The maximum likelihood stage and the probability of that stage for each individual.
### prob_subtype, prob_stage and prob_subtype_stage
The probability each individual belongs to each subtype, to each stage, and to each subtype and stage combination.
```
s = N_S_gt-1
pickle_filename_s = output_folder + '/pickle_files/' + dataset_name + '_subtype' + str(s) + '.pickle'
pickle_filepath = Path(pickle_filename_s)
pickle_file = open(pickle_filename_s, 'rb')
loaded_variables = pickle.load(pickle_file)
ml_subtype = loaded_variables["ml_subtype"]
prob_ml_subtype = loaded_variables["prob_ml_subtype"]
ml_stage = loaded_variables["ml_stage"]
prob_ml_stage = loaded_variables["prob_ml_stage"]
prob_subtype = loaded_variables["prob_subtype"]
prob_stage = loaded_variables["prob_stage"]
prob_subtype_stage = loaded_variables["prob_subtype_stage"]
pickle_file.close()
# You can also subtype and stage new data using
# N_samples = 1000
# ml_subtype, \
# prob_ml_subtype, \
# ml_stage, \
# prob_ml_stage, \
# prob_subtype, \
# prob_stage, \
# prob_subtype_stage = sustain_input.subtype_and_stage_individuals_newData(new_data,
# samples_sequence,
# samples_f,
# N_samples)
```
## Subtyping and staging using cross-validated models
The code below subtypes and stages individuals using the cross-validated positional variance diagrams in Figure 11.
```
s = N_S_gt-1
Nfolds = len(test_idxs)
for fold in range(Nfolds):
pickle_filename_fold_s = sustain_input.output_folder + '/pickle_files/' + sustain_input.dataset_name + '_fold' + str(fold) + '_subtype' + str(s) + '.pickle'
pickle_filepath = Path(pickle_filename_fold_s)
pickle_file = open(pickle_filename_fold_s, 'rb')
loaded_variables = pickle.load(pickle_file)
samples_sequence = loaded_variables["samples_sequence"]
samples_f = loaded_variables["samples_f"]
pickle_file.close()
if fold == 0:
samples_sequence_cval = samples_sequence
samples_f_cval = samples_f
else:
samples_sequence_cval = np.concatenate((samples_sequence_cval, samples_sequence), axis=2)
samples_f_cval = np.concatenate((samples_f_cval, samples_f), axis=1)
N_samples = 1000
ml_subtype_cval, \
prob_ml_subtype_cval, \
ml_stage_cval, \
prob_ml_stage_cval, \
prob_subtype_cval, \
prob_stage_cval, \
prob_subtype_stage_cval = sustain_input.subtype_and_stage_individuals_newData(data,
samples_sequence_cval,
samples_f_cval,
N_samples)
```
<a href="https://colab.research.google.com/github/s-c-soma/RL_Project_FlappyBird_D3QN/blob/main/colab/Flappy_DoubleDueling_DQN_GoogleColab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Double Dueling Deep Q Network Learning with Prioritized Experience Replay
This Colab is used to run the code from our Google Drive.
#Setup
```
from google.colab import drive
drive.mount('/content/drive')
!pip install torch
!pip install torchvision
!pip install opencv-python
import os,sys
import cv2
import numpy as np
import time
%matplotlib inline
import matplotlib.pyplot as plt
os.chdir('../')
!git clone https://github.com/s-c-soma/RL_Project_FlappyBird_D3QN.git
!ls
os.chdir('flappy-bird-deep-q-learning')
!ls
#sys.path.append('game/')
os.chdir('game')
!pip install pygame
import pygame
DISPLAY = True
if not DISPLAY:
os.environ["SDL_VIDEODRIVER"] = "dummy"
```
# Alternative Part- To run From Google Drive
```
import os
os.environ['SDL_VIDEODRIVER']='dummy'
import pygame
pygame.display.set_mode((640,480))
pygame.init()
pygame.display.init()
print( pygame.display.list_modes() )
!ls
os.getcwd()
!ls
sys.path.append('game/')
```
#Loading *flappy_wrapped* from drive
## Change File Paths
Please upload the code to the Google shared drive and change the paths accordingly.
```
# the base Google Drive directory
root_dir = "/content/drive/Shared drives/"
# path to save the project files in google drive
project_folder = "MeanSquare-Drive/RL-assignment/flappy-bird-deep-q-learning/"
#folder_path in shared drive
folder_path = '/content/drive/Shared drives/MeanSquare-Drive/RL-assignment/flappy-bird-deep-q-learning'
import os
# Set your working directory to a folder in your Google Drive. This way, if your notebook times out,
# your files will be saved in your Google Drive!
root_dir = root_dir
project_folder = project_folder
def create_and_set_working_directory(project_folder):
# check if your project folder exists. if not, it will be created.
if os.path.isdir(root_dir + project_folder) == False:
os.mkdir(root_dir + project_folder)
print(root_dir + project_folder + ' did not exist but was created.')
# change the OS to use your project folder as the working directory
os.chdir(root_dir + project_folder)
# create a test file to make sure it shows up in the right place
!touch 'new_file_in_working_directory.txt'
print('\nYour working directory was changed to ' + root_dir + project_folder + \
"\n\nAn empty text file was created there. You can also run !pwd to confirm the current working directory." )
create_and_set_working_directory(project_folder)
import sys
sys.path.append(folder_path)
os.getcwd()
import flappy_wrapped as game
```
# KERNEL
```
KERNEL = np.array([[-1,-1,-1], [-1, 9,-1],[-1,-1,-1]])
def processFrame(frame):
frame = frame[55:288,0:400] #crop image
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY) #convert image to grayscale
frame = cv2.resize(frame,(84,84),interpolation=cv2.INTER_AREA)
_ , frame = cv2.threshold(frame,50,255,cv2.THRESH_BINARY)
#frame = cv2.blur(frame,(5,5))
frame = cv2.filter2D(frame,-1,KERNEL)
#frame = cv2.Canny(frame,100,200)
frame = frame.astype(np.float64)/255.0
return frame
```
# Dueling DQN
```
import torch
import torch.nn as nn
import torch.optim as optim
#Dueling DQN
class DDQN(nn.Module):
def __init__(self,input_shape,nactions):
super(DDQN,self).__init__()
self.nactions = nactions
self.conv = nn.Sequential(
nn.Conv2d(input_shape[0],32,kernel_size=4,stride=2),
nn.ReLU(),
nn.Conv2d(32,64,kernel_size=3,stride=2),
nn.ReLU(),
nn.Conv2d(64,64,kernel_size=2,stride=1),
nn.ReLU()
)
conv_out_size = self._get_conv_out(input_shape)
#scs - action network with 'nactions with q value' as output
self.fca = nn.Sequential(
nn.Linear( conv_out_size, 512),
nn.ReLU(),
nn.Linear( 512, nactions )
)
#scs - value network with single value output
self.fcv = nn.Sequential(
nn.Linear(conv_out_size,512),
nn.ReLU(),
nn.Linear(512,1)
)
def _get_conv_out(self,shape):
o = self.conv( torch.zeros(1,*shape) )
return int(np.prod(o.size()))
def forward(self,x):
conv_out = self.conv(x).view(x.size()[0], -1)
action_v = self.fca(conv_out)
value_v = self.fcv(conv_out).expand(x.size(0), self.nactions)
#scs aggregation layer: Q(s,a) = V(s) + A(s,a)- mean(A(s,a))
return value_v + action_v - action_v.mean(1).unsqueeze(1).expand(x.size(0), self.nactions)
```
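The aggregation layer at the end of `forward` can be sanity-checked with plain numpy: subtracting the mean advantage makes the value/advantage decomposition identifiable, so the Q-values for a state average back to V(s). A small worked example with made-up numbers:

```
import numpy as np

value = np.array([[1.5]])            # V(s) for a single state
advantage = np.array([[0.3, -0.1]])  # A(s,a) for the two actions

# Q(s,a) = V(s) + A(s,a) - mean_a A(s,a), matching the aggregation in DDQN.forward
q = value + advantage - advantage.mean(axis=1, keepdims=True)
```

Here q works out to [[1.7, 1.3]], whose mean is exactly V(s) = 1.5.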
# Hyperparam
```
ACTIONS = [0,1]
EXPERIENCE_BUFFER_SIZE = 2000 #2000-> update 4000, 20000,500
STATE_DIM = 4
GAMMA = 0.99 #0.99 # -> update
EPSILON_START = 1
EPSILON_FINAL = 0.001
EPSILON_DECAY_FRAMES = (10**4)/3 # -> update
MEAN_GOAL_REWARD = 10 #10 # +1 for escaping death
BATCH_SIZE = 32 #-> update
MIN_EXP_BUFFER_SIZE = 500 #
SYNC_TARGET_FRAMES = 30 #15,60, 90tau ,go in times of 30 or 30/2 value- change this value for different sync time with target_net
LEARNING_RATE = 1e-4 #-> update
SKIP_FRAME = 2 #->update
INITIAL_SKIP = [0,1,0,1,0,1,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,0,0,1,0,1,0,1,0,1,0,1,0,1]
```
# Prioritized Experience Buffer
```
import collections
class ExperienceBuffer():
def __init__(self,capacity):
self.buffer = collections.deque(maxlen=capacity)
self.priority = collections.deque(maxlen=capacity)
def clear(self):
self.buffer.clear()
self.priority.clear()
def __len__(self):
return len(self.buffer)
def append(self,exp,p):
self.buffer.append(exp)
self.priority.append(p)
def sample(self,batch_size):
#scs- probability or priority computation
probs = np.array(self.priority)/sum(np.array(self.priority))
indices = np.random.choice( range(len(self.buffer)), batch_size, p = probs)
states,actions,rewards,dones,next_states = zip(*[ self.buffer[idx] for idx in indices ])
return np.array(states), np.array(actions), np.array(rewards, dtype=np.float32),\
np.array(dones,dtype=np.uint8), np.array(next_states)
```
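The sampling rule in `ExperienceBuffer.sample` boils down to drawing indices in proportion to priority. A minimal standalone sketch (with made-up priority values) shows that high-priority transitions dominate the sampled batches:

```
import numpy as np

rng = np.random.default_rng(0)
priorities = np.array([1e-4, 0.5, 1.0, 0.2])  # |TD error| + small constant, as in Agent.step
probs = priorities / priorities.sum()

# draw 10000 samples; high-priority transitions are picked far more often
counts = np.bincount(rng.choice(len(priorities), size=10000, p=probs), minlength=4)
```

The transition with priority 1.0 ends up in roughly 59% of the draws, while the near-zero-priority one is almost never sampled.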
# Agent
```
class Agent():
def __init__(self,env,buffer,state_buffer_size = STATE_DIM):
self.env = env
self.exp_buffer = buffer
self.state = collections.deque(maxlen = STATE_DIM)
self.next_state= collections.deque(maxlen = STATE_DIM)
self._reset()
def _reset(self):
self.total_rewards = 0
self.state.clear()
self.next_state.clear()
for i in INITIAL_SKIP[:-7]:
frame,reward,done = self.env.frame_step(i)
self.total_rewards+=reward
if done:
self._reset()
frame = processFrame(frame)
self.state.append(frame)
self.next_state.append(frame)
for i in INITIAL_SKIP[-7:-5]:
frame,reward,done = self.env.frame_step(i)
self.total_rewards+=reward
if done:
self._reset()
frame = processFrame(frame)
self.state.append(frame)
self.next_state.append(frame)
for i in INITIAL_SKIP[-5:-3]:
frame,reward,done = self.env.frame_step(i)
self.total_rewards+=reward
if done:
self._reset()
frame = processFrame(frame)
self.state.append(frame)
self.next_state.append(frame)
for i in INITIAL_SKIP[-3:-1]:
frame,reward,done = self.env.frame_step(i)
self.total_rewards+=reward
if done:
self._reset()
frame = processFrame(frame)
self.state.append(frame)
self.next_state.append(frame)
def step(self,net,tgt_net,epsilon=0.9,device='cpu'):
self.total_rewards = 0
if np.random.random() < epsilon:
#scs random exploration
action = np.random.choice(ACTIONS)
else:
#scs taking state and action
state_v = torch.tensor(np.array([self.state],copy=False),dtype=torch.float32).to(device)
action = int(torch.argmax(net(state_v)))
frame,reward,done = self.env.frame_step(action)
self.total_rewards += reward
for _ in range(SKIP_FRAME):
frame,reward,done = self.env.frame_step(action)
self.total_rewards += reward
if done:
break
frame = processFrame(frame)
self.next_state.append(frame)
if len(self.next_state)==STATE_DIM and len(self.state)==STATE_DIM:
#scs PER - Prioritized Experience Replay
o = net( torch.tensor( np.array([self.state]),dtype=torch.float32).to(device)).to('cpu').detach().numpy()[0][action]
e = float(torch.max(tgt_net( torch.tensor( np.array([self.next_state]),dtype=torch.float32).to(device))))
#scs priority
p = abs(o-e)+0.0001
#print("priority ", p)
#scs (exp = state, action, reward, state_next), priority: storing value to the PER
self.exp_buffer.append((self.state.copy(),action,int(self.total_rewards),done,self.next_state.copy()),p)
self.state.append(frame)
end_reward = int(self.total_rewards)
if done:
self._reset()
return end_reward
```
# Loss
```
def calc_loss(batch,net,tgt_net,device='cpu'):
states,actions,rewards,dones,next_states = batch
states_v = torch.tensor(states,dtype=torch.float32).to(device)
actions_v = torch.tensor(actions,dtype=torch.long).to(device)
rewards_v = torch.tensor(rewards).to(device)
dones_v = torch.ByteTensor(dones).to(device)
next_states_v = torch.tensor(next_states,dtype=torch.float32).to(device)
state_action_values = net(states_v).gather(1, actions_v.unsqueeze(-1)).squeeze(-1) # Q(S,A) = V + A
next_state_action_values = tgt_net(next_states_v).max(1)[0] # max(Q(A))
next_state_action_values[dones_v] = 0.0
next_state_action_values = next_state_action_values.detach()
expected_values = rewards_v + next_state_action_values * GAMMA # scs rewards + max(Q(A)) * discount_factor
return nn.MSELoss()(state_action_values,expected_values) #loss computation: Target- Computed
```
# Training
```
all_losses = []
device = torch.device( "cuda" if torch.cuda.is_available() else "cpu" )
#Double Dueling DQN
#scs primary network
net = DDQN( (STATE_DIM,84,84), len(ACTIONS) ).to(device)
#print("net: ", net)
#target network
tgt_net = DDQN( (STATE_DIM,84,84), len(ACTIONS) ).to(device)
#print("tgt_net: ",tgt_net)
env = game.GameState()
#print("env: ",env)
buffer = ExperienceBuffer(EXPERIENCE_BUFFER_SIZE)
#print("buffer: ",buffer)
agent = Agent(env,buffer)
#print("agent: ",agent)
epsilon = EPSILON_START
#print("epsilon: ",epsilon)
optimizer = optim.Adam(net.parameters(),lr=LEARNING_RATE)
#print("optimizer: ",optimizer)
# initializing
total_rewards = []
best_mean_reward = float('-inf')
last_mean = float('-inf')
game_id = 0
while True:
    epsilon = max(EPSILON_FINAL, EPSILON_START - game_id/EPSILON_DECAY_FRAMES)
    reward = agent.step(net, tgt_net, epsilon, device=device)
    if reward != 0:
        game_id += 1
        total_rewards.append(reward)
        mean_reward = np.mean(total_rewards[-100:])  # mean over the last 100 games
        if game_id % 5 == 0:  # report every 5 games
            print("GAME : {} | EPSILON : {:.4f} | MEAN REWARD : {}".format(game_id, epsilon, mean_reward))
        if best_mean_reward < mean_reward:
            best_mean_reward = mean_reward
            if best_mean_reward - last_mean >= 0.1:
                torch.save(net.state_dict(), 'model/flappy_best_model.dat')
                print("REWARD {} -> {}. Model Saved".format(last_mean, mean_reward))
                last_mean = best_mean_reward
        if game_id % SYNC_TARGET_FRAMES == 0:  # sync: copy primary network weights into the target network
            tgt_net.load_state_dict(net.state_dict())
        if mean_reward >= MEAN_GOAL_REWARD:
            print("Learned in {} Games.".format(game_id))
            break
    if len(buffer) < MIN_EXP_BUFFER_SIZE:
        continue
    optimizer.zero_grad()
    batch = buffer.sample(BATCH_SIZE)
    loss_t = calc_loss(batch, net, tgt_net, device=device)
    all_losses.append(float(loss_t))
    loss_t.backward()
    optimizer.step()
```
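The linear ε-decay used at the top of the loop can be examined in isolation. The constants below are illustrative stand-ins, not the notebook's actual hyperparameters:

```python
# Hypothetical values for illustration; the real constants are defined earlier in the notebook
EPSILON_START, EPSILON_FINAL, EPSILON_DECAY_FRAMES = 1.0, 0.02, 100_000

def epsilon_at(game_id):
    """Linear decay from EPSILON_START toward EPSILON_FINAL over EPSILON_DECAY_FRAMES games."""
    return max(EPSILON_FINAL, EPSILON_START - game_id / EPSILON_DECAY_FRAMES)

print(epsilon_at(0))        # 1.0
print(epsilon_at(50_000))   # 0.5
print(epsilon_at(500_000))  # floor reached: 0.02
```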
# Plot on Result
```
mean_rewards2 = []
#'https://github.com/s-c-soma/RL_Project_FlappyBird_D3QN/blob/main/result_csv/d3qn_mean%20reward.csv'
file_path= '/content/drive/Shared drives/MeanSquare-Drive/RL-Project/csvfile/d3qn.csv'
with open(file_path, 'r') as f:
    for x in f.readlines():
        if x != '\n':
            mean_rewards2.append(float(x.replace('\n', '')))
print(len(mean_rewards2))
```
## Draw Plot
```
import matplotlib.pyplot as plt
plt.title("D3QN")
plt.plot(mean_rewards2, label='DDDQN')
#plt.plot(moving_average(rewards_ql), label='qlearning')
plt.grid()
plt.legend()
plt.ylim(-1, 2)
plt.show()
```
# Run Model
```
import play_game
os.getcwd()
!ls
#!python 'play_game.py' --model 'checkpoints/flappy_best_model_game.dat'
```
# References
* https://www.fromkk.com/posts/using-ddqn-to-play-flappy-bird/
* https://towardsdatascience.com/use-reinforcement-learning-to-train-a-flappy-bird-never-to-die-35b9625aaecc
* https://towardsdatascience.com/practical-reinforcement-learning-02-getting-started-with-q-learning-582f63e4acd9
* http://cs229.stanford.edu/proj2015/362_report.pdf
* https://github.com/adityajn105
* https://www.freecodecamp.org/news/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8/
* https://www.freecodecamp.org/news/improvements-in-deep-q-learning-dueling-double-dqn-prioritized-experience-replay-and-fixed-58b130cc5682/
* https://markelsanz14.medium.com/introduction-to-reinforcement-learning-part-4-double-dqn-and-dueling-dqn-b349c9a61ea1
* https://arxiv.org/pdf/1511.06581.pdf
* https://arxiv.org/abs/1509.06461
# Math and Statistics Review for ML
Using the smallpox data set, review relevant mathematical and statistical methods commonly used in machine learning. An example will be shown using the Utah data. Choose another state and perform the same operations on the data for that state.
```
import pandas as pd
import numpy as np
data = pd.read_csv('http://apmonitor.com/pds/uploads/Main/smallpox.txt')
data.head()
```
### Summary Statistics with Pandas describe() Function
Pandas imports data, generates summary statistics, and manipulates data tables. There are many functions that allow efficient manipulation for the preliminary steps of data analysis problems. The `data.describe()` command gives a table of summary statistics.
```
data.describe()
```
### Summary Statistics with Pandas Profiling
Pandas Profiling generates a more detailed analysis than the pandas describe function, producing an overview of the data with only a couple lines of code. Run the code to generate the profile of the data and view it either as a Notebook widget or as an html file, then explore the data to determine the relationships between the variables. The `minimal=True` option skips the analysis sections (such as the correlation plots) that take a long time with large data sets.
```
from pandas_profiling import ProfileReport
profile = ProfileReport(data, explorative=True, minimal=True)
try:
    profile.to_widgets()  # view as widget in Notebook
except:
    profile.to_file('smallpox.html')  # save as html file
```
# Mathematics
### Scalars, Vectors, and Matrices
**Scalars** are simple numerical values, and are typically denoted by an italicized letter, like _x_ or _a_.
**Vectors** are ordered lists of scalar values, and are denoted with bold letters, such as **x** or **w**. The specific attribute of a vector is denoted by a superscript, like so: _w_$^{(j)}$. For example, the vector **w** = [3,2] has the attribute _w_$^{(1)}$ = 3.
**Matrices** are rectangular arrays of numbers arranged in rows or columns, and are denoted with bold capital letters, such as **A** or **W**.
In Python, matrices and vectors are called "arrays."
```
UT = data[data['state'] == 'UT']
# Create a matrix containing data from the 'cases' and 'indicence_per_capita' columns
matrix = UT[['cases','incidence_per_capita']].values
print('Matrix:',matrix)
print('Dimensions:',matrix.shape)
# The vector is composed of the values in the '0'
# column (first column) of the matrix
# Use the : to get all values in the row, and
# the 0 to get all values in the '0' column
vector = matrix[:,0]
print('Vector:',vector)
print('Dimension:',vector.shape)
# Create a scalar out of the 6th attribute of the vector
# Python starts counting at 0, so the [5]
# indicates the 6th value in the array
scalar = vector[5]
print('Scalar:',scalar)
print('Dimension:',scalar.shape)
```
**Activity:**
1. Create a matrix **X** out of the 'cases' and 'incidence_per_capita' columns of a state of your choice
2. Create a vector **x** containing all the values in the 'incidence_per_capita' column of **X**
3. Create a scalar _x_$^{(12)}$ from the 12th value in **x** (remember to use x[11])
4. Find the dot product between the 2 distinct vectors in matrix **X**. It may be helpful to import numpy and use np.dot()
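For item 4, a sketch with a small made-up matrix (the values are illustrative, not the smallpox data):

```python
import numpy as np

# Small illustrative matrix standing in for X = state[['cases','incidence_per_capita']].values
X = np.array([[3.0, 1.5],
              [2.0, 1.0],
              [5.0, 2.5]])

# The two distinct (column) vectors of X
cases, incidence = X[:, 0], X[:, 1]
print(np.dot(cases, incidence))  # 3*1.5 + 2*1.0 + 5*2.5 = 19.0
```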
### Summations and Products
It's sometimes useful to sum or multiply all attributes of a vector. The notation for summations over a vector **x** = [x$^{(1)}$,x$^{(2)}$,...x$^{(n-1)}$,x$^{(n)}$] looks like this:
$$\sum_{i=1}^nx^{(i)} = x^{(1)} + x^{(2)} + ...+x^{(n-1)} + x^{(n)}$$
Similarly, the product is summarized using the capital pi notation:
$$\prod_{i=1}^nx^{(i)} = x^{(1)} \cdot x^{(2)} \cdot ...\cdot x^{(n-1)} \cdot x^{(n)}$$
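Both notations map directly onto NumPy functions:

```python
import numpy as np

x = np.array([1, 2, 3, 4])
print(np.sum(x))   # 1 + 2 + 3 + 4 = 10
print(np.prod(x))  # 1 * 2 * 3 * 4 = 24
```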
```
# Find the total number of smallpox cases over all time in Utah
UT_sum = np.sum(UT['cases'])
print('Sum:',UT_sum)
```
**Activity**: Find the total number of cases in another state.
### Derivatives and Gradients
You should be familiar with analytical derivatives and gradients from calculus courses. While these are most commonly used behind the scenes to solve problems, it's useful to know about these for two reasons. First, you should understand how a numerical derivative works. Second, you may find it beneficial to use the numerical derivative as an additional feature for your machine learning model. You can calculate the numerical derivative as follows:
```
# Be sure the data is sorted according to week
UT = UT.sort_values(by=['week'])
# Find the difference in cases for each consecutive week
UT['dcases'] = UT['cases'].diff()
# Find the difference in weeks for each consecutive week
# should be 1, except for when there are missing weeks
UT['dweeks'] = UT['week'].diff()
# Find the numerical derivative, dcases/dweeks
UT['dcases/dweeks'] = UT['dcases'] / UT['dweeks']
# Plot the numerical derivative
UT['dcases/dweeks'].plot()
```
**Activity:**
1. Find the weekly difference in smallpox cases for your state.
2. Calculate the numerical derivative (finite difference) for weekly smallpox cases.
3. Plot the numerical derivative for weekly smallpox cases. Interpret the graph.
Repeat the activity by looking at the difference in cases over a time period of 10 weeks. You can use the argument `periods = 10` inside of the `.diff()` function.
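The `periods=10` variant follows the same pattern as the weekly code above; a sketch with an illustrative series standing in for the sorted `cases` column:

```python
import numpy as np
import pandas as pd

# Illustrative series standing in for UT['cases'] sorted by week
cases = pd.Series(np.arange(20, dtype=float))

# Difference over a 10-week window instead of consecutive weeks
dcases_10 = cases.diff(periods=10)
print(dcases_10.iloc[10])  # each value differs from the one 10 steps earlier by 10.0
```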
# Statistics
### Distributions
The distribution describes the makeup of the data in each column. Distributions can be either discrete (set numbers) or continuous (all numbers in a range are possible). Plotting a distribution allows you not only to see what the data look like, but also to determine if there are suspicious values or if you need to do additional preprocessing.
```
# Plot the distribution of 'incidence_per_capita'; use only non-zero values
UT[UT['incidence_per_capita'] != 0]['incidence_per_capita'].hist()
```
**Activity:**
1. Plot the distribution of `'incidence_per_capita'` for your state of choice.
2. Sometimes the log scaled values are more useful or give better results. Log scale the `'incidence_per_capita'` and observe the difference.
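For item 2, one common approach is to log-scale the values before plotting. Base 10 is used here as an assumption; the natural log works equally well:

```python
import numpy as np
import pandas as pd

# Illustrative non-zero incidence values
incidence = pd.Series([0.1, 1.0, 10.0, 100.0])
log_incidence = np.log10(incidence)
print(log_incidence.tolist())  # approximately [-1.0, 0.0, 1.0, 2.0]
# log_incidence.hist() would then show a far less stretched distribution
```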
### Mean and standard deviation
These values are useful in describing a normal distribution. There are also other potentially useful statistical measures (such as kurtosis and skew, which describe deviations from a normal distribution). These statistical measures are not only useful for describing data, but you can potentially use them as features in your machine learning model.
```
print('All Utah cases')
UT['cases'].describe()
UT_non0 = UT[UT['cases'] != 0]
print('Non-zero Utah cases')
UT_non0['cases'].describe()
```
**Activity:**
1. Display the mean, standard deviation, quartiles, max, and min for the data in another state besides Utah.
2. Save the statistics as variables for later use. Use `UT['cases'].mean()`, `.std()`, `.quantile()`, `.max()`, and `.min()`.
3. Find the kurtosis and skew of the data. Use `.kurtosis()` and `.skew()`
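A sketch of items 2 and 3 on an illustrative series (not the smallpox data):

```python
import pandas as pd

cases = pd.Series([0, 1, 2, 3, 4, 100])

# Save the statistics as plain variables for later use
mean, std = cases.mean(), cases.std()
q3 = cases.quantile(0.75)
lo, hi = cases.min(), cases.max()
print(mean, lo, hi)      # approximately 18.33, 0, 100
print(cases.skew() > 0)  # True: the outlier at 100 skews the data to the right
```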
```
import os
import random
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
from scipy.io import wavfile
import collections
DATA_DIR = "/path/to/downloaded/WHAM/Vox/datasets"
WORKING_DIR = "/path/to/working/dir"
```
# WHAM Noise
Analyse the metadata of the WHAM Noise test set.
https://wham.whisper.ai/
We did not use the WHAM dataset at all while training our network.
The WHAM Dataset contains an estimate of the SNR of the included recordings.
We only include files with an estimated SNR lower than a certain threshold, to reduce the chance of audible speech being present in the noise.
```
# Load test set metadata.
wham_test = pd.read_csv(os.path.join(DATA_DIR, "wham_noise", "metadata", "noise_meta_tt.csv"))
print(f"Number of WHAM test noise files: {len(wham_test):d}")
# Plot estimated SNR values.
ax = wham_test["noise_snr"].plot(kind="kde")
ax.set_xlabel("estimated SNR [dB]")
ax.grid()
```
We decided on a threshold of -12 dB. The SNR provided in the WHAM! metadata was obtained using the iZotope RX speech isolation functionality, which can result in inaccurate estimates, especially for very noisy files.
Using our in-house SNR estimation technique, we observed that specifying an SNR cutoff of -12 dB leads to most files actually having an SNR below -6 dB.
```
threshold_snr_wham = -12
wham_test = wham_test[wham_test["noise_snr"] <= threshold_snr_wham]
print(f"Number of files with estimated SNR below threshold of {threshold_snr_wham:d}dB: {len(wham_test)}.")
wham_test.head()
```
Calculate and plot the duration of all snippets.
```
# Calculate the duration of all snippets.
duration = {}
for index, row in wham_test.iterrows():
    fs, audio_data = wavfile.read(os.path.join(DATA_DIR, "wham_noise", "tt", row["utterance_id"]))
    duration[index] = audio_data.shape[0] / fs
duration = pd.DataFrame.from_dict(duration, orient="index", columns=["duration"])
wham_test = wham_test.join(duration)
print("Min duration:", float(duration.min()))
print("Max duration:", float(duration.max()))
print("Average duration:", float(duration.mean()))
wham_test.head()
ax = wham_test.duration.plot(kind="kde")
ax.set_xlabel("Duration [s]")
ax.grid()
```
# VoxCeleb2 speech files
Analyse metadata of the VoxCeleb2 test set.
https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html
We do not use any samples of the test set of VoxCeleb2 during training of our neural network.
We calculated the duration and the estimated SNR (using our in-house SNR estimation technique) of VoxCeleb2 test set samples and stored this information in a CSV file.
```
# This csv contains all the files in the test set.
vox2 = pd.read_csv(os.path.join(WORKING_DIR, "estimated_snr_vox2.csv"), header=0)
vox2["speaker_id"] = vox2["file"].map(lambda x: x.split("/")[0])
print("Number of total test examples:", len(vox2))
vox2.head()
# Plot SNR and duration.
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(6, 8))
vox2["snr_estimate"].plot(kind="kde", ax=axes[0])
axes[0].grid()
axes[0].set_xlim(0, 30)
axes[0].set_xlabel("Estimated SNR [dB]")
axes[0].set_title("Estimated SNR of VoxCeleb2 test set samples")
vox2["duration"].plot(kind="kde", ax=axes[1])
axes[1].grid()
axes[1].set_xlabel("Duration [s]")
axes[1].set_xlim(0, 10)
axes[1].set_title("Duration of VoxCeleb2 test set samples")
fig.tight_layout()
```
Keep only files from the Vox2 data with high duration and SNR. There are so many that we can afford
to be picky.
```
# Use these threshold values
threshold_snr_vox2 = 20 # [dB]
minimum_duration = 8 # [seconds]
vox2 = vox2[(vox2["duration"] >= minimum_duration) & (vox2["snr_estimate"] >= threshold_snr_vox2)]
print("Number of samples meeting the requirements:", len(vox2))
print("Number of distinct speakers:", vox2["speaker_id"].nunique())
```
Plot number of examples per speaker. Some speakers have more examples than others.
```
vox2_speakers = vox2.groupby("speaker_id").count()["file"]
fig, *axes = plt.subplots()
vox2_speakers.plot(kind="hist", ax=axes[0], bins=20)
axes[0].set_xlabel("Number of examples per speaker")
```
Load metadata of speakers. We use this to associate speaker ID with gender and make sure
that in the end we have a gender balanced dataset.
```
meta = pd.read_csv(os.path.join(WORKING_DIR, "vox2_meta.csv"))
meta = meta.applymap(lambda x: x.strip(" "))
meta.columns = meta.columns.map(lambda x: x.strip(" "))
meta = meta.set_index("VoxCeleb2 ID")
meta = meta[meta.index.isin(set(vox2_speakers.index))]
meta.head()
# Check gender balance of total test set.
gender_balance = (meta.groupby("Gender").count() / len(meta)).iloc[:, 0]
gender_balance
```
# SNR distribution
Try to approximate the distribution from [1]. This is the SNR distribution of typical situations people with hearing loss encounter during the day.
#### References:
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5824438/figure/F4/
```
dist = stats.norm(loc=8, scale=7)
x = np.linspace(-20, 40, 1000)
y = dist.pdf(x)
fig, *axes = plt.subplots(nrows=1, ncols=1)
axes[0].plot(x, y)
axes[0].grid()
axes[0].set_xlabel("SNR [dB]")
axes[0].set_title("WHAMVox SNR distribution")
```
# WHAMVox
Now we want to select speech files, noise files and SNR values to create the actual WHAMVox eval set.
For the noise files from WHAM, selection is easy. We just take the files from the dataframe.
```
NUM_SAMPLES = 1941
SEED = 42
prefix_VoxCeleb = "aac"
prefix_WHAM = "tt"
files_WHAM = sorted(wham_test["utterance_id"].tolist())[:NUM_SAMPLES]
rand_WHAM = random.Random(SEED)
rand_WHAM.shuffle(files_WHAM)
print("Number of WHAM files:", len(files_WHAM))
```
For the speech from Vox, we want a balanced gender distribution and not too many files from a single speaker.
```
# Get a separate list of all files for each gender.
vox2_files = vox2.groupby("speaker_id")["file"].apply(list)
vox2_files = vox2_files.to_frame().join(meta["Gender"])
vox2_files = vox2_files.groupby("Gender")["file"].apply(sum).to_frame()
vox2_files["num_files"] = vox2_files["file"].apply(len)
vox2_files
# Take all the female files (since there are fewer of them) and then take an equal
# number of male files, then shuffle. This does not guarantee an exact 50/50 split, but comes close.
files_VoxCeleb = []
files_VoxCeleb += vox2_files.loc["f", "file"]
files_VoxCeleb += vox2_files.loc["m", "file"][:len(files_VoxCeleb)]
rand_vox = random.Random(SEED + 7)
rand_vox.shuffle(files_VoxCeleb)
# We need to replace the extension from wav to m4a because when estimating the SNR, the
# file extension was wav (due to format conversion).
files_VoxCeleb = [file.replace(".wav", ".m4a") for file in files_VoxCeleb]
print("Number of VoxCeleb files:", len(files_VoxCeleb))
```
Sample SNR values. These will be the _default_ SNR values, but can be adjusted by
the dataset generation script.
```
snr_values = dist.rvs(random_state=SEED, size=(NUM_SAMPLES,))
plt.hist(snr_values, bins=20)
ax = plt.gca()
ax.set_xlabel("SNR [dB]")
ax.set_ylabel("Count")
ax.set_title("WHAMVox sampled SNR values")
```
Save file combination to CSV together with gender and suggested SNR.
```
MixedFile = collections.namedtuple(
"MixedFile", ("file_noise", "file_speech", "SNR", "gender", "speaker_id")
)
files = []
for i in range(NUM_SAMPLES):
    file_speech = os.path.join(prefix_VoxCeleb, files_VoxCeleb[i])
    file_noise = os.path.join(prefix_WHAM, files_WHAM[i])
    snr = snr_values[i]
    speaker_id = files_VoxCeleb[i].split("/")[0]
    gender = meta.loc[speaker_id, "Gender"]
    files.append(
        MixedFile(
            file_noise=file_noise,
            file_speech=file_speech,
            SNR=snr,
            gender=gender,
            speaker_id=speaker_id,
        )
    )
files = pd.DataFrame(files)
files.head()
# Add the URL to the youtube videos from which the speech snippets are extracted. This is mostly for reference.
files['speech_youtube_url'] = files.file_speech.apply(lambda x: "https://www.youtube.com/watch?v="+x.split('/')[2])
files.head()
```
Verify that gender balance is okay.
```
files.groupby(["gender"]).nunique()
# Save to CSV
files.to_csv(os.path.join(WORKING_DIR, "WHAMVox_test.csv"), index=False)
```
# Gas Mixtures: Perfect and Semiperfect Models
This Notebook is an example of how to declare and use *Gas Mixtures* with **pyTurb**. Gas mixtures in **pyTurb** are treated as a combination of the different gas objects available in **pyTurb**:
- *PerfectIdealGas*: Ideal Equation of State ($pv=R_gT$) and constant $c_p$, $c_v$, $\gamma_g$
- *SemiperfectIdealGas*: Ideal Equation of State and $c_p\left(T\right)$, $c_v\left(T\right)$, $\gamma_g\left(T\right)$ as a function of temperature
The *Gas Mixture* class and the rest of the gas models can be found at the following folder:
- pyturb
- gas_models
- thermo_prop
- PerfectIdealGas
- SemiperfectIdealGas
- GasMixture
```python
from pyturb.gas_models import GasMixture
from pyturb.gas_models import PerfectIdealGas
from pyturb.gas_models import SemiperfectIdealGas
```
When the `GasMixture` object is imported the gas model must be selected: The mixture can be treated as a *Perfect Gas* or *Semiperfect Gas*. Note that both options are *ideal* gases (the *ideal equation of state* $pv=R_gT$ is available). Thus:
- If the gas is Perfect: $c_v, c_p, \gamma_g \equiv constant$
- If the gas is Semiperfect: $c_v(T), c_p(T), \gamma_g(T) \equiv f(T)$
To choose one of the gas models simply specify it when creating the Gas Mixture object:
```python
gas_mix_perfect = GasMixture(gas_model='Perfect')
gas_mix_semiperfect = GasMixture(gas_model='Semiperfect')
```
Note that 'gas_model' options are not case sensitive e.g. `Semi-perfect`, `semiperfect` or `Semiperfect` yield the same result.
A *gas mixture* can be defined adding the gas species that conform the mixture. For that purpose, the method `add_gas` can be used:
```python
gas_mix = GasMixture()
gas_mix.add_gas(species, moles=quantity)
gas_mix.add_gas(species, mass=quantity)
```
Note that the gas species (pure substance) specified in `species` must be available as a `PerfectIdealGas` or `SemiperfectIdealGas`. The gas availability can be checked using the `is_available` function at `ThermoProperties`.
When using `add_gas`, the quantity of the gas to be added must be specified. This can be done by introducing the moles or the mass of the gas. For example, if a mixture of $1.5mol$ of $Ar$ and $3.5mol$ of $He$ is intended:
```python
gas_mix = GasMixture(gas_model='Perfect')
gas_mix.add_gas('Ar', moles=1.5)
gas_mix.add_gas('He', moles=3.5)
```
Whilst a mix of $500g$ of $O_2$ and $500g$ of $H_2$ would be:
```python
gas_mix = GasMixture(gas_model='Perfect')
gas_mix.add_gas('O2', mass=0.5)
gas_mix.add_gas('H2', mass=0.5)
```
Finally, the gas mixture provides the same outputs of a `PerfectIdealGas` or `SemiperfectIdealGas`, plus the molar and mass fractions:
- **Gas properties:** Ru, Rg, Mg, cp, cp_molar, cv, cv_molar, gamma
- **Gas enthalpies, moles and mass:** h0, h0_molar, mg, Ng
- **Mixture condition:** Molar fraction, mass fraction
---
### Gas Mixture example:
Let's create a mixture of Perfect Gases, with $500g$ of $O_2$ and $500g$ of $H_2$
```
from pyturb.gas_models import GasMixture
gas_mix = GasMixture(gas_model='Perfect')
gas_mix.add_gas('O2', mass=0.5)
gas_mix.add_gas('H2', mass=0.5)
```
To inspect the gas mixture conditions, we can use the *Pandas DataFrame* `mixture_gases` contained in the object:
```
gas_mix.mixture_gases
```
Note that the `mixture_gases` dataframe contains the information of the mixture: amount of moles, amount of mass, molar and mass fractions, and the objects containing the pure substance information.
---
It is also possible to create a gas mixture by defining moles:
```
gas_mix2 = GasMixture(gas_model='Perfect')
gas_mix2.add_gas('O2', moles=0.5)
gas_mix2.add_gas('H2', moles=0.5)
gas_mix2.mixture_gases
```
One can also define the mixture by specifying some pure substances as moles and others as mass:
```
gas_mix3 = GasMixture(gas_model='Perfect')
gas_mix3.add_gas('O2', mass=0.5)
gas_mix3.add_gas('H2', moles=0.121227)
gas_mix3.mixture_gases
```
Note that `gas_mix` and `gas_mix3` are equivalent.
---
### Perfect Air as a mixture
In this example we will create a gas mixture following the air composition (as a perfect mix of oxygen, nitrogen, argon and carbon dioxide) and we will compare it to the 'Air' substance from `PerfectIdelGas`.
>Note that **Air** is an available gas in the *NASA Glenn* coefficients and is therefore available as a `PerfectIdealGas` and as a `SemiperfectIdealGas`.
>Thus there is no need to declare Air as a gas mixture from pyTurb. However, for the sake of clarity, we will compare both mixtures.
From the `PerfectIdealGas` class:
```
from pyturb.gas_models import PerfectIdealGas
air_perfgas = PerfectIdealGas('Air')
print(air_perfgas.thermo_prop)
```
And now, applying a mixture of molar quantities (per unit mole):
- Diatomic Oxygen: $O_2$ 20.9476\%
- Diatomic nitrogen: $N_2$ 78.0840\%
- Argon: $Ar$ 0.9365\%
- Carbon dioxide: $CO_2$ 0.0319\%
```
pyturb_mix = GasMixture('Perfect')
pyturb_mix.add_gas('O2', 0.209476)
pyturb_mix.add_gas('N2', 0.78084)
pyturb_mix.add_gas('Ar', 0.009365)
pyturb_mix.add_gas('CO2', 0.000319)
```
Therefore, the mixture is composed of:
```
pyturb_mix.mixture_gases
```
Where the gas constant, heat capacity at constant pressure, heat capacity at constant volume and the heat capacity ratio are:
```
print('pyTurb air mixture: Rair={0:6.1f}J/kg/K; cp={1:6.1f} J/kg/K; cv={2:6.1f} J/kg/K; gamma={3:4.1f}'.format(pyturb_mix.Rg, pyturb_mix.cp(), pyturb_mix.cv(), pyturb_mix.gamma()))
print('Perfect air: Rair={0:6.1f}J/kg/K; cp={1:6.1f} J/kg/K; cv={2:6.1f} J/kg/K; gamma={3:4.1f}'.format(air_perfgas.Rg, air_perfgas.cp(), air_perfgas.cv(), air_perfgas.gamma()))
```
---
### Semiperfect Gas Mixture
Following the last example, a semiperfect model can be used by simply changing the `gas_model` option:
```
# Objective temperature:
T = 1500 #K
# Gas mixture:
pyturb_mix_sp = GasMixture('Semiperfect')
pyturb_mix_sp.add_gas('O2', 0.209476)
pyturb_mix_sp.add_gas('N2', 0.78084)
pyturb_mix_sp.add_gas('Ar', 0.009365)
pyturb_mix_sp.add_gas('CO2', 0.000319)
print('pyTurb air mixture: Rair={0:6.1f}J/kg/K; cp={1:6.1f} J/kg/K; cv={2:6.1f} J/kg/K; gamma={3:4.1f}'.format(pyturb_mix_sp.Rg, pyturb_mix_sp.cp(T), pyturb_mix_sp.cv(T), pyturb_mix_sp.gamma(T)))
```
# DAT210x - Programming with Python for DS
## Module4- Lab5
```
import pandas as pd
from scipy import misc
from mpl_toolkits.mplot3d import Axes3D
import matplotlib
import matplotlib.pyplot as plt
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
samples = []
```
Create a regular Python list (not NDArray) and name it `samples`:
```
# .. your code here ..
from scipy import misc
import os
for file in os.listdir('Datasets/ALOI/32'):
    a = os.path.join('Datasets/ALOI/32', file)
    img = misc.imread(a).reshape(-1)
    samples.append(img)
print(len(samples))

color = []
for i in range(72):
    color.append('b')
for j in range(12):
    color.append('r')
df = pd.DataFrame(samples)
```
Code up a for-loop that iterates over the images in the `Datasets/ALOI/32/` folder. Look in the folder first, so you know how the files are organized, and what file number they start from and end at.
Load each `.png` file individually in your for-loop using the instructions provided in the Feature Representation reading. Once loaded, flatten the image into a single-dimensional NDArray and append it to your `samples` list.
**Optional**: You can resample the image down by a factor of two if you have a slower computer. You can also scale the image from `0-255` to `0.0-1.0` if you'd like--doing so shouldn't have any effect on the algorithm's results.
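A sketch of the optional resampling and scaling, using a small stand-in array in place of a loaded image:

```python
import numpy as np

# Stand-in for a loaded RGB image (what misc.imread would return)
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)

# Downsample by a factor of two in each spatial dimension, then rescale 0-255 to 0.0-1.0
small = img[::2, ::2] / 255.0
print(small.shape)          # (2, 2, 3)
print(small.max() <= 1.0)   # True
```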
```
# .. your code here ..
```
Convert `samples` to a DataFrame named `df`:
```
# .. your code here ..
from sklearn import manifold
iso = manifold.Isomap(n_neighbors = 6, n_components = 3)
Z = iso.fit_transform(df)
```
Import any necessary libraries to perform Isomap here, reduce `df` down to three components and using `K=6` for your neighborhood size:
```
def Plot2D(T, title, x, y, num_to_plot=40):
    # Plot the transformed samples as a 2D scatter chart
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_title(title)
    ax.set_xlabel('Component: {0}'.format(x))
    ax.set_ylabel('Component: {0}'.format(y))
    x_size = (max(T[:,x]) - min(T[:,x])) * 0.08
    y_size = (max(T[:,y]) - min(T[:,y])) * 0.08
    # Plot the full scatter:
    ax.scatter(T[:,x], T[:,y], marker='.', alpha=0.7, c=color)

Plot2D(Z, "Isomap transformed data, 2D", 0, 1)
```
Create a 2D Scatter plot to graph your manifold. You can use either `'o'` or `'.'` as your marker. Graph the first two isomap components:
```
# .. your code here ..
# .. your code here ..
def Plot3D(T, title, x, y, z, num_to_plot=40):
    # Plot the transformed samples as a 3D scatter chart
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.set_title(title)
    ax.set_xlabel('Component: {0}'.format(x))
    ax.set_ylabel('Component: {0}'.format(y))
    ax.set_zlabel('Component: {0}'.format(z))
    x_size = (max(T[:,x]) - min(T[:,x])) * 0.08
    y_size = (max(T[:,y]) - min(T[:,y])) * 0.08
    z_size = (max(T[:,z]) - min(T[:,z])) * 0.08
    # Plot the full scatter:
    ax.scatter(T[:,x], T[:,y], T[:,z], marker='o', alpha=0.7)
```
Chart a 3D Scatter plot to graph your manifold. You can use either `'o'` or `'.'` as your marker:
```
# .. your code here ..
Plot3D(Z, 'Isomap 3D', 0, 1, 2, num_to_plot=40)
```
Answer the first two lab questions!
Create another for loop. This time it should iterate over all the images in the `Datasets/ALOI/32_i` directory. Just like last time, load up each image, process them the way you did previously, and append them into your existing `samples` list:
```
# .. your code here ..
print(len(samples))
from scipy import misc
import os
for file in os.listdir('Datasets/ALOI/32i'):
    a = os.path.join('Datasets/ALOI/32i', file)
    img = misc.imread(a).reshape(-1)
    samples.append(img)
print (len(samples))
df = pd.DataFrame(samples)
```
Convert `samples` to a DataFrame named `df`:
```
# .. your code here ..
from sklearn import manifold
iso = manifold.Isomap(n_neighbors = 6, n_components = 3)
Z = iso.fit_transform(df)
```
Import any necessary libraries to perform Isomap here, reduce `df` down to three components and using `K=6` for your neighborhood size:
```
# .. your code here ..
Plot2D(Z, "Isomap transformed data, 2D", 0, 1)
Plot3D(Z, 'Isomap 3D', 0, 1, 2, num_to_plot=40)
```
Create a 2D Scatter plot to graph your manifold. You can use either `'o'` or `'.'` as your marker. Graph the first two isomap components:
```
# .. your code here ..
```
Chart a 3D Scatter plot to graph your manifold. You can use either `'o'` or `'.'` as your marker:
```
# .. your code here ..
```
# Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.
**In this assignment, you will:**
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
```
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
```
## 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
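This decay is easy to demonstrate numerically: repeatedly multiplying a gradient vector by small random weight matrices (a stand-in for backprop through many layers; the 0.2 scale is an arbitrary assumption) shrinks its norm geometrically:

```python
import numpy as np

rng = np.random.default_rng(0)
grad = np.ones(10)  # stand-in for the gradient at the last layer
norms = []
for _ in range(50):  # backprop through 50 "layers"
    W = 0.2 * rng.standard_normal((10, 10)) / np.sqrt(10)  # small random weights
    grad = W.T @ grad                                      # one backprop step
    norms.append(np.linalg.norm(grad))
print(norms[0], norms[-1])  # the norm collapses toward zero
```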
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the early layers as the network trains </center></caption>
You are now going to solve this problem by building a Residual Network!
## 2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
### 2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 3 layers.</center></caption>
Here are the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: [See reference](https://keras.io/layers/convolutional/#conv2d)
- To implement BatchNorm: [See reference](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f,f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1,1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
## 2.2 - The convolutional block
You've implemented the ResNet identity block. Next, the ResNet "convolutional block" is the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
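The dimension arithmetic behind this is the standard convolution output-size formula $n_{out} = \lfloor (n + 2p - f)/s \rfloor + 1$. A quick check (a helper written for illustration, not part of the graded code):

```python
def conv_output_size(n, f, s, p=0):
    """'valid' convolution output size: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# A 1x1 convolution with stride 2 halves an even spatial dimension:
print(conv_output_size(64, f=1, s=2))       # 32
# The stage-1 (7,7) stride-2 conv on a (3,3)-zero-padded 64x64 input:
print(conv_output_size(64, f=7, s=2, p=3))  # 32
```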
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`.
- The BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '1'`.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- [Conv Hint](https://keras.io/layers/convolutional/#conv2d)
- [BatchNorm Hint](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: `Activation('relu')(X)`
- [Addition Hint](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1,1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(F2, (f,f), strides = (1,1), padding='same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3, (1,1), strides = (1,1), name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3, (1,1), strides = (s,s), name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
## 3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
- The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
- The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
- The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
- The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The flatten doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.
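As a sanity check on the name "ResNet-50", you can tally the weight layers: by convention this counts the stage-1 conv, the three main-path convs in each of the 16 blocks, and the final dense layer (the shortcut convs are not counted toward the 50):

```python
# Blocks per stage = 1 convolutional block + the identity blocks listed above.
blocks_per_stage = {2: 3, 3: 4, 4: 6, 5: 3}

# stage-1 conv + 3 main-path convs per block + final dense layer
n_layers = 1 + 3 * sum(blocks_per_stage.values()) + 1
print(n_layers)  # 50
```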
**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)
Here are some other functions we used in the code below:
- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)
- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)
- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)
- Fully connected layer: [See reference](https://keras.io/layers/core/#dense)
- Addition: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
Implementation of the popular ResNet50 the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = convolutional_block(X, 3, [128,128,512], stage=3, block='a', s=2)
X = identity_block(X, 3, [128,128,512], stage=3, block='b')
X = identity_block(X, 3, [128,128,512], stage=3, block='c')
X = identity_block(X, 3, [128,128,512], stage=3, block='d')
# Stage 4 (≈6 lines)
X = convolutional_block(X, 3, [256, 256, 1024], stage=4, block='a', s=2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5 (≈3 lines)
X = convolutional_block(X, 3, [512, 512, 2048], stage=5, block='a', s=2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D((2,2), name='avg_pool')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
```
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
```
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
```
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
```
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
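`load_dataset` and `convert_to_one_hot` come from the course's helper module, which isn't shown here. Assuming `convert_to_one_hot` has the usual behavior — mapping integer labels of shape (1, m) to a one-hot matrix of shape (C, m) — a minimal reimplementation would look like this (a sketch, not necessarily the course's exact code):

```python
import numpy as np

def convert_to_one_hot(Y, C):
    # Assumption: integer labels of shape (1, m) -> one-hot matrix of shape (C, m).
    # np.eye(C)[label] picks the one-hot row for each label; transpose to (C, m).
    return np.eye(C)[Y.reshape(-1)].T

Y = np.array([[0, 2, 1]])
print(convert_to_one_hot(Y, 3).shape)  # (3, 3): one column per example
```

The `.T` in the cell above then gives the (m, C) layout that `model.fit` expects with `categorical_crossentropy`.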
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
```
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
```
**Expected Output**:
<table>
<tr>
<td>
** Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
** Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
```
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
**Expected Output**:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model for only two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code for only a small number of epochs as well.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
```
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
## 4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
```
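`model.predict(x)` returns a row of six class probabilities. To turn that vector into a single predicted label, take the argmax — a small sketch with made-up probabilities (the numbers below are illustrative, not real model output):

```python
import numpy as np

# Hypothetical probability vector of the form returned by model.predict(x)
probs = np.array([[0.05, 0.1, 0.6, 0.1, 0.05, 0.1]])
predicted_class = int(np.argmax(probs))
print(predicted_class)  # index of the most probable SIGNS class
```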
You can also print a summary of your model by running the following code.
```
model.summary()
```
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
```
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
<font color='blue'>
**What you should remember:**
- Very deep "plain" networks don't work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: the identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.
### References
This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the github repository of Francois Chollet:
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)
- Francois Chollet's github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
```
%reload_ext autoreload
%autoreload 2
from fastai.gen_doc.gen_notebooks import *
from pathlib import Path
```
### To update this notebook
Run `tools/sgen_notebooks.py`, or run the cell below. Make sure to refresh right after.
```
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
```
# Metadata generated below
```
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that saves the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracking')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
```
# Homework Assignment 1
In this homework you will get acquainted with the basic functionality of Python and the workings of Jupyter Notebook, as well as with simple functions from the NumPy and matplotlib packages.
Follow the notebook's instructions, solve the tasks, and fill in your answers in the following form: https://forms.gle/gxG8D5BGeH1nxcSU8
```
import numpy as np
import matplotlib.pyplot as plt
from tasks import find_fold_number, rle, test_rle_str
# increase the default figure size
plt.figure(figsize=(15, 10))
# render plots inside the notebook
%matplotlib inline
# plots look sharper in svg
%config InlineBackend.figure_format = 'svg'
# automatically reload functions from local modules
%load_ext autoreload
%autoreload 2
```
# NumPy & matplotlib
For the following tasks you need to implement the code inside the cell. Try to use the packages' built-in functionality, avoiding unnecessary loops and the like.
You can read about `NumPy` here:
[NumPy quickstart](https://docs.scipy.org/doc/numpy/user/quickstart.html)
And about `matplotlib` here:
[PyPlot tutorial](https://matplotlib.org/tutorials/introductory/pyplot.html)
The tasks in this part are based on the $\textit{Fisher Iris}$ dataset; first of all, it needs to be downloaded
```
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data -P assignment_1/data
```
Fisher's Iris dataset contains data on 150 iris specimens of three species — Iris setosa, Iris virginica, and Iris versicolor. Four features were measured for each specimen (in centimeters):
1. Sepal length;
2. Sepal width;
3. Petal length;
4. Petal width.
```
iris_full = np.genfromtxt('assignment_1/data/iris.data', delimiter=',', dtype='object')
names = ('sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'species')
iris_vals = iris_full[:, :-1].astype(float)
iris_name = iris_full[:, -1].astype(str)
n_iris = iris_vals.shape[0]
n_rows = 10
template = '{:^15}' * len(names)
print(template.format(*names))
for vals, name in zip(iris_vals[:n_rows], iris_name[:n_rows]):
print(template.format(*vals, name))
```
## 1. What is the maximum value of each feature
The answer is a sequence of 4 numbers
For example: `5.1 3.5 1.4 0.2`
```
maxs = iris_vals.max(axis=0); print(maxs)
```
## 2. How many irises of each type are present in the data
The answer is a sequence of 3 numbers in the order: `Iris-setosa, Iris-versicolor, Iris-virginica`
For example: `10 10 10`
```
names, counts = np.unique(iris_name, return_counts=True); print(names, counts)
```
## 3. Mean value of the `petalwidth` feature for each iris type
The answer is a sequence in increasing order, rounded to 2 decimal places.
For example: `1.23 4.56 7.89`
```
print([(np.mean(np.compress(iris_name == name, iris_vals[:, 3]), dtype=np.float64), name) for name in names])
```
## 4. Pairwise dot products of the features
The answer is the mean of the pairwise dot products of the feature vectors, rounded to 2 decimal places.
The dot product of a vector with itself should not be counted.
For example: `12.34`
```
np.mean([iris_vals[:, i] @ iris_vals[:, j] for i in range(0, 4) for j in range(0, 4) if i != j], dtype=np.float64)
```
## 5. Which iris type has the smallest value of the `sepalwidth` feature
```
print([(np.min(np.compress(iris_name == name, iris_vals[:, 1])), name) for name in names])
```
## 6. Plot the distribution of `petallength` values depending on the iris type
As the answer, give the iris type with the smallest variance (the "narrowest" one)
```
[plt.plot(np.linspace(0, 10, 50), np.compress(iris_name == name, iris_vals[:, 2]), color) for (name, color) in np.vstack((names, ['r', 'g', 'b'])).T];
print([(np.var(np.compress(iris_name == name, iris_vals[:, 2])), name) for name in names])
```
## 7. Plot the relationship between `petallength` and `petalwidth` depending on the iris type
X axis - petallength
Y axis - petalwidth
As the answer, give the class that is separated from the rest
```
[plt.plot(np.compress(iris_name == name, iris_vals[:, 2]), np.compress(iris_name == name, iris_vals[:, 3]), color) for (name, color) in np.vstack((names, ['r', 'g', 'b'])).T];
```
## 8. Build a boxplot of the `sepallength` feature for each iris type
As the answer, give the number of outliers in the data
Each outlier is shown as a dot; you can read more about boxplots [here](https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51)
```
[plt.boxplot(np.compress(iris_name == name, iris_vals[:, 0]), flierprops=dict(markerfacecolor='g', marker='D')) for name in names];
```
# Python
For the following tasks you need to implement the corresponding function in the file `tasks.py`.
After implementing it, execute the corresponding cell without changing its contents.
## 9. In how many steps can a single-digit number be obtained by repeatedly multiplying the digits of the previous number.
For example, for $88$ the answer is $3$:
$$
88 \rightarrow 8 \times 8 = 64 \rightarrow 6 \times 4 = 24 \rightarrow 2 \times 4 = 8
$$
For this task, implement the function `find_fold_number`
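As a rough illustration of the idea (the actual solution belongs in `tasks.py`), a minimal implementation might look like this — single-digit inputs need zero steps:

```python
def find_fold_number(x):
    """Number of digit-product steps needed to reach a single-digit number."""
    steps = 0
    while x >= 10:
        product = 1
        for digit in str(x):
            product *= int(digit)  # multiply the digits of the current number
        x = product
        steps += 1
    return steps

print(find_fold_number(88))  # 88 -> 64 -> 24 -> 8, i.e. 3 steps
```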
```
plt.plot(np.arange(100), [find_fold_number(x) for x in np.arange(100)], 'g-');
```
Give the result of executing the following cell as the answer (without quotes)
```
''.join(map(str, (find_fold_number(i) for i in range(500))))
```
## 10. Run-length encoding
Run-length encoding (RLE) is a data-compression algorithm that replaces repeated characters with a single character and the number of its repetitions. A run is a sequence consisting of several identical characters (more than one). During encoding, a string of identical characters that makes up a run is replaced by a string containing the repeated character itself and the number of its repetitions.
For example, $\textit{AAAAAAAAAAAAAAABAAAA}$ will be compressed into $\textit{A15BA4}$
For this task, implement the function `rle`
```
print(rle('AAAAAAAAAAAAAAABAAAA'))
assert rle('AAAAAAAAAAAAAAABAAAA') == 'A15BA4', "wrong answer for the example string"
```
Give the result of executing the following cell as the answer (without quotes)
```
rle(test_rle_str)
```
<a href="https://colab.research.google.com/github/gustavocac/FromScratch/blob/master/NLP_Lesson_1.1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<h1><div align="center">Natural Language Processing</div></h1>
<div align="center">Gustavo C. A Corradi</div>
<div align="center"><a href="http://robotreport.me/">robotreport.me</a></div>
<div align="center">@gustavocorradi</div>
# Lesson I - Text Representation
In this lesson, we will look in some detail at how we can better represent text in our application. Let's start by importing the modules we will use
```
import string
from collections import Counter
from pprint import pprint
import gzip
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import re
import nltk
import urllib.request
from bs4 import BeautifulSoup
url = "http://www.rb.org.br/detalhe_artigo.asp?id=2295&idioma=Portugues"
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")
for script in soup(["script", "style"]):
script.extract() # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
#lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
#chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
# drop blank lines
#text = '\n'.join(chunk for chunk in chunks if chunk)
print(text)
```
We choose a well known nursery rhyme, that has the added distinction of having been the first audio ever recorded, to be the short snippet of text that we will use in our examples:
## Tokenization
The first step in any analysis is to tokenize the text. What this means is that we will extract all the individual words in the text. For the sake of simplicity, we will assume that our text is well formed and that our words are delimited either by white space or punctuation characters.
```
def extract_words(text):
    temp = text.split()  # Split the text on whitespace
    text_words = []
    for word in temp:
        # Strip punctuation characters from the beginning and end of the word
        word = word.strip(string.punctuation)
        # Skip tokens that consisted entirely of punctuation
        if not word:
            continue
        # Append this word, lowercased, to our list of words.
        text_words.append(word.lower())
    return text_words
```
After this step we now have our text represented as an array of individual, lowercase, words:
```
text_words = extract_words(text)
print(text_words)
```
As we saw during the video, this is a wasteful way to represent text. We can be much more efficient by representing each word by a number
```
word_dict = {}
word_list = []
vocabulary_size = 0
text_tokens = []
for word in text_words:
    # If we are seeing this word for the first time, create an id for it and add it to our word dictionary
if word not in word_dict:
word_dict[word] = vocabulary_size
word_list.append(word)
vocabulary_size += 1
# add the token corresponding to the current word to the tokenized text.
text_tokens.append(word_dict[word])
```
When we were tokenizing our text, we also generated a dictionary **word_dict** that maps words to integers and a **word_list** that maps each integer to the corresponding word.
```
print("Word list:", word_list, "\n\n Word dictionary:")
pprint(word_dict)
```
These two data structures already proved their usefulness when we converted our text to a list of tokens.
```
print(text_tokens)
```
Unfortunately, while this representation is convenient for memory reasons it has some severe limitations. Perhaps the most important of which is the fact that computers naturally assume that numbers can be operated on mathematically (by addition, subtraction, etc) in a way that doesn't match our understanding of words.
## One-hot encoding
One typical way of overcoming this difficulty is to represent each word by a one-hot encoded vector where every element is zero except the one corresponding to a specific word.
```
def one_hot(word, word_dict):
"""
Generate a one-hot encoded vector corresponding to *word*
"""
vector = np.zeros(len(word_dict))
vector[word_dict[word]] = 1
return vector
```
So, for example, the word "fleece" would be represented by:
```
fleece_hot = one_hot("fleece", word_dict)
print(fleece_hot)
```
This vector has every element set to zero, except element 6, since:
```
print(word_dict["fleece"])
fleece_hot[6] == 1
```
## Bag of words
We can now use the one-hot encoded vector for each word to produce a vector representation of our original text, by simply adding up all the one-hot encoded vectors:
```
text_vector1 = np.zeros(vocabulary_size)
for word in text_words:
hot_word = one_hot(word, word_dict)
text_vector1 += hot_word
print(text_vector1)
```
In practice, we can also easily skip the encoding step at the word level by using the *word_dict* defined above:
```
text_vector = np.zeros(vocabulary_size)
for word in text_words:
text_vector[word_dict[word]] += 1
print(text_vector)
```
Naturally, this approach is completely equivalent to the previous one and has the added advantage of being more efficient in terms of both speed and memory requirements.
This is known as the __bag of words__ representation of the text. It should be noted that these vectors simply contain the number of times each word appears in our document, so we can easily tell that the word *mary* appears exactly 6 times in our little nursery rhyme.
```
text_vector[word_dict["mary"]]
```
A more pythonic (and efficient) way of producing the same result is to use the standard __Counter__ module:
```
word_counts = Counter(text_words)
pprint(word_counts)
```
From which we can easily generate the __text_vector__ and __word_dict__ data structures:
```
items = list(word_counts.items())
# Extract word dictionary and vector representation
word_dict2 = dict([[items[i][0], i] for i in range(len(items))])
text_vector2 = [items[i][1] for i in range(len(items))]
```
And let's take a look at them:
```
print("Text vector:", text_vector2, "\n\nWord dictionary:")
pprint(word_dict2)
```
The results using this approach are slightly different than the previous ones, because the words are mapped to different integer ids but the corresponding values are the same:
```
for word in word_dict.keys():
if text_vector[word_dict[word]] != text_vector2[word_dict2[word]]:
print("Error!")
```
As expected, there are no differences!
## Term Frequency
The bag of words vector representation introduced above relies simply on the frequency of occurence of each word. Following a long tradition of giving fancy names to simple ideas, this is known as __Term Frequency__.
Intuitively, we expect that the frequency with which a given word is mentioned should correspond to the relevance of that word for the piece of text we are considering. For example, **Mary** is a pretty important word in our little nursery rhyme, and indeed it is the one that occurs most often:
```
sorted(items, key=lambda x:x[1], reverse=True)
```
However, it's hard to draw conclusions from such a small piece of text. Let us consider a significantly larger one: the first 100 MB of the English Wikipedia from http://mattmahoney.net/dc/textdata. For the sake of convenience, text8.gz has been included in this repository in the **data/** directory. We start by loading its contents into memory as an array of words:
```
data = []
for line in gzip.open("data/text8.gz", 'rt'):
data.extend(line.strip().split())
```
Now let's take a look at the most common words in this large corpus:
```
counts = Counter(data)
sorted_counts = sorted(list(counts.items()), key=lambda x:x[1], reverse=True)
for word, count in sorted_counts[:10]:
print(word, count)
```
Surprisingly, we find that the most common words are not particularly meaningful. Indeed, this is a common occurrence in Natural Language Processing: the most frequent words are typically auxiliaries required by grammatical rules.
On the other hand, there is also a large number of words that occur very infrequently, as can easily be seen by glancing at the word frequency distribution.
```
dist = Counter(counts.values())
dist = list(dist.items())
dist.sort(key=lambda x:x[0])
dist = np.array(dist)
norm = np.dot(dist.T[0], dist.T[1])
plt.loglog(dist.T[0], dist.T[1]/norm)
plt.xlabel("count")
plt.ylabel("P(count)")
plt.title("Word frequency distribution")
```
## Stopwords
One common technique to simplify NLP tasks is to remove what are known as Stopwords, words that are very frequent but not meaningful. If we simply remove the most common 100 words, we significantly reduce the amount of data we have to consider while losing little information.
```
stopwords = set([word for word, count in sorted_counts[:100]])
clean_data = []
for word in data:
if word not in stopwords:
clean_data.append(word)
print("Original size:", len(data))
print("Clean size:", len(clean_data))
print("Reduction:", 1-len(clean_data)/len(data))
```
Wow, our dataset size was reduced almost in half!
In practice, we don't simply remove the most common words in our corpus but rather a manually curated list of stopwords. Lists for dozens of languages and applications can easily be found online.
## Term Frequency/Inverse Document Frequency
One way of determining the relative importance of a word is to see how often it appears across multiple documents. Words that are relevant to a specific topic are more likely to appear in documents about that topic and much less in documents about other topics. On the other hand, less meaningful words (like **the**) will be common across documents about any subject.
To measure the document frequency of a word we will need to have multiple documents. For the sake of simplicity, we will treat each sentence of our nursery rhyme as an individual document:
```
corpus_text = text.split('.')
corpus_words = []
for document in corpus_text:
doc_words = extract_words(document)
corpus_words.append(doc_words)
```
Now our corpus is represented as a list of word lists, where each list is just the word representation of the corresponding sentence:
```
pprint(corpus_words)
```
Let us now calculate the number of documents in which each word appears:
```
document_count = {}
for document in corpus_words:
word_set = set(document)
for word in word_set:
document_count[word] = document_count.get(word, 0) + 1
pprint(document_count)
```
As we can see, the word __Mary__ appears in all 4 of our documents, making it useless when it comes to distinguishing between the different sentences. On the other hand, words like __white__, which appear in only one document, are very discriminative. Using this approach we can define a new quantity, the __Inverse Document Frequency__, that tells us how frequent a word is across the documents in a specific corpus:
```
def inv_doc_freq(corpus_words):
number_docs = len(corpus_words)
document_count = {}
for document in corpus_words:
word_set = set(document)
for word in word_set:
document_count[word] = document_count.get(word, 0) + 1
IDF = {}
for word in document_count:
IDF[word] = np.log(number_docs/document_count[word])
return IDF
```
Here we followed the convention of using the logarithm of the inverse document frequency. This has the numerical advantage of avoiding having to handle small fractional numbers.
We can easily see that the IDF gives a smaller weight to the most common words and a higher weight to the less frequent:
```
IDF = inv_doc_freq(corpus_words)
pprint(IDF)
```
As expected, **Mary** has the smallest possible weight, 0, meaning that it is effectively removed from the dataset. You can consider this a way of implicitly identifying and removing stopwords. In case you do want to keep even the words that appear in every document, you can just add 1 to the argument of the logarithm above:
\begin{equation}
\log\left[1+\frac{N_d}{N_d\left(w\right)}\right]
\end{equation}
When we multiply the term frequency of each word by its inverse document frequency, we have a good way of quantifying how relevant a word is to understand the meaning of a specific document.
```
def tf_idf(corpus_words):
IDF = inv_doc_freq(corpus_words)
TFIDF = []
for document in corpus_words:
TFIDF.append(Counter(document))
for document in TFIDF:
for word in document:
document[word] = document[word]*IDF[word]
return TFIDF
tf_idf(corpus_words)
```
Now we finally have a vector representation of each of our documents that takes the informational contribution of each word into account. Each of these vectors provides us with a unique representation of each document in the context (corpus) in which it occurs, making it possible to define the similarity of two documents, etc.
## Porter Stemmer
There is still, however, one issue with our approach to representing text. Since we treat each word as a unique token and completely independently from all others, for large documents we will end up with many variations of the same word such as verb conjugations, the corresponding adverbs and nouns, etc.
One way around this difficulty is to use stemming algorithm to reduce words to their root (or stem) version. The most famous Stemming algorithm is known as the **Porter Stemmer** and was introduced by Martin Porter in 1980 [Program 14, 130 (1980)](https://dl.acm.org/citation.cfm?id=275705)
The algorithm starts by defining consonants (C) and vowels (V):
```
V = set('aeiouy')
C = set('bcdfghjklmnpqrstvwxz')
```
The stem of a word is what is left of that word after a specific ending has been removed. A function to do this is easy to implement:
```
def get_stem(suffix, word):
"""
Extract the stem of a word
"""
if word.lower().endswith(suffix.lower()): # Case insensitive comparison
return word[:-len(suffix)]
return None
```
It also defines words (or stems) to be sequences of vowels and consonants of the form:
\begin{equation}
[C](VC)^m[V]
\end{equation}
where $m$ is called the **measure** of the word and [] represent optional sections.
```
def measure(orig_word):
"""
    Calculate the "measure" m of a word or stem, according to the Porter Stemmer algorithm
"""
word = orig_word.lower()
optV = False
optC = False
VC = False
m = 0
pos = 0
# We can think of this implementation as a simple finite state machine
# looks for sequences of vowels or consonants depending of the state
# in which it's in, while keeping track of how many VC sequences it
# has encountered.
# The presence of the optional V and C portions is recorded in the
# optV and optC booleans.
# We're at the initial state.
# gobble up all the optional consonants at the beginning of the word
while pos < len(word) and word[pos] in C:
pos += 1
optC = True
while pos < len(word):
# Now we know that the next state must be a vowel
while pos < len(word) and word[pos] in V:
pos += 1
optV = True
        # Followed by a consonant
while pos < len(word) and word[pos] in C:
pos += 1
optV = False
# If a consonant was found, then we matched VC
# so we should increment m by one. Otherwise,
# optV remained true and we simply had a dangling
# V sequence.
if not optV:
m += 1
return m
```
Let's consider a simple example. The word __crepusculars__ should have measure 4:
[cr] (ep) (usc) (ul) (ars)
and indeed it does.
```
word = "crepusculars"
print(measure(word))
```
The Porter algorithm sequentially applies a series of transformation rules over a series of 5 steps (step 1 is divided in 3 substeps and step 5 in 2). The rules are only applied if a certain condition is true.
In addition to possibily specifying a requirement on the measure of a word, conditions can make use of different boolean functions as well:
```
def ends_with(char, stem):
"""
Checks the ending of the word
"""
return stem[-1] == char
def double_consonant(stem):
"""
Checks the ending of a word for a double consonant
"""
if len(stem) < 2:
return False
if stem[-1] in C and stem[-2] == stem[-1]:
return True
return False
def contains_vowel(stem):
"""
Checks if a word contains a vowel or not
"""
return len(set(stem) & V) > 0
```
Finally, we define a function to apply a specific rule to a word or stem:
```
def apply_rule(condition, suffix, replacement, word):
"""
Apply Porter Stemmer rule.
if "condition" is True replace "suffix" by "replacement" in "word"
"""
stem = get_stem(suffix, word)
if stem is not None and condition is True:
# Remove the suffix
word = stem
# Add the replacement suffix, if any
if replacement is not None:
word += replacement
return word
```
Now we can see how rules can be applied. For example, this rule, from step 1b, is successfully applied to __plastered__:
```
word = "plastered"
suffix = "ed"
stem = get_stem(suffix, word)
apply_rule(contains_vowel(stem), suffix, None, word)
```
Trying to apply the same rule to **bled**, however, fails to pass the condition, resulting in no change.
```
word = "bled"
suffix = "ed"
stem = get_stem(suffix, word)
apply_rule(contains_vowel(stem), suffix, None, word)
```
For a more complex example, we have, in Step 4:
```
word = "adoption"
suffix = "ion"
stem = get_stem(suffix, word)
apply_rule(measure(stem) > 1 and (ends_with("s", stem) or ends_with("t", stem)), suffix, None, word)
```
In total, the Porter Stemmer algorithm (for the English language) applies several dozen rules (see https://tartarus.org/martin/PorterStemmer/def.txt for a complete list). Implementing all of them is both tedious and error prone, so we abstain from providing a full implementation of the algorithm here. High quality implementations can be found in all major NLP libraries such as [NLTK](http://www.nltk.org/howto/stem.html).
The difficulties of defining matching rules for arbitrary text cannot be fully resolved without the use of Regular Expressions (typically implemented as Finite State Machines, like our __measure__ implementation above), a more advanced topic that is beyond the scope of this course.
# Usage guide for the maysics.calculus module
The calculus module contains seven functions
|Name|Purpose|
|---|---|
|lim|limit|
|ha|Hamiltonian operator|
|grad|gradient|
|nebla_dot|dot product with the nabla operator|
|nebla_cross|cross product with the nabla operator|
|laplace|Laplace operator|
|inte|integration|
<br></br>
## Taking a limit: lim
lim(f, x0, acc=0.01, method='both')
<br>Computes the value of the function ```f``` as $x\rightarrow x_{0}$, within an error of ```acc```
<br>```method``` can be 'both', '+', or '-', denoting the two-sided limit, right-hand limit, and left-hand limit respectively
### DEMO 1-1: find the value of $y=\frac{\sin(x)}{x}$ as $x\rightarrow0$
```
from maysics.calculus import lim
import numpy as np
def f(x):
return np.sin(x) / x
lim(f, 0)
```
<br></br>
## Hamiltonian operator: ha
Hamiltonian operator: $\hat{H}=-\frac{\hbar^{2}\nabla^{2}}{2m}+U$
<br>ha(f, m, U, acc=0.1)
<br>Returns the new function generated by applying the Hamiltonian operator to ```f```, for a particle of mass ```m``` and potential energy ```U```, within an error of ```acc```
<br>```f``` must take an array as input (not a scalar)
<br>```U``` is a constant or a function
### DEMO 2-1: find the new function generated by applying the Hamiltonian operator to $y=x$
```
from maysics.calculus import ha
def f(x):
return x
# m=1, U=2
f_new = ha(f, 1, 2)
# output the function values at x=(1, 2, 3)
f_new([1, 2, 3])
```
<br></br>
## Gradient: grad
grad(f, x, acc=0.1)
<br>Computes the gradient of the function f at x within an error of acc
### DEMO 3-1: find the gradient of $f=x^{2}+y^{2}$ at the point $(3, 3)$
```
from maysics.calculus import grad
def f(x):
return x[0]**2 + x[1]**2
grad(f, [3, 3])
```
<br></br>
## The nabla operator: nebla_dot and nebla_cross
nebla_dot is the dot product with a vector function: $\nabla\cdot\vec{f}$
<br>nebla_dot(f, x, acc=0.1)
<br>nebla_cross is the cross product with a vector function: $\nabla\times\vec{f}$ (here the output of f must be three-dimensional)
<br>nebla_cross(f, x, acc=0.1)
<br>Used in the same way as the grad function
### DEMO 4-1: value of $\nabla\cdot\vec{f}$ with $\vec{f}=x^{2}\vec{i}+y^{2}\vec{j}+z^{2}\vec{k}$ at the point $(1,1,1)$
```
from maysics.calculus import nebla_dot
def f(x):
return x[0]**2, x[1]**2, x[2]**2
nebla_dot(f, [1, 1, 1])
```
### DEMO 4-2: value of $\nabla\times\vec{f}$ with $\vec{f}=x^{2}\vec{i}+y^{2}\vec{j}+z^{2}\vec{k}$ at the point $(1,1,1)$
```
from maysics.calculus import nebla_cross
def f(x):
return x[0]**2, x[1]**2, x[2]**2
nebla_cross(f, [1, 1, 1])
```
<br></br>
## Laplace operator: laplace
$\Delta=\nabla^{2}$
<br>laplace(f, x, acc=0.1)
<br>The function ```f``` must take a one-dimensional array as input; batch input is not supported by default
### DEMO 5-1: function without mini-batch support: value of $\Delta f$ for $f(x,y,z)=x^{2}+y^{2}+z^{2}$ at the point $(1,1,1)$
```
from maysics.calculus import laplace
def f(x):
return sum(x**2)
laplace(f, [1,1,1])
```
### DEMO 5-2: function with mini-batch support: values of $\Delta f$ for $f(x,y,z)=x^{2}+y^{2}+z^{2}$ at the points ${(1,1,1),(2,2,2)}$
```
from maysics.calculus import laplace
def f(x):
return (x**2).sum(axis=1)
laplace(f, [[1,1,1],[2,2,2]])
```
<br></br>
## Definite integration: inte
inte(func, area, method='rect', dim=1, args={}, condition=None, param={}, acc=0.1, loop=10000, height=1, random_state=None)
<br>```func``` is the integrand
<br>```area``` is a two-dimensional array giving the integration range in each dimension
<br>```method``` can be 'rect' or 'mc', meaning the rectangle method and the Monte Carlo method respectively; the ```acc``` parameter only affects the rectangle method, while the ```loop```, ```height```, and ```random_state``` parameters only affect the Monte Carlo method
<br>The ```dim``` parameter is the number of input dimensions of the function, 1 by default
<br>```args``` holds the parameters of the integrand other than its variables
<br>```condition``` is a condition function; when ```condition``` is not None, only points satisfying ```condition``` (i.e. where its output is True) are included in the integration region
<br>```param``` holds the parameters of the ```condition``` function other than its variables
<br>```acc``` can be either a number or a one-dimensional array: the former uses the same precision in every dimension, while the latter allows a different precision per dimension
### Definite integration with the rectangle method
The size of each hyper-rectangle is $f(x)\times acc^{dim}$
### DEMO 6-1: integrate $f(x)=\sin(x)$ from 0 to π
```
from maysics.calculus import inte
import numpy as np
inte(np.sin, [[0, np.pi]])
```
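To make the rectangle rule concrete, here is a minimal one-dimensional sketch in plain NumPy (an illustration of the method, not the maysics implementation itself): sample the integrand on a grid of step `acc` and sum the rectangle areas $f(x)\times acc$.

```python
import numpy as np

def rect_integrate(f, a, b, acc=0.001):
    """Approximate the 1-D integral of f over [a, b] with the rectangle rule."""
    x = np.arange(a, b, acc)          # left endpoint of each cell
    return float(np.sum(f(x) * acc))  # sum of the rectangle areas f(x) * acc

print(rect_integrate(np.sin, 0, np.pi))  # close to the exact value 2
```

Smaller `acc` trades speed for accuracy, which is exactly the role the `acc` parameter plays in `inte`.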
### DEMO 6-2: integrate $f(x)=A\sin(x)$ from 0 to π
```
from maysics.calculus import inte
import numpy as np
def f(x, A):
return A * np.sin(x)
# take A=2
inte(f, [[0, np.pi]], args={'A':2})
```
### DEMO 6-3: integrate $f(x)=2\sin(x)$ from 0 to π over the region where the function value is at most 1
```
from maysics.calculus import inte
import numpy as np
def c(x):
if 2 * np.sin(x) <= 1:
return True
else:
return False
inte(np.sin, [[0, np.pi]], condition=c)
```
### DEMO 6-4: integrate $f(x,y)=x^{2}+y^{2}$ over $x\in[-2,2]$, $y\in[-1,1]$
```
from maysics.calculus import inte
def f(x):
return x[0]**2 + x[1]**2
inte(f, [[-2, 2], [-1, 1]])
```
### Definite integration with the Monte Carlo method
Randomly generate loop scattered points inside the hyper-rectangle $area\times height$ (note that $height\geq \max f(x)$ must hold everywhere in area)
<br>Let n be the number of points with $y\leq f(x)$; the integral is then $\approx\frac{n}{loop}\times area \times height$
<br>random_state is the random seed
### DEMO 6-5: integrate $f(x)=2\sin(x)$ from 0 to π
```
from maysics.calculus import inte
import numpy as np
def f(x):
return 2 * np.sin(x)
inte(f, [[0, np.pi]], method='mc', height=2)
```
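The hit-or-miss scheme described above can also be sketched in plain NumPy (again an illustration, not the maysics source): scatter random points in the bounding box and count the fraction falling under the curve.

```python
import numpy as np

def mc_integrate(f, a, b, height, loop=100000, random_state=0):
    """Hit-or-miss Monte Carlo estimate of the integral of a non-negative f over [a, b]."""
    rng = np.random.default_rng(random_state)
    x = rng.uniform(a, b, loop)        # random points in the integration range
    y = rng.uniform(0, height, loop)   # random heights in [0, height]
    hits = np.sum(y <= f(x))           # points falling under the curve
    return hits / loop * (b - a) * height

est = mc_integrate(lambda x: 2 * np.sin(x), 0, np.pi, height=2)
print(est)  # close to the exact value 4
```

The estimate's error shrinks like $1/\sqrt{loop}$, which is why `loop` (and a valid `height` bound) matters more than grid resolution here.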
### DEMO 6-6: integrate $f(x,y)=x^{2}+y^{2}$ over $x\in[-2,2]$, $y\in[-1,1]$
```
from maysics.calculus import inte
def f(x):
return x[0]**2 + x[1]**2
inte(f, [[-2, 2], [-1, 1]], method='mc', height=5)
```
# Section 5: Manipulating and Cleaning Data
This section marks a subtle change. Up until now, we have been introducing ideas and techniques in order to prepare you with a toolbox of techniques to deal with real-world situations. We are now going to start using some of those tools while also giving you some ideas about how and when to use them in your own work with data.
Real-world data is messy. You will likely need to combine several data sources to get the data you actually want. The data from those sources will be incomplete. And it will likely not be formatted in exactly the way you want in order to perform your analysis. It's for these reasons that most data scientists will tell you that about 80 percent of any project is spent just getting the data into a form ready for analysis.
## Exploring `DataFrame` information
> **Learning goal:** By the end of this subsection, you should be comfortable finding general information about the data stored in pandas `DataFrames`.
Once you have loaded your data into pandas, it will more likely than not be in a `DataFrame`. However, if the data set in your `DataFrame` has 60,000 rows and 400 columns, how do you even begin to get a sense of what you're working with? Fortunately, pandas provides some convenient tools to quickly look at overall information about a `DataFrame` in addition to the first few and last few rows.
In order to explore this functionality, we will import the Python scikit-learn library and use an iconic dataset that every data scientist has seen hundreds of times: British biologist Ronald Fisher's *Iris* data set used in his 1936 paper "The use of multiple measurements in taxonomic problems":
```
import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
iris_df = pd.DataFrame(data=iris['data'], columns=iris['feature_names'])
```
### `DataFrame.info`
Let's take a look at this dataset to see what we have:
```
iris_df.info()
```
From this, we know that the *Iris* dataset has 150 entries in four columns. All of the data is stored as 64-bit floating-point numbers.
### `DataFrame.head`
Next, let's see what the first few rows of our `DataFrame` look like:
```
iris_df.head()
```
> **Exercise:**
By default, `DataFrame.head` returns the first five rows of a `DataFrame`. In the code cell below, can you figure out how to get it to show more?
```
# Hint: Consult the documentation by using iris_df.head?
```
### `DataFrame.tail`
The flipside of `DataFrame.head` is `DataFrame.tail`, which returns the last five rows of a `DataFrame`:
```
iris_df.tail()
```
In practice, it is useful to be able to easily examine the first few rows or the last few rows of a `DataFrame`, particularly when you are looking for outliers in ordered datasets.
> **Takeaway:** Even just by looking at the metadata about the information in a DataFrame or the first and last few values in one, you can get an immediate idea about the size, shape, and content of the data you are dealing with.
## Dealing with missing data
> **Learning goal:** By the end of this subsection, you should know how to replace or remove null values from `DataFrames`.
Most of the time, the datasets you want to use (or have to use) have missing values in them. How missing data is handled carries with it subtle tradeoffs that can affect your final analysis and real-world outcomes.
Pandas handles missing values in two ways. The first way you've seen before in previous sections: `NaN`, or Not a Number. This is actually a special value that is part of the IEEE floating-point specification, and it is only used to indicate missing floating-point values.
For missing values apart from floats, pandas uses the Python `None` object. While it might seem confusing that you will encounter two different kinds of values that say essentially the same thing, there are sound programmatic reasons for this design choice and, in practice, going this route enables pandas to deliver a good compromise for the vast majority of cases. Notwithstanding this fact, both `None` and `NaN` carry restrictions that you need to be mindful of with regard to how they can be used.
### `None`: non-float missing data
Because `None` comes from Python, it cannot be used in NumPy and pandas arrays that are not of data type `object`. Remember, NumPy arrays (and the data structures in pandas) can contain only one type of data. This is what gives them their tremendous power for large-scale data and computational work, but it also limits their flexibility. Such arrays have to upcast to the “lowest common denominator,” the data type that will encompass everything in the array. When `None` is in the array, it means that you are working with Python objects.
To see this in action, consider the following example array (note the `dtype` for it):
```
import numpy as np
example1 = np.array([2, None, 6, 8])
example1
```
The reality of upcast data types carries two side effects with it. First, operations will be carried out at the level of interpreted Python code rather than compiled NumPy code. Essentially, this means that any operations involving `Series` or `DataFrames` with `None` in them will be slower. While you would probably not notice this performance hit, for large datasets it might become an issue.
The second side effect stems from the first. Because `None` essentially drags `Series` or `DataFrames` back into the world of regular Python, using NumPy/pandas aggregations like `sum()` or `min()` on arrays that contain a ``None`` value will generally produce an error:
```
example1.sum()
```
**Key takeaway**: Addition (and other operations) between integers and `None` values is undefined, which can limit what you can do with datasets that contain them.
### `NaN`: missing float values
In contrast to `None`, NumPy (and therefore pandas) supports `NaN` for its fast, vectorized operations and ufuncs. The bad news is that any arithmetic performed on `NaN` always results in `NaN`. For example:
```
np.nan + 1
np.nan * 0
```
The good news: aggregations that run on arrays with `NaN` in them don't raise errors. The bad news: the results are not uniformly useful:
```
example2 = np.array([2, np.nan, 6, 8])
example2.sum(), example2.min(), example2.max()
```
> **Exercise:**
```
# What happens if you add np.nan and None together?
```
Remember: `NaN` is just for missing floating-point values; there is no `NaN` equivalent for integers, strings, or Booleans.
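As a quick illustration (the array contents here are invented): because there is no integer `NaN`, putting `np.nan` into an otherwise integer array forces NumPy to upcast the whole array to floats:

```python
import numpy as np

# An integer list with np.nan in it cannot stay integer:
# NumPy upcasts the whole array to float64 to accommodate NaN.
arr = np.array([1, 2, np.nan])
print(arr.dtype)
```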
### `NaN` and `None`: null values in pandas
Even though `NaN` and `None` can behave somewhat differently, pandas is nevertheless built to handle them interchangeably. To see what we mean, consider a `Series` of integers:
```
int_series = pd.Series([1, 2, 3], dtype=int)
int_series
```
> **Exercise:**
```
# Now set an element of int_series equal to None.
# How does that element show up in the Series?
# What is the dtype of the Series?
```
In the process of upcasting data types to establish data homogeneity in `Series` and `DataFrames`, pandas will willingly switch missing values between `None` and `NaN`. Because of this design feature, it can be helpful to think of `None` and `NaN` as two different flavors of "null" in pandas. Indeed, some of the core methods you will use to deal with missing values in pandas reflect this idea in their names:
- `isnull()`: Generates a Boolean mask indicating missing values
- `notnull()`: Opposite of `isnull()`
- `dropna()`: Returns a filtered version of the data
- `fillna()`: Returns a copy of the data with missing values filled or imputed
These are important methods to master and get comfortable with, so let's go over each of them in some depth.
### Detecting null values
Both `isnull()` and `notnull()` are your primary methods for detecting null data. Both return Boolean masks over your data.
```
example3 = pd.Series([0, np.nan, '', None])
example3.isnull()
```
Look closely at the output. Does any of it surprise you? While `0` is an arithmetic null, it's nevertheless a perfectly good integer and pandas treats it as such. `''` is a little more subtle. While we used it in Section 1 to represent an empty string value, it is nevertheless a string object and not a representation of null as far as pandas is concerned.
Now, let's turn this around and use these methods in a manner more like you will use them in practice. You can use Boolean masks directly as a ``Series`` or ``DataFrame`` index, which can be useful when trying to work with isolated missing (or present) values.
> **Exercise:**
```
# Try running example3[example3.notnull()].
# Before you do so, what do you expect to see?
```
**Key takeaway**: Both the `isnull()` and `notnull()` methods produce similar results when you use them in `DataFrames`: they show the results and the index of those results, which will help you enormously as you wrestle with your data.
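As a small sketch of that takeaway, here is `isnull()` applied to a `DataFrame` (the frame itself is made up for illustration):

```python
import numpy as np
import pandas as pd

# A small illustrative DataFrame with a couple of null values.
df_demo = pd.DataFrame({'a': [1.0, np.nan], 'b': [None, 'x']})

# isnull() returns a Boolean DataFrame of the same shape,
# with the same index and columns as the original.
mask = df_demo.isnull()
print(mask)
```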
### Dropping null values
Beyond identifying missing values, pandas provides a convenient means to remove null values from `Series` and `DataFrames`. (Particularly on large data sets, it is often more advisable to simply remove missing [NA] values from your analysis than to deal with them in other ways.) To see this in action, let's return to `example3`:
```
example3 = example3.dropna()
example3
```
Note that this should look like your output from `example3[example3.notnull()]`. The difference here is that, rather than just indexing on the masked values, `dropna` has removed those missing values from the `Series` `example3`.
Because `DataFrames` have two dimensions, they afford more options for dropping data.
```
example4 = pd.DataFrame([[1, np.nan, 7],
[2, 5, 8],
[np.nan, 6, 9]])
example4
```
(Did you notice that pandas upcast two of the columns to floats to accommodate the `NaNs`?)
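You can confirm that upcasting by inspecting the column dtypes (reconstructing `example4` here so the snippet stands alone):

```python
import numpy as np
import pandas as pd

example4 = pd.DataFrame([[1, np.nan, 7],
                         [2, 5, 8],
                         [np.nan, 6, 9]])

# Columns 0 and 1 each contain a NaN, so they were upcast to float64;
# column 2 has no missing values and stays an integer type.
print(example4.dtypes)
```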
You cannot drop a single value from a `DataFrame`, so you have to drop full rows or columns. Depending on what you are doing, you might want to do one or the other, and pandas gives you options for both. Because in data science, columns generally represent variables and rows represent observations, you are more likely to drop rows of data. The default setting for `dropna()` is to drop all rows that contain any null values:
```
example4.dropna()
```
If necessary, you can drop NA values from columns. Use `axis=1` to do so:
```
example4.dropna(axis='columns')
```
Notice that this can drop a lot of data that you might want to keep, particularly in smaller datasets. What if you just want to drop rows or columns that contain several or all null values? You specify those settings in `dropna` with the `how` and `thresh` parameters.
By default, `how='any'`. (If you would like to check for yourself or see what other parameters the method has, run `example4.dropna?` in a code cell). You could alternatively specify `how='all'` so as to drop only rows or columns that contain all null values. Let's expand our example `DataFrame` to see this in action.
```
example4[3] = np.nan
example4
```
> **Exercise:**
```
# How might you go about dropping just column 3?
# Hint: remember that you will need to supply both the axis parameter and the how parameter.
```
The `thresh` parameter gives you finer-grained control: you set the number of *non-null* values that a row or column needs to have in order to be kept:
```
example4.dropna(axis='rows', thresh=3)
```
Here, the first and last row have been dropped, because they contain only two non-null values.
### Filling null values
Depending on your dataset, it can sometimes make more sense to fill null values with valid ones rather than drop them. You could use `isnull` to do this in place, but that can be laborious, particularly if you have a lot of values to fill. Because this is such a common task in data science, pandas provides `fillna`, which returns a copy of the `Series` or `DataFrame` with the missing values replaced with one of your choosing. Let's create another example `Series` to see how this works in practice.
```
example5 = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
example5
```
You can fill all of the null entries with a single value, such as `0`:
```
example5.fillna(0)
```
> **Exercise:**
```
# What happens if you try to fill null values with a string, like ''?
```
You can **forward-fill** null values, which is to use the last valid value to fill a null:
```
example5.fillna(method='ffill')
```
You can also **back-fill** to propagate the next valid value backward to fill a null:
```
example5.fillna(method='bfill')
```
As you might guess, this works the same with `DataFrames`, but you can also specify an `axis` along which to fill null values:
```
example4
example4.fillna(method='ffill', axis=1)
```
Notice that when a previous value is not available for forward-filling, the null value remains.
> **Exercise:**
```
# What output does example4.fillna(method='bfill', axis=1) produce?
# What about example4.fillna(method='ffill') or example4.fillna(method='bfill')?
# Can you think of a longer code snippet to write that can fill all of the null values in example4?
```
You can be creative about how you use `fillna`. For example, let's look at `example4` again, but this time let's fill the missing values with the average of all of the values in the `DataFrame`:
```
example4.fillna(example4.mean())
```
Notice that column 3 is still valueless: `example4.mean()` computes a separate mean for each column, and because column 3 contains only null values, its mean is itself `NaN`, so there is nothing to fill it with.
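If you wanted column 3 filled as well, one option (an illustrative workaround, not the only one) is to compute a single grand mean over all values with `stack()`, which drops the nulls, and fill with that scalar:

```python
import numpy as np
import pandas as pd

example4 = pd.DataFrame([[1, np.nan, 7],
                         [2, 5, 8],
                         [np.nan, 6, 9]])
example4[3] = np.nan

# stack() flattens the frame into a Series and drops NaNs,
# so .mean() here is the mean of every non-null value.
grand_mean = example4.stack().mean()
filled = example4.fillna(grand_mean)
print(filled)
```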
> **Takeaway:** There are multiple ways to deal with missing values in your datasets. The specific strategy you use (removing them, replacing them, or even how you replace them) should be dictated by the particulars of that data. You will develop a better sense of how to deal with missing values the more you handle and interact with datasets.
## Removing duplicate data
> **Learning goal:** By the end of this subsection, you should be comfortable identifying and removing duplicate values from `DataFrames`.
In addition to missing data, you will often encounter duplicated data in real-world datasets. Fortunately, pandas provides an easy means of detecting and removing duplicate entries.
### Identifying duplicates: `duplicated`
You can easily spot duplicate values using the `duplicated` method in pandas, which returns a Boolean mask indicating whether an entry in a `DataFrame` is a duplicate of an earlier one. Let's create another example `DataFrame` to see this in action.
```
example6 = pd.DataFrame({'letters': ['A','B'] * 2 + ['B'],
'numbers': [1, 2, 1, 3, 3]})
example6
example6.duplicated()
```
### Dropping duplicates: `drop_duplicates`
`drop_duplicates` simply returns a copy of the data for which all of the `duplicated` values are `False`:
```
example6.drop_duplicates()
```
Both `duplicated` and `drop_duplicates` default to consider all columns, but you can specify that they examine only a subset of columns in your `DataFrame`:
```
example6.drop_duplicates(['letters'])
```
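By default both methods keep the *first* occurrence of each duplicate. If you would rather keep the last, `drop_duplicates` accepts a `keep` parameter (sketched here on the same `example6` data):

```python
import pandas as pd

example6 = pd.DataFrame({'letters': ['A', 'B'] * 2 + ['B'],
                         'numbers': [1, 2, 1, 3, 3]})

# keep='last' retains the final occurrence of each duplicated
# 'letters' value instead of the first one.
result = example6.drop_duplicates(['letters'], keep='last')
print(result)
```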
> **Takeaway:** Removing duplicate data is an essential part of almost every data-science project. Duplicate data can change the results of your analyses and give you spurious results!
## Combining datasets: merge and join
> **Learning goal:** By the end of this subsection, you should have a general knowledge of the various ways to combine `DataFrames`.
Your most interesting analyses will often come from data melded together from more than one source. Because of this, pandas provides several methods of merging and joining datasets to make this necessary job easier:
- **`pandas.merge`** connects rows in `DataFrames` based on one or more keys.
- **`pandas.concat`** concatenates or “stacks” together objects along an axis.
- The **`combine_first`** instance method enables you to splice together overlapping data to fill in missing values in one object with values from another.
Let's examine merging data first, because it will be the most familiar data operation to course attendees who are already familiar with SQL or other relational databases.
### Categories of joins
`merge` carries out several types of joins: *one-to-one*, *many-to-one*, and *many-to-many*. You use the same basic function call to implement all of them, and we will examine all three (because you will need all three at some point in your data delving, depending on the data). We will start with one-to-one joins because they are generally the simplest type of example.
#### One-to-one joins
Consider combining two `DataFrames` that contain different information about the same employees in a company:
```
df1 = pd.DataFrame({'employee': ['Gary', 'Stu', 'Mary', 'Sue'],
'group': ['Accounting', 'Marketing', 'Marketing', 'HR']})
df1
df2 = pd.DataFrame({'employee': ['Mary', 'Stu', 'Gary', 'Sue'],
'hire_date': [2008, 2012, 2017, 2018]})
df2
```
Combine this information into a single `DataFrame` using the `merge` function:
```
df3 = pd.merge(df1, df2)
df3
```
Pandas joined on the `employee` column because it was the only column common to both `df1` and `df2`. (Note also that the original indices of `df1` and `df2` were discarded by `merge`; this is generally the case with merges unless you conduct them by index, which we will discuss later on.)
#### Many-to-one joins
A many-to-one join is like a one-to-one join except that one of the two key columns contains duplicate entries. The `DataFrame` resulting from such a join will preserve those duplicate entries as appropriate:
```
df4 = pd.DataFrame({'group': ['Accounting', 'Marketing', 'HR'],
'supervisor': ['Carlos', 'Giada', 'Stephanie']})
df4
pd.merge(df3, df4)
```
The resulting `DataFrame` has an additional `supervisor` column. That column contains 'Giada' twice, even though she appears only once in `df4`, because more than one employee in the merged `DataFrame` works in the 'Marketing' group.
Note that we didn’t specify which column to join on. When you don't specify that information, `merge` uses the overlapping column names as the keys. However, that can be ambiguous; several columns might meet that condition. For that reason, it is a good practice to explicitly specify on which key to join. You can do this with the `on` parameter:
```
pd.merge(df3, df4, on='group')
```
#### Many-to-many joins
What happens if the key columns in both of the `DataFrames` you are joining contain duplicates? That gives you a many-to-many join:
```
df5 = pd.DataFrame({'group': ['Accounting', 'Accounting',
'Marketing', 'Marketing', 'HR', 'HR'],
'core_skills': ['math', 'spreadsheets', 'writing', 'communication',
'spreadsheets', 'organization']})
df5
pd.merge(df1, df5, on='group')
```
Again, in order to avoid ambiguity as to which column to join on, it is a good idea to explicitly tell `merge` which one to use with the `on` parameter.
#### `left_on` and `right_on` keywords
What if you need to merge two datasets with no shared column names? For example, what if you are using a dataset in which the employee name is labeled as 'name' rather than 'employee'? In such cases, you will need to use the `left_on` and `right_on` keywords in order to specify the column names on which to join:
```
df6 = pd.DataFrame({'name': ['Gary', 'Stu', 'Mary', 'Sue'],
'salary': [70000, 80000, 120000, 90000]})
df6
pd.merge(df1, df6, left_on="employee", right_on="name")
```
> **Exercise:**
```
# Using the documentation, can you figure out how to use .drop() to get rid of the 'name' column?
# Hint: You will need to supply two parameters to .drop()
```
#### `left_index` and `right_index` keywords
Sometimes it can be more advantageous to merge on an index rather than on a column. The `left_index` and `right_index` keywords make it possible to join by index. Let's revisit some of our earlier example `DataFrames` to see what this looks like in action.
```
df1a = df1.set_index('employee')
df1a
df2a = df2.set_index('employee')
df2a
```
To merge on the index, specify the `left_index` and `right_index` parameters in `merge`:
```
pd.merge(df1a, df2a, left_index=True, right_index=True)
```
> **Exercise:**
```
# What happens if you specify only left_index or right_index?
```
You can also use the `join` method for `DataFrames`, which produces the same effect but merges on indices by default:
```
df1a.join(df2a)
```
You can also mix and match `left_index`/`right_index` with `right_on`/`left_on`:
```
pd.merge(df1a, df6, left_index=True, right_on='name')
```
#### Set arithmetic for joins
Let's return to many-to-many joins for a moment. A consideration that is unique to them is the *arithmetic* of the join, specifically the set arithmetic we use for the join. To illustrate what we mean by this, let's restructure an old example `DataFrame`:
```
df5 = pd.DataFrame({'group': ['Engineering', 'Marketing', 'Sales'],
'core_skills': ['math', 'writing', 'communication']})
df5
pd.merge(df1, df5, on='group')
```
Notice that after we have restructured `df5` and then rerun the merge with `df1`, we have only two entries in the result. This is because we merged on `group` and 'Marketing' was the only entry that appeared in the `group` column of both `DataFrames`.
In effect, what we have gotten is the *intersection* of both `DataFrames`. This is known as the inner join in the database world, and although it is the default setting for `merge` we can certainly specify it:
```
pd.merge(df1, df5, on='group', how='inner')
```
The complement of the inner join is the outer join, which returns the *union* of the two `DataFrames`.
> **Exercise:**
```
# The keyword for performing an outer join is how='outer'. How would you perform it?
# What do you expect the output of an outer join of df1 and df5 to be?
```
Notice in your resulting `DataFrame` that not every row in `df1` and `df5` had a value that corresponds to the union of the key values (the 'group' column). Pandas fills in these missing values with `NaNs`.
Inner and outer joins are not your only options. A *left join* returns all of the rows in the first (left-side) `DataFrame` supplied to `merge`, along with the matching rows from the other `DataFrame` (and `NaN` values where no match exists):
```
pd.merge(df1, df5, how='left')
```
> **Exercise:**
```
# Now run the right merge between df1 and df5.
# What do you expect to see?
```
#### `suffixes` keyword: dealing with conflicting column names
Because you can join datasets, you will eventually join two with conflicting column names. Let's look at another example to see what we mean:
```
df7 = pd.DataFrame({'name': ['Gary', 'Stu', 'Mary', 'Sue'],
'rank': [1, 2, 3, 4]})
df7
df8 = pd.DataFrame({'name': ['Gary', 'Stu', 'Mary', 'Sue'],
'rank': [3, 1, 4, 2]})
df8
pd.merge(df7, df8, on='name')
```
Each column name in a `DataFrame` must be unique, so in cases where two joined `DataFrames` share column names (aside from the column serving as the key), the `merge` function automatically appends the suffix `_x` or `_y` to the conflicting column names in order to make them unique. In cases where it is best to control your column names, you can specify a custom suffix for `merge` to append through the `suffixes` keyword:
```
pd.merge(df7, df8, on='name', suffixes=['_left', '_right'])
```
Note that these suffixes are applied to every conflicting column, even when several columns conflict.
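For example, here is a sketch with two conflicting columns, `rank` and `score` (the latter invented for illustration); the suffixes are appended to both:

```python
import pandas as pd

left = pd.DataFrame({'name': ['Gary', 'Stu'],
                     'rank': [1, 2],
                     'score': [90, 85]})
right = pd.DataFrame({'name': ['Gary', 'Stu'],
                      'rank': [2, 1],
                      'score': [88, 91]})

# Every non-key column name that conflicts gets a suffix.
merged = pd.merge(left, right, on='name', suffixes=['_left', '_right'])
print(merged.columns.tolist())
```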
### Concatenation in NumPy
Concatenation in pandas is built off of the concatenation functionality for NumPy arrays. Here is what NumPy concatenation looks like:
- For one-dimensional arrays:
```
x = [1, 2, 3]
y = [4, 5, 6]
z = [7, 8, 9]
np.concatenate([x, y, z])
```
- For two-dimensional arrays:
```
x = [[1, 2],
[3, 4]]
np.concatenate([x, x], axis=1)
```
Notice that the `axis=1` parameter makes the concatenation occur along columns rather than rows. Concatenation in pandas looks similar to this.
### Concatenation in pandas
Pandas has a function, `pd.concat()`, that can be used for a simple concatenation of `Series` or `DataFrame` objects in a similar manner to `np.concatenate()` with ndarrays.
```
ser1 = pd.Series(['a', 'b', 'c'], index=[1, 2, 3])
ser2 = pd.Series(['d', 'e', 'f'], index=[4, 5, 6])
pd.concat([ser1, ser2])
```
It also concatenates higher-dimensional objects, such as ``DataFrames``:
```
df9 = pd.DataFrame({'A': ['a', 'c'],
'B': ['b', 'd']})
df9
pd.concat([df9, df9])
```
Notice that `pd.concat` has preserved the indexing even though that means that it has been duplicated. You can have the results re-indexed (and avoid potential confusion down the road) like so:
```
pd.concat([df9, df9], ignore_index=True)
```
By default, `pd.concat` concatenates row-wise within the `DataFrame` (that is, `axis=0` by default). You can specify the axis along which to concatenate:
```
pd.concat([df9, df9], axis=1)
```
Note that while pandas will display this without complaint, the duplicated column names can cause trouble down the road (for example, when you try to select a column by name), so it is usually best to avoid or rename them.
### Concatenation with joins
Just as you did with `merge` above, you can use inner and outer joins when concatenating `DataFrames` with different sets of column names.
```
df10 = pd.DataFrame({'A': ['a', 'd'],
'B': ['b', 'e'],
'C': ['c', 'f']})
df10
df11 = pd.DataFrame({'B': ['u', 'x'],
'C': ['v', 'y'],
'D': ['w', 'z']})
df11
pd.concat([df10, df11])
```
As we saw earlier, the default join for this is an outer join and entries for which no data is available are filled with `NaN` values. You can also do an inner join:
```
pd.concat([df10, df11], join='inner')
```
Another option is to directly specify the index of the remaining columns using the `join_axes` argument, which takes a list of index objects. Here, we will specify that the returned columns should be the same as those of the first input (`df10`):
```
pd.concat([df10, df11], join_axes=[df10.columns])
```
#### `append()`
Because direct array concatenation is so common, ``Series`` and ``DataFrame`` objects have an ``append`` method that can accomplish the same thing in fewer keystrokes. For example, rather than calling ``pd.concat([df9, df9])``, you can simply call ``df9.append(df9)``:
```
df9.append(df9)
```
**Important point**: Unlike the `append()` and `extend()` methods of Python lists, the `append()` method in pandas does not modify the original object. It instead creates a new object with the combined data.
> **Takeaway:** A large part of the value you can provide as a data scientist comes from connecting multiple, often disparate datasets to find new insights. Learning how to join and merge data is thus an essential part of your skill set.
## Exploratory statistics and visualization
> **Learning goal:** By the end of this subsection, you should be familiar with some of the ways to visually explore the data stored in `DataFrames`.
Often when probing a new data set, it is invaluable to get high-level information about what the dataset holds. Earlier in this section we discussed using methods such as `DataFrame.info`, `DataFrame.head`, and `DataFrame.tail` to examine some aspects of a `DataFrame`. While these methods are critical, they are on their own often insufficient to get enough information to know how to approach a new dataset. This is where exploratory statistics and visualizations for datasets come in.
To see what we mean in terms of gaining exploratory insight (both visually and numerically), let's dig into one of the datasets that come with the scikit-learn library, the Boston Housing Dataset (though you will load it from a CSV file):
```
df = pd.read_csv('housing_dataset.csv')
df.head()
```
This dataset contains information collected from the U.S. Census Bureau concerning housing in the area of Boston, Massachusetts, and was first published in 1978. The dataset has 13 columns:
- **CRIM**: Per-capita crime rate by town
- **ZN**: Proportion of residential land zoned for lots over 25,000 square feet
- **INDUS**: Proportion of non-retail business acres per town
- **CHAS**: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- **NOX**: Nitric oxides concentration (parts per 10 million)
- **RM**: Average number of rooms per dwelling
- **AGE**: Proportion of owner-occupied units built prior to 1940
- **DIS**: Weighted distances to five Boston employment centres
- **RAD**: Index of accessibility to radial highways
- **TAX**: Full-value property-tax rate per \$10,000
- **PTRATIO**: Pupil-teacher ratio by town
- **LSTAT**: Percent of lower-status portion of the population
- **MEDV**: Median value of owner-occupied homes in \$1,000s
One of the first methods we can use to better understand this dataset is `DataFrame.shape`:
```
df.shape
```
The dataset has 506 rows and 13 columns.
To get a better idea of the contents of each column we can use `DataFrame.describe`, which returns the maximum value, minimum value, mean, and standard deviation of the numeric values in each column, in addition to the quartiles for each column:
```
df.describe()
```
Because datasets can have so many columns in them, it can often be useful to transpose the results of `DataFrame.describe` to better use them.
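For instance, `DataFrame.describe` returns one row per statistic and one column per variable; transposing with `.T` flips that so each variable gets its own row (shown here on a small invented frame; on the housing data you would call `df.describe().T`):

```python
import pandas as pd

demo = pd.DataFrame({'x': [1.0, 2.0, 3.0], 'y': [10.0, 20.0, 30.0]})

# .describe() gives statistics as rows; .T makes each original
# column a row, which reads more easily with many columns.
summary = demo.describe().T
print(summary)
```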
Note that you can also examine specific descriptive statistics for columns without having to invoke `DataFrame.describe`:
```
df['MEDV'].mean()
df['MEDV'].max()
df['AGE'].median()
```
> **Exercise:**
```
# Now find the maximum value in df['AGE'].
```
Other information that you will often want to see is the relationship between different columns. You do this with the `DataFrame.groupby` method. For example, you could examine the average MEDV (median value of owner-occupied homes) for each value of AGE (proportion of owner-occupied units built prior to 1940):
```
df.groupby(['AGE'])['MEDV'].mean()
```
> **Exercise:**
```
# Now try to find the median value for AGE for each value of MEDV.
```
You can also apply a lambda function to each element of a `DataFrame` column by using the `apply` method. For example, say you wanted to create a new column that flagged a row if more than 50 percent of owner-occupied homes were built before 1940:
```
df['AGE_50'] = df['AGE'].apply(lambda x: x>50)
```
Once applied, you also see how many values returned true and how many returned false by using the `value_counts` method:
```
df['AGE_50'].value_counts()
```
You can also examine figures from the groupby statement you created earlier:
```
df.groupby(['AGE_50'])['MEDV'].mean()
```
You can also group by more than one variable, such as AGE_50 (the one you just created), CHAS (whether a town is on the Charles River), and RAD (an index measuring access to the Boston-area radial highways), and then evaluate each group for the average median home price in that group:
```
groupby_twovar=df.groupby(['AGE_50','RAD','CHAS'])['MEDV'].mean()
```
You can then see what values are in this stacked group of variables:
```
groupby_twovar
```
Let's take a moment to analyze these results in a little depth. The first row reports that communities with less than half of their houses built before 1940, with a highway-access index of 1, and that are not situated on the Charles River have a mean house price of \$24,667 (in 1970s dollars). The next row shows that for communities similar to those in the first row except for being located on the Charles River, the mean house price is \$50,000.
One insight that pops out from continuing down this list is that, all else being equal, being located next to the Charles River can significantly increase the value of newer housing stock. The story is more ambiguous for communities dominated by older houses: proximity to the Charles significantly increases home prices in only one community (presumably one farther from the city); for all others, being situated on the river either provides a modest increase in value or actually decreases mean home prices.
While groupings like this can be a great way to begin to interrogate your data, you might not care for the "tall" format it comes in. In that case, you can unstack the data into a "wide" format:
```
groupby_twovar.unstack()
```
> **Exercise:**
```
# How could you use groupby to get a sense of the proportion
# of residential land zoned for lots over 25,000 sq.ft.,
# the proportion of non-retail business acres per town,
# and the distance of towns from employment centers in Boston?
```
It is also often valuable to know how many unique values a column has in it with the `nunique` method:
```
df['CHAS'].nunique()
```
Complementary to that, you will also likely want to know what those unique values are, which is where the `unique` method helps:
```
df['CHAS'].unique()
```
You can use the `value_counts` method to see how many of each unique value there are in a column:
```
df['CHAS'].value_counts()
```
Or you can easily plot a bar graph to visually see the breakdown:
```
%matplotlib inline
df['CHAS'].value_counts().plot(kind='bar')
```
Note that the IPython magic command `%matplotlib inline` enables you to view the chart inline.
Let's pull back to the dataset as a whole for a moment. Two major things that you will look for in almost any dataset are trends and relationships. A typical relationship between variables to explore is the Pearson correlation, or the extent to which two variables are linearly related. The `corr` method will show this in table format for all of the columns in a `DataFrame`:
```
df.corr(method='pearson')
```
Suppose you just wanted to look at the correlations between all of the columns and just one variable? Let's examine just the correlation between all other variables and the percentage of owner-occupied houses built before 1940 (AGE). We will do this by accessing the column by index number:
```
corr = df.corr(method='pearson')
corr_with_homevalue = corr.iloc[-1]
corr_with_homevalue[corr_with_homevalue.argsort()[::-1]]
```
With the correlations arranged in descending order, it's easy to start to see some patterns. Correlating AGE with a variable we created from AGE is a trivial correlation. However, it is interesting to note that the percentage of older housing stock in communities strongly correlates with air pollution (NOX) and the proportion of non-retail business acres per town (INDUS). This tells us that, at least in 1978 metro Boston, older towns are more industrial.
Graphically, we can see the correlations by using a heatmap from the Seaborn library:
```
import seaborn as sns
sns.heatmap(df.corr(),cmap=sns.cubehelix_palette(20, light=0.95, dark=0.15))
```
Histograms are another valuable tool for investigating your data. For example, what is the overall distribution of prices of owner-occupied houses in the Boston area?
```
import matplotlib.pyplot as plt
plt.hist(df['MEDV'])
```
The default bin size for the matplotlib histogram (essentially the width of the buckets that each histogram bar covers) is pretty large and might mask smaller details. To get a finer-grained view of the MEDV column, you can manually increase the number of bins in the histogram:
```
plt.hist(df['MEDV'],bins=50)
```
Seaborn has a somewhat more attractive version of the standard matplotlib histogram: the distribution plot. This is a combination histogram and kernel density estimate (KDE) plot (essentially a smoothed histogram):
```
sns.distplot(df['MEDV'])
```
Another commonly used plot is the Seaborn jointplot, which combines histograms for two columns along with a scatterplot:
```
sns.jointplot(df['RM'], df['MEDV'], kind='scatter')
```
Unfortunately, many of the dots print over each other. You can help address this by adding some alpha blending, a parameter that sets the transparency of the dots so that concentrations of them drawing over one another become apparent:
```
sns.jointplot(df['RM'], df['MEDV'], kind='scatter', alpha=0.3)
```
Another way to see patterns in your data is with a two-dimensional KDE plot. Darker colors here represent a higher concentration of data points:
```
sns.kdeplot(df['RM'], df['MEDV'], shade=True)
```
Note that while the KDE plot is very good at showing concentrations of data points, finer structures like linear relationships (such as the clear relationship between the number of rooms in homes and the house price) are lost in the KDE plot.
Finally, the pairplot in Seaborn allows you to see scatterplots and histograms for several columns in one table. Here we have played with some of the keywords to produce a more sophisticated and easier-to-read pairplot that incorporates both alpha blending and linear regression lines for the scatterplots.
```
sns.pairplot(df[['RM', 'AGE', 'LSTAT', 'DIS', 'MEDV']], kind="reg", plot_kws={'line_kws':{'color':'red'}, 'scatter_kws': {'alpha': 0.1}})
```
Visualization is the start of the really cool, fun part of data science. So play around with these visualization tools and see what you can learn from the data!
> **Takeaway:** An old joke goes: “What does a data scientist see when they look at a dataset? A bunch of numbers.” There is more than a little truth in that joke. Visualization is often the key to finding patterns and correlations in your data. While visualization often cannot deliver precise results, it can point you in the right direction to ask better questions and efficiently find value in the data.
# Autoencoder
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%matplotlib widget
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['figure.max_open_warning'] = 0
```
Fetch our tools:
```
from lib.nn import Network, Layer, IdentityLayer, AffineLayer, MapLayer
from lib.nnbench import NNBench
from lib.nnvis import NNVis
```
A tool to plot the transfer function
```
def plot_ADC(net):
x = np.arange(-0.125, 1.125, 0.001).reshape(-1,1)
outs = net(x)
fig, ax = plt.subplots()
ax.plot(x, outs, label="bit")
    ax.set_xlabel('input') # Add an x-label to the axes.
ax.set_ylabel('output') # Add a y-label to the axes.
ax.set_title("ADC output") # Add a title to the axes.
#ax.set_yscale('log')
ax.legend() # Add a legend.
```
# Experiment 1: Training to identity
First off, let's see how well a randomly-initialized affine layer can learn the identity transform
```
net = Network()
net.extend(AffineLayer(16,16))
bench = NNBench(net)
bench.checkpoint_net()
bench.accept_source_of_truth(((v, v) for v in (np.random.randn(16) for i in range(1<<31))))
#bench.training_batch_cluster(2)
vis = NNVis(bench)
net.eta = 0.032
vis.plot_learning(1000, 16)
bench.rollback_net() if False else bench.randomize_net()
rates = np.logspace(-4.5, -4, base=2, num=32)
#cube = bench.learn_loss_cube(1000, rates)
#vis.plot_loss_cube()
bench.rollback_net()
learned_track = bench.learn_track(1000, 16)
traja = bench.analyze_learning_track(learned_track)
vis.plot_trajectory(traja)
```
# Adopt a manifold
Let us choose a latent space for our input vectors, of
$$
y = a(x-b)^c \\
x \in [1,2) \\
a, c \in [1/2, 2] \\
b \in [-1,1]
$$
```
def bend_gen():
while True:
a, c = np.exp2(2 * np.random.rand(2) - 1)
b = 2 * np.random.rand() - 1
#print(a, b, c)
yield np.array([a * (x - b) ** c for x in np.arange(1,2,1/16)]) \
+ 0.0 * np.random.randn(16)
def doublem(g):
for x in g:
yield x, x
bench.accept_source_of_truth(doublem(bend_gen()))
bench.training_batch_cluster(2)
it = bend_gen()
ins = [next(it) for i in range(16)]
fig, ax = plt.subplots()
_ = [ax.plot(ins[i], label=f"{i}") for i in range(len(ins))]
ax.set_xlabel('x')
ax.set_ylabel('y')
_ = ax.set_title("examples")
#ax.set_yscale('log')
#ax.legend()
bench.randomize_net()
#%%prun -l 30
net.eta = 0.001
vis.plot_learning(10000, 16)
net.eta = 0.0005
vis.plot_learning(20000, 16)
bench.rollback_net()
net.eta = 0.001
learned_track = bench.learn_track(100000, 16)
traja = bench.analyze_learning_track(learned_track)
vis.plot_trajectory(traja)
vis.knobs_plot_learning(1000, 16)
bench.rollback_net() if True else bench.randomize_net()
rates = np.logspace(-7, -5.5, base=2, num=100)
cube = bench.learn_loss_cube(1000, rates)
vis.plot_loss_cube()
net = Network()
net.extend(AffineLayer(8,8))
net.extend(MapLayer(np.tanh, lambda d: 1.0 - np.tanh(d)**2))
nets = []  # list of candidate networks (was not defined above)
nets.append(net)
```
# Manifold
We adopt the a
```
%whos
```
# Session 2b: Monte Carlo Methods in Grid World
## Colab Setup
```
#uncomment only if you're running from google colab
# !git clone https://github.com/Datatouille/rl-workshop
# !mv rl-workshop/* .
# !ls
```
## Imports
```
import numpy as np
from collections import defaultdict
import matplotlib.pyplot as plt
#cross check with our solutions once you finish
# from solutions.agents import GridworldAgent
from solutions.environments import Gridworld
```
## Fill in The Code
```
import numpy as np
from collections import defaultdict
import sys
"""
Coding assignment order:
1. get_v
2. get_q
3. mc_control_q
4. mc_control_glie
"""
class GridworldAgent:
def __init__(self, env, policy, gamma = 0.9,
start_epsilon = 0.9, end_epsilon = 0.1, epsilon_decay = 0.9):
self.env = env
self.n_action = len(self.env.action_space)
self.policy = policy
self.gamma = gamma
self.v = dict.fromkeys(self.env.state_space,0)
self.n_v = dict.fromkeys(self.env.state_space,0)
self.q = defaultdict(lambda: np.zeros(self.n_action))
self.n_q = defaultdict(lambda: np.zeros(self.n_action))
self.start_epsilon = start_epsilon
self.end_epsilon = end_epsilon
self.epsilon_decay = epsilon_decay
def get_epsilon(self,n_episode):
epsilon = max(self.start_epsilon * (self.epsilon_decay**n_episode),self.end_epsilon)
return(epsilon)
def get_v(self,start_state,epsilon = 0.):
episode = self.run_episode(start_state,epsilon)
"""
Write the code to calculate the state value function of a state
given a deterministic policy.
"""
v=0
return(v)
def get_q(self, start_state, first_action, epsilon=0.):
episode = self.run_episode(start_state,epsilon,first_action)
"""
Write the code to calculate the action function of a state
given a deterministic policy.
"""
q=0
return(q)
def select_action(self,state,epsilon):
probs = np.ones(self.n_action) * (epsilon / self.n_action)
best_action = self.policy[state]
probs[best_action] = 1 - epsilon + (epsilon / self.n_action)
action = np.random.choice(np.arange(self.n_action),p=probs)
return(action)
def print_policy(self):
for i in range(self.env.sz[0]):
print('\n----------')
for j in range(self.env.sz[1]):
p=self.policy[(i,j)]
out = self.env.action_text[p]
print(f'{out} |',end='')
def print_v(self, decimal = 1):
for i in range(self.env.sz[0]):
print('\n---------------')
for j in range(self.env.sz[1]):
out=np.round(self.v[(i,j)],decimal)
print(f'{out} |',end='')
def run_episode(self, start, epsilon, first_action = None):
result = []
state = self.env.reset(start)
#dictate first action to iterate q
if first_action is not None:
action = first_action
next_state,reward,done = self.env.step(action)
result.append((state,action,reward,next_state,done))
state = next_state
if done: return(result)
while True:
action = self.select_action(state,epsilon)
next_state,reward,done = self.env.step(action)
result.append((state,action,reward,next_state,done))
state = next_state
if done: break
return(result)
def update_policy_q(self):
for state in self.env.state_space:
self.policy[state] = np.argmax(self.q[state])
def mc_predict_v(self,n_episode=10000,first_visit=True):
for t in range(n_episode):
traversed = []
e = self.get_epsilon(t)
transitions = self.run_episode(self.env.start,e)
states,actions,rewards,next_states,dones = zip(*transitions)
for i in range(len(transitions)):
if first_visit and (states[i] not in traversed):
traversed.append(states[i])
self.n_v[states[i]]+=1
discounts = np.array([self.gamma**j for j in range(len(transitions)+1)])
self.v[states[i]]+= sum(rewards[i:]*discounts[:-(1+i)])
for state in self.env.state_space:
if state != self.env.goal:
self.v[state] = self.v[state] / self.n_v[state]
else:
self.v[state] = 0
def mc_predict_q(self,n_episode=10000,first_visit=True):
for t in range(n_episode):
traversed = []
e = self.get_epsilon(t)
transitions = self.run_episode(self.env.start,e)
states,actions,rewards,next_states,dones = zip(*transitions)
for i in range(len(transitions)):
if first_visit and ((states[i],actions[i]) not in traversed):
traversed.append((states[i],actions[i]))
self.n_q[states[i]][actions[i]]+=1
discounts = np.array([self.gamma**j for j in range(len(transitions)+1)])
self.q[states[i]][actions[i]]+= sum(rewards[i:]*discounts[:-(1+i)])
elif not first_visit:
self.n_q[states[i]][actions[i]]+=1
discounts = np.array([self.gamma**j for j in range(len(transitions)+1)])
self.q[states[i]][actions[i]]+= sum(rewards[i:]*discounts[:-(1+i)])
#print(self.q,self.n_q)
for state in self.env.state_space:
for action in range(self.n_action):
if state != self.env.goal:
self.q[state][action] = self.q[state][action] / self.n_q[state][action]
else:
self.q[state][action] = 0
def mc_control_q(self,n_episode=10000,first_visit=True):
"""
Write the code to perform Monte Carlo Control
Hint: You just need to do prediction then update the policy
"""
pass
def mc_control_glie(self,n_episode=10000,first_visit=True,lr=0.):
"""
Taking hints from the mc_predict_q and mc_control_q methods, write the code to
perform GLIE Monte Carlo control.
"""
pass
```
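The `select_action` method above constructs an ε-greedy distribution over actions. With toy numbers (4 actions, ε = 0.2, best action 2) the probabilities work out as follows:

```python
import numpy as np

n_action, epsilon, best_action = 4, 0.2, 2
probs = np.ones(n_action) * (epsilon / n_action)        # each action gets eps/n
probs[best_action] = 1 - epsilon + (epsilon / n_action)  # greedy action gets the rest

print(probs)   # [0.05 0.05 0.85 0.05] -- sums to 1
```

The greedy action keeps most of the mass, but every action retains a nonzero probability, which is what keeps the agent exploring.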

Source: [Sutton and Barto](https://cdn.preterhuman.net/texts/science_and_technology/artificial_intelligence/Reinforcement%20Learning%20%20An%20Introduction%20-%20Richard%20S.%20Sutton%20,%20Andrew%20G.%20Barto.pdf)
## Solving Reinforcement Learning Problems - Monte Carlo Methods
There are two main approaches to solving reinforcement learning problems: **model-based** and **model-free**. A model-based approach assumes we have some or full knowledge of how our environment works, whereas a model-free approach relies on the agent exploring the environment without any prior knowledge.
In this workshop, we will focus on model-free approaches, which usually involve two steps: evaluating the state or action value function based on the agent's interactions with the environment (known as the **prediction problem**), and changing the agent's policy to bring it closer to an optimal policy (known as the **control problem**).
We start with the Monte Carlo Methods aka the trial-and-error-until-you-get-rich-or-broke methods.

### Prediction Problem
```
#stochastic environment
env = Gridworld(wind_p=0.2)
#initial policy
policy = {(0, 0): 3,
(0, 1): 3,
(0, 2): 2,
(1, 0): 3,
(1, 1): 3,
(1, 2): 0,
(2, 0): 3,
(2, 1): 0,
(2, 2): 0}
#stochastic agent - epsilon greedy with decays
a = GridworldAgent(env, policy = policy, gamma = 0.9,
start_epsilon=0.9,end_epsilon=0.3,epsilon_decay=0.9)
print('Reward Grid')
env.print_reward()
print('\n')
print('Policy: Reach Goal ASAP')
a.print_policy()
```
#### Monte Carlo State Value Prediction

```
a.mc_predict_v()
a.print_v()
```
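Under the hood, the first-visit estimate is just the average of the discounted returns observed from a state's first visit onward. A hand-computed toy version (made-up rewards, γ = 0.9):

```python
import numpy as np

gamma = 0.9
# two hypothetical episodes: rewards seen after first visiting state s
episodes = [[-1, -1, 10], [-1, 10]]

returns = [sum(r * gamma**t for t, r in enumerate(ep)) for ep in episodes]
v_s = np.mean(returns)                 # first-visit MC estimate of V(s)
print(returns, v_s)                    # returns ~ [6.2, 8.0], estimate ~ 7.1
```

This is exactly what `mc_predict_v` accumulates in `self.v` and `self.n_v` before dividing at the end.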
#### Monte Carlo Action Value Prediction

```
a.mc_predict_q(first_visit=False)
print(f'\nActions: {env.action_text}')
for i in a.q: print(i,a.q[i])
```
### Control Problem
```
#stochastic environment
env = Gridworld(wind_p=0.2)
#initial policy
policy = {(0, 0): 3,
(0, 1): 3,
(0, 2): 2,
(1, 0): 3,
(1, 1): 3,
(1, 2): 0,
(2, 0): 3,
(2, 1): 0,
(2, 2): 0}
#stochastic agent - epsilon greedy with decays
a = GridworldAgent(env, policy = policy, gamma = 0.9,
start_epsilon=0.9,end_epsilon=0.3,epsilon_decay=0.9)
print('Reward Grid')
env.print_reward()
print('\n')
print('Policy: Reach Goal ASAP')
a.print_policy()
```
#### All-visit Monte Carlo
**Coding Assignment** Implement `mc_control_q` function of `agent.py` using either all-visit or first-visit Monte Carlo.
```
#reset
a.policy = policy
a.q = defaultdict(lambda: np.zeros(a.n_action))
a.n_q = defaultdict(lambda: np.zeros(a.n_action))
a.mc_control_q(n_episode = 1000,first_visit=False)
a.print_policy()
print(f'\nActions: {env.action_text}')
for i in a.q: print(i,a.q[i])
```
#### First-visit Monte Carlo
```
#reset
a.policy = policy
a.q = defaultdict(lambda: np.zeros(a.n_action))
a.n_q = defaultdict(lambda: np.zeros(a.n_action))
a.mc_control_q(n_episode = 1000,first_visit=True)
a.print_policy()
print(f'\nActions: {env.action_text}')
for i in a.q: print(i,a.q[i])
```
#### Greedy in the Limit with Infinite Exploration (GLIE)

**Coding Assignment** Implement `mc_control_glie` function of `agent.py`
```
#reset
a.policy = policy
a.q = defaultdict(lambda: np.zeros(a.n_action))
a.n_q = defaultdict(lambda: np.zeros(a.n_action))
a.mc_control_glie(n_episode = 1000)
a.print_policy()
print(f'\nActions: {env.action_text}')
for i in a.q: print(i,a.q[i])
```
#### GLIE with Constant Learning Rate

```
#reset
a.policy = policy
a.q = defaultdict(lambda: np.zeros(a.n_action))
a.n_q = defaultdict(lambda: np.zeros(a.n_action))
a.mc_control_glie(n_episode = 1000, lr=0.1)
a.print_policy()
print(f'\nActions: {env.action_text}')
for i in a.q: print(i,a.q[i])
```
# M-Estimators for Robust Linear Modeling
```
%matplotlib inline
from statsmodels.compat import lmap
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
```
* An M-estimator minimizes the function
$$Q(e_i, \rho) = \sum_i~\rho \left (\frac{e_i}{s}\right )$$
where $\rho$ is a symmetric function of the residuals
* The effect of $\rho$ is to reduce the influence of outliers
* $s$ is an estimate of scale.
* The robust estimates $\hat{\beta}$ are computed by the iteratively re-weighted least squares algorithm
* We have several choices available for the weighting functions to be used
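For a location estimate, the IRLS idea fits in a few lines. The sketch below is a simplified hand-rolled version (not statsmodels' implementation): the scale is fixed at 1 for brevity, and Huber weights $w(e) = \min(1, t/|e|)$ are used.

```python
import numpy as np

def huber_weights(e, t=1.345):
    # w(e) = 1 for |e| <= t, t/|e| otherwise
    e = np.abs(e)
    return np.where(e <= t, 1.0, t / np.maximum(e, 1e-12))

x = np.array([1., 2., 3., 4., 500.])
mu = x.mean()                          # start from the non-robust mean (102)
for _ in range(50):
    w = huber_weights(x - mu)
    mu = np.sum(w * x) / np.sum(w)     # one IRLS step = weighted mean
print(mu)                              # settles at 3.0
```

Each pass replaces the estimate by a weighted mean whose weights shrink as residuals grow; the outlier at 500 is almost entirely discounted and the iteration settles at 3.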
```
norms = sm.robust.norms
def plot_weights(support, weights_func, xlabels, xticks):
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(support, weights_func(support))
ax.set_xticks(xticks)
ax.set_xticklabels(xlabels, fontsize=16)
ax.set_ylim(-.1, 1.1)
return ax
```
### Andrew's Wave
```
help(norms.AndrewWave.weights)
a = 1.339
support = np.linspace(-np.pi*a, np.pi*a, 100)
andrew = norms.AndrewWave(a=a)
plot_weights(support, andrew.weights, ['$-\pi*a$', '0', '$\pi*a$'], [-np.pi*a, 0, np.pi*a]);
```
### Hampel's 17A
```
help(norms.Hampel.weights)
c = 8
support = np.linspace(-3*c, 3*c, 1000)
hampel = norms.Hampel(a=2., b=4., c=c)
plot_weights(support, hampel.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
```
### Huber's t
```
help(norms.HuberT.weights)
t = 1.345
support = np.linspace(-3*t, 3*t, 1000)
huber = norms.HuberT(t=t)
plot_weights(support, huber.weights, ['-3*t', '0', '3*t'], [-3*t, 0, 3*t]);
```
### Least Squares
```
help(norms.LeastSquares.weights)
support = np.linspace(-3, 3, 1000)
lst_sq = norms.LeastSquares()
plot_weights(support, lst_sq.weights, ['-3', '0', '3'], [-3, 0, 3]);
```
### Ramsay's Ea
```
help(norms.RamsayE.weights)
a = .3
support = np.linspace(-3*a, 3*a, 1000)
ramsay = norms.RamsayE(a=a)
plot_weights(support, ramsay.weights, ['-3*a', '0', '3*a'], [-3*a, 0, 3*a]);
```
### Trimmed Mean
```
help(norms.TrimmedMean.weights)
c = 2
support = np.linspace(-3*c, 3*c, 1000)
trimmed = norms.TrimmedMean(c=c)
plot_weights(support, trimmed.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
```
### Tukey's Biweight
```
help(norms.TukeyBiweight.weights)
c = 4.685
support = np.linspace(-3*c, 3*c, 1000)
tukey = norms.TukeyBiweight(c=c)
plot_weights(support, tukey.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
```
### Scale Estimators
* Robust estimates of the location
```
x = np.array([1, 2, 3, 4, 500])
```
* The mean is not a robust estimator of location
```
x.mean()
```
* The median, on the other hand, is a robust estimator with a breakdown point of 50%
```
np.median(x)
```
* Analogously for the scale
* The standard deviation is not robust
```
x.std()
```
Median Absolute Deviation
$$ \text{median}_i\,\left| X_i - \text{median}_j(X_j) \right| $$
Standardized Median Absolute Deviation is a consistent estimator for $\hat{\sigma}$
$$\hat{\sigma}=K \cdot MAD$$
where $K$ depends on the distribution. For the normal distribution for example,
$$K = \Phi^{-1}(.75)$$
```
stats.norm.ppf(.75)
print(x)
sm.robust.scale.mad(x)
np.array([1,2,3,4,5.]).std()
```
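The MAD for this sample can be verified by hand: the median is 3, the absolute deviations are [2, 1, 0, 1, 497], so the raw MAD is 1, and standardizing by $1/\Phi^{-1}(0.75)$ gives roughly 1.4826:

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 500])
mad = np.median(np.abs(x - np.median(x)))   # median of [2, 1, 0, 1, 497] -> 1.0
print(mad / stats.norm.ppf(.75))            # standardized MAD, ~1.4826
```

The outlier at 500 moves the raw MAD not at all, which is exactly its appeal.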
Another robust estimator of scale is the Interquartile Range (IQR)
$$\left(\hat{X}_{0.75} - \hat{X}_{0.25}\right),$$
where $\hat{X}_{p}$ is the sample p-th quantile and $K$ depends on the distribution.
The standardized IQR, given by $K \cdot \text{IQR}$ for
$$K = \frac{1}{\Phi^{-1}(.75) - \Phi^{-1}(.25)} \approx 0.74,$$
is a consistent estimator of the standard deviation for normal data.
```
sm.robust.scale.iqr(x)
```
The IQR is less robust than the MAD in the sense that it has a lower breakdown point: it can withstand 25\% outlying observations before being completely ruined, whereas the MAD can withstand 50\% outlying observations. However, the IQR is better suited for asymmetric distributions.
Yet another robust estimator of scale is the $Q_n$ estimator, introduced in Rousseeuw & Croux (1993), 'Alternatives to the Median Absolute Deviation'. Then $Q_n$ estimator is given by
$$
Q_n = K \left\lbrace \vert X_{i} - X_{j}\vert : i<j\right\rbrace_{(h)}
$$
where $h\approx (1/4){{n}\choose{2}}$ and $K$ is a given constant. In words, the $Q_n$ estimator is the normalized $h$-th order statistic of the absolute differences of the data. The normalizing constant $K$ is usually chosen as 2.219144, to make the estimator consistent for the standard deviation in the case of normal data. The $Q_n$ estimator has a 50\% breakdown point and a 82\% asymptotic efficiency at the normal distribution, much higher than the 37\% efficiency of the MAD.
```
sm.robust.scale.qn_scale(x)
```
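The definition can be checked directly on this tiny sample with a naive $O(n^2)$ computation (statsmodels uses a faster algorithm and finite-sample correction factors, so results on small samples may differ slightly):

```python
import itertools
import numpy as np

x = np.array([1, 2, 3, 4, 500])
diffs = sorted(abs(a - b) for a, b in itertools.combinations(x, 2))
k = len(x) // 2 + 1
h = k * (k - 1) // 2                 # h = C(floor(n/2)+1, 2); here h = 3
qn = 2.219144 * diffs[h - 1]         # h-th order statistic, scaled by K
print(diffs, qn)
```

For this sample the sorted pairwise differences start [1, 1, 1, 2, 2, 3, ...], so the 3rd order statistic is 1 and the scaled estimate is simply the constant $K$ itself.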
* The default for Robust Linear Models is MAD
* another popular choice is Huber's proposal 2
```
np.random.seed(12345)
fat_tails = stats.t(6).rvs(40)
kde = sm.nonparametric.KDEUnivariate(fat_tails)
kde.fit()
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.density);
print(fat_tails.mean(), fat_tails.std())
print(stats.norm.fit(fat_tails))
print(stats.t.fit(fat_tails, f0=6))
huber = sm.robust.scale.Huber()
loc, scale = huber(fat_tails)
print(loc, scale)
sm.robust.mad(fat_tails)
sm.robust.mad(fat_tails, c=stats.t(6).ppf(.75))
sm.robust.scale.mad(fat_tails)
```
### Duncan's Occupational Prestige data - M-estimation for outliers
```
from statsmodels.graphics.api import abline_plot
from statsmodels.formula.api import ols, rlm
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
print(prestige.head(10))
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(211, xlabel='Income', ylabel='Prestige')
ax1.scatter(prestige.income, prestige.prestige)
xy_outlier = prestige.loc['minister', ['income','prestige']]
ax1.annotate('Minister', xy_outlier, xy_outlier+1, fontsize=16)
ax2 = fig.add_subplot(212, xlabel='Education',
ylabel='Prestige')
ax2.scatter(prestige.education, prestige.prestige);
ols_model = ols('prestige ~ income + education', prestige).fit()
print(ols_model.summary())
infl = ols_model.get_influence()
student = infl.summary_frame()['student_resid']
print(student)
print(student.loc[np.abs(student) > 2])
print(infl.summary_frame().loc['minister'])
sidak = ols_model.outlier_test('sidak')
sidak.sort_values('unadj_p', inplace=True)
print(sidak)
fdr = ols_model.outlier_test('fdr_bh')
fdr.sort_values('unadj_p', inplace=True)
print(fdr)
rlm_model = rlm('prestige ~ income + education', prestige).fit()
print(rlm_model.summary())
print(rlm_model.weights)
```
### Hertzprung Russell data for Star Cluster CYG 0B1 - Leverage Points
* Data is on the luminosity and temperature of 47 stars in the direction of Cygnus.
```
dta = sm.datasets.get_rdataset("starsCYG", "robustbase", cache=True).data
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, xlabel='log(Temp)', ylabel='log(Light)', title='Hertzsprung-Russell Diagram of Star Cluster CYG OB1')
ax.scatter(*dta.values.T)
# highlight outliers
e = Ellipse((3.5, 6), .2, 1, alpha=.25, color='r')
ax.add_patch(e);
ax.annotate('Red giants', xy=(3.6, 6), xytext=(3.8, 6),
arrowprops=dict(facecolor='black', shrink=0.05, width=2),
horizontalalignment='left', verticalalignment='bottom',
clip_on=True, # clip to the axes bounding box
fontsize=16,
)
# annotate these with their index
for i,row in dta.loc[dta['log.Te'] < 3.8].iterrows():
ax.annotate(i, row, row + .01, fontsize=14)
xlim, ylim = ax.get_xlim(), ax.get_ylim()
from IPython.display import Image
Image(filename='star_diagram.png')
y = dta['log.light']
X = sm.add_constant(dta['log.Te'], prepend=True)
ols_model = sm.OLS(y, X).fit()
abline_plot(model_results=ols_model, ax=ax)
rlm_mod = sm.RLM(y, X, sm.robust.norms.TrimmedMean(.5)).fit()
abline_plot(model_results=rlm_mod, ax=ax, color='red')
```
* Why? Because M-estimators are not robust to leverage points.
```
infl = ols_model.get_influence()
h_bar = 2*(ols_model.df_model + 1 )/ols_model.nobs
hat_diag = infl.summary_frame()['hat_diag']
hat_diag.loc[hat_diag > h_bar]
sidak2 = ols_model.outlier_test('sidak')
sidak2.sort_values('unadj_p', inplace=True)
print(sidak2)
fdr2 = ols_model.outlier_test('fdr_bh')
fdr2.sort_values('unadj_p', inplace=True)
print(fdr2)
```
* Let's delete that line
```
l = ax.lines[-1]
l.remove()
del l
weights = np.ones(len(X))
weights[X[X['log.Te'] < 3.8].index.values - 1] = 0
wls_model = sm.WLS(y, X, weights=weights).fit()
abline_plot(model_results=wls_model, ax=ax, color='green')
```
* MM-estimators are well suited to this type of problem; unfortunately, we do not have them in statsmodels yet.
* They are being worked on, but this gives a good excuse to look at the R cell magics in the notebook.
```
yy = y.values[:,None]
xx = X['log.Te'].values[:,None]
```
**Note**: The R code and the results in this notebook have been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and robustbase 0.93.
```ipython
%load_ext rpy2.ipython
%R library(robustbase)
%Rpush yy xx
%R mod <- lmrob(yy ~ xx);
%R params <- mod$coefficients;
%Rpull params
```
```ipython
%R print(mod)
```
```
Call:
lmrob(formula = yy ~ xx)
\--> method = "MM"
Coefficients:
(Intercept) xx
-4.969 2.253
```
```
params = [-4.969387980288108, 2.2531613477892365] # Computed using R
print(params[0], params[1])
abline_plot(intercept=params[0], slope=params[1], ax=ax, color='red')
```
### Exercise: Breakdown points of M-estimator
```
np.random.seed(12345)
nobs = 200
beta_true = np.array([3, 1, 2.5, 3, -4])
X = np.random.uniform(-20,20, size=(nobs, len(beta_true)-1))
# stack a constant in front
X = sm.add_constant(X, prepend=True) # np.c_[np.ones(nobs), X]
mc_iter = 500
contaminate = .25 # percentage of response variables to contaminate
all_betas = []
for i in range(mc_iter):
y = np.dot(X, beta_true) + np.random.normal(size=200)
random_idx = np.random.randint(0, nobs, size=int(contaminate * nobs))
y[random_idx] = np.random.uniform(-750, 750)
beta_hat = sm.RLM(y, X).fit().params
all_betas.append(beta_hat)
all_betas = np.asarray(all_betas)
se_loss = lambda x : np.linalg.norm(x, ord=2)**2
se_beta = lmap(se_loss, all_betas - beta_true)
```
#### Squared error loss
```
np.array(se_beta).mean()
all_betas.mean(0)
beta_true
se_loss(all_betas.mean(0) - beta_true)
```
http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
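For intuition, here is a minimal from-scratch sketch of what a 3-nearest-neighbour majority vote does (toy points and labels, not the data below):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    d = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)  # distances
    nearest = np.argsort(d)[:k]                    # indices of the k closest
    votes = Counter(np.asarray(y_train)[nearest])  # count their labels
    return votes.most_common(1)[0][0]              # majority label

X_toy = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6]]
y_toy = [32, 32, 64, 128, 128]
print(knn_predict(X_toy, y_toy, [0.2, 0.2]))       # two of three neighbours say 32
```

`KNeighborsClassifier` adds efficient neighbour search and tie-breaking on top of this idea, and `kneighbors`/`predict_proba` expose the distances and vote fractions directly.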
```
from sklearn.neighbors import KNeighborsClassifier
X= [[0.841509434,0.699166,0.845695,0,0.1,0.156,0.967049],
[0.749356223,1.611461,0.736916,0,0.2,0.3741,2.693249],
[0.263510732,0.433383,0.130578,0,0.05,0.0897,0.451053],
[0.245054945,3.621318,0.469305,0,0.55,0.8439,8.337038],
[0.163020833,1.659765,0.985557,0,0.15,0.3789,2.231894],
[0.017117117,0.89968,0.867267,0,0.15,0.2176,1.1261],
[0.016666667,0.915847,0.866627,0,0.15,0.2164,1.143948],
[0.012063492,0.803832,0.864758,0,0.15,0.1964,1.132905],
[0.085321101,2.855849,0.974629,0,0.3,0.6225,4.848462],
[0.000976412,0.413031,0.352385,1,0.1,0.0933,0.391748],
[0.022222222,1.125201,0.747835,0,0.15,0.247,1.562166],
[0.004683826,3.445772,0.982043,0,0.4,0.8123,9.188976],
[0.004560456,3.249376,0.758469,0,0.45,0.7355,5.353005]]
# y = [32,
# 64,
# 128,
# 32,
# 128,
# 32,
# 1024,
# 16,
# 64,
# 32,
# 128,
# 128,
# 256]
y = [32,
64,
128,
32,
128,
32,
1024,
32,
64,
32,
128,
128,
256]
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X,y)
test_X = [[0.26505697,2.948113,0.345524,0.4,0.45,0.65017471,5.086808]]
print(neigh.kneighbors(test_X))
print(neigh.kneighbors_graph(test_X))
neigh.predict(test_X)
neigh.predict_proba(test_X)
neigh.classes_
```
### compute 2d - em_expectmu
```
X = [[0.881481481,2.587229,0.907526,0,0.35,0.6005,4.322245],
[0.495752896,1.257769,0.721808,0,0.1,0.3069,1.658795],
[0.424892966,2.00134,0.98219,0,0.15,0.3513,1.958961],
[0.105555556,0.17028,0.116339,0,0.05,0.0401,0.165949],
[0.045238095,1.784725,0.854031,0,0.4,0.4464,3.497565],
[0.032925682,1.702509,0.990866,0,0.15,0.3404,2.33181],
[0.002206736,3.322614,0.699366,0,0.45,0.83,9.698005]]
y = [256,
256,
256,
128,
128,
256,
768]
neigh1 = KNeighborsClassifier(n_neighbors=3)
neigh1.fit(X,y)
test_em_expectmu = [[0.915384615,0.352684,0.124273,0,0.1,0.0722,0.307958]]
print(neigh1.predict(test_em_expectmu))
print(neigh1.predict_proba(test_em_expectmu))
print(neigh1.classes_)
```
### compute 2d - em_expectsigma_dev
```
em_expectsigma_dev = [[0.545671642,1.262241,0.321989,0,0.15,0.2578,1.349711]]
print(neigh1.predict(em_expectsigma_dev))
print(neigh1.predict_proba(em_expectsigma_dev))
print(neigh1.classes_)
```
### mem 2d - hmm
```
mem2d_trainX=[[1.911111111,0.898519,1.273404,0.3188,0.3990453679,0.3990453679,0,0.6156719307,1,2.158354],
[1.7,0.897425,1.434779,0.3589,0.3983864151,0.3983864151,0.6146552798,0.6146552798,2,2.505722],
[1.830769231,0.891426,1.329263,0.3227,0.389767283,0.389767283,2.40545012,0.3006719453,4,1.8795],
[2.163636364,0.899354,1.155747,0.2893,0.3871490377,0.3871490377,0.5973111637,0.5973111637,2,1.809195],
[2.477777778,0.848605,0.338919,0.0847,0.1366144811,1.092986113,0,0,0,0.387801]]
mem2d_trainY=[512,
512,
64,
128,
64]
neigh_mem2d = KNeighborsClassifier(n_neighbors=3)
neigh_mem2d.fit(mem2d_trainX, mem2d_trainY)
em_gammaobs = [[2.23,0.296949,0.374258,0.1001,0.05629230189,0.05629230189,0,0,0,0.623941]]
print(neigh_mem2d.predict(em_gammaobs))
print(neigh_mem2d.predict_proba(em_gammaobs))
print(neigh_mem2d.classes_)
em_update_expectsigma = [[2.23,0.294736,0.41878,0.1103,0.1435730377,0.06827649057,0,0,0,0.603971]]
print(neigh_mem2d.predict(em_update_expectsigma))
print(neigh_mem2d.predict_proba(em_update_expectsigma))
print(neigh_mem2d.classes_)
```
### 1. Problem Definition:
This notebook analyzes the finalist teams of the FIFA World Cups held between 1930 and 2018. The data were obtained from Kaggle (https://www.kaggle.com/abecklas/fifa-world-cup), covering the period from 1930 to 2014. They were compiled in 2018, the same year the FIFA World Cup took place in Russia. The idea of the Kaggle challenge is for a competitor to build a machine learning model that estimates the winner of the 2018 cup, using this data set and two others also available on the challenge page.
Since we are in 2021, we know very well that France was the champion that year, playing the final against Croatia. Belgium took third place (after eliminating Brazil in the quarter-finals) by winning the match against England.
I will therefore add this information to the data set, bringing the sample size to 21 observations. It is obviously not "Big Data", but the goal here is to practice Python and data analysis with Pandas.
### 2. Importing and Cleaning the Data:
The Kaggle page provides 3 CSV files. The first is "WorldCupMatches.csv", which contains information about all the matches played in every World Cup since 1930. The second is "WorldCupPlayers.csv", with information about the teams that took part in the tournaments. The third is "WorldCups.csv", with consolidated information about the World Cups and their four finalists. It is this third file that I will work with here.
```
# List the files and their location on my machine, within the subfolder where I saved them.
import os
for dirname, _, filenames in os.walk('input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# I will work with four packages: two for data manipulation, Pandas and NumPy
# (although the focus here is Pandas), and two for graphical analysis.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Creating the data frame. Note that it does not yet contain the information
# from the last World Cup, which will be added later.
dfCups = pd.read_csv('input/WorldCups.csv')
```
### Description of the variables in the data frame
#### dfCups
* Year = Year the World Cup took place
* Country = Host country
* Winner = Champion
* Runners-Up = Runner-up
* Third = Third place
* Fourth = Fourth place
* GoalsScored = Goals scored during the tournament
* QualifiedTeams = Total number of participating teams
* MatchesPlayed = Total number of matches played in the tournament
* Attendance = Total attendance at the tournament
```
dfCups
dfCups.columns
```
### Manipulating the Data
First I need to add the data for the 2018 World Cup. Then I will correct some information that appears inconsistently in the data set. Germany, for example, appears both as "Germany" and as "Germany FR", since the country was divided into West and East Germany for part of this period.
Once that is done, I can move on to the data analysis.
```
# I will copy the data set to keep the original data intact.
# This way all manipulation can be done directly on the copy.
dfCups1 = dfCups.copy()
# Adding the row with the 2018 World Cup data.
dfCups1 = dfCups1.append({'Year':'2018', 'Country':'Russia', 'Winner':'France', 'Runners-Up':'Croatia', 'Third':'Belgium', 'Fourth':'England', 'GoalsScored':'157', 'QualifiedTeams': '32', 'MatchesPlayed': '64', 'Attendance': '3.031.768'}, ignore_index=True)
dfCups1
```
Let us fix Germany's name by removing the "FR" suffix. We do not need to do this for the 'Country' variable, only for 'Winner', 'Runners-Up', 'Third' and 'Fourth'. I start by creating a 'namesCups' object for each variable, storing the entries whenever 'FR' appears.
```
namesCupsW = dfCups1[dfCups1['Winner'].str.contains('Germany FR')]['Winner'].value_counts(); namesCupsW
namesCupsR = dfCups1[dfCups1['Runners-Up'].str.contains('Germany FR')]['Runners-Up'].value_counts(); namesCupsR
namesCupsT = dfCups1[dfCups1['Third'].str.contains('Germany FR')]['Third'].value_counts(); namesCupsT
namesCupsF = dfCups1[dfCups1['Fourth'].str.contains('Germany FR')]['Fourth'].value_counts(); namesCupsF
# I create the 'wrongCups' lists, built from the 'namesCups' indices.
wrongCupsW = list(namesCupsW.index); wrongCupsW
wrongCupsR = list(namesCupsR.index); wrongCupsR
wrongCupsT = list(namesCupsT.index); wrongCupsT
wrongCupsF = list(namesCupsF.index); wrongCupsF
# I create the 'correctCups' lists by splitting each 'wrongCups' entry
# on ' FR', which removes the 'FR' suffix.
correctCupsW = [namesCupsW.split(' FR')[0] for namesCupsW in wrongCupsW]; correctCupsW
correctCupsR = [namesCupsR.split(' FR')[0] for namesCupsR in wrongCupsR]; correctCupsR
correctCupsT = [namesCupsT.split(' FR')[0] for namesCupsT in wrongCupsT]; correctCupsT
correctCupsF = [namesCupsF.split(' FR')[0] for namesCupsF in wrongCupsF]; correctCupsF
# Now I loop over the wrong names to apply the substitution (correction) in my data set, 'dfCups1'.
for index, wr in enumerate(wrongCupsW):
dfCups1 = dfCups1.replace(wrongCupsW[index], correctCupsW[index])
for index, wr in enumerate(wrongCupsR):
dfCups1 = dfCups1.replace(wrongCupsR[index], correctCupsR[index])
for index, wr in enumerate(wrongCupsT):
dfCups1 = dfCups1.replace(wrongCupsT[index], correctCupsT[index])
for index, wr in enumerate(wrongCupsF):
dfCups1 = dfCups1.replace(wrongCupsF[index], correctCupsF[index])
# If I inspect each variable again, I should no longer find the expression 'FR' next to 'Germany'.
namesCupsW = dfCups1[dfCups1['Winner'].str.contains('Germany FR')]['Winner'].value_counts(); namesCupsW
namesCupsR = dfCups1[dfCups1['Runners-Up'].str.contains('Germany FR')]['Runners-Up'].value_counts(); namesCupsR
namesCupsT = dfCups1[dfCups1['Third'].str.contains('Germany FR')]['Third'].value_counts(); namesCupsT
namesCupsF = dfCups1[dfCups1['Fourth'].str.contains('Germany FR')]['Fourth'].value_counts(); namesCupsF
dfCups1
```
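A more compact route to the same correction (shown here on a toy frame, not the full data set) is a single `DataFrame.replace`, which swaps exact cell values across all columns at once:

```python
import pandas as pd

toy = pd.DataFrame({'Winner': ['Germany FR', 'Brazil'],
                    'Third': ['Chile', 'Germany FR']})
toy = toy.replace('Germany FR', 'Germany')   # exact-match swap in every column
print(toy['Winner'].tolist(), toy['Third'].tolist())
```

The loop above achieves the same result; the one-liner just skips the intermediate `wrongCups`/`correctCups` bookkeeping.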
With the name issue resolved, we can look at some interesting facts about the World Cups. For example, of the 21 World Cups held so far, five countries have hosted at least twice. Yet, despite 21 tournaments having been held, only 16 countries have ever hosted one.
```
len(dfCups1)
```
### 3. Exploratory Data Analysis
Five countries have hosted the World Cup at least twice, among them Brazil, the team with the most titles, and Germany, the team that appears most often among the four finalists in the tournament's history.
Of the South American countries, only four have hosted World Cups: besides Brazil, Uruguay, Chile and Argentina. Mexico (North America) is another Latin American country that has hosted the cup twice.
In all, only 16 countries have hosted FIFA World Cups.
```
# Countries that have hosted the most World Cups
country = dfCups1['Country'].value_counts(); country
len(country)
```
Brazil is the team with the most world titles (5), followed by Germany and Italy with 4. Argentina, France and Uruguay have 2 titles each, while England and Spain have one each.
There are only 8 winning teams, three of them from South America.
```
# Teams with the most world titles
winner = dfCups1['Winner'].value_counts(); winner
len(winner)
```
Of the ten teams that have finished as runners-up, Germany reached that position 4 times, followed by the Netherlands (which has no world title) and Argentina with 3 each.
Among the finalists, Czechoslovakia, Hungary, Sweden, Croatia and the Netherlands have so far never managed to become world champions.
```
# Teams that finished as runners-up most often
runners_up = dfCups1['Runners-Up'].value_counts(); runners_up
len(runners_up)
```
Across the World Cups, 14 different teams have finished third and 16 have finished fourth. Germany stands out with 4 third-place finishes, and Uruguay with 3 fourth-place finishes.
```
# Teams with the most third-place finishes
third = dfCups1['Third'].value_counts(); third
len(third)
# Teams with the most fourth-place finishes
fourth = dfCups1['Fourth'].value_counts(); fourth
len(fourth)
# Now let's gather all World Cup finalists into a single data set.
listFinalist = dfCups1.iloc[:, 2:6]
```
Just out of curiosity, I will list the descriptive statistics of the data set. They show 8 unique champions, 10 unique runners-up, 14 unique third-place teams, and 16 unique fourth-place teams.
Another statistic shown is which team appears most often in each position. Here Brazil has the most titles, Germany appears most often in second and third place, and Uruguay appears most often in fourth.
```
listFinalist.describe()
```
As an exercise, I will concatenate the four data sets created above and then count which team appears most often among the finalists.
```
df_concat = pd.concat([winner, runners_up, third, fourth], axis = 1)
df_concat
```
The problem is that when a team does not appear in a given position, the concatenation produces a NaN value. This problem is solved with `fillna()`.
```
df_concat.update(df_concat['Winner'].fillna(0))
df_concat.update(df_concat['Runners-Up'].fillna(0))
df_concat.update(df_concat['Third'].fillna(0))
df_concat.update(df_concat['Fourth'].fillna(0))
df_concat
```
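The four `update`/`fillna` calls above do the job column by column, but pandas can fill every column in one pass. A minimal sketch using hypothetical team counts (not the real tournament data):

```python
import pandas as pd

# Hypothetical value counts for two finishing positions
winner = pd.Series({'Brazil': 5, 'Germany': 4}, name='Winner')
fourth = pd.Series({'Uruguay': 3, 'Brazil': 1}, name='Fourth')

# Concatenating misaligned indexes leaves NaN where a team never
# reached that position...
df = pd.concat([winner, fourth], axis=1)

# ...and a single fillna call replaces it in every column at once
df = df.fillna(0)
print(df.loc['Germany', 'Fourth'])  # → 0.0
```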
Now I can add a 'Total' column showing the number of times each team reached the final stage of the World Cups.
```
df_concat['Total'] = df_concat.sum(axis=1)
```
Germany is the team that appeared most often in the final stage of the FIFA World Cup. Brazil is second, while third place goes to Italy. Of the other South Americans, Uruguay and Argentina appeared 5 times each and Chile appeared only once.
```
df_concat.sort_values('Total', ascending=False)
```
Let's run a few more analyses, now on the numeric data (goals scored, number of teams, matches played, and total attendance).
```
# First I will copy the data set into a new one,
# to preserve the changes made so far and to practice other manipulations.
dfCups2 = dfCups1.copy()
```
The big problem is that the numeric columns are strangely not producing correct statistics; apparently they are not stored as numbers. So I will first convert the columns to numeric and then check the descriptive statistics.
The average is 120.76 goals per tournament, with a wide range between 70 and 171. This wide range is due to the fact that the number of participating teams and matches played varied considerably over the years, influencing this statistic and the others we will see below.
```
dfCups2['GoalsScored'] = pd.to_numeric(dfCups2['GoalsScored'])
dfCups2['GoalsScored'].describe()
```
The first World Cup had only 13 participants; from 1998 onward, each tournament has had 32.
```
dfCups2['QualifiedTeams'] = pd.to_numeric(dfCups2['QualifiedTeams'])
dfCups2['QualifiedTeams'].describe()
```
Just as the number of participating teams evolved over time, the number of matches played changed as well. The first World Cup had only 17 matches; since 1998 each tournament has had 64.
```
dfCups2['MatchesPlayed'] = pd.to_numeric(dfCups2['MatchesPlayed'])
dfCups2['MatchesPlayed'].describe()
```
Since the 'Attendance' column is a dtype object with dots separating the thousands, and I want it numeric, I use the command below to strip the dot separators and convert it.
```
dfCups2['Attendance'] = pd.to_numeric(dfCups2['Attendance'].str.replace('.', '', regex=False))
dfCups2['Attendance'].describe()
```
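One subtlety: `.` is a regex metacharacter, so it is safer to disable regex matching when stripping the separators. A sketch on made-up attendance strings (not the real column):

```python
import pandas as pd

# Made-up attendance figures with '.' as the thousands separator
attendance = pd.Series(['590.549', '363.000', '3.587.538'])

# '.' is a regex metacharacter, so disable regex before stripping it
cleaned = pd.to_numeric(attendance.str.replace('.', '', regex=False))
print(cleaned.max())  # → 3587538
```

When reading directly from a file, `pd.read_csv(..., thousands='.')` parses such columns as numeric in one step.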
The problem is that the result is a float64 displayed in scientific notation. To remove that formatting I use the function below.
With it we can see that average attendance is almost 2 million people, ranging from 363 thousand (Italy, 1934) to 3,587 thousand (United States, 1994).
```
dfCups2["Attendance"].describe().apply(lambda x: format(x, ',.2f'))
```
### Graphical analysis
I will add a few charts, just as an exercise, to provide a visual view of the results discussed above, using the Seaborn package.
```
import seaborn as sns  # used below; imported here in case it was not loaded earlier

fig, ax = plt.subplots(figsize = (10,5))
sns.despine(right = True)
g = sns.barplot(x = 'Year', y = 'Attendance', data = dfCups2)
g.set_xticklabels(g.get_xticklabels(), rotation = 80)
g.set_title('Total stadium attendance per year')
fig, ax = plt.subplots(figsize = (10,5))
sns.despine(right = True)
g = sns.barplot(x = 'Year', y = 'QualifiedTeams', data = dfCups2)
g.set_xticklabels(g.get_xticklabels(), rotation = 80)
g.set_title('Teams qualified for the World Cup per year')
fig, ax = plt.subplots(figsize = (10,5))
sns.despine(right = True)
g = sns.barplot(x = 'Year', y = 'GoalsScored', data = dfCups2)
g.set_xticklabels(g.get_xticklabels(), rotation = 80)
g.set_title('Goals scored per year')
fig, ax = plt.subplots(figsize = (10,5))
sns.despine(right = True)
g = sns.barplot(x = 'Year', y = 'MatchesPlayed', data = dfCups2)
g.set_xticklabels(g.get_xticklabels(), rotation = 80)
g.set_title('Matches played per year')
```
### 4. Conclusion
The goal of this notebook is to practice Python, especially data analysis with Pandas. To that end I used FIFA World Cup data available on the Kaggle site. The main idea is to practice data manipulation, find solutions to the problems that arise during analysis, and publish the result in my GitHub portfolio.
| github_jupyter |
```
# Binary Classification with Sonar Dataset: Standardized Smaller and Larger Models
import numpy
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# smaller model
def create_smaller():
# create model
model = Sequential()
model.add(Dense(30, input_dim=60, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_smaller, epochs=100,
batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Smaller: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
# larger model
def create_larger():
# create model
model = Sequential()
model.add(Dense(60, input_dim=60, kernel_initializer='normal', activation='relu'))
model.add(Dense(30, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_larger, epochs=100, batch_size=5,
verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Larger: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
```
# Programming_Assignment-20
Question1
Create a function that takes a list of strings and integers, and filters out the list so that it
returns a list of integers only.
Examples
filter_list([1, 2, 3, 'a', 'b', 4]) ➞ [1, 2, 3, 4]
filter_list(['A', 0, 'Edabit', 1729, 'Python', '1729']) ➞ [0, 1729]
filter_list(['Nothing', 'here']) ➞ []
```
def filter_list(lst):
    # type(i) == int deliberately excludes numeric strings and also bools
    # (bool is a subclass of int, so isinstance would admit True/False)
    return [i for i in lst if type(i) == int]
filter_list([1, 2, 3, 'a', 'b', 4])
filter_list(['A', 0, 'Edabit', 1729, 'Python', '1729'])
filter_list(['Nothing', 'here'])
```
Question2
Given a list of numbers, create a function which returns the list but with each element's
index in the list added to itself. This means you add 0 to the number at index 0, add 1 to the
number at index 1, etc...
Examples
add_indexes([0, 0, 0, 0, 0]) ➞ [0, 1, 2, 3, 4]
add_indexes([1, 2, 3, 4, 5]) ➞ [1, 3, 5, 7, 9]
add_indexes([5, 4, 3, 2, 1]) ➞ [5, 5, 5, 5, 5]
```
def add_indexes(lst):
    # Add each element's position to the element itself
    return [i + v for i, v in enumerate(lst)]
add_indexes([0, 0, 0, 0, 0])
add_indexes([1, 2, 3, 4, 5])
add_indexes([5, 4, 3, 2, 1])
```
Question3
Create a function that takes the height and radius of a cone as arguments and returns the
volume of the cone rounded to the nearest hundredth. See the resources tab for the formula.
Examples
cone_volume(3, 2) ➞ 12.57
cone_volume(15, 6) ➞ 565.49
cone_volume(18, 0) ➞ 0
```
import math
pi = math.pi
# Function to calculate the volume of a cone, rounded to the nearest hundredth.
# The examples pass height first, then radius: cone_volume(3, 2) -> 12.57
def cone_volume(h, r):
    return round((1 / 3) * pi * r * r * h, 2)
r = 5
h = 12
print( "Volume Of Cone : ", cone_volume(h, r) )
cone_volume(3, 2)
cone_volume(15, 6)
cone_volume(18, 0)
```
Question4
This Triangular Number Sequence is generated from a pattern of dots that form a triangle.
The first 5 numbers of the sequence, or dots, are:
1, 3, 6, 10, 15
This means that the first triangle has just one dot, the second one has three dots, the third one
has 6 dots and so on.
Write a function that gives the number of dots with its corresponding triangle number of the
sequence.
Examples
triangle(1) ➞ 1
triangle(6) ➞ 21
triangle(215) ➞ 23220
A Rule
We can make a "Rule" so we can calculate any triangular number.
First, rearrange the dots like this:
triangular numbers 1 to 5
Then double the number of dots, and form them into a rectangle:
triangular numbers when doubled become n by n+1 rectangles
Now it is easy to work out how many dots: just multiply n by n+1
Dots in rectangle = n(n+1)
But remember we doubled the number of dots, so
Dots in triangle = n(n+1)/2
We can use xn to mean "dots in triangle n", so we get the rule:
Rule: xn = n(n+1)/2
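The rule can be sanity-checked by comparing the closed form against literally summing the dots (a quick verification sketch, not part of the assignment):

```python
# Closed-form triangular number: x_n = n(n+1)/2
def triangle_closed(n):
    return n * (n + 1) // 2

# The closed form must agree with summing 1 + 2 + ... + n directly
for n in range(1, 216):
    assert triangle_closed(n) == sum(range(1, n + 1))

print(triangle_closed(215))  # → 23220, matching the example above
```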
```
def triangle(n):
    # n-th triangular number: n(n+1)/2
    return n * (n + 1) // 2
n = int(input('Please enter the triangle number: '))
print("The {}th triangle has {} dots".format(n, triangle(n)))
triangle(1)
triangle(60)
```
Question5
Create a function that takes a list of numbers between 1 and 10 (excluding one number) and
returns the missing number.
Examples
missing_num([1, 2, 3, 4, 6, 7, 8, 9, 10]) ➞ 5
missing_num([7, 2, 3, 6, 5, 9, 1, 4, 8]) ➞ 10
missing_num([10, 5, 1, 2, 4, 6, 8, 3, 9]) ➞ 7
```
def missing_num(lst):
    # The numbers 1..10 sum to 55; subtracting the list's sum leaves the gap
    total = sum(range(1, 11))
    return total - sum(lst)
#print(missing_num([1, 2, 3, 4, 6, 7, 8, 9, 10]))
missing_num([1, 2, 3, 4, 6, 7, 8, 9, 10])
missing_num([10, 5, 1, 2, 4, 6, 8, 3, 9])
missing_num([7, 2, 3, 6, 5, 9, 1, 4, 8])
```
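The sum trick works because 1..10 always sums to 55. An equivalent route, sketched here as an alternative (the name `missing_num_set` is mine, not part of the assignment), is a set difference:

```python
def missing_num_set(lst):
    # The single absent value is the set difference with the full 1..10 range
    return (set(range(1, 11)) - set(lst)).pop()

print(missing_num_set([1, 2, 3, 4, 6, 7, 8, 9, 10]))  # → 5
```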
# Safely refactoring ACLs and firewall rules
Changing ACLs or firewall rules (or *filters*) is one of the riskiest updates to a network. Even a small error can block connectivity for a large set of critical services or open up sensitive resources to the world at large. Earlier notebooks showed how to [analyze filters for what they do and do not allow](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Analyzing%20ACLs%20and%20Firewall%20Rules.ipynb) and how to [make specific changes in a provably safe manner](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Provably%20Safe%20ACL%20and%20Firewall%20Changes.ipynb).
This notebook shows how to refactor complex filters in a way that the full impact of refactoring can be understood and analyzed for correctness *before* refactored filters are pushed to the network.
## Original ACL
We will use the following ACL as a running example in this notebook. The ACL can be read as a few separate sections:
* Line 10: Deny ICMP redirects
* Lines 20, 23: Permit BFD traffic on certain blocks
* Lines 40-80: Permit BGP traffic
* Lines 90-100: Permit DNS traffic from a /24 subnet while denying it from a /32 within that subnet
* Lines 110-500: Permit or deny IP traffic from certain subnets
* Line 510: Permit ICMP echo reply
* Lines 520-840: Deny IP traffic to certain subnets
* Lines 850-880: Deny all other types of traffic
(The IP address space in the ACL appears all over the place because it has been anonymized via [Netconan](https://github.com/intentionet/netconan). Netconan preserves the super- and sub-prefix relationships when anonymizing IP addresses and prefixes.)
```
# The ACL before refactoring
original_acl = """
ip access-list acl
10 deny icmp any any redirect
20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 eq 3784
30 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 eq 3785
40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp
50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp
60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp
70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp
80 permit tcp 205.248.58.190/32 205.248.58.188/32 eq bgp
90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain
100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain
110 deny ip 54.0.0.0/8 any
120 deny ip 163.157.0.0/16 any
130 deny ip 166.144.0.0/12 any
140 deny ip 198.170.50.0/24 any
150 deny ip 198.120.0.0/16 any
160 deny ip 11.36.192.0/19 any
170 deny ip 11.125.64.0/19 any
180 permit ip 166.146.58.184/32 any
190 deny ip 218.66.57.0/24 any
200 deny ip 218.66.56.0/24 any
210 deny ip 218.67.71.0/24 any
220 deny ip 218.67.72.0/24 any
230 deny ip 218.67.96.0/22 any
240 deny ip 8.89.120.0/22 any
250 deny ip 54.203.159.1/32 any
260 permit ip 218.8.104.0/25 any
270 permit ip 218.8.104.128/25 any
280 permit ip 218.8.103.0/24 any
290 deny ip 144.49.45.40/32 any
300 deny ip 163.255.18.63/32 any
310 deny ip 202.45.130.141/32 any
320 deny ip 212.26.132.18/32 any
330 deny ip 218.111.16.132/32 any
340 deny ip 218.246.165.90/32 any
350 deny ip 29.228.179.210/32 any
360 deny ip 194.181.135.214/32 any
370 deny ip 10.64.90.249/32 any
380 deny ip 207.70.46.217/32 any
390 deny ip 219.185.241.117/32 any
400 deny ip 2.80.3.219/32 any
410 deny ip 27.212.145.150/32 any
420 deny ip 131.159.53.215/32 any
430 deny ip 214.220.213.107/32 any
440 deny ip 196.64.84.239/32 any
450 deny ip 28.69.250.136/32 any
460 deny ip 200.45.87.238/32 any
470 deny ip any 11.125.89.32/30
480 deny ip any 11.125.89.36/30
490 deny ip any 11.125.89.40/30
500 deny ip any 11.125.89.44/30
510 permit icmp any any echo-reply
520 deny ip any 11.36.199.216/30
530 deny ip any 11.36.199.36/30
540 deny ip any 11.36.199.2/30
550 deny ip any 11.36.199.52/30
560 deny ip any 11.36.199.20/30
570 deny ip any 11.125.82.216/30
580 deny ip any 11.125.82.220/32
590 deny ip any 11.125.82.36/30
600 deny ip any 11.125.82.12/30
610 deny ip any 11.125.80.136/30
620 deny ip any 11.125.80.141/32
630 deny ip any 11.125.87.48/30
640 deny ip any 11.125.87.168/30
650 deny ip any 11.125.87.173/32
660 deny ip any 11.125.90.56/30
670 deny ip any 11.125.90.240/30
680 deny ip any 11.125.74.224/30
690 deny ip any 11.125.91.132/30
700 deny ip any 11.125.89.132/30
710 deny ip any 11.125.89.12/30
720 deny ip any 11.125.92.108/30
730 deny ip any 11.125.92.104/32
740 deny ip any 11.125.92.28/30
750 deny ip any 11.125.92.27/32
760 deny ip any 11.125.92.160/30
770 deny ip any 11.125.92.164/32
780 deny ip any 11.125.92.204/30
790 deny ip any 11.125.92.202/32
800 deny ip any 11.125.93.192/29
810 deny ip any 11.125.95.204/30
820 deny ip any 11.125.95.224/30
830 deny ip any 11.125.95.180/30
840 deny ip any 11.125.95.156/30
850 deny tcp any any
860 deny icmp any any
870 deny udp any any
880 deny ip any any
"""
```
## Compressed ACL
Now, assume that we want to compress this ACL to make it more manageable. We do the following operations:
* Merge the two BFD permit statements on lines 20-30 into one statement using the range directive.
* Remove the BGP session on line 80 because it has been decommissioned
* Remove lines 180 and 250 because they are shadowed by earlier lines and will never match a packet. Such lines can be found via the `filterLineReachability` question, as shown [here](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Analyzing%20ACLs%20and%20Firewall%20Rules.ipynb#filterLineReachability:-Analyzing-reachability-of-filter-lines).
* Merge pairs of lines (190, 200), (210, 220), and (260, 270) by combining their prefixes into a less specific prefix.
* Remove all deny statements on lines 520-870. They are not needed given the final deny on line 880.
The result of these actions, which halve the ACL size, is shown below. To enable easy observation of changes, we have preserved the line numbers.
```
compressed_acl = """
ip access-list acl
10 deny icmp any any redirect
20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785
! 30 MERGED WITH LINE ABOVE
40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp
50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp
60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp
70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp
! 80 DECOMMISSIONED BGP SESSION
90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain
100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain
110 deny ip 54.0.0.0/8 any
120 deny ip 163.157.0.0/16 any
130 deny ip 166.144.0.0/12 any
140 deny ip 198.170.50.0/24 any
150 deny ip 198.120.0.0/16 any
160 deny ip 11.36.192.0/19 any
170 deny ip 11.125.64.0/19 any
! 180 REMOVED UNREACHABLE LINE
190 deny ip 218.66.56.0/23 any
! 200 MERGED WITH LINE ABOVE
210 deny ip 218.67.71.0/23 any
! 220 MERGED WITH LINE ABOVE
230 deny ip 218.67.96.0/22 any
240 deny ip 8.89.120.0/22 any
! 250 REMOVED UNREACHABLE LINE
260 permit ip 218.8.104.0/24 any
! 270 MERGED WITH LINE ABOVE
280 permit ip 218.8.103.0/24 any
290 deny ip 144.49.45.40/32 any
300 deny ip 163.255.18.63/32 any
310 deny ip 202.45.130.141/32 any
320 deny ip 212.26.132.18/32 any
330 deny ip 218.111.16.132/32 any
340 deny ip 218.246.165.90/32 any
350 deny ip 29.228.179.210/32 any
360 deny ip 194.181.135.214/32 any
370 deny ip 10.64.90.249/32 any
380 deny ip 207.70.46.217/32 any
390 deny ip 219.185.241.117/32 any
400 deny ip 2.80.3.219/32 any
410 deny ip 27.212.145.150/32 any
420 deny ip 131.159.53.215/32 any
430 deny ip 214.220.213.107/32 any
440 deny ip 196.64.84.239/32 any
450 deny ip 28.69.250.136/32 any
460 deny ip 200.45.87.238/32 any
470 deny ip any 11.125.89.32/28
510 permit icmp any any echo-reply
! 520-870 REMOVED UNNECESSARY DENIES
880 deny ip any any
"""
```
The challenge for us is to find out if and how this compressed ACL differs from the original. That is, is there traffic that the two ACLs treat differently, and if so, which lines are responsible for the difference?
This task is difficult to get right through manual reasoning alone, which is why we developed the `compareFilters` question in Batfish.
## Comparing filters
We can compare the two ACLs above as follows. To initialize snapshots, we will use Batfish's `init_snapshot_from_text` function, which creates a snapshot with a single device whose configuration is the provided text. The analysis shown below can be done even when the filters are embedded within bigger device configurations.
```
# Import packages
%run startup.py
bf = Session(host="localhost")
# Initialize a snapshot with the original ACL
original_snapshot = bf.init_snapshot_from_text(
original_acl,
platform="cisco-nx",
snapshot_name="original",
overwrite=True)
# Initialize a snapshot with the compressed ACL
compressed_snapshot = bf.init_snapshot_from_text(
compressed_acl,
platform="cisco-nx",
snapshot_name="compressed",
overwrite=True)
# Now, compare the two ACLs in the two snapshots
answer = bf.q.compareFilters().answer(snapshot=compressed_snapshot, reference_snapshot=original_snapshot)
show(answer.frame())
```
The `compareFilters` question compares two filters and returns pairs of lines, one from each filter, that match the same flow(s) but treat them differently. If it reports no output, the filters are guaranteed to be identical. The analysis is exhaustive and considers *all possible* flows.
As we can see from the output above, our compressed ACL is not the same as the original one. In particular, line 210 of the compressed ACL will deny some flows that line 510 of the original permitted, and line 510 of the compressed ACL will permit some flows that line 220 of the original denied. Because the permit statements correspond to ICMP traffic, we can tell that the affected traffic is ICMP. To learn the specific source and destination IPs that are impacted, one may run the `searchFilters` question, as shown [here](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Provably%20Safe%20ACL%20and%20Firewall%20Changes.ipynb#Step-3:-Ensure-that-no-collateral-damage-has-occurred).
By looking at the output above, we can immediately understand the difference:
* The first two lines show that the compressed ACL denies some traffic on line 210 (with index 16) that the original ACL permitted via line 510, and permits some traffic on line 510 that the original ACL denied via line 220.
It turns out that the address space merger we did for lines 210 and 220 in the original ACL, where we combined 218.67.72.0/24 and 218.67.71.0/24 into 218.67.71.0/23, was not correct. The other similar mergers of 218.66.57.0/24 and 218.66.56.0/24 into 218.66.56.0/23 and of 218.8.104.0/25 and 218.8.104.128/25 into 218.8.104.0/24 were correct.
* The third line shows that the compressed ACL denies, at the end of the ACL, some traffic that the original ACL permitted via line 80. This is an expected consequence of decommissioning the BGP session on line 80.
It is not always the case that refactoring is semantics preserving. Where `compareFilters` helps is succinctly enumerating *all* differences. Engineers can look at the differences and decide if the refactored filter meets their intent.
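The faulty prefix merger above can also be caught before ever invoking Batfish with a quick stdlib check. A sketch using Python's `ipaddress` module (the helper `merge_ok` is illustrative, not part of pybatfish):

```python
from ipaddress import ip_network

def merge_ok(a, b, merged):
    """Check that `merged` covers exactly the two original prefixes."""
    a, b = ip_network(a), ip_network(b)
    # strict=False tolerates prefixes like 218.67.71.0/23 whose host bits are set
    merged = ip_network(merged, strict=False)
    return (a.subnet_of(merged) and b.subnet_of(merged)
            and merged.prefixlen == a.prefixlen - 1)

# The correct merger of lines 190 and 200
print(merge_ok('218.66.56.0/24', '218.66.57.0/24', '218.66.56.0/23'))  # → True
# The faulty merger of lines 210 and 220: that /23 covers .70-.71, not .72
print(merge_ok('218.67.71.0/24', '218.67.72.0/24', '218.67.71.0/23'))  # → False
```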
## Splitting ACLs
Compressing large ACLs is one type of refactoring engineers do; another one is splitting a large ACL into multiple smaller ACLs and composing them on the same device or spreading across multiple devices in the network. Smaller ACLs are easier to maintain and evolve. However, the split operation is risky. We may forget to include in the smaller ACLs some protections that exist in the original ACL. We show how such splits can be safely done using Batfish.
Suppose we want to split the compressed ACL above into multiple smaller ACLs that handle different concerns. So, we should have different ACLs for different types of traffic and different ACLs for different logical groups of nodes in the network. The result of such splitting is shown below. For ease of exposition, we have retained the line numbers from the original ACL and mimic a scenario in which all ACLs live on the same device.
```
smaller_acls = """
ip access-list deny-icmp-redirect
10 deny icmp any any redirect
ip access-list permit-bfd
20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785
ip access-list permit-bgp-session
40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp
50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp
60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp
70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp
ip access-list acl-dns
90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain
100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain
ip access-list deny-untrusted-sources-group1
110 deny ip 54.0.0.0/8 any
120 deny ip 163.157.0.0/16 any
130 deny ip 166.144.0.0/12 any
140 deny ip 198.170.50.0/24 any
150 deny ip 198.120.0.0/16 any
160 deny ip 11.36.192.0/19 any
ip access-list deny-untrusted-sources-group2
160 deny ip 11.36.192.0/20 any
190 deny ip 218.66.56.0/23 any
210 deny ip 218.67.71.0/23 any
230 deny ip 218.67.96.0/22 any
240 deny ip 8.89.120.0/22 any
ip access-list permit-trusted-sources
260 permit ip 218.8.104.0/24 any
280 permit ip 218.8.103.0/24 any
ip access-list deny-untrusted-sources-group3
290 deny ip 144.49.45.40/32 any
300 deny ip 163.255.18.63/32 any
310 deny ip 202.45.130.141/32 any
320 deny ip 212.26.132.18/32 any
300 deny ip 218.111.16.132/32 any
340 deny ip 218.246.165.90/32 any
350 deny ip 29.228.179.210/32 any
360 deny ip 194.181.135.214/32 any
370 deny ip 10.64.90.249/32 any
380 deny ip 207.70.46.217/32 any
390 deny ip 219.185.241.117/32 any
ip access-list deny-untrusted-sources-group4
400 deny ip 2.80.3.219/32 any
410 deny ip 27.212.145.150/32 any
420 deny ip 131.159.53.215/32 any
430 deny ip 214.220.213.107/32 any
440 deny ip 196.64.84.239/32 any
450 deny ip 28.69.250.136/32 any
460 deny ip 200.45.87.238/32 any
ip access-list acl-tail
470 deny ip any 11.125.89.32/28
510 permit icmp any any echo-reply
880 deny ip any any
"""
```
Given the split ACLs above, one analysis may be to figure out if each untrusted source subnet was included in a smaller ACL. Otherwise, we have lost protection that was present in the original ACL. We can accomplish this analysis via the `findMatchingFilterLines` question, as shown below.
Once we are satisfied with analysis of filters, for an end-to-end safety guarantee, we should also analyze if there are new flows that the network will allow (or disallow) after the change. Such an analysis can be done via the `differentialReachability` question, as shown [here](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Introduction%20to%20Forwarding%20Change%20Validation.ipynb#Change-Scenario-2:-Validating-the-end-to-end-impact-of-an-ACL-change).
```
# Initialize a snapshot with the smaller ACLs
smaller_snapshot = bf.init_snapshot_from_text(
smaller_acls,
platform="cisco-nx",
snapshot_name="smaller",
overwrite=True)
# All untrusted subnets
untrusted_source_subnets = ["54.0.0.0/8",
"163.157.0.0/16",
"166.144.0.0/12",
"198.170.50.0/24",
"198.120.0.0/16",
"11.36.192.0/19",
"11.125.64.0/19",
"218.66.56.0/24",
"218.66.57.0/24",
"218.67.71.0/23",
"218.67.96.0/22",
"8.89.120.0/22"
]
for subnet in untrusted_source_subnets:
# Find which ACLs match traffic from this source subnet
answer = bf.q.findMatchingFilterLines(
headers=HeaderConstraints(srcIps=subnet),
filters="/deny-untrusted/").answer(snapshot=smaller_snapshot)
# Each source subnet should match exactly one ACL
af = answer.frame()
if len(af) == 1:
print("{} .... OK".format(subnet))
elif len(af) == 0:
print("{} .... ABSENT".format(subnet))
else:
print("{} .... Multiply present".format(subnet))
show(af)
```
In the code above, we first enumerate all untrusted subnets in the network. The granularity of this specification need not be the same as that in the ACL. For instance, we enumerate 218.66.56.0/24 and 218.66.57.0/24 as untrusted subnets but the ACL has a less specific prefix 218.66.56.0/23. Batfish understands such relationships and provides an accurate analysis that is not possible with simple string matching.
The **for** loop above uses the `findMatchingFilterLines` question to find out which lines, across all ACLs whose names contain "deny-untrusted", will match packets originating from the specified subnet. Our expectation is that each subnet should match exactly one line in exactly one ACL, and the output shows "OK" against such subnets. It shows "ABSENT" for subnets that do not match any line and lists the multiple matching lines for subnets where that happens.
We see that during the split above, we ended up matching the subnet 11.36.192.0/19 twice, once as a /19 in ACL deny-untrusted-sources-group1 and then as a /20 in ACL deny-untrusted-sources-group2. More dangerously, we completely forgot to match 11.125.64.0/19, which will open a security hole in the network if these smaller ACLs were applied.
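The containment relationships that make plain string matching insufficient can be seen with the stdlib `ipaddress` module (an illustrative sketch, separate from what Batfish computes internally):

```python
from ipaddress import ip_network

# Both enumerated /24s sit inside the ACL's less specific /23, so a plain
# string comparison of prefixes would miss the match
print(ip_network('218.66.56.0/24').subnet_of(ip_network('218.66.56.0/23')))  # → True
print(ip_network('218.66.57.0/24').subnet_of(ip_network('218.66.56.0/23')))  # → True

# Likewise, the /20 in group2 lies strictly inside the /19 in group1,
# which is why 11.36.192.0/19 matched twice
print(ip_network('11.36.192.0/20').subnet_of(ip_network('11.36.192.0/19')))  # → True
```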
## Summary
In this notebook, we showed how to use the `compareFilters` and `findMatchingFilterLines` questions of Batfish to safely refactor complex filters.
* `compareFilters` analyzes the original and revised filter to enumerate all cases that will treat *any* flow differently.
* `findMatchingFilterLines` enumerates all lines across all specified filters that match the given space of flows.
For additional ways to analyze filters using Batfish, see the ["Analyzing ACLs and Firewall Rules"](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Analyzing%20ACLs%20and%20Firewall%20Rules.ipynb) and the ["Provably Safe ACL and Firewall Changes"](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Provably%20Safe%20ACL%20and%20Firewall%20Changes.ipynb) notebooks.
***
### Get involved with the Batfish community
Join our community on [Slack](https://join.slack.com/t/batfish-org/shared_invite/enQtMzA0Nzg2OTAzNzQ1LTcyYzY3M2Q0NWUyYTRhYjdlM2IzYzRhZGU1NWFlNGU2MzlhNDY3OTJmMDIyMjQzYmRlNjhkMTRjNWIwNTUwNTQ) and [GitHub](https://github.com/batfish/batfish).
# Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
**Instructions:**
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
**You will learn to:**
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
## <font color='darkblue'>Updates</font>
This notebook has been updated over the past few months. The prior version was named "v5", and the current version is now named "6a".
#### If you were working on a previous version:
* You can find your prior work by looking in the file directory for the older files (named by version name).
* To view the file directory, click on the "Coursera" icon in the top left corner of this notebook.
* Please copy your work from the older versions to the new version, in order to submit your work for grading.
#### List of Updates
* Forward propagation formula, indexing now starts at 1 instead of 0.
* Optimization function comment now says "print cost every 100 training iterations" instead of "examples".
* Fixed grammar in the comments.
* Y_prediction_test variable name is used consistently.
* Plot's axis label now says "iterations (hundred)" instead of "iterations".
* When testing the model, the test image is normalized by dividing by 255.
## 1 - Packages ##
First, let's run the cell below to import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
```
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
```
## 2 - Overview of the Problem set ##
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
```
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
```
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
```
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
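A cheap habit that catches many of these bugs is asserting shapes right after each operation. A minimal sketch using the shapes from this assignment (the numbers are illustrative):

```python
import numpy as np

# Illustrative shapes matching this assignment: X is (features, examples),
# Y is (1, examples), w is (features, 1).
n_features, m = 12288, 209
w = np.zeros((n_features, 1))
X = np.zeros((n_features, m))
Y = np.zeros((1, m))

Z = np.dot(w.T, X)         # (1, m): one score per example
assert Z.shape == Y.shape  # catches transposition bugs early
```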
**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
```
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
**Expected Output for m_train, m_test and num_px**:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
```
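As a quick sanity check on this trick, here is a sketch with a made-up array standing in for an image batch:

```python
import numpy as np

# Dummy stand-in for an image batch of shape (a, b, c, d) = (4, 2, 3, 3)
X = np.arange(4 * 2 * 3 * 3).reshape(4, 2, 3, 3)

X_flatten = X.reshape(X.shape[0], -1).T   # -> (b*c*d, a) = (18, 4)

# Column i of X_flatten is example X[i] unrolled into a vector
assert X_flatten.shape == (18, 4)
assert (X_flatten[:, 0] == X[0].ravel()).all()
```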
```
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0],-1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
```
**Expected Output**:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient, and works almost as well, to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropagate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
```
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
```
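For contrast, a sketch of the full center-and-standardize preprocessing on a made-up pixel array, next to the simple rescaling used above:

```python
import numpy as np

rng = np.random.default_rng(0)
fake_images = rng.integers(0, 256, size=(12, 5)).astype(float)  # (features, examples)

# Full standardization: subtract the whole-array mean, divide by the whole-array std
standardized = (fake_images - fake_images.mean()) / fake_images.std()

# The shortcut used above for pictures: rescale into [0, 1]
rescaled = fake_images / 255.

assert abs(standardized.mean()) < 1e-9          # centered at zero
assert abs(standardized.std() - 1.0) < 1e-9     # unit spread
assert 0.0 <= rescaled.min() and rescaled.max() <= 1.0
```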
<font color='blue'>
**What you need to remember:**
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)
- "Standardize" the data
## 3 - General Architecture of the learning algorithm ##
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!**
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
**Mathematical expression of the algorithm**:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
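As a quick numeric check of the loss defined above: a confident correct prediction costs little, while a confident wrong prediction costs a lot.

```python
import numpy as np

# Cross-entropy loss for a single example
def loss(a, y):
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

print(loss(0.9, 1))   # ≈ 0.105: prediction agrees with the label
print(loss(0.1, 1))   # ≈ 2.303: prediction disagrees with the label
```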
**Key steps**:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
## 4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call `model()`.
### 4.1 - Helper functions
**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
```
**Expected Output**:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
### 4.2 - Initializing parameters
**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
```
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim,1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
```
**Expected Output**:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
### 4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.
**Hints**:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
```
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T,X) +b) # compute activation
cost = -(1/m)*np.sum((Y*np.log(A) + (1-Y)*np.log(1-A))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1/m)*(np.dot(X,(A-Y).T))
db = (1/m)*(np.sum(A-Y))
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99845601]
[ 2.39507239]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.00145557813678 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 5.801545319394553 </td>
</tr>
</table>
### 4.4 - Optimization
- You have initialized your parameters.
- You are also able to compute a cost function and its gradient.
- Now, you want to update the parameters using gradient descent.
**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
```
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate*dw
b = b - learning_rate*db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.19033591]
[ 0.12259159]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.92535983008 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.67752042]
[ 1.41625495]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.219194504541 </td>
</tr>
</table>
**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:
1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
```
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T,X) +b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0,i] = 0 if A[0,i] < .5 else 1
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
```
**Expected Output**:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1. 0.]]
</td>
</tr>
</table>
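As the exercise hints, the thresholding loop can also be vectorized. A sketch with made-up activations:

```python
import numpy as np

# Made-up activations for four examples
A = np.array([[0.52, 0.49, 0.73, 0.11]])

Y_prediction = (A > 0.5).astype(float)   # threshold all entries at once
print(Y_prediction)                      # [[1. 0. 1. 0.]]
```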
<font color='blue'>
**What to remember:**
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
## 5 - Merge all functions into a model ##
You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order.
**Exercise:** Implement the model function. Use the following notation:
- Y_prediction_test for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
```
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost=True)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w,b,X_test)
Y_prediction_train = predict(w,b,X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
```
Run the following cell to train your model.
```
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **Cost after iteration 0 ** </td>
<td> 0.693147 </td>
</tr>
<tr>
<td> <center> $\vdots$ </center> </td>
<td> <center> $\vdots$ </center> </td>
</tr>
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
```
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
```
Let's also plot the cost function and the gradients.
```
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
```
**Interpretation**:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
## 6 - Further analysis (optional/ungraded exercise) ##
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
#### Choice of learning rate ####
**Reminder**:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
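The overshooting behavior can be seen on a toy one-dimensional problem (illustrative only, unrelated to the cat classifier):

```python
# Toy objective f(w) = w**2 with gradient 2w: each gradient-descent step
# multiplies w by (1 - 2*lr), so a learning rate above 1 makes |w| grow.
def descend(lr, steps=50, w=1.0):
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

print(abs(descend(0.1)))   # tiny: converges toward the minimum at 0
print(abs(descend(1.1)))   # huge: overshoots further on every step
```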
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
```
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
```
**Interpretation**:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
## 7 - Test with your own image (optional/ungraded exercise) ##
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
```
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```
<font color='blue'>
**What to remember from this assignment:**
1. Preprocessing the dataset is important.
2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!
Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include:
- Play with the learning rate and the number of iterations
- Try different initialization methods and compare the results
- Test other preprocessings (center the data, or divide each row by its standard deviation)
Bibliography:
- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c
# This notebook serves as an example of how to create AutoTST objects and how to create 3D geometries
```
#General imports
import os, sys
import logging
from copy import deepcopy
import numpy as np
import pandas as pd
from multiprocessing import Process
#RDKit imports
import rdkit
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.rdchem import Mol
from rdkit.Chem.Pharm3D import EmbedLib
from rdkit import DistanceGeometry
from rdkit.Chem import rdDistGeom
#ASE imports
import ase
from ase import Atom, Atoms
from ase.calculators.gaussian import Gaussian as ASEGaussian
from ase.visualize import view
#RMG-Py imports
from rmgpy.molecule import Molecule as RMGMolecule
from rmgpy.species import Species as RMGSpecies
from rmgpy.reaction import Reaction as RMGReaction
#AutoTST imports
from autotst.species import Species, Conformer
from autotst.reaction import Reaction, TS
from autotst.calculator.gaussian import Gaussian
from autotst.calculator.statmech import StatMech
from autotst.job.job import Job
```
# Creating `Species` and `Conformer` objects
Each stable reactant and product species is generated as a `Species` object. These objects can contain many `Conformer` objects that can represent multiple resonance structures.
`Species` and `Conformer` objects can be initialized as follows:
- `Species(["SMILES_STRING"])`
- `Conformer("SMILES_STRING")`
A `Species` object takes a list of SMILES strings because a given species may have multiple resonance structures. A `Conformer` object, by contrast, represents only a single structure.
A `Species` object can therefore contain multiple `Conformer` objects, as shown below.
```
smiles = ["[CH2]C=C(C)C"]
for smile in smiles:
molecule = RMGMolecule(SMILES=smile)
species = Species(["[CH2]C=C(C)C"])
print("For {} there are {} resonance structures".format(species, len(species.conformers)))
print("These structures are:")
for smiles in species.conformers.keys():
print ("\t- {}".format(smiles))
species.conformers
```
When looking at a specific `Conformer` object, you need to know that a `Species` contains a dictionary in `Species.conformers`. The keys are the possible SMILES strings associated with the species, and the values are lists of `Conformer` objects for that specific SMILES. When initialized, these lists will have length 1, but they will be extended if you generate more conformers. Below is how to view a single `Conformer` object.
```
conformer = species.conformers[smiles][0]
conformer.view()
```
# Creating `Reaction` and `TS` objects
For this example we will be looking at a hydrogen abstraction reaction by peroxyl radical of 2-methylbut-2-ene
First, you need to initialize the AutoTST `Reaction` object, as done below. One way is to use a reaction string that looks like `r1+r2_p1+p2`, where `r1`, `r2`, `p1`, and `p2` are SMILES strings for the molecules involved in the reaction. Note that not all reactions need to be bimolecular. AutoTST currently supports the following reaction families:
- Hydrogen Abstraction (`H_Abstraction`: `r1+r2_p1+p2`)
- Intra hydrogen migration (`intra_H_migration`: `r1_p1`)
- R addition to multiple bond (`R_Addition_MultipleBond`: `r1+r2_p1`)
You can specify a `reaction_family` in a `Reaction` object, however, it is not needed as AutoTST will attempt to match the reaction provided to one of the three supported reaction families.
Alternatively, you can initialize a `Reaction` object using an `RMGReaction` object. This can be done as follows:
`rxn = Reaction(rmg_reaction=RMGReaction())`
The initialization of the reaction will also create a forward and a reverse transition state geometry. And these can be visualized using `py3dmol`.
`Reaction` objects are similar to `Species` objects in that they both contain a dictionary corresponding to their conformers. The `Reaction.ts` is the dictionary of transition states for a reaction just like the `Species.conformers` is the dictionary of conformers for a species. The only difference is the keys for the `Reaction.ts` are simply `"forward"` and `"reverse"` to denote the `TS` generated in either direction
```
rxn = Reaction(label="CC=C(C)C+[O]O_[CH2]C=C(C)C+OO")
transitionstates = rxn.ts["reverse"] #the rxn.ts is a dictionary with keys being "forward" and "reverse"
ts = transitionstates[0] #transitionstates is a list of TS objects, this list is currently length 1
rxn.ts
```
# Editing geometries of `Conformer` and `TS` objects
AutoTST allows you to edit the following features of a `Conformer` or a `TS` object:
- Bond length
- Angles
- Dihedrals
- CisTrans bond orientation
- Sterocenter orientation
The definitions for all of these objects can be found in `geometry.py`.
```
conformer = Conformer("ClC=C(O)C(N)Cl")
conformer.view()
print("{} has the following geometries:".format(conformer))
print("")
print("Bonds")
for bond in conformer.bonds:
print("\t- {}: {}".format(bond, bond.index))
print("")
print("Angles")
for angle in conformer.angles:
print("\t- {}: {}".format(angle, angle.index))
print("")
print("Dihedrals")
for torsion in conformer.torsions:
print("\t- {}: {}".format(torsion, torsion.index))
print("")
print("CisTrans Bonds")
for cistran in conformer.cistrans:
print("\t- {}: ".format(cistran, cistran.index))
print("")
print("Chiral Centers")
for chiral_center in conformer.chiral_centers:
print("\t- {}: {}".format(chiral_center, chiral_center.index))
```
In addition, you can set variables for each of these objects. This next section of the notebook will walk though how to do all of this.
```
conformer.set_bond_length(bond_index=3, length=3) #Length is specified in angstroms
conformer.view()
conformer.set_angle(angle_index=7, angle=45.) #Angles are specified in degrees
conformer.view()
conformer.set_torsion(torsion_index=0, dihedral=180)
conformer.view()
conformer.set_cistrans(cistrans_index=0, stero="z") #options for stero are E and Z
conformer.view()
conformer.set_chirality(chiral_center_index=0, stero="r")
conformer.view()
```
## AND all of these features can be extended to `TS` objects as well
# Now, let's look at how to generate ensembles of conformers for `Species` and `Reactions` using AutoTST
Both `Species` and `Reaction` objects have a built-in method called `generate_conformers`, which takes one argument: an ASE calculator object. This method generates all possible combinations of dihedrals, CisTrans bonds, and chiral centers to identify every possible conformer. All of these conformers are then optimized using ASE's BFGS optimizer and the calculator that the user provides. From this, a list of conformers within 1 kcal/mol is generated and added to the `Species.conformers` or the `Reaction.ts` dict. Below are a few examples.
```
from hotbit import Hotbit
species = Species(["CCCC"])
species.generate_conformers(calculator=Hotbit())
species.conformers
reaction = Reaction("C+[O]O_[CH3]+OO")
reaction.generate_conformers(calculator=Hotbit())
reaction.ts
```
For right now, we're going to be looking at the first conformers for the `Species` and `Reaction` listed above.
```
ts = reaction.ts["forward"][0]
conformer = species.conformers["CCCC"][0]
```
# Now, to look at writing input files for `Conformer` and `TS` objects
This is relatively easy: you just need to use the `Gaussian` calculator object. This starts with an initialization of the object, followed by calling its methods on different objects to get ASE calculator objects that can write your geometries.
The main methods of the `Gaussian` calculator are:
- For species conformers
- `get_conformer_calc`
- For transition states
- `get_shell_calc`
- `get_center_calc`
- `get_overall_calc`
- `get_irc_calc`
- For both
- `get_rotor_calc`
```
gaussian = Gaussian()
```
For `Conformer` objects
```
calc = gaussian.get_conformer_calc(conformer=conformer)
calc.write_input(conformer.ase_molecule)
f = open(calc.label + ".com", "r")
print(calc.label)
print("")
for line in f.readlines():
print(line[:-1])
```
For `TS` objects
```
calc = gaussian.get_shell_calc(ts=ts)
calc.write_input(ts.ase_molecule)
f = open(calc.label + ".com", "r")
print(calc.label)
print("")
for line in f.readlines():
print(line[:-1])
calc = gaussian.get_center_calc(ts=ts)
calc.write_input(ts.ase_molecule)
f = open(calc.label + ".com", "r")
print(calc.label)
print("")
for line in f.readlines():
print(line[:-1])
calc = gaussian.get_overall_calc(ts=ts)
calc.write_input(ts.ase_molecule)
f = open(calc.label + ".com", "r")
print(calc.label)
print("")
for line in f.readlines():
print(line[:-1])
calc = gaussian.get_irc_calc(ts=ts)
calc.write_input(ts.ase_molecule)
f = open(calc.label + ".com", "r")
print(calc.label)
print("")
for line in f.readlines():
print(line[:-1])
```
For hindered rotor calculations
```
torsion = conformer.torsions[0]
calc = gaussian.get_rotor_calc(conformer=conformer, torsion=torsion)
calc.write_input(conformer.ase_molecule)
f = open(calc.label + ".com", "r")
print(calc.label)
print("")
for line in f.readlines():
print(line[:-1])
torsion = ts.torsions[0]
calc = gaussian.get_rotor_calc(conformer=ts, torsion=torsion)
calc.write_input(ts.ase_molecule)
f = open(calc.label + ".com", "r")
print(calc.label)
print("")
for line in f.readlines():
print(line[:-1])
```
At this point, you have a way to automatically write input files for quantum chemistry optimizations. All of these calculations can then be run automatically using the AutoTST `Job` class.
## Tutorial 2: Mixture Models and Expectation Maximization
### Exercise 1: Categorical Mixture Model (CMM)
```
# Import libraries
import numpy as np
import pandas as pd
from ast import literal_eval
import matplotlib.pyplot as plt
import gensim
from wordcloud import WordCloud, STOPWORDS
from categorical_em import CategoricalEM
```
#### 1.4) Play around with the dataset
##### Load and pre-process the data
Load the data from the tweets_cleaned csv file as a `pandas` dataframe. It contains the documents already pre-processed and cleaned by applying the following steps:
1. Tokenization
2. Homogenization, which includes:
1. Removing capitalization.
2. Removing non-alphanumeric tokens (e.g. punctuation signs)
3. Stemming/Lemmatisation.
3. Cleaning
4. Vectorization
```
df = pd.read_csv('tweets_cleaned.csv')
df.drop_duplicates(subset="tweet", inplace=True) # drop duplicate tweets
df['tokens'] = df['tokens'].apply(literal_eval) # transform the string into a list of tokens
X_tokens = list(df['tokens'].values)
print('Columns: {}\n'.format(' | '.join(df.columns.values)))
print('Tweet:\n{}'.format(df.loc[1, 'tweet']))
print('Tweet cleaned:\n{}'.format(df.loc[1, 'tweets_clean']))
print('Tweet tokens:\n{}'.format(X_tokens[1]))
```
##### Create the dictionary
We have transformed the raw text collection into a list of documents stored in `X_tokens`, where each document is a collection of the words that are most relevant according to the semantic analysis.
We now convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). For this purpose we use the `gensim` library.
```
I = 120 # hyperparameter: number of different words to keep
dictionary = gensim.corpora.Dictionary(X_tokens)
print(dictionary)
dictionary.filter_extremes(no_below=15, no_above=0.5, keep_n=I)
print(dictionary)
```
##### Create Bag of Words (BoW)
Let's create the numerical version of our corpus using the `doc2bow` method. In general,
`D.doc2bow(token_list)` transforms any list of tokens into a list of tuples `(token_id, n)`, one per each token in
`token_list`, where `token_id` is the token identifier (according to dictionary `D`) and `n` is the number of occurrences
of such token in `token_list`.
```
X_bow = list()
keep_tweet = list()
for tweet in X_tokens:
tweet_bow = dictionary.doc2bow(tweet)
if len(tweet_bow) > 1:
X_bow.append(tweet_bow)
keep_tweet.append(True)
else:
keep_tweet.append(False)
df_data = df[keep_tweet]
N = len(df_data)
```
##### Create the matrix
Finally, we transform the BoW representation `X_bow` into a matrix, namely `X_matrix`, whose entry in the n-th row and j-th column is the
number of occurrences of the j-th word of the dictionary in the n-th document. This is the matrix used by the algorithm.
```
X_matrix = np.zeros([N, I])
for i, doc_bow in enumerate(X_bow):
    for word in doc_bow:
        X_matrix[i, word[0]] = word[1]
X_matrix.shape
```
#### 1.5) Implement the EM algorithm
```
K = 6 # hyperparameter: number of topics
i_theta = 1
i_pi = 1
model = CategoricalEM(K, I, N, delta=0.01, epochs=200, init_params={'theta': i_theta, 'pi': i_pi})
model.fit(X_matrix)
```
#### 1.6) Show the ten most representative words for each topic using a wordcloud, and the ten most relevant documents for each topic
Words per topic
```
argsort = np.argsort(model.theta_matrix, axis=1)
argsort = [x[::-1] for x in argsort]
fig, axs = plt.subplots(2, 3, figsize=(30, 10))
for k in range(K):
words = [(dictionary[m], model.theta_matrix[k][m]) for m in argsort[k][:10]]
axs[k//3,k%3].imshow(WordCloud().fit_words(dict(words)))
axs[k//3,k%3].set_title(f'Topic {k+1}', fontsize=25)
axs[k//3,k%3].axis('off')
```
Documents per topic
```
rnk = model.r_matrix.T
rnk.shape
argsort = np.argsort(rnk, axis=1)
argsort = [x[::-1][:10] for x in argsort]
# topic 1
df_data.iloc[argsort[0]]
```
# Dimensionality reduction using `scikit-learn`
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing, model_selection as ms, \
manifold, decomposition as dec, cross_decomposition as cross_dec
from sklearn.pipeline import Pipeline
%matplotlib inline
BOROUGHS_URL = 'https://files.datapress.com/london/dataset/london-borough-profiles/2017-01-26T18:50:00/london-borough-profiles.csv'
```
Read in the London Borough Profiles datasets.
```
boroughs = pd.read_csv(BOROUGHS_URL, encoding='iso-8859-1')
```
Filter the DataFrame so that only boroughs are included.
```
boroughs = boroughs[boroughs.Code.str.startswith('E09', na=False)]
```
Replace underscores with spaces in column names.
```
boroughs.columns = boroughs.columns.str.replace('_', ' ')
```
Select columns of interest.
```
boroughs = boroughs[[
'Area name',
'Population density (per hectare) 2017',
'Proportion of population aged 0-15, 2015',
'Proportion of population of working-age, 2015',
'Proportion of population aged 65 and over, 2015',
'% of resident population born abroad (2015)',
'Unemployment rate (2015)',
'Gross Annual Pay, (2016)',
'Modelled Household median income estimates 2012/13',
'Number of active businesses, 2015',
'Two-year business survival rates (started in 2013)',
'Crime rates per thousand population 2014/15',
'Fires per thousand population (2014)',
'Ambulance incidents per hundred population (2014)',
'Median House Price, 2015',
'% of area that is Greenspace, 2005',
'Total carbon emissions (2014)',
'Household Waste Recycling Rate, 2014/15',
'Number of cars, (2011 Census)',
'Number of cars per household, (2011 Census)',
'% of adults who cycle at least once per month, 2014/15',
'Average Public Transport Accessibility score, 2014',
'Male life expectancy, (2012-14)',
'Female life expectancy, (2012-14)',
'Teenage conception rate (2014)',
'Life satisfaction score 2011-14 (out of 10)',
'Worthwhileness score 2011-14 (out of 10)',
'Happiness score 2011-14 (out of 10)',
'Anxiety score 2011-14 (out of 10)',
'Childhood Obesity Prevalance (%) 2015/16',
'People aged 17+ with diabetes (%)',
'Mortality rate from causes considered preventable 2012/14'
]]
```
Set index.
```
boroughs.set_index('Area name', inplace=True)
```
Fix a couple of issues with data types.
```
boroughs[boroughs['Gross Annual Pay, (2016)'] == '.'] = None
boroughs['Modelled Household median income estimates 2012/13'] = \
boroughs['Modelled Household median income estimates 2012/13'].str.replace("[^0-9]", "")
boroughs = boroughs.apply(pd.to_numeric)
```
Remove boroughs with missing values.
```
boroughs.dropna(inplace=True)
```
Extract information on 'feelings'.
```
col_idx = [
'Life satisfaction score 2011-14 (out of 10)',
'Worthwhileness score 2011-14 (out of 10)',
'Happiness score 2011-14 (out of 10)',
'Anxiety score 2011-14 (out of 10)'
]
feelings = boroughs[col_idx]
boroughs.drop(col_idx, axis=1, inplace=True)
```
## Multidimensional scaling (MDS)
Create a pipeline that scales the data and performs MDS.
```
smds = Pipeline([
('scale', preprocessing.StandardScaler()),
('mds', manifold.MDS())
])
```
Two-dimensional projection ('embedding') of 'boroughs'
```
boroughs_mds = smds.fit_transform(boroughs)
fig, ax = plt.subplots()
ax.scatter(boroughs_mds[:,0], boroughs_mds[:,1])
for i, name in enumerate(boroughs.index):
ax.annotate(name, boroughs_mds[i,:])
```
## Principal component analysis (PCA)
Create a pipeline that scales the data and performs PCA.
```
spca = Pipeline([
('scale', preprocessing.StandardScaler()),
('pca', dec.PCA())
])
```
Scores (projection of 'boroughs' on the PCs):
```
scores = spca.fit_transform(boroughs)
```
Scores plot:
```
fig, ax = plt.subplots()
ax.scatter(scores[:,0], scores[:,1])
for i, name in enumerate(boroughs.index):
ax.annotate(name, scores[i,0:2])
```
Loadings (coefficients defining the PCs):
```
spca.named_steps['pca'].components_
```
Explained variance:
```
spca.named_steps['pca'].explained_variance_
np.cumsum(spca.named_steps['pca'].explained_variance_)
```
Explained variance ratio:
```
spca.named_steps['pca'].explained_variance_ratio_
np.cumsum(spca.named_steps['pca'].explained_variance_ratio_)
```
Scree plot:
```
plt.bar(np.arange(1, spca.named_steps['pca'].n_components_ + 1) - 0.4,
spca.named_steps['pca'].explained_variance_ratio_)
cum_evr = np.cumsum(spca.named_steps['pca'].explained_variance_ratio_)
plt.plot(np.arange(1, spca.named_steps['pca'].n_components_ + 1), cum_evr, color='black')
```
## Partial least squares (PLS) regression
Create a pipeline that scales the data and performs PLS regression.
```
spls = Pipeline([
('scale', preprocessing.StandardScaler()),
('pls', cross_dec.PLSRegression(scale=False))
])
```
Train a PLS regression model with three components.
```
spls.set_params(
pls__n_components=3
)
spls.fit(boroughs, feelings)
```
Define folds for cross-validation.
```
three_fold_cv = ms.KFold(n_splits=3, shuffle=True)
```
Compute average MSE across folds.
```
mses = ms.cross_val_score(spls, boroughs, feelings, scoring='neg_mean_squared_error', cv=three_fold_cv)
np.mean(-mses)
```
Determine 'optimal' number of components.
```
gs = ms.GridSearchCV(
estimator=spls,
param_grid={
'pls__n_components': np.arange(1, 10)
},
scoring='neg_mean_squared_error',
cv=three_fold_cv
)
gs.fit(boroughs, feelings)
-gs.best_score_
gs.best_estimator_
```
Plot number of components against MSE.
```
plt.plot(np.arange(1, 10), -gs.cv_results_['mean_test_score'])
```
```
import pandas as pd
import cx_Oracle
import sqlalchemy
from sqlalchemy.exc import SQLAlchemyError
import plotly.express as px
import plotly.graph_objects as go
from IPython.display import Image
```
### Fast Cars
```
try:
engine = sqlalchemy.create_engine("oracle+cx_oracle://tmulla:aEnWMJvX0zAVpMXXHp2YXieT@oracle.cise.ufl.edu:1521/orcl", arraysize=500000)
fast_sql = """
select vehicle_make, (extract(year from collision_date) - vehicle_year) as vehicle_age, count(*) as no_collisions
from tmulla.party_vehicle NATURAL JOIN gangardiwalam.party NATURAL JOIN gangardiwalam.time
where vehicle_make is not null and
vehicle_make in ('LAMBORGHINI', 'BUGATTI', 'ferrari', 'porsche', 'maserati')
and ((extract(year from collision_date) - vehicle_year) BETWEEN 0 AND 30)
group by vehicle_make, (extract(year from collision_date) - vehicle_year)
order by vehicle_make, vehicle_age
""";
df_fast = pd.read_sql(fast_sql, engine)
print(df_fast)
engine.dispose()
except SQLAlchemyError as e:
print(e)
fig = px.line(df_fast, x='vehicle_age', y='no_collisions', color='vehicle_make',
line_shape ='spline',
title ='Vehicle Manufactures')
# fig.show()
img_bytes = fig.to_image(format="png", engine="orca")
Image(img_bytes)
```
### Luxury Cars
```
try:
engine = sqlalchemy.create_engine("oracle+cx_oracle://tmulla:aEnWMJvX0zAVpMXXHp2YXieT@oracle.cise.ufl.edu:1521/orcl", arraysize=500000)
luxury_sql = """
select vehicle_make, (extract(year from collision_date) - vehicle_year) as vehicle_age, count(*) as no_collisions
from tmulla.party_vehicle NATURAL JOIN gangardiwalam.party NATURAL JOIN gangardiwalam.time
where vehicle_make is not null and
vehicle_make in ('bmw', 'mercedes-benz', 'audi', 'ASTON MARTIN', 'jaguar', 'cadillac', 'ROLLS ROYCE')
and ((extract(year from collision_date) - vehicle_year) BETWEEN 0 AND 30)
group by vehicle_make, (extract(year from collision_date) - vehicle_year)
order by vehicle_make, vehicle_age
""";
df_luxury = pd.read_sql(luxury_sql, engine)
print(df_luxury)
engine.dispose()
except SQLAlchemyError as e:
print(e)
fig = px.line(df_luxury, x='vehicle_age', y='no_collisions', color='vehicle_make',
line_shape ='spline',
title ='Vehicle Manufactures')
# fig.show()
img_bytes = fig.to_image(format="png", engine="orca")
Image(img_bytes)
```
### Basic Cars
```
try:
engine = sqlalchemy.create_engine("oracle+cx_oracle://tmulla:aEnWMJvX0zAVpMXXHp2YXieT@oracle.cise.ufl.edu:1521/orcl", arraysize=500000)
basic_sql = """
select vehicle_make, (extract(year from collision_date) - vehicle_year) as vehicle_age, count(*) as no_collisions
from tmulla.party_vehicle NATURAL JOIN gangardiwalam.party NATURAL JOIN gangardiwalam.time
where vehicle_make is not null and
vehicle_make in ('toyota', 'ford', 'honda', 'chevrolet', 'nissan','hyundai', 'volkswagen')
and ((extract(year from collision_date) - vehicle_year) BETWEEN 0 AND 30)
group by vehicle_make, (extract(year from collision_date) - vehicle_year)
order by vehicle_make, vehicle_age
""";
df_basic = pd.read_sql(basic_sql, engine)
print(df_basic)
engine.dispose()
except SQLAlchemyError as e:
print(e)
fig = px.line(df_basic, x='vehicle_age', y='no_collisions', color='vehicle_make',
line_shape ='spline',
title ='Vehicle Manufactures')
# fig.show()
img_bytes = fig.to_image(format="png", engine="orca")
Image(img_bytes)
```
# Objects and Classes
- A student, a desk, or a circle is an object
- An object is an instance of a class; you can create many objects, and the process of creating an instance of a class is called instantiation
- In Python, an object is an instance, and an instance is an object
## Defining a class
class ClassName:
do something
- The `class` keyword is used just like `def`
- Class names should preferably use CamelCase
- In Python 2 a class had to inherit from the base class `object`; in Python 3 this inheritance is implicit, so writing it is optional
- If ordinary code is the skin and functions are the underwear, then classes are the coat
## Defining a simple class without an __init__ initializer
class ClassName:
joker = "Home"
def func():
print('Worker')
- Use this form sparingly
```
class ClassName:
yk = 'home'
ClassName.yk ## == ClassName().yk
class ClassName_:
yk = 'home'
def func():
print('Worker')
ClassName_.func() ## ClassName_().func() cannot run (code error); only functions taking self as a parameter can be called in that instance form
```
## Defining a standard class
- __init__ is the initializer; any setup action can go there
- The class is then called with (), which can be read as "start initializing"
- Attributes set in the initializer are shared by the other methods of the class

```
import math
class Circle:
def __init__(self,radius = 1): # initializer: parameters usually go here
'''
Several methods of the class may need this parameter,
so it must be stored on the instance to be shared.
'''
self.radius = radius
def getPerimeter(self): # self marks this function as a method of this class
return 2 * self.radius * math.pi
def getArea(self):
return self.radius * self.radius * math.pi
def setRadius(self,radius): # takes its own radius parameter
self.radius = radius
Circle(radius=100).getPerimeter()
class class_jiou:
def __init__(self,shu =1 ):
self.shu = shu
def fun_jiou(self):
if self.shu % 2 == 0:
print(self.shu,' 偶数')
else:
print(self.shu,' 奇数')
def fun_su(self):
if self.shu == 2:
print(self.shu,' 素数')
for i in range(2,self.shu):
if self.shu % i == 0:
print(self.shu,' 不是素数')
break
if i == self.shu - 1:
print(self.shu,' 素数')
def fun_chengfabiao(self):
for i in range(1,10):
for j in range(1,i+1):
print(j,'*',i,'=',i * j,end = ' ')
print('')
class_jiou(3).fun_su()
class class_he:
def __init__(self,num):
self.num = num
def fun_zijia(self):
self.num += 1
return self.num
def fun_cheng(self):
self.num *= self.num
return self.num
num = class_he(0)
for i in range(100):
num.fun_zijia()
ji = num.fun_cheng()
print(ji)
```
## What is self?
- self is the parameter that refers to the object itself
- self is only a naming convention; it could be changed, but self is the accepted custom and easier to understand
- Through self you can access the members defined in the class
<img src="../Photo/86.png"></img>
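The points above can be illustrated with a short sketch (this example is mine, not from the original notebook): the first parameter is simply bound to the instance being manipulated, whatever we call it.

```python
class Greeter:
    # 'self' is only a convention: any name works for the first parameter,
    # but using 'self' keeps the code readable for other Python programmers.
    def __init__(this, name):
        this.name = name

    def greet(self):
        # through self we can access members defined on the instance
        return "Hello, " + self.name

g = Greeter("Joker")
print(g.greet())
```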
## Using the class Circle
## Passing parameters to a class
- class ClassName:
def __init__(self, para1, para2...):
self.para1 = para1
self.para2 = para2
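A concrete instance of this pattern (a minimal sketch; the class and attribute names are my own):

```python
class Student:
    def __init__(self, name, score):
        # each constructor argument is stored on the instance
        self.name = name
        self.score = score

s = Student("Tom", 90)
print(s.name, s.score)
```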
## EP:
- A: Define a class with two features:
- 1. Compute the maximum of some random numbers
- 2. Compute the minimum of some random numbers
- B: Define a class (nested use of functions within a class):
- 1. The first function reads in a number
- 2. The second function squares the number obtained by the first function
- 3. The third function computes the squared number minus the original input and prints the result
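One possible solution sketch for exercise A (the class and method names are my own choices, not prescribed by the exercise):

```python
import random

class RandomStats:
    """Generates a list of random numbers and reports its extremes."""
    def __init__(self, count=10):
        self.numbers = [random.randint(1, 100) for _ in range(count)]

    def maximum(self):
        return max(self.numbers)

    def minimum(self):
        return min(self.numbers)

stats = RandomStats()
print(stats.maximum(), stats.minimum())
```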
## Class inheritance
- Single inheritance
- Multiple inheritance
- Inheritance syntax
> class SonClass(FatherClass):
def __init__(self):
FatherClass.__init__(self)
```
class class_chuangcan:
def __init__(self):
self.num = 123
class class_panduan(class_chuangcan): # tell Python this subclass inherits from the parent class
def __init__(self):
class_chuangcan.__init__(self) # call the parent initializer so the subclass gets its attributes
def fun_jiou(self):
if self.num % 2 == 0:
print(self.num,' 偶数')
else:
print(self.num,' 奇数')
A = class_panduan()
A.fun_jiou()
```
## Private data fields (private variables or private functions)
- In Python, a variable or function name starting with a double underscore is private: \__Joker, def \__Joker():
- Private data fields are not inherited
- Private data fields can still be reached by force, e.g. via \__dir__()

## EP:



## Other class topics
- Encapsulation
- Grouping one family of features together, which makes future maintenance easier
- Inheritance (covered above)
- Polymorphism
- Including decorators, which will be taught later with advanced classes
- The benefit of decorators: when functions in many classes need the same feature, a decorator makes things much more convenient
- Decorators have a fixed way of being written
- They include plain decorators and decorators with arguments
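As a preview, here is a minimal sketch of the fixed decorator shape mentioned above (example mine, not part of the original lesson):

```python
import functools

def log_call(func):
    """A plain decorator: wraps func and adds behaviour around each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)
        return func(*args, **kwargs)
    return wrapper

@log_call
def add(a, b):
    return a + b

print(add(2, 3))
```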
# Homework
## The UML class diagrams do not need to be drawn
## UML is essentially a mind map
- 1

```
class Rectangle:
def __init__(self,width,height):
self.width = width
self.height = height
def getArea(self):
return self.width * self.height
def getPerimeter(self):
return (self.width + self.height) * 2
def getwh(self):
print('这个矩阵的宽',self.width,'长',self.height,end = ' ')
A = Rectangle(4,40)
B = Rectangle(3.5,35.7)
A.getwh()
print('面积',A.getArea(),'周长',A.getPerimeter())
B.getwh()
print('面积',round(B.getArea(),2),'周长',B.getPerimeter())
```
- 2

```
class Account:
def __init__(self,Id = 0,balance = 100,annualInterestRate = 0):
self.__Id = Id
self.__balance = balance
self.__annualInterestRate = annualInterestRate
def getMonthlyInterestRate(self):
return self.__annualInterestRate / 12 / 100
def getMonthlyInterest(self):
return self.__balance * (self.__annualInterestRate / 12 / 100)
def withdraw(self,num):
self.__balance -= num
def deposit(self,num):
self.__balance += num
def pri(self):
print('id = ',self.__Id,'balance = ',self.__balance,end = ' ')
A = Account(1122,20000,4.5)
A.withdraw(2500)
A.deposit(3000)
A.pri()
print('MonthlyInterestRate = ',A.getMonthlyInterestRate(),'getMonthlyInterest = ',A.getMonthlyInterest())
```
- 3

```
class Fan():
def __init__(self,speed = 1,radius = 5,color = 'blue',on = False):
self.__speed = speed
self.__on = on
self.__radius = radius
self.__color = color
def fun_speed(self):
if self.__speed == 1:
print('SLOW',self.__radius,self.__color,self.__on)
elif self.__speed == 2:
print('MEDIUM',self.__radius,self.__color,self.__on)
else:
print('FAST',self.__radius,self.__color,self.__on)
Fan(3,10,'yellow',True).fun_speed()
Fan(2,5).fun_speed()
```
- 4


```
import math
class RegularPolygin:
def __init__(self,n =3,side = 1,x = 0,y = 0):
self.__n = n
self.__side = side
self.__x = x
self.__y = y
def getPerimeter(self):
return self.__n * self.__side
def getArea(self):
return (self.__n * self.__side ** 2) / (4 * math.tan (math.pi / self.__n))
A = RegularPolygin()
print('周长 = ',A.getPerimeter(),'面积 = ',round(A.getArea(),3))
B = RegularPolygin(6,4)
print('周长 = ',B.getPerimeter(),'面积 = ',round(B.getArea(),3))
C = RegularPolygin(10,4,5.6,7.8)
print('周长 = ',C.getPerimeter(),'面积 = ',round(C.getArea(),3))
```
- 5

```
class LinearEquation:
def __init__(self,a,b,c,d,e,f):
self.__a = a
self.__b = b
self.__c = c
self.__d = d
self.__e = e
self.__f = f
def isSolvable(self):
if self.__a * self.__d - self.__b * self.__c == 0:
return False
else:
return True
def getX(self):
return (self.__e * self.__d - self.__b * self.__f) / (self.__a * self.__d - self.__b * self.__c)
def getY(self):
return (self.__a * self.__f - self.__e * self.__c) / (self.__a * self.__d - self.__b * self.__c)
A = LinearEquation(1,1,1,-1,2,0)
if A.isSolvable():
print('x = ',A.getX(),'y = ',A.getY())
else:
print('这个方程式无解')
```
- 6

```
class LinearEquation:
def __init__(self,x1,y1,x2,y2,x3,y3,x4,y4):
self.x1 = x1
self.y1 = y1
self.x2 = x2
self.y2 = y2
self.x3 = x3
self.y3 = y3
self.x4 = x4
self.y4 = y4
def findPoint(self):
k1 = (self.y2 - self.y1) / (self.x2 - self.x1)
b1 = self.y1 - k1 * self.x1
if (self.x4 - self.x3) == 0:
k2=None
b2=0
else:
k2=(self.y4 - self.y3) / (self.x4 - self.x3)
b2=self.y3 - self.x3 * k2
if k2==None:
x = self.x3
else:
x = ( b2 - b1) / (k1 - k2)
y = k1* x + b1
print('The intersecting point is :(',x,y,')')
x1,y1,x2,y2 = eval(input('Enter the endpoints of the first line segment:'))
x3,y3,x4,y4 = eval(input('Enter the endpoints of the second line segment:'))
LinearEquation(x1,y1,x2,y2,x3,y3,x4,y4).findPoint()
```
- 7

- 8 Advanced
- Define a subclass and a parent class
- The parent class generates a random number
- The subclass keeps judging whether that random number is odd or even
- This example is in fact a crude form of a distributed design
```
import random
class farther:
def __init__(self):
self.num = random.randint(1,100)
class son(farther):
def __init__(self):
farther.__init__(self)
def fun_jiou(self):
if self.num % 2 == 0:
print(self.num,'是偶数')
else:
print(self.num,'是奇数')
A = son()
A.fun_jiou()
```
# Object Oriented Programming (OOP)
- so far we've been using the procedural programming paradigm, whose focus is on functions/procedures
- the OOP paradigm is best suited to large, complex modern software systems, which it makes easier to maintain and improve over time
- focus is on creation of objects which contain both data and functionality together under one name
- typically, each class definition corresponds to some object or concept in the real world with some attributes/properties that maintain its state; and the functions/methods correspond to the ways real-world objects interact
## class
- we've used classes like str, int, float, dict, tuple, etc.
- the class keyword lets the programmer define their own compound data types
- e.g., a class that represents a point in 2-D coordinates
<pre>
class className:
[statement-1]
.
.
[statement-N]
</pre>
```
class Point:
pass
a = Point()
a.x = 0
a.y = 0
print(a.x, a.y)
# OK but NOT great and NOT common!
class Point:
"""
Point class to represent and manipulate x, y coords
"""
count = 0 # class variable/attribute
# constructor to customize the initial state of an object
# first argument refers to the instance being manipulated;
# it is customary to name this parameter self; but can be anything
def __init__(self, xx=0, yy=0):
"""Create a new point with given x and y coords"""
# x and y are object variables/attributes
self.x = xx
self.y = yy
Point.count += 1 # increment class variable
def __del__(self):
Point.count -= 1
# instantiate an object
p = Point()
# what is the access specifier for attributes?
print(p.x, p.y)
print(Point.count) # access class variable outside class
p1 = Point(2, 3)
print(p1.x, p1.y)
print(Point.count)
# Run this cell few times and see the value of Point.count
# How do you fix this problem? Use __del__ destructor method.
```
### visualizing class and instance attributes using pythontutor.com
- https://goo.gl/aGuc4r
```
from IPython.display import IFrame
src = """http://pythontutor.com/iframe-embed.html#code=class%20Point%3A%0A%20%20%20%20%22%22%22%0A%20%20%20%20Point%20class%20represents%20and%20manipulates%20x,y%20coords%0A%20%20%20%20%22%22%22%0A%20%20%20%20count%20%3D%200%20%23%20class%20variable/attribute%0A%20%20%20%20%0A%20%20%20%20%23%20constructor%20to%20customize%20the%20initial%20state%20of%20an%20object%0A%20%20%20%20%23%20first%20argument%20refers%20to%20the%20instance%20being%20manipulated%3B%0A%20%20%20%20%23%20it%20is%20customary%20to%20name%20this%20parameter%20self%3B%20but%20can%20be%20anything%0A%20%20%20%20def%20__init__%28self,%20xx%3D0,%20yy%3D0%29%3A%0A%20%20%20%20%20%20%20%20%22%22%22Create%20a%20new%20point%20with%20given%20x%20and%20y%20coords%22%22%22%0A%20%20%20%20%20%20%20%20%23%20x%20and%20y%20are%20object%20variables/attributes%0A%20%20%20%20%20%20%20%20self.x%20%3D%20xx%0A%20%20%20%20%20%20%20%20self.y%20%3D%20yy%0A%20%20%20%20%20%20%20%20Point.count%20%2B%3D%201%20%23%20increment%20class%20variable%0A%20%20%20%20%20%20%20%20%0Ap%20%3D%20Point%28%29%0Aprint%28p.x,%20p.y%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"""
IFrame(src, width=900, height=600)
```
### exercise: add a method dist_from_origin() to Point class
- computes and returns the distance from the origin
- test the methods
```
class Point:
"""
Point class represents and manipulates x,y coords
"""
count = 0
def __init__(self, xx=0, yy=0):
"""Create a new point with given x and y coords"""
self.x = xx
self.y = yy
Point.count += 1
def dist_from_origin(self):
import math
dist = math.sqrt(self.x**2+self.y**2)
return dist
def __str__(self):
return "({}, {})".format(self.x, self.y)
def __add__(self, other):
x = self.x + other.x
y = self.y + other.y
return Point(x, y)
p1 = Point(2, 2)
print(p1.dist_from_origin())
```
## sameness - alias or deep copy
```
import copy
p2 = Point(3, 4)
p3 = p2 # alias or deepcopy?
print(p2 is p3) # checks if two references refer to the same object
p4 = copy.deepcopy(p2)
print(p2 is p4)
```
## passing objects as arguments and parameters
```
def print_point(pt):
#pt.x = 100
#pt.y = 100
print('({0}, {1})'.format(pt.x, pt.y))
p = Point(10, 10)
print_point(p)
#print(p)
print(p.x, p.y)
```
## are objects passed by value or reference?
- how can you tell?
- write a simple program to test.
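One way to write the suggested test program (a sketch, example mine): Python passes objects by object reference, so mutating a parameter's attributes is visible to the caller, while rebinding the parameter name is not.

```python
class Box:
    def __init__(self, value):
        self.value = value

def mutate(box):
    # the parameter refers to the same object as the caller's variable,
    # so this change is visible outside the function
    box.value = 99

def rebind(box):
    # rebinding the local name does NOT affect the caller's object
    box = Box(0)

b = Box(1)
mutate(b)
print(b.value)   # 99: attribute mutation is shared
rebind(b)
print(b.value)   # still 99: rebinding inside the function is local
```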
## returning object instances from functions
```
def midpoint(p1, p2):
"""Returns the midpoint of points p1 and p2"""
mx = (p1.x + p2.x)/2
my = (p1.y + p2.y)/2
return Point(mx, my)
p = Point(4, 6)
q = Point(6, 4)
r = midpoint(p, q)
s = p + q
print_point(r) # fix this
print(r)
print(s)
```
## special methods / operator overloading
- https://docs.python.org/3/reference/datamodel.html
<pre>
__del__(self)
- destructor - called when instance is about to be destroyed
__str__(self)
- called by str(object) and the built-in functions format() and print() to compute the "informal" or nicely printable string representation of an object.
- must return a string object
__lt__(self, other)
x < y calls x.__lt__(y)
__gt__(self, other)
x > y calls x.__gt__(y)
__eq__(self, other)
x == y calls x.__eq__(y)
__ne__(self, other)
__ge__(self, other)
__le__(self, other)
Emulating numeric types:
__add__(self, other)
__sub__(self, other)
__mul__(self, other)
__mod__(self, other)
__truediv__(self, other)
__pow__(self, other)
__xor__(self, other)
__or__(self, other)
__and__(self, other)
</pre>
### exercise 1: implement special methods for Point class and test them
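A possible sketch for exercise 1, adding comparison and subtraction operators to a Point class like the one above (the rule of ordering points by distance from the origin is my own choice):

```python
class Point:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __lt__(self, other):
        # order points by squared distance from the origin
        return self.x**2 + self.y**2 < other.x**2 + other.y**2

    def __sub__(self, other):
        return Point(self.x - other.x, self.y - other.y)

    def __str__(self):
        return "({}, {})".format(self.x, self.y)

p, q = Point(1, 2), Point(3, 4)
print(p == q, p < q, p - q)
```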
### exercise 2: design a class to represent a triangle and implement methods to calculate its area and perimeter. Also, implement as many special methods as make sense. Instantiate some objects and test all the methods.
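A possible sketch for exercise 2 (names and the choice of comparing triangles by area are mine):

```python
import math

class Triangle:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def perimeter(self):
        return self.a + self.b + self.c

    def area(self):
        # Heron's formula
        s = self.perimeter() / 2
        return math.sqrt(s * (s - self.a) * (s - self.b) * (s - self.c))

    def __eq__(self, other):
        return self.area() == other.area()

    def __lt__(self, other):
        return self.area() < other.area()

    def __str__(self):
        return "Triangle({}, {}, {})".format(self.a, self.b, self.c)

t1 = Triangle(3, 4, 5)
print(t1, t1.perimeter(), t1.area())
```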
# Pronoun regularization
```
def d_in_pos(csv_lu):
"""
Function that takes the content of a CSV file with the following columns:
- "Numero de token"
- "Mot forme"
- "Modifié"
- "Lemme(s)"
- "POS"
It loops over the tokens, each time keeping a pair of tokens
whose numbers are consecutive (excluding punctuation). It then tests
conditions on this pair, the main one being that Analog hesitated between
a determiner form and a pronoun form, with neither a preposition nor an S,
and that the following word is a punctuation form or a verb.
The function then stores the token's new form in its dictionary entry.
Once all tokens have been analysed, the function returns the dictionary
containing all the modified tokens.
:param csv_lu: The content of the CSV file to be checked, extracted and
read by the main function.
:return: A dictionary whose keys are the numbers of the tokens it contains
and whose values are the whole CSV line as modified by the function.
"""
# Create empty local dictionaries.
dico_mots = {}
dico_modifies = {}
# Counter of modified tokens, counter of the total number of tokens.
total = 0
compteur = 0
# List of PoS values to look for in the following word.
suivant_pos = ['Vuc', 'Vun', 'Vvc', 'Vvn', 'Fw', 'Fs']
# List of PoS values corresponding to determiners.
D_pos = ['Da', 'Dd', 'Ds', 'Dn', 'Dp', 'Di', 'Dr', 'Dt']
# List of PoS values corresponding to pronouns.
P_pos = ['Pp', 'Pd', 'Pi', 'Ps', 'Pt', 'Pr']
# List used to gather the PoS values to remove from a list of PoS
# values when needed.
a_enlever = []
separateur = "|"
# Loop over the lines of the input CSV to build
# a general dictionary (dico_mots) to work with.
for row in csv_lu:
# Transfer the keys and values into a local dictionary,
# so as to build a general dictionary with
# key = ordinal number of the token (not the token identifier)
# values = a dictionary identical to the original CSV line.
identifiant = int(row['ID'])
# Add the unit to the general dictionary,
# then reset the local dictionary to empty
# (I had big errors before thinking of this).
dico_mots[identifiant] = row
total += 1
for identifiant in dico_mots.keys():
if identifiant + 1 in dico_mots.keys():
# Create two variables for the target word and the one
# that follows it.
# After the tests, only mot_cible may
# end up being modified.
mot_cible = dico_mots[identifiant]
mot_suivant = dico_mots[identifiant + 1]
# If Analog hesitated between a determiner form
# and a pronoun form, with neither a preposition nor an S,
# and the following word is a punctuation form or a verb, continue.
if "D" in mot_cible['POS'] and "P" in mot_cible['POS'] and "PREP" not in mot_cible['POS'] and "S" not in mot_cible['POS'] and mot_suivant["POS"] in suivant_pos:
# Build lists for the PoS values and lemmas, so each value can be
# compared separately against the lists declared at the top of the function.
liste_pos = mot_cible['POS'].split("|")
liste_lemmes = mot_cible['LEMMES'].split("|")
# Compare each value against the list of PoS values to keep
# and build a new list with those that do not match.
for valeur in liste_pos:
if valeur not in P_pos:
a_enlever.append(valeur)
# Remove from the target word's list of possible PoS values every
# value that is not a relevant PoS, along with the matching lemmas.
for valeur in a_enlever:
if "|" in mot_cible['LEMMES']:
index_pos = liste_pos.index(valeur)
lemme_en_moins = liste_lemmes[index_pos]
liste_lemmes.remove(lemme_en_moins)
liste_pos.remove(valeur)
mot_cible['LEMMES'] = separateur.join(liste_lemmes)
mot_cible['POS'] = separateur.join(liste_pos)
a_enlever = []
# The token has been modified, so add 1 to the modified-word counter.
compteur += 1
# Add the modified token to the dictionary returned by the function.
dico_modifies[mot_cible['ID']] = mot_cible
# Compute the totals and percentages of tokens modified by the function.
pourcent_traites = compteur * 100 / total
print(compteur, ":", round(pourcent_traites,1), "%")
# Return the modified tokens.
return dico_modifies
def desambiguiser(csv_entree, csv_sortie):
"""
Function that takes a CSV file in which each line is a lemmatized token,
with the following columns:
- "Numero de token" --- the number used as the value of the @n attribute
of the <w> element in the XML from which the tokens were extracted
(in principle, each token has a unique number).
- "Mot forme" --- the text of the original <w> element, with
abbreviations expanded and spelling modernized where applicable.
- "Modifié" --- contains either "yes" or "no"; the value was produced
by the previous scripts (NV nos. 4-6) and
indicates whether the line's values are exactly those produced
by Analog ("no") or whether the previous script modified them ("yes").
- "Lemme(s)" --- the lemma recommended by Analog, or the lemmas
Analog hesitates between, separated by '|'.
- "POS" --- the POS value recommended by Analog, or the
different values Analog hesitates between, separated by
'|'.
From the input CSV the function builds a dictionary that it passes to the
helper function in charge of analysing the tokens and modifying them if
needed, and gets back a dictionary containing only the modified tokens.
It then produces another CSV file. If a token is found in the dictionary
returned by the helper function, that version is written to the
new CSV; otherwise the original version is copied.
:param csv_entree: The internal path of the CSV file to be
checked.
:param csv_sortie: The internal path of the CSV file to which the
modified tokens are written.
"""
import csv
colonnes = [
'ID',
'TOKEN',
'Modifié',
'LEMMES',
'POS'
]
row_modifie = {}
# Ouvrir le CSV d'Analog et le lire.
with open(csv_entree) as csv_a_lire:
csv_lu = csv.DictReader(csv_a_lire)
# Récupérer la liste des tokens contenant D dans le POS
# et modifiés par la fonction.
# La fonction retourne un dictionnaire ne contenant que
# les nouvelles formes des tokens modifiés.
dico_modifies = d_in_pos(csv_lu)
# Rouvrir le CSV d'entrée et le lire.
with open(csv_entree) as csv_a_lire:
csv_lu = csv.DictReader(csv_a_lire)
# Ouvrir le fichier de sortie et le lire en mode écriture.
with open(csv_sortie, 'w') as csv_a_ecrire:
a_ecrire = csv.DictWriter(csv_a_ecrire, fieldnames = colonnes)
# Écrire les noms des colonnes.
a_ecrire.writeheader()
# Boucler à nouveau sur les lignes du CSV d'entrée.
for row in csv_lu:
# Si le token se trouve dans les formes modifiées,
# on écrit la nouvelle forme dans le nouveau CSV.
if row['ID'] in dico_modifies.keys():
row_modifie = dico_modifies[row['ID']]
a_ecrire.writerow({
'ID' : row_modifie['ID'],
'TOKEN' : row_modifie['TOKEN'],
'Modifié' : "yes",
'LEMMES' : row_modifie['LEMMES'],
'POS' : row_modifie['POS']
})
row_modifie = {}
# Si le token n'était pas dans les formes modifiées,
# on écrit la forme originelle.
else:
a_ecrire.writerow({
'ID' : row['ID'],
'TOKEN' : row['TOKEN'],
'Modifié' : row['Modifié'],
'LEMMES' : row['LEMMES'],
'POS' : row['POS']
})
desambiguiser('/home/erminea/Documents/CONDE/GC/2021-05-27_GC_tokens_det_fait.csv',
'/home/erminea/Documents/CONDE/GC/2021-05-27_GC_tokens_pro_fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Basnage-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Basnage-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Berault-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Berault-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/instructions-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/instructions-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Merville-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Merville-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Pesnelle-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Pesnelle-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Rouille-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Rouille-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/ruines-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/ruines-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/tac-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/tac-pro-fait.csv')
desambiguiser('/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Terrien-det-fait.csv',
'/home/erminea/Documents/CONDE/Encodage/2020-04-22-work/Terrien-pro-fait.csv')
```
# 1. Import libraries
```
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import tensorflow as tf
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
#tf.set_random_seed(seed)
tf.compat.v1.set_random_seed(seed)
#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Dense, Input, Flatten, Activation, Dropout, Layer
from keras.layers.normalization import BatchNormalization
from keras.utils import to_categorical
from keras import optimizers,initializers,constraints,regularizers
from keras import backend as K
from keras.callbacks import LambdaCallback,ModelCheckpoint
from keras.utils import plot_model
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import h5py
import math
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import scipy.sparse as sparse
#--------------------------------------------------------------------------------------------------------------------------------
# Import our self-defined methods
import sys
sys.path.append(r"./Defined")
import Functions as F
# The following code should be added before the keras model
#np.random.seed(seed)
```
# 2. Loading data
```
train_data_frame=np.array(pd.read_csv('./Dataset/isolet1+2+3+4.data',header=None))
test_data_frame=np.array(pd.read_csv('./Dataset/isolet5.data',header=None))
train_data_arr=(train_data_frame[:,0:617]).copy()
train_label_arr=((train_data_frame[:,617]).copy()-1)
test_data_arr=(test_data_frame[:,0:617]).copy()
test_label_arr=((test_data_frame[:,617]).copy()-1)
train_data_arr.shape
test_data_arr.shape
np.r_[train_data_arr,test_data_arr].shape
Data=MinMaxScaler(feature_range=(0,1)).fit_transform(np.r_[train_data_arr,test_data_arr])
Data.shape
C_train_x=Data[:len(train_data_arr)]
C_test_x=Data[len(train_data_arr):]
C_train_y=train_label_arr#to_categorical(train_label_arr)
C_test_y=test_label_arr#to_categorical(test_label_arr)
x_train,x_validate,y_train_onehot,y_validate_onehot= train_test_split(C_train_x,C_train_y,test_size=0.1,random_state=seed)
x_test=C_test_x
y_test_onehot=C_test_y
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_validate: ' + str(x_validate.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_validate: ' + str(y_validate_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
print('Shape of C_train_x: ' + str(C_train_x.shape))
print('Shape of C_train_y: ' + str(C_train_y.shape))
print('Shape of C_test_x: ' + str(C_test_x.shape))
print('Shape of C_test_y: ' + str(C_test_y.shape))
key_feture_number=50
```
# 3.Model
```
np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
class Feature_Select_Layer(Layer):

    def __init__(self, output_dim, **kwargs):
        super(Feature_Select_Layer, self).__init__(**kwargs)
        self.output_dim = output_dim

    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1],),
                                      initializer=initializers.RandomUniform(minval=0.999999, maxval=0.9999999, seed=seed),
                                      trainable=True)
        super(Feature_Select_Layer, self).build(input_shape)

    def call(self, x, selection=False, k=key_feture_number):
        kernel = K.pow(self.kernel, 2)
        if selection:
            kernel_ = K.transpose(kernel)
            kth_largest = tf.math.top_k(kernel_, k=k)[0][-1]
            kernel = tf.where(condition=K.less(kernel, kth_largest), x=K.zeros_like(kernel), y=kernel)
        return K.dot(x, tf.linalg.tensor_diag(kernel))

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
#--------------------------------------------------------------------------------------------------------------------------------
def Autoencoder(p_data_feature=x_train.shape[1],\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3):
input_img = Input(shape=(p_data_feature,), name='input_img')
encoded = Dense(p_encoding_dim, activation='linear',kernel_initializer=initializers.glorot_uniform(seed))(input_img)
bottleneck=encoded
decoded = Dense(p_data_feature, activation='linear',kernel_initializer=initializers.glorot_uniform(seed))(encoded)
latent_encoder = Model(input_img, bottleneck)
autoencoder = Model(input_img, decoded)
autoencoder.compile(loss='mean_squared_error', optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
#print('Latent Encoder Structure-------------------------------------')
#latent_encoder.summary()
return autoencoder,latent_encoder
#--------------------------------------------------------------------------------------------------------------------------------
def Identity_Autoencoder(p_data_feature=x_train.shape[1],\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
bottleneck_score=encoded_score
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
latent_encoder_score = Model(input_img, bottleneck_score)
autoencoder = Model(input_img, decoded_score)
autoencoder.compile(loss='mean_squared_error',\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,latent_encoder_score
#--------------------------------------------------------------------------------------------------------------------------------
def Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate=1E-3,\
p_loss_weight_1=1,\
p_loss_weight_2=2):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
feature_selection_choose=feature_selection(input_img,selection=True,k=p_feture_number)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
encoded_choose=encoded(feature_selection_choose)
bottleneck_score=encoded_score
bottleneck_choose=encoded_choose
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
decoded_choose =decoded(bottleneck_choose)
latent_encoder_score = Model(input_img, bottleneck_score)
latent_encoder_choose = Model(input_img, bottleneck_choose)
feature_selection_output=Model(input_img,feature_selection_choose)
autoencoder = Model(input_img, [decoded_score,decoded_choose])
autoencoder.compile(loss=['mean_squared_error','mean_squared_error'],\
loss_weights=[p_loss_weight_1, p_loss_weight_2],\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,feature_selection_output,latent_encoder_score,latent_encoder_choose
```
## 3.1 Structure and parameter testing
```
epochs_number=200
batch_size_value=64
```
---
### 3.1.1 Fractal Autoencoder
---
```
loss_weight_1=0.0078125
F_AE,\
feature_selection_output,\
latent_encoder_score_F_AE,\
latent_encoder_choose_F_AE=Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3,\
p_loss_weight_1=loss_weight_1,\
p_loss_weight_2=1)
#file_name="./log/F_AE_"+str(key_feture_number)+".png"
#plot_model(F_AE, to_file=file_name,show_shapes=True)
model_checkpoint=ModelCheckpoint('./log_weights/F_AE_'+str(key_feture_number)+'_weights_'+str(loss_weight_1)+'.{epoch:04d}.hdf5',period=100,save_weights_only=True,verbose=1)
#print_weights = LambdaCallback(on_epoch_end=lambda batch, logs: print(F_AE.layers[1].get_weights()))
F_AE_history = F_AE.fit(x_train, [x_train,x_train],\
epochs=epochs_number,\
batch_size=batch_size_value,\
shuffle=True,\
validation_data=(x_validate, [x_validate,x_validate]),\
callbacks=[model_checkpoint])
loss = F_AE_history.history['loss']
val_loss = F_AE_history.history['val_loss']
epochs = range(epochs_number)
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'r', label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Zoom in on the tail of training (the offset must stay below epochs_number, otherwise the slice is empty)
plt.plot(epochs[150:], loss[150:], 'bo', label='Training Loss')
plt.plot(epochs[150:], val_loss[150:], 'r', label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
p_data=F_AE.predict(x_test)
numbers=x_test.shape[0]*x_test.shape[1]
print("MSE for one-to-one map layer",np.sum(np.power(np.array(p_data)[0]-x_test,2))/numbers)
print("MSE for feature selection layer",np.sum(np.power(np.array(p_data)[1]-x_test,2))/numbers)
```
---
### 3.1.2 Feature selection layer output
---
```
FS_layer_output=feature_selection_output.predict(x_test)
print(np.sum(FS_layer_output[0]>0))
```
---
### 3.1.3 Key features
---
```
key_features=F.top_k_keepWeights_1(F_AE.get_layer(index=1).get_weights()[0],key_feture_number)
print(np.sum(F_AE.get_layer(index=1).get_weights()[0]>0))
```
# 4 Classifying
### 4.1 Extra Trees
```
train_feature=C_train_x
train_label=C_train_y
test_feature=C_test_x
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
selected_position_list=np.where(key_features>0)[0]
```
---
#### 4.1.1. On Identity Selection layer
---
a) with zeros
```
train_feature=feature_selection_output.predict(C_train_x)
print("train_feature>0: ",np.sum(train_feature[0]>0))
print(train_feature.shape)
train_label=C_train_y
test_feature=feature_selection_output.predict(C_test_x)
print("test_feature>0: ",np.sum(test_feature[0]>0))
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
---
b) Sparse matrix
```
train_feature=feature_selection_output.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=feature_selection_output.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
train_feature_sparse=sparse.coo_matrix(train_feature)
test_feature_sparse=sparse.coo_matrix(test_feature)
p_seed=seed
F.ETree(train_feature_sparse,train_label,test_feature_sparse,test_label,p_seed)
```
---
c) Compression
```
train_feature_=feature_selection_output.predict(C_train_x)
train_feature=F.compress_zero(train_feature_,key_feture_number)
print(train_feature.shape)
train_label=C_train_y
test_feature_=feature_selection_output.predict(C_test_x)
test_feature=F.compress_zero(test_feature_,key_feture_number)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
---
d) Compression with structure
```
train_feature_=feature_selection_output.predict(C_train_x)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(train_feature.shape)
train_label=C_train_y
test_feature_=feature_selection_output.predict(C_test_x)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
---
#### 4.1.2. On Original Selection
---
a) with zeros
```
train_feature=np.multiply(C_train_x, key_features)
print("train_feature>0: ",np.sum(train_feature[0]>0))
print(train_feature.shape)
train_label=C_train_y
test_feature=np.multiply(C_test_x, key_features)
print("test_feature>0: ",np.sum(test_feature[0]>0))
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
---
b) Sparse matrix
```
train_feature=np.multiply(C_train_x, key_features)
print(train_feature.shape)
train_label=C_train_y
test_feature=np.multiply(C_test_x, key_features)
print(test_feature.shape)
test_label=C_test_y
train_feature_sparse=sparse.coo_matrix(train_feature)
test_feature_sparse=sparse.coo_matrix(test_feature)
p_seed=seed
F.ETree(train_feature_sparse,train_label,test_feature_sparse,test_label,p_seed)
```
---
c) Compression
```
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero(train_feature_,key_feture_number)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero(test_feature_,key_feture_number)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
---
d) Compression with structure
```
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
---
#### 4.1.3. Latent space
---
```
train_feature=latent_encoder_score_F_AE.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=latent_encoder_score_F_AE.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
train_feature=latent_encoder_choose_F_AE.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=latent_encoder_choose_F_AE.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
---
# 6 Feature group comparison
---
```
Selected_Weights=F.top_k_keep(F_AE.get_layer(index=1).get_weights()[0],key_feture_number)
selected_position_group=F.k_index_argsort_1d(Selected_Weights,key_feture_number)
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_group)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_group)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature[:,0:25],train_label,test_feature[:,0:25],test_label,p_seed)
p_seed=seed
F.ETree(train_feature[:,25:],train_label,test_feature[:,25:],test_label,p_seed)
p_seed=seed
F.ETree(train_feature[:,0:30],train_label,test_feature[:,0:30],test_label,p_seed)
p_seed=seed
F.ETree(train_feature[:,30:],train_label,test_feature[:,30:],test_label,p_seed)
```
# 7. Reconstruction loss
```
from sklearn.linear_model import LinearRegression
def mse_check(train, test):
LR = LinearRegression(n_jobs = -1)
LR.fit(train[0], train[1])
MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()
return MSELR
train_feature_=np.multiply(C_train_x, key_features)
C_train_selected_x=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(C_train_selected_x.shape)
test_feature_=np.multiply(C_test_x, key_features)
C_test_selected_x=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(C_test_selected_x.shape)
train_feature_tuple=(C_train_selected_x,C_train_x)
test_feature_tuple=(C_test_selected_x,C_test_x)
reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)
print(reconstruction_loss)
```
```
import numpy as np
import pandas as pd
credits = pd.read_csv('credits.csv')
credits.head()
meta = pd.read_csv('movies_metadata.csv')
meta['release_date'] = pd.to_datetime(meta['release_date'], errors='coerce')
meta['year'] = meta['release_date'].dt.year
meta['year'].value_counts().sort_index()
# Getting only 2017 movies as we already have movies up to the year 2016 in preprocessing 1 file.
# We don't have enough data for the movies from 2018, 2019 and 2020.
# We'll deal with it in the upcoming preprocessing files
new_meta = meta.loc[meta.year == 2017,['genres','id','title','year']]
new_meta
new_meta['id'] = new_meta['id'].astype(int)
data = pd.merge(new_meta, credits, on='id')
pd.set_option('display.max_colwidth', 75)
data
# evaluates an expression node or a string containing a Python literal or container display
import ast
data['genres'] = data['genres'].map(lambda x: ast.literal_eval(x))
data['cast'] = data['cast'].map(lambda x: ast.literal_eval(x))
data['crew'] = data['crew'].map(lambda x: ast.literal_eval(x))
def make_genresList(x):
gen = []
st = " "
for i in x:
if i.get('name') == 'Science Fiction':
scifi = 'Sci-Fi'
gen.append(scifi)
else:
gen.append(i.get('name'))
if gen == []:
return np.NaN
else:
return (st.join(gen))
data['genres_list'] = data['genres'].map(lambda x: make_genresList(x))
data['genres_list']
def get_actor1(x):
casts = []
for i in x:
casts.append(i.get('name'))
if casts == []:
return np.NaN
else:
return (casts[0])
data['actor_1_name'] = data['cast'].map(lambda x: get_actor1(x))
def get_actor2(x):
casts = []
for i in x:
casts.append(i.get('name'))
if casts == [] or len(casts)<=1:
return np.NaN
else:
return (casts[1])
data['actor_2_name'] = data['cast'].map(lambda x: get_actor2(x))
data['actor_2_name']
def get_actor3(x):
casts = []
for i in x:
casts.append(i.get('name'))
if casts == [] or len(casts)<=2:
return np.NaN
else:
return (casts[2])
data['actor_3_name'] = data['cast'].map(lambda x: get_actor3(x))
data['actor_3_name']
def get_directors(x):
dt = []
st = " "
for i in x:
if i.get('job') == 'Director':
dt.append(i.get('name'))
if dt == []:
return np.NaN
else:
return (st.join(dt))
data['director_name'] = data['crew'].map(lambda x: get_directors(x))
data['director_name']
movie = data.loc[:,['director_name','actor_1_name','actor_2_name','actor_3_name','genres_list','title']]
movie
movie.isna().sum()
movie = movie.dropna(how='any')
movie.isna().sum()
movie = movie.rename(columns={'genres_list':'genres'})
movie = movie.rename(columns={'title':'movie_title'})
movie['movie_title'] = movie['movie_title'].str.lower()
movie['comb'] = movie['actor_1_name'] + ' ' + movie['actor_2_name'] + ' '+ movie['actor_3_name'] + ' '+ movie['director_name'] +' ' + movie['genres']
movie
old = pd.read_csv('data.csv')
old
old['comb'] = old['actor_1_name'] + ' ' + old['actor_2_name'] + ' '+ old['actor_3_name'] + ' '+ old['director_name'] +' ' + old['genres']
old
# DataFrame.append is deprecated in recent pandas; concatenating keeps the same row order
new = pd.concat([old, movie])
new
new.drop_duplicates(subset ="movie_title", keep = 'last', inplace = True)
new
new.to_csv('new_data.csv',index=False)
```
## Part I: On-policy learning and SARSA
(3 points)
_This notebook builds upon `qlearning.ipynb`, or, to be exact, on the `qlearning.py` module generated from it._
The policy we're going to use is the epsilon-greedy policy, where the agent takes the optimal action with probability $(1-\epsilon)$ and otherwise samples an action at random. Note that the agent __can__ still sample the optimal action during the random step by pure chance.
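As a quick illustration of that policy (the `q_values` list and the function name are made up for this example, not part of the assignment code):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick argmax(Q) with probability 1 - epsilon, otherwise a uniformly random action."""
    if random.random() < epsilon:
        # the random branch may also hit the greedy action by pure chance
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

print(epsilon_greedy([0.1, 0.5, 0.2], epsilon=0.0))  # prints 1: with epsilon = 0 the choice is always greedy
```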
```
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
%env DISPLAY = : 1
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
from qlearning import QLearningAgent
class EVSarsaAgent(QLearningAgent):
"""
An agent that changes some of q-learning functions to implement Expected Value SARSA.
Note: this demo assumes that your implementation of QLearningAgent.update uses get_value(next_state).
If it doesn't, please add
def update(self, state, action, reward, next_state):
and implement it for Expected Value SARSA's V(s')
"""
    def get_value(self, state):
        """
        Returns Vpi for current state under epsilon-greedy policy:
        V_{pi}(s) = sum _{over a_i} {pi(a_i | s) * Q(s, a_i)}
        Hint: all other methods from QLearningAgent are still accessible.
        """
        epsilon = self.epsilon
        possible_actions = self.get_legal_actions(state)
        # If there are no legal actions, return 0.0
        if len(possible_actions) == 0:
            return 0.0
        # Under the epsilon-greedy policy, every action has probability
        # epsilon / n_actions, and the greedy action gets (1 - epsilon) on top.
        n_actions = len(possible_actions)
        best_action = self.get_best_action(state)
        state_value = 0.0
        for action in possible_actions:
            pi = epsilon / n_actions
            if action == best_action:
                pi += 1 - epsilon
            state_value += pi * self.get_qvalue(state, action)
        return state_value
```
### Cliff World
Let's now see how our algorithm compares against q-learning in case where we force agent to explore all the time.
<img src=https://github.com/yandexdataschool/Practical_RL/raw/master/yet_another_week/_resource/cliffworld.png width=600>
<center><i>image by cs188</i></center>
```
import gym
import gym.envs.toy_text
env = gym.envs.toy_text.CliffWalkingEnv()
n_actions = env.action_space.n
print(env.__doc__)
# Our cliffworld has one difference from what's on the image: there is no wall.
# Agent can choose to go as close to the cliff as it wishes. x:start, T:exit, C:cliff, o: flat ground
env.render()
def play_and_train(env, agent, t_max=10**4):
"""This function should
- run a full game, actions given by agent.getAction(s)
- train agent using agent.update(...) whenever possible
- return total reward"""
# global game
total_reward = 0.0
s = env.reset()
# game += 1
for t in range(t_max):
a = agent.get_action(s)
next_s, r, done, _ = env.step(a)
agent.update(s, a, r, next_s)
s = next_s
total_reward += r
# if t % 500 == 0:
# clear_output(True)
# env.render()
# print("game: {}".format(game), "step: {}".format(t), "total reward: {}".format(total_reward))
if done:
break
return total_reward
from qlearning import QLearningAgent
agent_sarsa = EVSarsaAgent(alpha=0.25, epsilon=0.2, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
agent_ql = QLearningAgent(alpha=0.25, epsilon=0.2, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
# from IPython.display import clear_output
# game = 0
# for i in range(100):
# reward = play_and_train(env, agent_sarsa)
# Note: agent.epsilon stays constant
from IPython.display import clear_output
from pandas import DataFrame
def moving_average(x, span=100): return DataFrame(
{'x': np.asarray(x)}).x.ewm(span=span).mean().values
rewards_sarsa, rewards_ql = [], []
for i in range(1000):
rewards_sarsa.append(play_and_train(env, agent_sarsa))
rewards_ql.append(play_and_train(env, agent_ql))
# Note: agent.epsilon stays constant
if i % 100 == 0:
clear_output(True)
print('EVSARSA mean reward =', np.mean(rewards_sarsa[-100:]))
print('QLEARNING mean reward =', np.mean(rewards_ql[-100:]))
plt.title("epsilon = %s" % agent_ql.epsilon)
plt.plot(moving_average(rewards_sarsa), label='ev_sarsa')
plt.plot(moving_average(rewards_ql), label='qlearning')
plt.grid()
plt.legend()
plt.ylim(-50000, 0)
plt.show()
```
Let's now see what the algorithms learned by visualizing their actions at every state.
```
def draw_policy(env, agent):
""" Prints CliffWalkingEnv policy with arrows. Hard-coded. """
n_rows, n_cols = env._cliff.shape
actions = '^>v<'
for yi in range(n_rows):
for xi in range(n_cols):
if env._cliff[yi, xi]:
print(" C ", end='')
elif (yi * n_cols + xi) == env.start_state_index:
print(" X ", end='')
elif (yi * n_cols + xi) == n_rows * n_cols - 1:
print(" T ", end='')
else:
print(" %s " %
actions[agent.get_best_action(yi * n_cols + xi)], end='')
print()
print("Q-Learning")
draw_policy(env, agent_ql)
print("SARSA")
draw_policy(env, agent_sarsa)
```
### More on SARSA
Here are some of the things you can do if you feel like it:
* Play with epsilon. See how the learned policies change if you set epsilon to higher/lower values (e.g. 0.75).
* Expected Value SARSA for softmax policy __(2pts)__:
$$ \pi(a_i|s) = softmax({Q(s,a_i) \over \tau}) = {e ^ {Q(s,a_i)/ \tau} \over {\sum_{a_j} e ^{Q(s,a_j) / \tau }}} $$
* Implement N-step algorithms and TD($\lambda$): see [Sutton's book](http://incompleteideas.net/book/bookdraft2018jan1.pdf) chapter 7 and chapter 12.
* Use those algorithms to train on CartPole in previous / next assignment for this week.
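A minimal sketch of such a softmax (Boltzmann) policy; the `q_values` and temperature `tau` here are made up for the example, not taken from the assignment code:

```python
import numpy as np

def softmax_policy(q_values, tau=1.0):
    """pi(a|s) proportional to exp(Q(s,a)/tau); subtract the max for numerical stability."""
    q = np.asarray(q_values, dtype=float) / tau
    e = np.exp(q - q.max())
    return e / e.sum()

probs = softmax_policy([1.0, 2.0, 3.0], tau=1.0)
print(probs)  # higher Q gets higher probability; the entries sum to 1
```

Lowering `tau` makes the policy greedier; raising it pushes the probabilities toward uniform.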
### Homework part II: experience replay
(4 points)
There's a powerful technique that you can use to improve sample efficiency for off-policy algorithms: [spoiler] Experience replay :)
The catch is that you can train Q-learning and EV-SARSA on `<s,a,r,s'>` tuples even if they aren't sampled under current agent's policy. So here's what we're gonna do:
<img src=https://github.com/yandexdataschool/Practical_RL/raw/master/yet_another_week/_resource/exp_replay.png width=480>
#### Training with experience replay
1. Play the game, sampling `<s,a,r,s'>`.
2. Update q-values based on `<s,a,r,s'>`.
3. Store the `<s,a,r,s'>` transition in a buffer.
4. If the buffer is full, delete the earliest data.
5. Sample K such transitions from the buffer and update q-values based on them.
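Assembled into a loop, the steps above might look like the following sketch; the `env`, `agent`, and `buffer` interfaces are assumed to match the ones used elsewhere in this notebook, so treat this as an outline rather than the reference implementation:

```python
def play_with_replay(env, agent, buffer, replay_batch_size=32, t_max=1000):
    """One episode of Q-learning with experience replay: act, learn online, store, then replay."""
    s = env.reset()
    total_reward = 0.0
    for _ in range(t_max):
        # 1. play: sample <s, a, r, s'>
        a = agent.get_action(s)
        next_s, r, done, _ = env.step(a)
        # 2. update q-values on the fresh transition
        agent.update(s, a, r, next_s)
        # 3-4. store it; the buffer itself evicts the oldest entry once full
        buffer.add(s, a, r, next_s, done)
        # 5. replay K stored transitions
        s_b, a_b, r_b, next_s_b, _ = buffer.sample(replay_batch_size)
        for i in range(len(s_b)):
            agent.update(s_b[i], a_b[i], r_b[i], next_s_b[i])
        s = next_s
        total_reward += r
        if done:
            break
    return total_reward
```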
To enable such training, first we must implement a memory structure that would act like such a buffer.
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import clear_output
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
%env DISPLAY = : 1
import random
class ReplayBuffer(object):
def __init__(self, size):
"""
Create Replay buffer.
Parameters
----------
size: int
Max number of transitions to store in the buffer. When the buffer
overflows the old memories are dropped.
Note: for this assignment you can pick any data structure you want.
If you want to keep it simple, you can store a list of tuples of (s, a, r, s') in self._storage
However you may find out there are faster and/or more memory-efficient ways to do so.
"""
self._storage = []
self._maxsize = size
# OPTIONAL: YOUR CODE
def __len__(self):
return len(self._storage)
def add(self, obs_t, action, reward, obs_tp1, done):
'''
Make sure _storage does not exceed _maxsize.
Make sure the FIFO rule is followed: the oldest examples have to be removed first.
'''
data = (obs_t, action, reward, obs_tp1, done)
# add data to storage
<YOUR CODE >
def sample(self, batch_size):
"""Sample a batch of experiences.
Parameters
----------
batch_size: int
How many transitions to sample.
Returns
-------
obs_batch: np.array
batch of observations
act_batch: np.array
batch of actions executed given obs_batch
rew_batch: np.array
rewards received as results of executing act_batch
next_obs_batch: np.array
next set of observations seen after executing act_batch
done_mask: np.array
done_mask[i] = 1 if executing act_batch[i] resulted in
the end of an episode and 0 otherwise.
"""
idxes = <randomly generate batch_size integers to be used as indexes of samples >
# collect <s,a,r,s',done> for each index
<YOUR CODE >
return np.array( < states > ), np.array( < actions > ), np.array( < rewards > ), np.array( < next_states > ), np.array( < is_done > )
```
Some tests to make sure your buffer works right
```
replay = ReplayBuffer(2)
obj1 = tuple(range(5))
obj2 = tuple(range(5, 10))
replay.add(*obj1)
assert replay.sample(
1) == obj1, "If there's just one object in buffer, it must be retrieved by buf.sample(1)"
replay.add(*obj2)
assert len(
replay._storage) == 2, "Please make sure __len__ methods works as intended."
replay.add(*obj2)
assert len(replay._storage) == 2, "When buffer is at max capacity, replace objects instead of adding new ones."
assert tuple(np.unique(a) for a in replay.sample(100)) == obj2
replay.add(*obj1)
assert max(len(np.unique(a)) for a in replay.sample(100)) == 2
replay.add(*obj1)
assert tuple(np.unique(a) for a in replay.sample(100)) == obj1
print("Success!")
```
Now let's use this buffer to improve training:
```
import gym
from qlearning import QLearningAgent
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
def play_and_train_with_replay(env, agent, replay=None,
t_max=10**4, replay_batch_size=32):
"""
This function should
- run a full game, actions given by agent.getAction(s)
- train agent using agent.update(...) whenever possible
- return total reward
:param replay: ReplayBuffer where agent can store and sample (s,a,r,s',done) tuples.
If None, do not use experience replay
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s
a = <YOUR CODE >
next_s, r, done, _ = env.step(a)
# update agent on current transition. Use agent.update
<YOUR CODE >
if replay is not None:
# store current <s,a,r,s'> transition in buffer
<YOUR CODE >
# sample replay_batch_size random transitions from replay,
# then update agent on each of them in a loop
<YOUR CODE >
s = next_s
total_reward += r
if done:
break
return total_reward
# Create two agents: first will use experience replay, second will not.
agent_baseline = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
agent_replay = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
replay = ReplayBuffer(1000)
from IPython.display import clear_output
rewards_replay, rewards_baseline = [], []
for i in range(1000):
rewards_replay.append(
play_and_train_with_replay(env, agent_replay, replay))
rewards_baseline.append(play_and_train_with_replay(
env, agent_baseline, replay=None))
agent_replay.epsilon *= 0.99
agent_baseline.epsilon *= 0.99
if i % 100 == 0:
clear_output(True)
print('Baseline : eps =', agent_baseline.epsilon,
'mean reward =', np.mean(rewards_baseline[-10:]))
print('ExpReplay: eps =', agent_replay.epsilon,
'mean reward =', np.mean(rewards_replay[-10:]))
plt.plot(moving_average(rewards_replay), label='exp. replay')
plt.plot(moving_average(rewards_baseline), label='baseline')
plt.grid()
plt.legend()
plt.show()
```
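The plotting cell above calls a `moving_average` helper that is defined elsewhere in the course materials; a typical implementation (ours, shown here for completeness) is a trailing-window mean:

```python
import numpy as np

def moving_average(x, window=10):
    """Trailing-window mean; returns x unchanged if it is shorter than the window."""
    x = np.asarray(x, dtype=float)
    if len(x) < window:
        return x
    cumsum = np.cumsum(np.insert(x, 0, 0.0))
    return (cumsum[window:] - cumsum[:-window]) / window
```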
#### What to expect:
Experience replay, if implemented correctly, will improve the algorithm's initial convergence a lot, but it shouldn't affect the final performance.
### Outro
We will use the code you just wrote extensively in the next week of our course. If you're feeling that you need more examples to understand how experience replay works, try using it for binarized state spaces (CartPole or other __[classic control envs](https://gym.openai.com/envs/#classic_control)__).
__Next week__ we're gonna explore how q-learning and similar algorithms can be applied for large state spaces, with deep learning models to approximate the Q function.
However, __the code you've written__ for this week is already capable of solving many RL problems, and as an added benefit it is very easy to reuse: just throw Q-learning, SARSA and Experience Replay into a file and import the stuff you need for any RL problem you want to solve.
### Bonus I: TD($ \lambda $) (5+ points)
There's a number of advanced algorithms you can find in week 3 materials (Silver lecture II and/or reading about eligibility traces). One such algorithm is TD(lambda), which is based on the idea of eligibility traces. You can also view it as a combination of N-step updates for all N.
* N-step temporal difference from Sutton's book - [url](http://incompleteideas.net/sutton/book/ebook/node73.html)
* Eligibility traces from Sutton's book - [url](http://incompleteideas.net/sutton/book/ebook/node72.html)
* Blog post on eligibility traces - [url](http://pierrelucbacon.com/traces/)
Here's a practical algorithm you can start with: [url](https://stackoverflow.com/questions/40862578/how-to-understand-watkinss-q%CE%BB-learning-algorithm-in-suttonbartos-rl-book/40892302)
Implementing this algorithm will prove more challenging than q-learning or sarsa, but doing so will earn you a deeper understanding of how value-based methods work [in addition to some bonus points].
More kudos for comparing and analyzing TD($\lambda$) against Q-learning and EV-SARSA in different setups (taxi vs cartpole, constant epsilon vs decreasing epsilon).
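As a starting point, the core accumulating-trace update of tabular TD($\lambda$) can be sketched as follows (function name and defaults are ours):

```python
import numpy as np

def td_lambda_step(V, E, s, r, s_next, alpha=0.5, gamma=1.0, lam=0.5):
    """One accumulating-trace TD(lambda) update for tabular state values."""
    delta = r + gamma * V[s_next] - V[s]   # one-step TD error
    E[s] += 1.0                            # bump the trace of the visited state
    V += alpha * delta * E                 # credit all recently visited states
    E *= gamma * lam                       # decay all traces
    return V, E
```

Watkins's Q(λ) adds the extra rule of cutting traces after exploratory (non-greedy) actions.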
### Bonus II: More pacman (5+ points)
__see README.md for software requirements of seminar_py2__
Remember seminar_py2, where your vanilla q-learning had a hard time solving Pacman even on a small grid? Now's the time to fix that issue.
We'll focus on those grids for pacman setup.
* python pacman.py -p PacmanQAgent -x N_TRAIN_GAMES -n N_TOTAL_GAMES -l __mediumGrid__
* python pacman.py -p PacmanQAgent -x N_TRAIN_GAMES -n N_TOTAL_GAMES -l __mediumClassic__
Even if you adjust N_TRAIN_GAMES to 10^5 and N_TOTAL_GAMES to 10^5+100 (the last 100 games are for testing), pacman won't solve those environments.
The problem with those environments is that they have a huge number of unique states. However, you can devise a smaller state representation by choosing different observation parameters, e.g.:
* distance and direction to nearest ghost
* where the nearest food is
* 'center of mass' of all food points (and variance, and whatever)
* is there a wall in each direction
* and anything else you see fit
Here's how to get this information from [state](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/pacman.py#L49),
* Get pacman position: [state.getPacmanPosition()](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/pacman.py#L128)
* Is there a wall at (x,y)?: [state.hasWall(x,y)](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/pacman.py#L189)
* Get ghost positions: [state.getGhostPositions()](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/pacman.py#L144)
* Get all capsule positions: [state.getCapsules()](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/pacman.py#L153)
You can call those methods anywhere you see state.
* e.g. in [agent.getValue(state)](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/qlearningAgents.py#L52)
* Defining a function that extracts all features and calling it in [getQValue](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/qlearningAgents.py#L38) and [setQValue](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/qlearningAgents.py#L44) is probably enough.
* You can also change agent parameters. The simplest way is to hard-code them in [PacmanQAgent](https://github.com/yandexdataschool/Practical_RL/blob/master/week3_model_free/seminar_py2/qlearningAgents.py#L140)
Also, don't forget to optimize the ```learning_rate```, ```discount``` and ```epsilon``` parameters of your model; this may also help to solve this environment.
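To make the state-compression idea concrete, here is a sketch of how one could bucket a ghost's position into a coarse (direction, distance) feature; the helper name and the bucketing scheme are illustrative, not part of the assignment code:

```python
import numpy as np

def ghost_feature(pacman_pos, ghost_pos, max_bucket=3):
    """Compress a ghost's exact position into (direction, coarse distance)."""
    dx = ghost_pos[0] - pacman_pos[0]
    dy = ghost_pos[1] - pacman_pos[1]
    direction = (int(np.sign(dx)), int(np.sign(dy)))
    # Manhattan distance, capped into a handful of buckets
    distance_bucket = min((abs(dx) + abs(dy)) // 2, max_bucket)
    return direction, int(distance_bucket)
```

In the Pacman setup, this could be called from a feature-extraction function with `state.getPacmanPosition()` and `state.getGhostPositions()`, replacing the raw state in `getQValue`/`setQValue`.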
# Population
```
#Routines for scraping and downloading data from http://esa.un.org/unpd/wpp/unpp/panel_indicators.htm
#Source: http://schoolofdata.org/2014/09/12/easy-access-to-world-bank-and-un-development-data-from-ipython-notebooks/
import requests
from bs4 import BeautifulSoup
from io import StringIO
import pandas as pd
def getFormSelectVals(soup,name):
#Parse out options from an HTML select list
tmp={}
items=soup.find('select',{'name':name})
for o in items.findAll('option'):
#print(o.attrs['value'],o.text.strip())
tmp[o.text.strip()]=o.attrs['value']
return tmp
def getUNPopDivIndicators():
#Grab the UN Population Division indicators page and parse out form values
url='http://esa.un.org/unpd/wpp/unpp/panel_indicators.htm'
page = requests.get(url)
soup=BeautifulSoup(page.content, "lxml")
unpop={}
unpop['vars']=getFormSelectVals(soup,'Variable')
unpop['countries']=getFormSelectVals(soup,'Location')
unpop['variants']=getFormSelectVals(soup,'Varient')
return unpop
def getUNPopDivData(dfi,country,indicator,variant='Medium variant',startyear=1950,endyear=2100):
#Download selected data from the UN Population Division indicators page and return it as a pandas dataframe
if not isinstance(country,list): country= [country]
if not isinstance(indicator,list): indicator= [indicator]
#Limited to five options - use the first 5
#For each item, look up the name from the scraped form elements and find what code number to use.
locations= [dfi['countries'][x] for x in country[:5]]
variables= [dfi['vars'][x] for x in indicator[:5]]
varient=dfi['variants'][variant]
data=[('Panel',2),('Varient',varient),('StartYear',startyear),('EndYear',endyear),('DoWhat','Download as .CSV File')]
#If we request multiple locations or variables, the form repeats the 'Location' or 'Variable' attribute name
#This means we can't use a Python dict to represent the arguments - instead we create a list of tuples
for l in locations: data.append(('Location',l))
for v in variables: data.append(('Variable',v))
url='http://esa.un.org/unpd/wpp/unpp/p2k0data_script.asp'
response = requests.post(url,data=data)
#The data is returned as a string - we use StringIO to make it look like a file stream so pd.read_csv() can read it
#The last two lines of the returned data are an empty line and a metadata line - so let's ignore them.
#skipfooter currently only works with the python engine - so let's declare that to prevent a warning
#old version with skipfooter, unicode characters represent a problem
#df=pd.read_csv(StringIO(response.text), skipfooter=2, engine='python', encoding='latin-1')
#new version, unicode OK (you're welcome Côte d'Ivoire)
df=pd.read_csv(StringIO(response.text))
df = df.iloc[0:len(df.index) - 2]  # drop the two trailing metadata rows (`.ix` is deprecated, and the original discarded this result)
return df
url='http://esa.un.org/unpd/wpp/unpp/panel_indicators.htm'
page = requests.get(url)
soup=BeautifulSoup(page.content, "lxml")
unpop={}
unpop['vars']=getFormSelectVals(soup,'Variable')
unpop['countries']=getFormSelectVals(soup,'Location')
unpop['variants']=getFormSelectVals(soup,'Varient')
#Helper routines for working with the data
def search(d, substr):
#Partial string match search within dict key names
#via http://stackoverflow.com/a/10796050/454773
result = []
for key in d:
if substr.lower() in key.lower():
result.append((key, d[key]))
return result
```
So that's the code... now how do we use it?
The first thing we need to do is see what indicators are available:
```
#Get indicators
dfi=getUNPopDivIndicators()
#Preview indicators
dfi['vars']
```
We can also search within indicators to find their proper names.
```
#Example search within indicators
search(dfi['vars'],'pop')
```
The following shows us what prediction variants are supported.
```
#Show variants of predictions
dfi['variants']
```
We search for countries by name - but we need to make sure we pass the name in correctly. So search to find out...
```
#Example search within country names
search(dfi['countries'],'united')
```
Once we know what we want to ask for, we can make a data request.
```
#Let's run a test query
pop=getUNPopDivData(dfi,['World'],
['Population by sex (annual)'])
pop=pop.query('Sex=="Total"').drop(['Sex','Variable','Variant'],axis=1).set_index(['Country','Year'])
import matplotlib.pyplot as plt
%matplotlib inline
pop.loc['World'].plot()
```
The UN population statistics API only accepts up to 5 countries per call. Let's split the country list into batches of 5 and make the calls.
```
def split(arr, size):
    arrs = []
    while len(arr) > size:
        piece = arr[:size]
        arrs.append(piece)
        arr = arr[size:]
    arrs.append(arr)
    return arrs
#Make all calls - could take up to 5 minutes based on your internet connection and processor speed
n = len(split(list(dfi['countries']),5))
pop = pd.DataFrame()
for i in range(n):
print(round((i + 1) / float(n), 2))
pop2=getUNPopDivData(dfi,split(list(dfi['countries']),5)[i],
['Population by sex (annual)'])
pop2=pop2.query('Sex=="Total"').drop(['Sex','Variable','Variant'],axis=1).set_index(['Country','Year'])
pop=pd.concat([pop,pop2])
# save to a CSV file to avoid running the download again
# (the original used the Python 2-only `file()` builtin and also wrote CSV data to a .json file)
pop.to_csv('pop.csv')
pop.loc['World'].plot()
```
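For reference, the `split` helper above can be written more compactly with slicing (note one behavioral difference: this version returns `[]` for an empty input, while the original returns `[[]]`):

```python
def split(arr, size):
    """Chunk a list into consecutive pieces of at most `size` elements."""
    return [arr[i:i + size] for i in range(0, len(arr), size)]
```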
```
import numpy as np
import pandas as pd
import os
import re
from sklearn.preprocessing import LabelEncoder
from keras.utils.np_utils import to_categorical
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from gensim.models import Word2Vec
from nltk.tokenize import word_tokenize, sent_tokenize
from gensim.models import Phrases
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.models import FastText
from keras.models import Sequential
import gensim
from gensim.models import FastText
from keras.layers import Embedding,Dense,LSTM, Dropout, Flatten, Convolution1D, MaxPooling1D
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import re
import nltk
nltk.download('stopwords')
%matplotlib inline
from gensim.models.wrappers import FastText
model = FastText.load_fasttext_format('wiki.en/wiki.en.bin')
```
### preprocessing data
```
#Preprocessing - removing unwanted characters, tokenization, stop-word removal
def clean_data(txts):
x = re.sub("[^a-zA-Z]", " ",txts)
x = x.lower().split()
stops = set(stopwords.words("english"))
words = [w for w in x if not w in stops]
return( " ".join(words))
def clean_norm(txts):
x = re.sub('(OntoBiotope |)','',txts)
x = re.sub('(:|_)',' ',x)
x = re.sub('(Taxonomy)','taxon',x)
return x
```
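To illustrate what `clean_data` produces, here is a self-contained variant with a small hard-coded stopword set (so the demo does not require the NLTK corpus download):

```python
import re

def clean_data_demo(txt, stops=frozenset({"the", "is", "a"})):
    """Same steps as clean_data above: strip non-letters, lowercase, drop stopwords."""
    words = re.sub("[^a-zA-Z]", " ", txt).lower().split()
    return " ".join(w for w in words if w not in stops)
```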
### Train_norm+Dev_norm appended (same as Train_norm+ner, Dev_norm+ner appended)
```
data = pd.read_csv('data.csv')
data['norm2'].fillna(value='', inplace=True)
data['norm3'].fillna(value='', inplace=True)
data['norm4'].fillna(value='', inplace=True)
data['norm5'].fillna(value='', inplace=True)
data = data.drop(data[data.entity == 'Paragraph'].index)
data = data.drop(data[data.entity == 'Title'].index)
print(data.shape)
data.head()
```
### Test_norm
```
def get_test_data(path):
z=[]
df_dat = pd.DataFrame(columns = ['text','words','entity'])
org_dat = os.listdir(path)
for i in range(len(org_dat)):
org1 = org_dat[i]
if org1.endswith('.a1'):
org_path1 = os.path.join(path,org1)
text1 = pd.read_table(org_path1,header = None)
for j in range(0,text1.shape[0]):
#print(org1,org2)
#print(text1.iloc[j][2],re.sub(r'[0-9; ]','',text1.iloc[j][1]))
z.append(org1)
df_dat = df_dat.append({'text':text1.iloc[j][0] ,'words': text1.iloc[j][2], 'entity': re.sub(r'[0-9; ]','',text1.iloc[j][1]) }, ignore_index = True)
#print(df_dat)
return df_dat,z
path = 'BioNLP-OST-2019_BB-norm_test'
df_test,z = get_test_data(path)
print(df_test.shape)
df_test = df_test.drop(df_test[df_test.entity == 'Paragraph'].index)
df_test = df_test.drop(df_test[df_test.entity == 'Title'].index)
print(df_test.shape)
df_test.head()
# test = pd.read_csv('test.csv')
# print(test.shape)
# test = test.drop(test[test.entity == 'Paragraph'].index)
# test = test.drop(test[test.entity == 'Title'].index)
# print(test.shape)
# test.head()
data['words'] = data['words'].apply(lambda x: clean_data(x))
test = df_test
test['words'] = test['words'].apply(lambda x: clean_data(x))
data_words = data['words'].tolist()
data_entity = data['entity'].tolist()
test_words = test['words'].tolist()
test_entity = test['entity'].tolist()
X1_train = np.zeros((len(data_words), 300))
X2_train = np.zeros((len(data_entity), 300))
for i in range(len(data_words)):
    words = word_tokenize(data_words[i])
    emb = [model.wv[w] for w in words]
    X1_train[i] = np.mean(emb, axis=0)
# use enumerate so each entity fills its own row (the original loop reused a stale index)
for i, w in enumerate(data_entity):
    X2_train[i] = model.wv[w]
X_train = np.concatenate((X1_train, X2_train), axis=1)
print(X_train.shape)
X1_test = np.zeros((len(test_words), 300))
X2_test = np.zeros((len(test_entity), 300))
for i in range(len(test_words)):
    words = word_tokenize(test_words[i])
    emb = [model.wv[w] for w in words]
    X1_test[i] = np.mean(emb, axis=0)
for i, w in enumerate(test_entity):
    X2_test[i] = model.wv[w]
X_test = np.concatenate((X1_test, X2_test), axis=1)
print(X_test.shape)
df_train= pd.DataFrame(X_train)
df_test= pd.DataFrame(X_test)
df_train.fillna(value=0,inplace=True)
df_test.fillna(value=0,inplace=True)
X_test = np.array(df_test)
X_train = np.array(df_train)
print(X_test.shape)
print(X_train.shape)
data_norm1 = data['norm1'].tolist()
le = LabelEncoder()
Y = le.fit_transform(data_norm1)
Y.shape
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
model_class = RandomForestClassifier(n_estimators=30,criterion='entropy',class_weight='balanced')
model_class.fit(X_train,Y)
pre = model_class.predict(X_test)
# print(accuracy_score(y_test,pre))
# print(confusion_matrix(y_test,pre))
norm = le.inverse_transform(pre)
sub = pd.DataFrame({'words':test['words'],'entity':test['entity'],'norm':norm})
sub.head()
# def get_test_data(path):
# z=[]
# df_dat = pd.DataFrame(columns = ['text','words','entity'])
# org_dat = os.listdir(path)
# for i in range(len(org_dat)):
# org1 = org_dat[i]
# if org1.endswith('.a1'):
# org_path1 = os.path.join(path,org1)
# text1 = pd.read_table(org_path1,header = None)
# for j in range(0,text1.shape[0]):
# #print(org1,org2)
# #print(text1.iloc[j][2],re.sub(r'[0-9; ]','',text1.iloc[j][1]))
# z.append(org1)
# df_dat = df_dat.append({'text':text1.iloc[j][0] ,'words': text1.iloc[j][2], 'entity': re.sub(r'[0-9; ]','',text1.iloc[j][1]) }, ignore_index = True)
# #print(df_dat)
# return df_dat,z
# path = 'BioNLP-OST-2019_BB-norm_test'
# df_test,z = get_test_data(path)
# print(df_test.shape)
# df_test = df_test.drop(df_test[df_test.entity == 'Paragraph'].index)
# df_test = df_test.drop(df_test[df_test.entity == 'Title'].index)
# print(df_test.shape)
# df_test.head()
text = df_test['text'].tolist()
annotation=[]
for i in range(0, norm.shape[0]):
    if 'NCBI' in norm[i]:
        annotation.append(f'N{i+1}\tNCBI_Taxonomy Annotation:{text[i]} Referent:{os.path.splitext(norm[i])[0][14:]}')
    elif 'OntoBiotope' in norm[i]:
        annotation.append(f'N{i+1}\tOntoBiotope Annotation:{text[i]} Referent:{os.path.splitext(norm[i])[0][12:]}')
sub = pd.DataFrame(annotation)
sub.head()
pd.read_csv('BioNLP-OST-2019_BB-norm_train/BB-norm-448557.a1',header=None)
x_train,x_test,y_train,y_test = train_test_split(X_train,Y,test_size=0.10,random_state=42)
x_train.shape,x_test.shape,y_train.shape,y_test.shape
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
model_class = RandomForestClassifier(n_estimators=30,criterion='entropy',class_weight='balanced')
model_class.fit(x_train,y_train)
pre = model_class.predict(x_test)
print(accuracy_score(y_test,pre))
print(confusion_matrix(y_test,pre))
norm = le.inverse_transform(pre)
sub = pd.DataFrame({'words':test['words'],'entity':test['entity'],'norm':norm})
```
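The document embedding used above is simply a mean over the per-word vectors; in isolation, the pooling step looks like this (helper name is ours):

```python
import numpy as np

def mean_pool(word_vectors):
    """Average a list of word vectors into one fixed-size document vector."""
    return np.mean(np.asarray(word_vectors, dtype=float), axis=0)
```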
## cross validation
```
from sklearn.model_selection import cross_val_predict
```
# AWS Summit San Francisco 2022
## Using AWS Inferentia to optimize HuggingFace model inference
Welcome to the AWS Summit San Francisco 2022 Inferentia Workshop!
# Table of contents
1. [Introduction](#introduction)
2. [Setting up the environment](#setenv)
3. [Get model from HuggingFace Model Hub](#getmodel)
    1. [Get the Tokenizer](#gettoken)
    2. [Download models and prepare them for inference](#trace)
4. [Deploy default model to a CPU-based endpoint](#deploycpu)
    1. [Perform a test CPU based inference](#testcpu)
5. [Compile and deploy the model on an Inferentia instance](#compiledeploy)
    1. [Review changes to the inference code](#reviewchanges)
    2. [Create and compile Pytorch model for the inf1 instance](#pytorchmodel)
    3. [Deploy compiled model into the inf1 instance](#deployinf1)
    4. [Perform a test inf1 based inference](#testinf1)
6. [Benchmark and comparison](#benchmark)
    1. [Benchmark CPU based endpoint](#benchcpu)
    2. [Benchmark Inferentia based endpoint](#benchinf1)
7. [Comparison and conclusions](#conclusions)
8. [Cleanup](#cleanup)
---
# 1. Introduction <a name="introduction"></a>
During this workshop, we will create two endpoints with one HuggingFace model each. We will use them for the task of paraphrase detection which is an NLP classification problem.
These two endpoints will have the following configurations: a) CPU-based endpoint, where we will be deploying the model with no changes; and b) Inf1 instance based endpoint, where we will prepare and compile the model using SageMaker Neo before deploying.
Finally, we will perform a latency and throughput performance comparison of both endpoints.
[AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/) is Amazon's first ML chip designed to accelerate deep learning workloads and is part of a long-term strategy to deliver on this vision. AWS Inferentia is designed to provide high performance inference in the cloud, to drive down the total cost of inference, and to make it easy for developers to integrate machine learning into their business applications. AWS Inferentia chips deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable current generation GPU-based Amazon EC2 instances, as we will confirm in the example notebook.
[AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) is a software development kit (SDK) for running machine learning inference using AWS Inferentia chips. It consists of a compiler, run-time, and profiling tools that enable developers to run high-performance and low latency inference using AWS Inferentia-based Amazon EC2 Inf1 instances. Using Neuron, you can bring your models that have been trained on any popular framework (PyTorch, TensorFlow, MXNet), and run them optimally on Inferentia. There is excellent support for Vision and NLP models especially, and on top of that we have released great features to help you make the most efficient use of the hardware, such as [dynamic batching](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/appnotes/perf/torch-neuron-dataparallel-app-note.html#dynamic-batching-description) or [Data Parallel](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/api-torch-neuron-dataparallel-api.html) inferencing.
[SageMaker Neo](https://aws.amazon.com/sagemaker/neo/) saves you the effort of DIY model compilation, extending familiar SageMaker SDK API's to enable easy compilation for a [wide range](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_OutputConfig.html#API_OutputConfig_Contents) of platforms. This includes CPU and GPU-based instances, but also Inf1 instances; in this case, SageMaker Neo uses the Neuron SDK to compile your model.
---
### Setting up the environment <a name="setenv"></a>
First, make sure you are using the Python 3 (PyTorch 1.8 Python 3.6 CPU Optimized) kernel, and that you are working in the us-west-2 region unless instructed otherwise.
Then, install ipywidgets library and restart the kernel to be able to use it.
```
%%capture
import IPython
import sys
!{sys.executable} -m pip install ipywidgets
IPython.Application.instance().kernel.do_shutdown(True) # has to restart kernel so changes are used
```
STOP! Restart the Kernel, comment the cell above and continue.
We will then install required Python packages. Also, we will create a default Amazon Sagemaker session, get the Amazon Sagemaker role and default Amazon S3 bucket.
```
%%capture
!pip install -U transformers
!pip install -U sagemaker
!pip install -U torch
import sys
import transformers
import sagemaker
import torch
import boto3
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
sess_bucket = sagemaker_session.default_bucket()
```
---
## 2. Get model from HuggingFace Model Hub <a name="getmodel"></a>
For this workshop, we will use the [Prompsit/paraphrase-bert-en](https://huggingface.co/Prompsit/paraphrase-bert-en) transformer model from the HuggingFace Model Hub. It has been fine-tuned from a pretrained model called "bert-base-uncased". The model compares a pair of sentences and determines the semantic similarity between them. If the two sentences convey the same meaning, the pair is labelled as a paraphrase; otherwise it is labelled as a non-paraphrase.
It lets us evaluate paraphrases for a given phrase, answering the following question: is "phrase B" a paraphrase of "phrase A"? The resulting probabilities correspond to the classes:
0: Not a paraphrase
1: It's a paraphrase
This model doesn't expect to find punctuation marks or long pieces of text.
### Get the Tokenizer <a name="gettoken"></a>
As a first step, we need to get the tokenizer. A tokenizer breaks a stream of text into tokens and is in charge of preparing the inputs for a model. We need it to create a sample input to interact with the model, and we will get it from HuggingFace through the `transformers` library. It is important to set the `return_dict` parameter to `False` when instantiating the model. In `transformers` v4.x, this parameter is `True` by default and it enables the return of dict-like python objects containing the model outputs, instead of the standard tuples. Neuron compilation does not support dictionary-based model outputs, and compilation would fail if we didn't explicitly set it to `False`.
```
tokenizer = transformers.AutoTokenizer.from_pretrained("Prompsit/paraphrase-bert-en")
model = transformers.AutoModelForSequenceClassification.from_pretrained(
"Prompsit/paraphrase-bert-en", return_dict=False
)
```
### Download models and prepare them for inference <a name="trace"></a>
We will download the model and create two files in different formats. The first is the model itself with no changes; it will be uploaded and used as-is in the CPU-based endpoint. The second is a traced PyTorch version of the model, so we can compile it before deploying it to the inf1 instance.
PyTorch models must be saved as a definition file (.pt or .pth) with input datatype of float32.
To save the model, we will use `torch.jit.trace` followed by `torch.save`. This saves an object to a file (a Python pickle: `pickle_module=pickle`).
Next, we will convert the saved model to a compressed tar file and upload it to an S3 bucket.
As a final step, we will create a sample input to `jit.trace` of the model with PyTorch. We need this to have SageMaker Neo compile the model artifact.
```
from pathlib import Path
# Create directory for model artifacts
Path("normal_model/").mkdir(exist_ok=True)
Path("traced_model/").mkdir(exist_ok=True)
# Prepare sample input for jit model tracing
seq_0 = "Welcome to AWS Summit San Francisco 2022! Thank you for attending the workshop on using Huggingface transformers on Inferentia instances."
seq_1 = seq_0
max_length = 512
tokenized_sequence_pair = tokenizer.encode_plus(
seq_0, seq_1, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt"
)
example = tokenized_sequence_pair["input_ids"], tokenized_sequence_pair["attention_mask"]
traced_model = torch.jit.trace(model.eval(), example)
model.save_pretrained('normal_model/')
traced_model.save("traced_model/model.pth") # The `.pth` extension is required.
!tar -czvf normal_model.tar.gz -C normal_model . && mv normal_model.tar.gz normal_model/
!tar -czvf traced_model.tar.gz -C traced_model . && mv traced_model.tar.gz traced_model/
```
We upload the traced model `tar.gz` file to Amazon S3, where the compilation job will download it from.
```
normal_model_url = sagemaker_session.upload_data(
path="normal_model/normal_model.tar.gz",
key_prefix="neuron-experiments/bert-seq-classification/normal-model",
)
traced_model_url = sagemaker_session.upload_data(
path="traced_model/traced_model.tar.gz",
key_prefix="neuron-experiments/bert-seq-classification/traced-model",
)
```
---
## 3. Deploy default model to a CPU-based endpoint <a name="deploycpu"></a>
As a first step, we create model from the Hugging Face Model Class.
We will be passing the `normal_model_url` as the `model_data` parameter to the `HuggingFaceModel` API.
Notice that we are passing `inference.py` as the entry point script; also, the packages defined in the requirements file within the `source_dir` will automatically be installed in the endpoint instance. In this case we will use the `transformers` version that is compatible with Inferentia instances (v4.15.0).
```
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.predictor import Predictor
from datetime import datetime
prefix = "neuron-experiments/bert-seq-classification"
flavour = "normal"
date_string = datetime.now().strftime("%Y%m-%d%H-%M%S")
normal_sm_model = HuggingFaceModel(
model_data=normal_model_url,
predictor_cls=Predictor,
transformers_version="4.12.3",
pytorch_version='1.9.1',
role=role,
entry_point="inference.py",
source_dir="code",
py_version="py38",
name=f"{flavour}-distilbert-{date_string}",
env={"SAGEMAKER_CONTAINER_LOG_LEVEL": "10"},
)
```
Then, we create the endpoint and deploy the model for inference. This process will take about 4 minutes to complete. As you can see, one line of code will create a [real time endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) for real time inference that you can integrate to your applications. These endpoints are fully managed and support autoscaling.
```
%%time
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer
hardware = "c5"
normal_predictor = normal_sm_model.deploy(
instance_type="ml.c5.xlarge",
initial_instance_count=1,
endpoint_name=f"paraphrase-bert-en-{hardware}-{date_string}",
serializer=JSONSerializer(),
deserializer=JSONDeserializer(),
)
```
### Perform a test CPU-based inference <a name="testcpu"></a>
We will perform a quick test to see if the endpoint is responding as expected. We will send sample sequences.
```
# Predict with model endpoint
client = boto3.client('sagemaker')
# let's make sure it is up and running first
import time
status = ""
while status != 'InService':
    endpoint_response = client.describe_endpoint(EndpointName=f"paraphrase-bert-en-{hardware}-{date_string}")
    status = endpoint_response['EndpointStatus']
    time.sleep(5)  # poll politely instead of hammering the API
# Send a payload to the endpoint and receive the inference
payload = seq_0, seq_1
normal_predictor.predict(payload)
```
---
## 4. Compile and deploy the model on an Inferentia instance <a name="compiledeploy"></a>
In this section we will cover the compilation and deployment of the model into the inf1 instance. We will also review the changes in the inference code.
### Review inference code <a name="reviewchanges"></a>
If you open `inference.py` you will see a few functions:
a) `model_fn` which receives the model directory and is responsible for loading and returning the model.
b) `input_fn` and `output_fn` functions that are in charge of pre-processing/checking content types of input and output to the endpoint.
And c) `predict_fn`, which receives the outputs of `model_fn` and `input_fn` and defines how the model will run inference (it receives the loaded model and the deserialized/pre-processed input data).
All of this code runs inside the endpoint once it is created.
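As a sketch of the content-type contract these hooks follow, here is a minimal, self-contained version of the pre/post-processing pair. This is not the actual `code/inference.py` (which also defines `model_fn` and `predict_fn` and loads the real model); it only illustrates how `input_fn` and `output_fn` typically deserialize and serialize JSON payloads.

```python
import json

# Minimal sketch of the serving-side pre/post-processing hooks.
# The real code/inference.py also defines model_fn and predict_fn;
# this only illustrates the content-type handling contract.

def input_fn(request_body, request_content_type):
    """Deserialize the request payload before it reaches predict_fn."""
    if request_content_type == "application/json":
        return json.loads(request_body)
    raise ValueError(f"Unsupported content type: {request_content_type}")

def output_fn(prediction, response_content_type):
    """Serialize predict_fn's result back to the client."""
    if response_content_type == "application/json":
        return json.dumps(prediction)
    raise ValueError(f"Unsupported accept type: {response_content_type}")
```

Any other content type raises an error, which the endpoint surfaces as a client-side failure.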
```
!pygmentize code/inference.py
```
In this case, notice that we will load the corresponding model depending on where the function is deployed. `model_fn` will return a tuple containing both the model and its corresponding tokenizer. Both the model and the input data will be sent `.to(device)`, which can be a CPU or GPU.
Also, notice the `predict_fn`. In this function we receive the string for inference, convert it to the format the model accepts, ask the model for the inference, receive it, and format it as a clear-text return string. In real life you might not need this interpretation step, since your application might be fine receiving the predicted class and using it directly.
### Create and compile Pytorch model for the inf1 instance <a name="pytorchmodel"></a>
We will now create a new `Huggingface` model that will use the `inference.py` file described above as its entry point script.
```
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.predictor import Predictor
from datetime import datetime
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer
date_string = datetime.now().strftime("%Y%m-%d%H-%M%S")
hardware = "inf1"
compilation_job_name = f"paraphrase-bert-en-{hardware}-" + date_string
output_model_path = f"s3://{sess_bucket}/{prefix}/neo-compilations19/{hardware}-model"
compiled_inf1_model = HuggingFaceModel(
model_data=traced_model_url,
predictor_cls=Predictor,
transformers_version="4.12.3",
pytorch_version='1.9.1',
role=role,
entry_point="inference.py",
source_dir="code",
py_version="py37",
name=f"distilbert-{date_string}",
env={"SAGEMAKER_CONTAINER_LOG_LEVEL": "10"},
)
```
We are ready to compile the model! Two additional notes:
* HuggingFace models should be compiled to `dtype` `int64`
* the format for `compiler_options` differs from the standard Python `dict` that you can use when compiling for "normal" instance types; for inferentia, you must provide a JSON string with CLI arguments, which correspond to the ones supported by the [Neuron Compiler](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/command-line-reference.html) (read more about `compiler_options` [here](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_OutputConfig.html#API_OutputConfig_Contents))
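The difference in `compiler_options` encoding is easy to see in isolation, as the small sketch below shows: for Inferentia, the CLI flags are passed as a JSON string literal rather than a Python `dict`.

```python
import json

# For a "normal" target, compiler_options can be a plain dict;
# for Inferentia it must be a JSON-encoded string of Neuron CLI flags.
inferentia_options = json.dumps("--dtype int64")

# json.dumps wraps the CLI string in quotes, producing a JSON string literal
print(inferentia_options)  # "--dtype int64"
```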
#### Model compilation
Let's compile the model (this will take around 10 minutes to complete):
```
%%time
import json
compiled_inf1_model = compiled_inf1_model.compile(
target_instance_family=f"ml_{hardware}",
input_shape={"input_ids": [1, 512], "attention_mask": [1, 512]},
job_name=compilation_job_name,
role=role,
framework="pytorch",
framework_version="1.9.1",
output_path=output_model_path,
compiler_options=json.dumps("--dtype int64"),
compile_max_run=900,
)
```
#### Compiler logs and artifacts
Open a new browser tab and navigate to the SageMaker console. In the left-hand menu, under "Inference", select "Compilation jobs"; this is where you will find the job that was executed in the previous cell. Look for the job name to get its details. If you scroll down to the "Monitor" section, you can access the compiler logs hosted in CloudWatch. Look for the successful completion of the job in a line similar to the following:
Also, in the "Output" section you will find a link to the compiled model artifact in S3. Click on it to see where it was stored.
```
print("Compilation job name: {} \nOutput model path in S3: {}".format(compilation_job_name, output_model_path))
```
### Deploy compiled model into the inf1 instance <a name="deployinf1"></a>
After successful compilation, we deploy the new model to a [real time endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) backed by an ml.inf1.xlarge instance. As you can see, the one-line deployment procedure is the same as for the CPU-based instance.
```
%%time
compiled_inf1_predictor = compiled_inf1_model.deploy(
instance_type="ml.inf1.xlarge",
initial_instance_count=1,
endpoint_name=f"paraphrase-bert-en-{hardware}-{date_string}",
serializer=JSONSerializer(),
deserializer=JSONDeserializer(),
wait=False
)
```
### Perform a test inference <a name="testinf1"></a>
As a final test, we first make sure the endpoint is up and running in the `InService` state, and then perform a simple inference: we send two sequences of text and wait for the response.
```
# Predict with model endpoint
client = boto3.client('sagemaker')
#let's make sure it is up and running first
status = ""
while status != 'InService':
endpoint_response = client.describe_endpoint(EndpointName=f"paraphrase-bert-en-{hardware}-{date_string}")
status = endpoint_response['EndpointStatus']
# Send a payload to the endpoint and receive the inference
payload = seq_0, seq_1
compiled_inf1_predictor.predict(payload)
```
---
## 5. Benchmark and comparison <a name="benchmark"></a>
Now that we have both endpoints online, we will perform a benchmark using Python's `threading` module. In each benchmark, we start 5 threads that will each make 100 requests to the model endpoint. We measure the inference latency for each request, and we also measure the total time to finish the task, so that we can get an estimate of the request throughput/second.
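The benchmark harness described above can be sketched in a self-contained way against a stub predictor (the `stub_predict` function below is a stand-in for `normal_predictor.predict`, so the numbers it produces are illustrative only):

```python
import threading
import time
import statistics

# Self-contained sketch of the threaded latency benchmark below.
num_preds, num_threads = 10, 2
times = []
lock = threading.Lock()

def stub_predict(payload):
    time.sleep(0.001)  # stand-in for a network round trip to the endpoint
    return "ok"

def worker():
    for _ in range(num_preds):
        tick = time.time()
        stub_predict("payload")
        with lock:  # append under a lock for portability
            times.append(time.time() - tick)

start = time.time()
threads = [threading.Thread(target=worker) for _ in range(num_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

tps = (num_preds * num_threads) / elapsed           # rough throughput/second
p95 = statistics.quantiles(times, n=20)[18] * 1000  # 95th percentile, in ms
```

Swapping `stub_predict` for the real endpoint call and raising the request counts gives the benchmarks run in the next two cells.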
### Benchmark CPU based endpoint <a name="benchcpu"></a>
```
%%time
# Run the benchmark
import threading
import time
num_preds = 100
num_threads = 5
times = []
def predict():
thread_id = threading.get_ident()
print(f"Thread {thread_id} started")
for i in range(num_preds):
tick = time.time()
response = normal_predictor.predict(payload)
tock = time.time()
times.append((thread_id, tock - tick))
threads = []
[threads.append(threading.Thread(target=predict, daemon=False)) for i in range(num_threads)]
[t.start() for t in threads]
# Wait for threads, get an estimate of total time
start = time.time()
[t.join() for t in threads]
end = time.time() - start
# Display results
from matplotlib.pyplot import hist, title, show, xlim
import numpy as np
TPS_CPU = (num_preds * num_threads) / end
t_CPU = [duration for thread_id, duration in times]
latency_percentiles = np.percentile(t_CPU, q=[50, 90, 95, 99])
latency_CPU = latency_percentiles[2]*1000
hist(t_CPU, bins=100)
title("Request latency histogram on CPU")
show()
print("==== Default HuggingFace model on CPU benchmark ====\n")
print(f"95 % of requests take less than {latency_CPU} ms")
print(f"Rough request throughput/second is {TPS_CPU}")
```
We can see that request latency is in the 1-1.2 second range, and throughput is ~4.5 TPS.
### Benchmark Inferentia based endpoint <a name="benchinf1"></a>
```
%%time
# Run benchmark
import threading
import time
num_preds = 300
num_threads = 5
times = []
def predict():
thread_id = threading.get_ident()
print(f"Thread {thread_id} started")
for i in range(num_preds):
tick = time.time()
response = compiled_inf1_predictor.predict(payload)
tock = time.time()
times.append((thread_id, tock - tick))
threads = []
[threads.append(threading.Thread(target=predict, daemon=False)) for i in range(num_threads)]
[t.start() for t in threads]
# Make a rough estimate of total time, wait for threads
start = time.time()
[t.join() for t in threads]
end = time.time() - start
# Display results
from matplotlib.pyplot import hist, title, show, xlim
import numpy as np
TPS_inf1 = (num_preds * num_threads) / end
t_inf1 = [duration for thread_id, duration in times]
latency_percentiles = np.percentile(t_inf1, q=[50, 90, 95, 99])
latency_inf1 = latency_percentiles[2]*1000
hist(t_inf1, bins=100)
title("Request latency histogram on Inferentia")
show()
print("==== Default HuggingFace model on inf1 benchmark ====\n")
print(f"95 % of requests take less than {latency_inf1} ms")
print(f"Rough request throughput/second is {TPS_inf1}")
```
We can see that request latency is in the 0.02-0.05 second range, and throughput is ~157 TPS.
---
# 6. Conclusion <a name="conclusions"></a>
```
print("Using inf1 instances latency dropped to a {:.2f} millisecond range from {:.2f} ms on a CPU endpoint.".format(latency_inf1, latency_CPU))
print("Also, the average throughput increased to {:.2f} TPS from {:.2f} TPS on the CPU.".format(TPS_inf1, TPS_CPU))
```
This increase in performance from inf1 instances, paired with the cost reduction and the use of familiar SageMaker SDK APIs, delivers real benefits with little development effort and a gentle learning curve.
* To learn more about how to deploy Hugging Face models through SageMaker onto Inf1, please watch their latest [Webinar](https://www.youtube.com/watch?v=3fulTyMXhWQ) and read their latest [blog post](https://huggingface.co/blog/bert-inferentia-sagemaker).
* For more information about Inferentia, please see the AWS EC2 Inf1 [website](https://aws.amazon.com/ec2/instance-types/inf1/) or check out other tutorials available online [here](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/tutorials.html).
* You can learn more about Inferentia performance on the [Neuron Inference Performance](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/index.html) pages.
---
# 7. Clean up <a name="cleanup"></a>
Delete the models and release the endpoints.
```
normal_predictor.delete_model()
normal_predictor.delete_endpoint()
compiled_inf1_predictor.delete_model()
compiled_inf1_predictor.delete_endpoint()
```

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Mathematics/TriangleArea/triangle-area.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
```
%%html
<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
!pip install --upgrade --force-reinstall --user git+git://github.com/callysto/nbplus.git#egg=geogebra\&subdirectory=geogebra
from importlib import reload
import site
reload(site)
from geogebra.ggb import *
ggb = GGB()
from IPython.display import Image
import ipywidgets as widgets
import IPython
```
# Area of a Triangle
##### Grade 7 Curriculum
This notebook is about calculating the area of a triangle. We will focus on how to calculate the area of any triangle, and we will also learn how to find one missing variable, either base or height, given the other variable and the area.
This notebook will help you understand how to find the area of a triangle, and give a few practice questions. This one notebook won’t be enough practice to master this skill, so be sure to do lots of practice!
### What is a triangle?
A triangle is defined as a 2D shape with 3 straight sides and 3 angles, which can take many forms:
 www.onlinemathlearning.com
### How to find the area
Finding the area of a triangle seems a little trickier than finding the area of a rectangle, but there's an easy way to look at it.
In the picture below, a right angle triangle is created by cutting a rectangle diagonally in half. This makes two triangles with equal areas. We can see that the area of one triangle is half of the area of the rectangle. Since the area of a rectangle is $\text{base} \times \text{height}$, the area of a triangle would be $\frac{\text{base } \times \text{ height}}{2}$. Sometimes it’s written as $\frac{1}{2} \times \text{base} \times \text{height}$, though they both mean the same thing. We will use the first formula to keep it simple.
 www.homeschoolmath.net
### What about non-right angle triangles?
If non-right angle triangles aren't created out of rectangles, then how can we calculate their area?
Non-right angled triangles aren't created out of cutting a rectangle, but we get another shape when they're doubled: a parallelogram! We know from the [Area of a Parallelogram notebook](../ParallelogramArea/parallelogram-area.ipynb) that a parallelogram has the same area formula as a rectangle. Now we can figure out the area of any triangle like this:
 http://people.wku.edu/tom.richmond/triangle1.html
We can also create rectangles out of non-right angle triangles by moving the area around, like this:
 www.wikimedia.org
Or if you only work with the original area, we can create a rectangle like this:
 http://people.wku.edu/tom.richmond/triangle2.html
### Formula
So we can see that no matter what kind of triangle you're faced with, as long as we know the base and the height, the area of the triangle will be $$\frac{\text{base } \times \text{ height}}{2}$$.
We are going to write base as b and height as h in the practice questions.
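For readers following along in code, the formula above can be written as a small Python function (a sketch, not part of the lesson's own cells):

```python
def triangle_area(base, height):
    """Area of any triangle: (base * height) / 2."""
    return (base * height) / 2

# Base 3.4 cm and height 2 cm gives an area of 3.4 square cm
print(triangle_area(3.4, 2))  # 3.4

# Doubling the height doubles the area
print(triangle_area(3.4, 4))  # 6.8
```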
### Still not convinced?
Choose a length for the base and height then slide point $P$ back and forth on the triangle below and watch the area to see if it changes based on the triangle you make. Try making an obtuse triangle, a right-angle triangle, and an acute triangle.
```
ggb.material('QgWDn3pt').draw()
```
### Let's try it with some numbers
Let's look at this triangle:
```
Image('images/TriStep1.png',width=300,height=400)
```
We can see that the height is 2 cm and the base is 3.4 cm. Let's duplicate this triangle and create a rectangle out of both triangles:
```
Image('images/TriStep2.png',width=300,height=400)
```
Now we have a rectangle, which we know how to calculate the area for using the formula
$$\begin{align*}
\text{Area of rectangle} & = \text{base} \times \text{height}\\
\text{Area of rectangle} & = 2 \text{ cm} \times 3.4 \text{ cm}\\
\text{Area of rectangle} & = 6.8 \text{ cm}^2
\end{align*}$$
Since the triangle makes up only half of this rectangle, we have to divide the area in half as well.
$$\begin{align*}
\text{Area of triangle} & = \text{Area of rectangle} \div 2\\
\text{Area of triangle} & = 6.8 \text{ cm}^2 \div 2\\
\text{Area of triangle} & = 3.4 \text{ cm}^2
\end{align*}$$
Therefore the area of the triangle by itself is $3.4 \text{ cm}^2$
##### Now let's use our formula!
$$\begin{align*}
\text{Area of triangle} & = \frac{\text{base} \times \text{height}}{2}\\
\text{Area of triangle} & = \frac{3.4 \text{ cm} \times 2 \text{ cm}}{2}\\
\text{Area of triangle} & = \frac{6.8 \text{ cm}^2}{2}\\
\text{Area of triangle} & = 3.4 \text{ cm}^2
\end{align*}$$
As you can see, we get the same answer!
### Question 1
```
Image('images/TriQ1.png',width=300,height=400)
ans = widgets.Text(value='',placeholder='0',description='Area=',continuous_update=False)
def display():
print("What is the area of this triangle?")
IPython.display.display(ans)
def check(a):
IPython.display.clear_output(wait=True)
display()
if ans.value == '10m^2' or ans.value == '10 m^2':
print("Great job!")
else:
if ans.value == '':
pass
if ans.value == '10':
print("Don't forget units! Remember that area is given in square units.")
print("You can include units in your answer by typing cm^2, m^2, or mm^2.")
else:
print("Not quite, don't forget the formula.")
display()
ans.observe(check, 'value')
```
### Question 2
```
Image('images/TriQ2.png',width=300,height=400)
ans2 = widgets.Text(value='',placeholder='0',description='Area=',continuous_update=False)
def display2():
print("What is the area of this triangle?")
IPython.display.display(ans2)
def check2(a):
IPython.display.clear_output(wait=True)
display2()
if ans2.value == '9.6 cm^2' or ans2.value == '9.6cm^2':
print("Great job!")
else:
if ans2.value == '':
pass
if ans2.value == '9.6':
print("Don't forget units! Remember that area is given in square units.")
print("You can include units in your answer by typing cm^2, m^2, or mm^2.")
else:
print("Not quite, don't forget the formula.")
display2()
ans2.observe(check2, 'value')
Image('images/TriQ3.png',width=400,height=500)
ans3 = widgets.Text(value='',placeholder='0',description='Area=',continuous_update=False)
def display3():
print("What is the area of this triangle?")
IPython.display.display(ans3)
def check3(a):
IPython.display.clear_output(wait=True)
display3()
if ans3.value == '7.52m^2' or ans3.value == '7.52 m^2':
print("Great job!")
else:
if ans3.value == '':
pass
if ans3.value == '7.52' or ans3.value == '7.52m' or ans3.value == "7.52 m":
print("Don't forget units! Remember that area is given in square units.")
print("You can include units in your answer by typing cm^2, m^2, or mm^2.")
else:
print("Not quite, don't forget the formula.")
display3()
ans3.observe(check3, 'value')
```
### How does a change in height affect the area?
Let's take the first triangle from the example we went through, where the base is 3.4 cm and the height is 2 cm, for which we found the area to be $3.4 \text{ cm}^2$. What do you think would happen to the area if we doubled the height to 4 cm?
Let's calculate it:
$$\begin{align*}
\text{Area of triangle} & = \frac{\text{base} \times \text{height}}{2}\\
\text{Area of Triangle} & = \frac{3.4 \text{ cm} \times 4 \text{ cm}}{2}\\
\text{Area of Triangle} & = \frac{13.6\text{ cm}^2}{2}\\
\text{Area of Triangle} & = 6.8 \text{ cm}^2
\end{align*}$$
Interesting. The new area, $6.8 \text{ cm}^2$, is double the old area, $3.4 \text{ cm}^2$. Perhaps that was just a coincidence? There's a slider labelled height over the next triangle; slide it to double the current height and watch what happens to the area. Can we generalize that doubling the height doubles the area?
### How does a change in base affect the area?
Now instead of changing the height, let's double the base of the triangle in the first example to 6.8 cm and keep the height at 2 cm. How will this new area compare to the old area? How will it compare to the area when we changed the height?
Let's calculate it:
$$\begin{align*}
\text{Area of triangle} & = \frac{\text{base} \times \text{height}}{2}\\
\text{Area of triangle} & = \frac{6.8 \text{ cm} \times 2 \text{ cm}}{2}\\
\text{Area of triangle} & = \frac{13.6 \text{ cm}^2}{2}\\
\text{Area of triangle} & = 6.8 \text{ cm}^2
\end{align*}$$
Look at that! This new area is the same as when we doubled the height! Let's confirm that doubling the base doubles the area by using the slider labelled base over the triangle below to double the base length and watch what happens to the area.
**Note:** you can slide P back and forth to create right-angle, obtuse, and acute triangles!
```
ggb.material('QgWDn3pt').draw()
```
### What conclusion can we draw?
By trying different values for base and height of the triangle above, we can safely conclude that doubling the base or doubling the height will double the area.
Let's extend this conclusion:
>What would happen to the area if the base or the height were halved?
Go back to the triangle above and try it!
## Practice
Let's use what you've learned to answer these questions:
#### Question 1
What is the area of a right-angled triangle with a base length of 3.2 m and a height of 4 m?
#### Question 2
What is the area of the triangle in question 1 if the height is halved?
#### Question 3
What is the area of the triangle in question 1 if the base is halved and the height is doubled?
#### Question 4
If the triangle in question 1 was an acute triangle instead of a right-angle triangle, would the area be greater than, equal to, or less than its area as a right-angle triangle?
### What if you know the area, but not one of the variables?
Let's say you know the area and the base, but are asked for the height, or you know the area and the height, but are asked for the base. How can we find these missing variables? Look at this example below. What is the height of the triangle?
```
Image('images/TriQ4.png',width=300,height=200)
```
Let's look at the formula: $$ \text{Area of triangle} = \frac{\text{base} \times \text{height}}{2}$$
Now let's plug in all the values we know into the formula: $$11.2 \text{ mm}^2 = \frac{7 \text{ mm} \times \text{height}}{2}$$
This looks quite different from having all the known numbers on one side of the equals sign, so let's solve this equation for height. That will put all the numbers on one side of the equals sign, with height as the only thing on the other side.
*Don't forget, to keep the meaning of the equation the same, any operation you do on one side, you must do it to the other.*
$$ \begin{align*}
11.2 \text{ mm}^2 & = \frac{7 \text{ mm} \times \text{height}}{2}\\
11.2 \text{ mm}^2 \times 2 & = \frac{7 \text{ mm} \times \text{height}}{2\!\!\!\!\diagup}\times 2\!\!\!\!\diagup \tag{multiply by 2 to cancel out the division}\\[4pt]
(11.2 \text{ mm}^2 \times 2) \div 7 \text{ mm} & = 7\!\!\!\!\diagup \text{ mm} \times \text{height} \div 7\!\!\!\!\diagup \text{ mm} \tag{divide by 7 mm to cancel out the base}\\[4pt]
(11.2 \text{ mm}^2 \times 2) \div 7 \text{ mm} & = \text{height} \tag{Solve}\\[4pt]
22.4 \text{ mm}^2\!\!\!\!\diagup \div 7 \text{ mm}\!\!\!\!\diagup & = \text{height} \tag{cancel units}\\[4pt]
3.2 \text{ mm} & = \text{height}
\end{align*}$$
The step labelled "Solve" actually shows us the formula for height when you know the base and area. Let's remove the values from that step to see the general formula: $$\frac{\text{Area of triangle} \times 2}{\text{base}} = \text{height}$$
This can also be written with height on the left and the formula on the right: $$\text{height} = \frac{\text{Area of triangle} \times 2}{\text{base}}$$
### What about the other way?
We can do the same thing if we know the height and not the base! What is the base length of this triangle?
```
Image('images/TriQ5.png',width=400,height=200)
```
Let's use the same steps as the last example to solve this one.
*Don't forget, to keep the meaning of the equation the same, any operation you do on one side, you must do it to the other.*
$$\begin{align*}
\text{Area of triangle} & = \frac{\text{base} \times \text{height}}{2}\\
7.5 \text{ cm}^2 & = \frac{\text{base} \times 5 \text{ cm}}{2} \tag{plug in values}\\[4pt]
7.5 \text{ cm}^2 \times 2 & = \frac{\text{base} \times 5 \text{ cm}}{2\!\!\!\!\diagup}\times 2\!\!\!\!\diagup \tag{multiply by 2 to cancel out the division}\\[4pt]
(7.5 \text{ cm}^2 \times 2) \div 5 \text{ cm} & = \text{base} \times 5\!\!\!\!\diagup \text{ cm} \div 5\!\!\!\!\diagup \text{ cm} \tag{Divide by 5 cm to cancel out the base}\\[4pt]
(7.5 \text{ cm}^2 \times 2) \div 5 \text{ cm} & = \text{base} \tag{Solve}\\[4pt]
15 \text{ cm}^2\!\!\!\!\diagup \div 5 \text{ cm}\!\!\!\!\diagup & = \text{base} \tag{cancel units}\\
3 \text{ cm} & = \text{base}
\end{align*}$$
Now let's get the general formula from the "Solve" step: $$\frac{\text{Area of triangle} \times 2}{\text{height}} = \text{base}$$
This can also be written with base on the left and the formula on the right: $$\text{base} = \frac{\text{Area of triangle} \times 2}{\text{height}}$$
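Both rearranged formulas can also be checked in code. The sketch below (not part of the lesson's own cells) encodes them as functions and reproduces the two worked examples:

```python
def height_from_area(area, base):
    """height = (area * 2) / base"""
    return (area * 2) / base

def base_from_area(area, height):
    """base = (area * 2) / height"""
    return (area * 2) / height

# The two worked examples above:
print(round(height_from_area(11.2, 7), 6))  # 3.2 (mm)
print(base_from_area(7.5, 5))               # 3.0 (cm)
```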
### Practice Questions
Try to figure out the values of the missing base or height.
##### Question 1
What is the height of this triangle?
```
Image('images/TriQ7.png',width=600)
ans4 = widgets.Text(value='',placeholder='0',description='Height =',continuous_update=False)
def display4():
IPython.display.display(ans4)
def check4(a):
IPython.display.clear_output(wait=True)
display4()
if ans4.value == '0.4 cm' or ans4.value == '0.4cm':
print("Great job!")
else:
if ans4.value == '':
pass
if ans4.value == '0.4':
print("Don't forget units!")
print("You can include units in your answer by typing cm, m, or mm.")
else:
print("Not quite, don't forget the formula for height.")
display4()
ans4.observe(check4, 'value')
```
##### Question 2
What is the base length of this triangle?
```
Image('images/TriQ6.png',width=300)
ans5 = widgets.Text(value='',placeholder='0',description='Base =',continuous_update=False)
def display5():
IPython.display.display(ans5)
def check5(a):
IPython.display.clear_output(wait=True)
display5()
if ans5.value == '2.2 m' or ans5.value == '2.2m':
print("Great job!")
else:
if ans5.value == '':
pass
if ans5.value == '2.2':
print("Don't forget units!")
print("You can include units in your answer by typing cm, m, or mm.")
else:
print("Not quite, don't forget the formula for base length.")
display5()
ans5.observe(check5, 'value')
```
##### Question 3
What is the height of this triangle?
```
Image('images/TriQ8.png',width=300)
ans6 = widgets.Text(value='',placeholder='0',description='Height =',continuous_update=False)
def display6():
IPython.display.display(ans6)
def check6(a):
IPython.display.clear_output(wait=True)
display6()
if ans6.value == '1 cm' or ans6.value == '1cm':
print("Great job!")
else:
if ans6.value == '':
pass
if ans6.value == '1':
print("Don't forget units!")
print("You can include units in your answer by typing cm, m, or mm.")
else:
print("Not quite, don't forget the formula for height.")
display6()
ans6.observe(check6, 'value')
```
### What if you have other information other than the base and height?
There are lots of ways to calculate the area of a triangle from different given information; some even require knowing an angle or two in the triangle, but that's coming in the future. If you want to learn a little about another way to calculate the area, this time knowing the lengths of all three sides but not the height, take a look at the article on [Heron's formula](http://www.mathwarehouse.com/geometry/triangles/area/herons-formula-triangle-area.php), though you don't have to know it yet.
## Conclusion
Let's review what we've learned in this notebook.
- The area of a triangle is half the area of a rectangle with the same base and height.
- The formula is $\text{Area} = \frac{\text{base } \times \text{ height}}{2}$
- The formula works for all types of triangles as long as you know base and height.
- The formula works for finding any variable that is missing by rearranging values on either side of the equals sign.
- There's more than one way to calculate the area.
Take the time to learn this formula and how to fill it in. You may not always have the formula to read, but filling in and manipulating equations like this formula will be essential for many math skills in the future. As always in mathematics, the more you practice, the easier it gets!
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Facies classification using Machine Learning #
## LA Team Submission 5 ##
### _[Lukas Mosser](https://at.linkedin.com/in/lukas-mosser-9948b32b/en), [Alfredo De la Fuente](https://pe.linkedin.com/in/alfredodelafuenteb)_ ####
In this approach to solving the facies classification problem (https://github.com/seg/2016-ml-contest) we will explore the following strategies:
- Features Exploration: based on [Paolo Bestagini's work](https://github.com/seg/2016-ml-contest/blob/master/ispl/facies_classification_try02.ipynb), we will consider imputation, normalization and augmentation routines for the initial features.
- Model tuning:
## Libraries
We will need to install the following libraries and packages.
```
# %%sh
# pip install pandas
# pip install scikit-learn
# pip install tpot
from __future__ import print_function
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier
from scipy.signal import medfilt
```
## Data Preprocessing
```
#Load Data
data = pd.read_csv('../facies_vectors.csv')
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
# Store features and labels
X = data[feature_names].values
y = data['Facies'].values
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
# Fill 'PE' missing values with mean
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(X)
X = imp.transform(X)
```
We proceed to run [Paolo Bestagini's routine](https://github.com/seg/2016-ml-contest/blob/master/ispl/facies_classification_try02.ipynb) to include a small window of values, to account for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
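As a toy illustration of the windowing idea (a hypothetical 3-sample, 2-feature log; the real routine below also handles padding per well and arbitrary `N_neig`):

```python
import numpy as np

# Toy example of N_neig=1 window augmentation: each sample is replaced
# by the concatenation of its previous, own, and next feature vectors,
# with zero padding at the edges.
X_toy = np.array([[1., 2.],
                  [3., 4.],
                  [5., 6.]])
padded = np.vstack([np.zeros((1, 2)), X_toy, np.zeros((1, 2))])
X_win = np.hstack([padded[0:3], padded[1:4], padded[2:5]])

print(X_win.shape)  # (3, 6): 2 features become 2 * (2*1 + 1)
```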
```
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
X_aug, padded_rows = augment_features(X, well, depth)
# # Initialize model selection methods
# lpgo = LeavePGroupsOut(2)
# # Generate splits
# split_list = []
# for train, val in lpgo.split(X, y, groups=data['Well Name']):
# hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
# hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
# if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
# split_list.append({'train':train, 'val':val})
def preprocess():
# Preprocess data to use in model
X_train_aux = []
X_test_aux = []
y_train_aux = []
y_test_aux = []
# For each data split
# NOTE: uses split_list, generated by the commented-out LeavePGroupsOut
# block above; uncomment that block before calling preprocess()
split = split_list[5]
# Remove padded rows
split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
X_train_aux.append( X_tr )
X_test_aux.append( X_v )
y_train_aux.append( y_tr )
y_test_aux.append ( y_v )
X_train = np.concatenate( X_train_aux )
X_test = np.concatenate ( X_test_aux )
y_train = np.concatenate ( y_train_aux )
y_test = np.concatenate ( y_test_aux )
return X_train , X_test , y_train , y_test
```
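As a quick sanity check of the logic above, the depth-gradient augmentation can be exercised on toy data. This standalone sketch re-implements `augment_features_gradient` with the same logic as the notebook version so it runs on its own:

```python
import numpy as np

def augment_features_gradient(X, depth):
    # First-order difference of each feature with respect to depth
    d_diff = np.diff(depth).reshape((-1, 1))
    d_diff[d_diff == 0] = 0.001          # avoid division by zero at repeated depths
    X_grad = np.diff(X, axis=0) / d_diff
    # Pad the last row with zeros so the output keeps the input's length
    return np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))

X = np.array([[1.0], [3.0], [6.0]])      # one feature logged at three depths
depth = np.array([10.0, 10.5, 11.0])     # uniform 0.5 spacing
grad = augment_features_gradient(X, depth)
print(grad.ravel())                      # [4. 6. 0.]
```

The feature increases by 2 then 3 over 0.5-unit depth steps, giving gradients of 4 and 6, with a zero appended for the last sample.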
## Data Analysis
In this section we will run a Cross Validation routine
```
# from tpot import TPOTClassifier
# from sklearn.model_selection import train_test_split
# X_train, X_test, y_train, y_test = preprocess()
# tpot = TPOTClassifier(generations=5, population_size=20,
# verbosity=2,max_eval_time_mins=20,
# max_time_mins=100,scoring='f1_micro',
# random_state = 17)
# tpot.fit(X_train, y_train)
# print(tpot.score(X_test, y_test))
# tpot.export('FinalPipeline.py')
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
# Train and test a classifier
# Pass in the classifier so we can iterate over many seeds later.
def train_and_test(X_tr, y_tr, X_v, well_v, clf):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
```
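The `medfilt` cleanup step inside `train_and_test` can be illustrated in isolation: a median filter with `kernel_size=5` removes short, isolated facies predictions within a well while leaving longer runs intact.

```python
import numpy as np
from scipy.signal import medfilt

# A predicted facies sequence within one well, with a single isolated outlier (the 5)
y_hat = np.array([2.0, 2.0, 2.0, 5.0, 2.0, 2.0, 2.0])
y_clean = medfilt(y_hat, kernel_size=5)
print(y_clean)  # the isolated 5 is replaced by the surrounding facies label
```

A run of three or more identical labels survives a kernel of size 5, so only geologically implausible one- or two-sample spikes are smoothed away.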
## Prediction
```
#Load testing data
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Train classifier
#clf = make_pipeline(make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="gini", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))
#clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights="distance") )
#clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion="entropy", max_features=0.0001, n_estimators=500))
# * clf = make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))
# # Prepare training data
# X_tr = X
# y_tr = y
# # Augment features
# X_tr, padded_rows = augment_features(X_tr, well, depth)
# # Removed padded rows
# X_tr = np.delete(X_tr, padded_rows, axis=0)
# y_tr = np.delete(y_tr, padded_rows, axis=0)
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
y_pred = []
print('.' * 100)
for seed in range(100):
np.random.seed(seed)
# Make training data.
X_train, padded_rows = augment_features(X, well, depth)
y_train = y
X_train = np.delete(X_train, padded_rows, axis=0)
y_train = np.delete(y_train, padded_rows, axis=0)
# Train classifier
clf = make_pipeline(XGBClassifier(learning_rate=0.12,
max_depth=3,
min_child_weight=10,
n_estimators=150,
seed=seed,
colsample_bytree=0.9))
# Make blind data.
X_test, _ = augment_features(X_ts, well_ts, depth_ts)
# Train and test.
y_ts_hat = train_and_test(X_train, y_train, X_test, well_ts, clf)
# Collect result.
y_pred.append(y_ts_hat)
print('|', end='')
np.save('LA_Team_100_realizations.npy', y_pred)
# # Augment features
# X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# # Predict test labels
# y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)
# # Save predicted labels
# test_data['Facies'] = y_ts_hat
# test_data.to_csv('Prediction_XX_Final.csv')
```
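The loop above saves 100 prediction realizations (one per seed) but does not show how they are combined. One natural way to aggregate them — an assumption on my part, not shown in the original notebook — is a per-sample majority vote across realizations:

```python
import numpy as np

def majority_vote(realizations):
    """Column-wise modal label across an (n_realizations, n_samples) array."""
    arr = np.asarray(realizations, dtype=int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, arr)

# Toy stand-in for np.load('LA_Team_100_realizations.npy')
y_pred = [[1, 3, 2],
          [1, 3, 3],
          [2, 3, 3]]
print(majority_vote(y_pred))  # [1 3 3]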
TSG086 - Run `top` in all containers
====================================
Steps
-----
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
# Install the Kubernetes module
import sys
!{sys.executable} -m pip install kubernetes
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Run top in each container
```
cmd = "top -b -n 1"
pod_list = api.list_namespaced_pod(namespace)
pod_names = [pod.metadata.name for pod in pod_list.items]
for pod in pod_list.items:
container_names = [container.name for container in pod.spec.containers]
for container in container_names:
print (f"CONTAINER: {container} / POD: {pod.metadata.name}")
try:
print(stream(api.connect_get_namespaced_pod_exec, pod.metadata.name, namespace, command=['/bin/sh', '-c', cmd], container=container, stderr=True, stdout=True))
except Exception as err:
        print (f"Failed to run 'top' for container: {container} in pod: {pod.metadata.name}. Error: {err}")
print("Notebook execution is complete.")
```
<a href="https://colab.research.google.com/github/davy-datascience/ml_algorithms/blob/master/LinearRegression/Approach-1/Linear%20Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Linear Regression - with single variable
## Intro
I first tried coding the linear regression algorithm taught by Luis Serrano. Luis produces YouTube videos on data-science subjects with easy-to-understand visualizations. In his video [Linear Regression: A friendly introduction](https://www.youtube.com/watch?v=wYPUhge9w5c) he uses the following approach:
<br/>

**Note:**
The dataset we're using contains the salaries of some people and their number of years of experience.
We are trying to predict the salary given the number of years of experience.
So the number of years of experience is the independent variable and the salary is the dependent variable.
The x-axis corresponds to the number of years of experience.
The y-axis corresponds to the salary.
The y-intercept is the point of the line that satisfies x = 0, in other words the point where the line intersects the y-axis.
Increasing the y-intercept translates the line up; decreasing it translates the line down.
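The effect of the y-intercept noted above is easy to verify numerically: adding a constant to the intercept shifts every point of the line up by exactly that constant. A minimal sketch (the slope and intercept values are made up for illustration):

```python
import numpy as np

slope, intercept = 9450.0, 25800.0       # hypothetical salary-vs-experience line
x = np.array([1.0, 3.0, 5.0])            # years of experience

y_before = slope * x + intercept
y_after = slope * x + (intercept + 1000.0)   # increase the y-intercept by 1000

print(y_after - y_before)                # every point moves up by exactly 1000
```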
## Implementation
Run the following cell to import all the needed modules; you must have opened this document in Google Colab before doing so: <a href="https://colab.research.google.com/github/davy-datascience/ml_algorithms/blob/master/LinearRegression/Approach-1/Linear%20Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
from sympy.geometry import Point, Line
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
import progressbar
```
I used the `Line` component from the `sympy.geometry` module. To create a `Line` I need to specify two `Point`s. The line is also characterized by 3 coefficients (a, b and c) that match the following equation:

$a x + b y + c = 0$

In my approach I am dealing with a line equation of this sort:

$y = slope \cdot x + intercept$

So I translated the first equation to match my equation requirement:

$y = -\frac{a}{b} x - \frac{c}{b}$
Run the following cell. It contains the functions that will be used in the program:
```
def drawAll(X, Y, line):
""" plot the points from the dataset and draw the actual Line """
coefs = line.coefficients
x = np.linspace(X.min(),X.max())
y = (-coefs[0] * x - coefs[2]) / coefs[1]
plt.plot(x, y)
plt.scatter(X, Y, color = 'red')
plt.show()
def transformLine(point, line, x_median, learning_rate):
""" According to the random point, update the Line """
    # We use the median of the x values when computing horizontal distances, which gives better results
# Creation of the vertical line passing through the new point
ymin = line.points[0] if line.direction.y > 0 else line.points[1]
ymax = line.points[1] if line.direction.y > 0 else line.points[0]
vertical_line = Line(Point(point.x,ymin.y), Point(point.x,ymax.y))
# Find the intersection with our line (to calculate the vertical distance)
I = line.intersection(vertical_line)
vertical_distance = point.y - I[0].y
horizontal_distance = point.x - x_median
coefs = line.coefficients
a = coefs[0]
b = coefs[1]
c = coefs[2]
# Calculation of the points which constitute the new line
# Reminder: we add (learning_rate * vertical_distance * horizontal_distance) to the slope and we add (learning_rate * vertical_distance) to y-intercept
# The equation now looks like :
# y = - (a/b)*x + (learning_rate * vertical_distance * horizontal_distance) * x - (c/b) + learning_rate * vertical_distance
# We keep the same scope of the line so the min value of x and the max value of x don't change
x_min = line.points[0].x
y_min = - (a/b)*x_min + (learning_rate * vertical_distance * horizontal_distance * x_min) - (c/b) + learning_rate * vertical_distance
x_max = line.points[1].x
y_max = - (a/b)*x_max + (learning_rate * vertical_distance * horizontal_distance * x_max) - (c/b) + learning_rate * vertical_distance
newLine = Line(Point(x_min, y_min), Point(x_max, y_max))
return newLine
def predict(X, line):
""" I use my model (the equation of the line) to predict new values """
prediction = []
coefs = line.coefficients
a = coefs[0]
b = coefs[1]
c = coefs[2]
for x in X.values:
y = - (a/b)*x - (c/b)
prediction.append(y)
return prediction
```
Run the following cell to launch the linear regression program:
```
# Set the learning rate and the number of iterations
learning_rate = 0.01
nb_epochs = 1000
# Read the data
dataset = pd.read_csv("https://raw.githubusercontent.com/davy-datascience/ml_algorithms/master/LinearRegression/Approach-1/dataset/Salary_Data.csv")
# Separate the dataset into a training set and a test set
train, test = train_test_split(dataset, test_size = 0.2)
# Separation independent variable X - dependent variable y for the train set & the test set
X_train = train.YearsExperience
y_train = train.Salary
X_test = test.YearsExperience
y_test = test.Salary
# Looking for 1st line equation
# The line must have the same span as the scatter points from the dataset
# I decided to build the line using the point with the max x-value and the point with the min x-value
# Find the point with the maximum value of x in the dataset
idx_max = X_train.idxmax()
x_max = Point(X_train.loc[idx_max], y_train.loc[idx_max])
# Find the point with the minimum value of x in the dataset
idx_min = X_train.idxmin()
x_min = Point(X_train.loc[idx_min], y_train.loc[idx_min])
# Build the line with the 2 points
line = Line(x_min, x_max)
drawAll(X_train, y_train, line)
# Iterate choosing a random point and moving the line with the function transformLine
for i in progressbar.progressbar(range(nb_epochs)):
sample = train.sample()
point = Point(sample.YearsExperience, sample.Salary)
line = transformLine(point, line, X_train.median(), learning_rate)
#drawAll(X_train, y_train, line) # Uncomment this line to see the line at each iteration
drawAll(X_train, y_train, line)
# Predict the test set with my model and see
y_pred = predict(X_test, line)
print("MAE (Mean Absolute Error) is used to evaluate the model accuracy")
print("MAE for my model: {}".format(mean_absolute_error(y_pred, y_test)))
# Predict the test set with the sklearn algorithm
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train.to_frame(), y_train)
y_pred2 = regressor.predict(X_test.to_frame())
print("MAE for the algorithm of the sklearn module: {}".format(mean_absolute_error(y_pred2, y_test)))
```
# TASK #1: DEFINE SINGLE AND MULTI-DIMENSIONAL NUMPY ARRAYS
```
# NumPy is a Linear Algebra Library used for multidimensional arrays
# NumPy brings the best of two worlds: (1) C/Fortran computational efficiency, (2) Python language easy syntax
# Let's define a one-dimensional array
import numpy as np
list_1 = [6,8,40,80,200,679,800,1289]
list_1
# Let's create a numpy array from the list "list_1"
my_numpy_array = np.array(list_1)
my_numpy_array
type(my_numpy_array)
# Multi-dimensional (Matrix definition)
my_matrix = np.array([[2,5,8],[3,7,9]])
print(my_matrix)
type(my_matrix)
```
MINI CHALLENGE #1:
- Write a code that creates the following 2x4 numpy array
```
[[3 7 9 3]
[4 3 2 2]]
```
```
array_matrix = np.array([[3,7,9,3],
[4,3,2,2]])
print(array_matrix)
```
# TASK #2: LEVERAGE NUMPY BUILT-IN METHODS AND FUNCTIONS
```
# "rand()" generates uniform random numbers between 0 and 1
x = np.random.rand(20)
x
# "rand" can also generate a uniform random matrix of a given shape
y = np.random.rand(3,3)
y
# "randint" is used to generate a random integer between lower and upper bounds
z = np.random.randint(1,50)
print(z)
# "randint" can also generate an array of several random integers
a = np.random.randint(1,40,9)
print(a)
# np.arange creates evenly spaced values within a given interval
b = np.arange(1,50,5)
print(b)
c = np.arange(1,20)
c
# create an identity matrix: ones on the diagonal, zeros everywhere else
d = np.eye(4)
d
# Matrix of ones
e = np.ones((7,3))
e
# Array of zeros
f = np.zeros((8,2))
f
```
MINI CHALLENGE #2:
- Write a code that takes in a positive integer "x" from the user and creates a 1x10 array with random numbers ranging from 0 to "x"
```
x = int(input("Enter the number for the range: "))
g = np.random.randint(0,x,10)
print(g)
```
# TASK #3: PERFORM MATHEMATICAL OPERATIONS IN NUMPY
```
# np.arange() returns an evenly spaced values within a given interval
h = np.arange(1,9,2)
print(h)
i = np.arange(1,9,2)
print(i)
# Add 2 numpy arrays together
j = h+i
print(j)
k = h**i
print(k)
l = h**2
print(l)
m = np.sqrt(l)
print(m)
n = np.exp(i)
print(n)
```
MINI CHALLENGE #3:
- Given the X and Y values below, obtain the distance between them
```
X = [5, 7, 20]
Y = [9, 15, 4]
```
```
x = np.array([5,7,20])
print(x)
y = np.array([9,15,4])
print(y)
# Element-wise difference between the two arrays
z = x - y
print(z)
# Distance, treating each (x, y) pair as the legs of a right triangle
Q = np.sqrt(x**2 + y**2)
print(Q)
```
# TASK #4: PERFORM ARRAYS SLICING AND INDEXING
```
num_py_array = np.array([5,7,8,2,6,10,16,25,67,89])
print(num_py_array)
# Access specific index from the numpy array
num_py_array[4]
num_py_array[-1]
# Starting from the first index 0 up until and NOT including the last element
num_py_array[0:-1]
# Broadcasting, altering several values in a numpy array at once
my_numpy_array[0:4]= 7
print(my_numpy_array)
# Let's define a two dimensional numpy array
matrix_numpy = np.random.randint(1,10,(4,4))
print(matrix_numpy)
# Get a row from a matrix
print(matrix_numpy[0])
print(matrix_numpy[-1])
# Get one element
print(matrix_numpy[0][3])
print(matrix_numpy[2][1])
```
MINI CHALLENGE #4:
- In the following matrix, replace the last row with 0
```
X = [2 30 20 -2 -4]
[3 4 40 -3 -2]
[-3 4 -6 90 10]
[25 45 34 22 12]
[13 24 22 32 37]
```
```
x = np.array([[2, 30, 20, -2, -4],[3, 4, 40, -3, -2],[-3, 4, -6, 90 ,10],[25, 45, 34, 22, 12],
[13 ,24, 22, 32, 37]])
print(x)
print(x[-1])
x[-1]=0
print(x)
```
# TASK #5: PERFORM ELEMENTS SELECTION (CONDITIONAL)
```
A = np.random.randint(1,100,(5,5))
print(A)
B = A[A > 4]
print(B)
# Obtain odd elements only
C = A[A % 2 ==1]
print(C)
D = A[A % 2 != 0]
print(D)
```
MINI CHALLENGE #5:
- In the following matrix, replace negative elements by 0 and replace odd elements with -2
```
X = [2 30 20 -2 -4]
[3 4 40 -3 -2]
[-3 4 -6 90 10]
[25 45 34 22 12]
[13 24 22 32 37]
```
```
x = np.array([[2, 30, 20, -2, -4],[3, 4, 40, -3, -2],[-3, 4, -6, 90 ,10],[25, 45, 34, 22, 12],
[13 ,24, 22, 32, 37]])
print(x)
x[x < 0]=0
print(x)
x[x%2!=0] = -2
print(x)
```
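An equivalent, non-destructive formulation of the same replacement uses `np.where`, which returns a new array instead of modifying `x` in place. Note the order matters, exactly as above: negatives are zeroed first (0 is even, so it survives the second step).

```python
import numpy as np

x = np.array([[2, 30, 20, -2, -4],
              [3, 4, 40, -3, -2],
              [-3, 4, -6, 90, 10],
              [25, 45, 34, 22, 12],
              [13, 24, 22, 32, 37]])

step1 = np.where(x < 0, 0, x)                 # negative elements -> 0
step2 = np.where(step1 % 2 != 0, -2, step1)   # odd elements -> -2
print(step2)
```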
```
import pandas as pd
import scipy.io
import os
import matplotlib.pyplot as plt
path = os.getcwd()
matlab_exe_path = '''matlab'''
julia_path = '''C:\\Users\\mwaugh\\AppData\\Local\\Programs\\Julia\\Julia-1.4.0\\bin\\julia.exe'''
path = "src\\calibration"
#fig_path = "C:\\users\mwaugh\\github\\perla_tonetti_waugh\\Figures"
```
---
### [Equilibrium Technology Diffusion, Trade, and Growth](https://christophertonetti.com/files/papers/PerlaTonettiWaugh_DiffusionTradeAndGrowth.pdf) by Perla, Tonetti, and Waugh (AER 2020)
---
## 7.5. The Role of Firm Dynamics and Adoption Costs
#### Table of Contents
- [GBM](#gbm)
- [Delta Shock](#detla)
- [Connection to Welfare Decomposition (Section 7.3)](#dcomp)
The underlying MATLAB code is described (with links to the relevant ``.m`` files) in the [readme file in the calibration folder](/src/calibration/README.md).
---
### <a name="gbm"></a> Importance of Firm Productivity Shocks (GBM)
We uniformly scale up and down the GBM variance and mean for different values of the adoption cost parameter chi. The large value of chi is ten percent larger than the baseline calibrated value. The small value of chi is ten percent smaller than the baseline calibrated value. All other parameter values are fixed, i.e., we do not re-calibrate the model when changing these parameter values.
##### Step 1. Compute outcomes for different GBM parameter values
First, we compute a key input for the figures, saved to [``/output/robust/gbm/closest_chi_params.csv``](/output/robust/gbm/closest_chi_params.csv). Each row in this file contains the parameter values that generate a BGP equilibrium growth rate that matches the baseline aggregate growth when externally fixing a set value for chi.
```
matlab_cmd = '''"cd('src\calibration');robust_no_recalibrate_gbm;"'''
!{matlab_exe_path} -batch {matlab_cmd}
```
##### Step 2. Create Figure 6 in PTW 2020
The code below reads in the output from matlab and then plots the results to generate Figure 6 of PTW.
```
cnames = ['gold', 'gnew', 'gdiff', "welfare", 'upsilon']
mat = scipy.io.loadmat(path + "\\output\\robust\\gbm\\norecalibrate_values_gbm_1.mat")
nocaldf = pd.DataFrame(mat["record_values"])
nocaldf.columns = cnames
nocaldf["gdiff"] = -nocaldf["gdiff"]
nocaldf.sort_values(["upsilon"], inplace = True)
base_chi = str(round(mat["chi_value"][0][0],3))
mat = scipy.io.loadmat(path + "\\output\\robust\\gbm\\norecalibrate_values_gbm_0.9.mat")
nocaldf_lowchi = pd.DataFrame(mat["record_values"])
nocaldf_lowchi.columns = cnames
nocaldf_lowchi["gdiff"] = -nocaldf_lowchi["gdiff"]
nocaldf_lowchi.sort_values(["upsilon"], inplace = True)
low_chi = str(round(mat["chi_value"][0][0],3))
mat = scipy.io.loadmat(path + "\\output\\robust\\gbm\\norecalibrate_values_gbm_1.1.mat")
nocaldf_higchi = pd.DataFrame(mat["record_values"])
nocaldf_higchi.columns = cnames
nocaldf_higchi["gdiff"] = -nocaldf_higchi["gdiff"]
nocaldf_higchi.sort_values(["upsilon"], inplace = True)
hig_chi = str(round(mat["chi_value"][0][0],3))
fig, ax = plt.subplots(2,2,figsize = (12,10))
fig.tight_layout(pad = 6)
position = (0,0)
ax[position].plot(nocaldf.upsilon, 100*nocaldf["gdiff"], lw = 4)
ax[position].plot(nocaldf_lowchi.upsilon, 100*nocaldf_lowchi["gdiff"], lw = 4, color = 'r')
ax[position].plot(nocaldf_higchi.upsilon, 100*nocaldf_higchi["gdiff"], lw = 4, color = 'k')
xticks = [0.01*float(item) for item in list(range(1,11,1)) ]
ax[position].set_xticks(xticks)
ax[position].set_xlim(0.004,0.07)
ax[position].set_ylim(0.10,0.35)
ax[position].set_ylabel("\n Change in Productivity Growth \n Percentage Points", fontsize = 12)
ax[position].spines["right"].set_visible(False)
ax[position].spines["top"].set_visible(False)
ax[position].vlines(0.048, 0.00, 0.234,
color='k',
linestyle='--',
lw = 3)
ax[position].hlines(0.234, -0.001, 0.048 ,
color='k',
label="Calibrated Values",
linestyle='--',
lw = 3)
###################################################################################
position = (0,1)
ax[position].plot(nocaldf.upsilon, 100*nocaldf["gold"], lw = 4, label = "Calibrated χ = " + base_chi)
ax[position].plot(nocaldf_lowchi.upsilon, 100*nocaldf_lowchi["gold"], lw = 4, color = 'red', label = "Large χ = " + low_chi)
ax[position].plot(nocaldf_higchi.upsilon, 100*nocaldf_higchi["gold"], lw = 4, color = 'k', label = "Small χ = " + hig_chi)
xticks = [0.01*float(item) for item in list(range(1,11,1)) ]
ax[position].set_xticks(xticks)
ax[position].set_xlim(0.004,0.07)
ax[position].set_ylim(0.0,3.10)
#ax.set_ylim(0,0.40)
ax[position].set_xlabel("\n GBM Variance Parameter", fontsize = 12)
ax[position].set_ylabel("\n Initial SS Productivity Growth", fontsize = 12)
ax[position].hlines(0.79, -0.001, 0.048,
color='k',
label="Calibrated Values",
linestyle='--',
lw = 3)
ax[position].vlines(0.048, 0, 0.79,
color='k',
linestyle='--',
lw = 3)
ax[position].spines["right"].set_visible(False)
ax[position].spines["top"].set_visible(False)
ax[position].legend(bbox_to_anchor=(0., -1.25, 1., .102),frameon = False, fontsize = 14, loc = 4)
#########################################################################################
position = (1,0)
series = "welfare"
ax[position].plot(nocaldf.upsilon, 100*nocaldf[series], lw = 4, label = "Calibrated χ")
ax[position].plot(nocaldf_lowchi.upsilon, 100*nocaldf_lowchi[series], lw = 4, color = 'red', label = "Large χ")
ax[position].plot(nocaldf_higchi.upsilon, 100*nocaldf_higchi[series], lw = 4, color = 'k', label = "Small χ")
xticks = [0.01*float(item) for item in list(range(1,11,1)) ]
ax[position].set_xticks(xticks)
ax[position].set_xlim(0.004,0.07)
ax[position].set_ylim(5,15)
#ax.set_ylim(0,0.40)
ax[position].set_xlabel("\n GBM Variance Parameter", fontsize = 12)
ax[position].set_ylabel("\n Welfare Gain, Percent", fontsize = 12)
ax[position].spines["right"].set_visible(False)
ax[position].spines["top"].set_visible(False)
ax[position].vlines(0.048, 0, 11.18,
color='k',
linestyle='--',
lw = 3) # thickness of the line
ax[position].hlines(11.18, -0.001, 0.048 ,
color='k',
label="Calibrated Values",
linestyle='--',
lw = 3)
#############################################################################################
position = (1,1)
ax[position].axis('off')
#plt.savefig(fig_path + "\\gbm_chi.pdf", bbox_inches = "tight", dpi = 3600)
plt.show()
```
##### Discussion of these results from PTW text
The first thing to observe from Figure 6 is that the percentage point change in productivity is nearly constant across different values of the variance parameter. In other words, the variance does not much affect the response of growth to a change in trade costs.
The parameter which does influence the change in growth is the adoption cost parameter. The three
different lines on the left panel in Figure 6 illustrate this point. A small value of chi (top black line) corresponds to small costs of adoption. When adoption costs are small, growth is more responsive to changes in trade costs. In contrast, a large value of chi (bottom red line) corresponds to large adoption costs and a smaller response of growth to trade costs. The closed form equations available in the non-GBM version of the model deliver some insight. Equation 35 shows that the change in the growth rate for a given change in trade costs is larger when adoption costs are smaller.
Even though the elasticity of growth to trade costs is not sensitive to the value of the GBM variance parameter, holding adoption costs constant, the value of the GBM variance parameter—and, thus, the firm dynamics data—strongly influences the calibrated value of the adoption cost. The right panel in Figure 6 illustrates this point by tracing out how the growth rate in the initial steady state varies with the GBM variance parameter. For a given value of chi, there is a near-linear decrease in the steady state growth rate as the variance increases. Across chi values, the slope is essentially the same, but the intercept shifts, with smaller chi values leading to higher growth rates. This is intuitive—lower adoption costs lead to more adoption and faster economic growth.
The implication of these observations is that data on firm dynamics influences the inferred adoption cost and, thus, the elasticity of growth to trade costs. For example, holding fixed our target of an aggregate growth rate of 0.79 percent, if the transition matrix of relative size (Table 3) had pushed us to find a smaller value for the GBM variance parameter, then the right panel of Figure 6 shows this would have led us to calibrate a larger value for chi. Combining this observation with the left panel of Figure 6, our calibration strategy would then have led to a smaller increase in the growth rate for the same decrease in trade costs.
The lower panel of Figure 6 shows that the welfare gains from trade (comparing BGPs) are nearly constant across values of the GBM variance parameter, but sensitive to the value of chi, just like the elasticity of growth to trade costs. Thus, the value of chi is crucial for determining both the change in growth and the welfare gains from trade. Even though the GBM variance parameter does not much affect the welfare gains from trade when holding all other parameters constant, different values of the GBM variance parameter (which are associated with different firm dynamics moments) affect the calibration of chi. It is in this sense that not just firm heterogeneity, but firm dynamics, matter
for the welfare gains from trade in our model.
Our discussion above, which compares our gains from trade to those in Sampson (2016), strongly suggests this point as well. When the GBM process is shut down and the model is re-calibrated, the gains from trade are still larger than what the ACR formula would imply, but they are far more modest and in line with what Sampson finds. Recall from Section 7.3 that much of the welfare gains arise because the equilibrium has an inefficiently low growth rate and because changes in trade costs change the growth rate. Using the decomposition from Section 7.3, we find that the different values of chi associated with different values of the GBM variance parameter affect the welfare gains from trade almost completely through a change in the sensitivity of growth to the trade cost, not through different levels of inefficiency.
This point is illustrated in the Connection to the Welfare Decomposition section below.
---
### <a name="delta"></a> Importance of the Exit Shock
We uniformly scale up and down the exit shock for different values of the adoption cost parameter chi. The large value of chi is ten percent larger than the baseline calibrated value. The small value of chi is ten percent smaller than the baseline calibrated value. All other parameter values are fixed, i.e., we do not re-calibrate the model when changing these parameter values.
##### Step 1. Compute outcomes for different Delta parameter values
This calls the MATLAB code to perform this operation. The code appendix below describes each of the different components. The line below executes MATLAB from the command line/terminal.
```
matlab_cmd = '''"cd('src\calibration');robust_no_recalibrate_delta;"'''
!{matlab_exe_path} -batch {matlab_cmd}
```
##### Step 2. Create Figure 7 in PTW 2020
The code below reads in the output from matlab and then plots the results to create Figure 7 of PTW.
```
cnames = ['gold', 'gnew', 'gdiff', "welfare", 'delta']
mat = scipy.io.loadmat(path + "\\output\\robust\\delta\\norecalibrate_values_delta_1.mat")
nocaldf = pd.DataFrame(mat["record_values"])
nocaldf.columns = cnames
nocaldf["gdiff"] = -nocaldf["gdiff"]
nocaldf.sort_values(["delta"], inplace = True)
base_chi = str(round(mat["chi_value"][0][0],3))
mat = scipy.io.loadmat(path + "\\output\\robust\\delta\\norecalibrate_values_delta_0.9.mat")
nocaldf_lowchi = pd.DataFrame(mat["record_values"])
nocaldf_lowchi.columns = cnames
nocaldf_lowchi["gdiff"] = -nocaldf_lowchi["gdiff"]
nocaldf_lowchi.sort_values(["delta"], inplace = True)
low_chi = str(round(mat["chi_value"][0][0],3))
mat = scipy.io.loadmat(path + "\\output\\robust\\delta\\norecalibrate_values_delta_1.1.mat")
nocaldf_higchi = pd.DataFrame(mat["record_values"])
nocaldf_higchi.columns = cnames
nocaldf_higchi["gdiff"] = -nocaldf_higchi["gdiff"]
nocaldf_higchi.sort_values(["delta"], inplace = True)
hig_chi = str(round(mat["chi_value"][0][0],3))
fig, ax = plt.subplots(2,2,figsize = (12,10))
fig.tight_layout(pad = 6)
position = (0,0)
ax[position].plot(nocaldf.delta, 100*nocaldf["gdiff"], lw = 4)
ax[position].plot(nocaldf_lowchi.delta, 100*nocaldf_lowchi["gdiff"], lw = 4, color = 'r')
ax[position].plot(nocaldf_higchi.delta, 100*nocaldf_higchi["gdiff"], lw = 4, color = 'k')
xticks = [0.01*float(item) for item in list(range(1,11,1)) ]
ax[position].set_xticks(xticks)
ax[position].set_xlim(0.01,0.04)
ax[position].set_ylim(0.15,0.4)
#ax[position].set_xlabel("\n GBM Variance Parameter", fontsize = 12)
ax[position].set_ylabel("\n Change in Productivity Growth \n Percentage Points", fontsize = 12)
ax[position].spines["right"].set_visible(False)
ax[position].spines["top"].set_visible(False)
ax[position].vlines(0.020, 0.00, 0.234,  # mark the calibrated delta value
                color='k',
                linestyle='--',
                lw = 3)
ax[position].hlines(0.234, -0.001, 0.020,  # mark the baseline change in growth
                color='k',
                label="Calibrated Values",
                linestyle='--',
                lw = 3)
##########################################################################################
position = (0,1)
ax[position].plot(nocaldf.delta, 100*nocaldf["gold"], lw = 4, label = "Calibrated χ = " + base_chi)
ax[position].plot(nocaldf_lowchi.delta, 100*nocaldf_lowchi["gold"],
lw = 4, color = 'red', label = "Large χ = " + low_chi)
ax[position].plot(nocaldf_higchi.delta, 100*nocaldf_higchi["gold"],
lw = 4, color = 'k', label = "Small χ = " + hig_chi)
#ax[1].plot(nocaldf_bigchi.upsilon, 100*nocaldf_bigchi["gold"], lw = 4, color = 'k', label = "Large 1/chi")
xticks = [0.01*float(item) for item in list(range(1,11,1)) ]
ax[position].set_xticks(xticks)
ax[position].set_xlim(0.01,0.04)
ax[position].set_ylim(0.20,1.4)
#ax.set_ylim(0,0.40)
ax[position].set_xlabel("\n Exit Shock Parameter", fontsize = 12)
ax[position].set_ylabel("\n Initial SS Productivity Growth", fontsize = 12)
ax[position].vlines(0.02, 0.00, 0.79,  # mark the calibrated delta value
                color='k',
                linestyle='--',
                lw = 3)
ax[position].hlines(0.79, -0.001, 0.020,  # mark the baseline initial growth rate
                color='k',
                label="Calibrated Values",
                linestyle='--',
                lw = 3)
ax[position].spines["right"].set_visible(False)
ax[position].spines["top"].set_visible(False)
ax[position].legend(bbox_to_anchor=(0., -1.25, 1., .102),frameon = False, fontsize = 14, loc = 4)
#########################################################################################
position = (1,0)
series = "welfare"
ax[position].plot(nocaldf.delta, 100*nocaldf[series], lw = 4, label = "Calibrated χ")
ax[position].plot(nocaldf_lowchi.delta, 100*nocaldf_lowchi[series], lw = 4, color = 'red', label = "Large χ")
ax[position].plot(nocaldf_higchi.delta, 100*nocaldf_higchi[series], lw = 4, color = 'k', label = "Small χ")
#ax[1].plot(nocaldf_bigchi.upsilon, 100*nocaldf_bigchi["gold"], lw = 4, color = 'k', label = "Large 1/chi")
xticks = [0.01*float(item) for item in list(range(1,11,1)) ]
ax[position].set_xticks(xticks)
ax[position].set_xlim(0.01,0.04)
ax[position].set_ylim(6,20)
#ax.set_ylim(0,0.40)
ax[position].set_xlabel("\n Exit Shock Parameter", fontsize = 12)
ax[position].set_ylabel("\n Welfare Gain, Percent", fontsize = 12)
ax[position].spines["right"].set_visible(False)
ax[position].spines["top"].set_visible(False)
ax[position].vlines(0.02, 0, 11.18, # Set the value equal to the calibrated average
                    color='k', # black dashed guide line
                    #label='Trade Shock', # this is the label (shows up in the legend)
                    linestyle='--',
                    lw = 3) # thickness of the line
ax[position].hlines(11.18, -0.001, 0.02, # Set the value equal to the calibrated average
                    color='k', # black dashed guide line
                    label="Calibrated Values", # this is the label (shows up in the legend)
                    linestyle='--',
                    lw = 3) # thickness of the line
###############################################################################################
position = (1,1)
ax[position].axis('off')
#plt.savefig(fig_path + "\\delta_chi.pdf", bbox_inches = "tight", dip = 3600)
plt.show()
```
##### Discussion of Figure from paper
Similarly to the GBM variance case, the delta parameter interacts with the adoption cost parameter to affect the calibrated value of chi. The right panel of Figure 6 illustrates this point by tracing out how the growth rate in the initial steady state varies with delta. For a given chi value, the steady-state growth rate increases with delta; across chi values, smaller chi values (lower adoption costs) lead to higher growth rates. Figure 6 shows that larger delta values (i.e., more entry observed in the data) would induce the calibration to infer larger chi values. But because these two parameters have opposite effects on economic growth, the change in parameter values generates offsetting effects and leaves the model's elasticity of growth to trade costs unchanged.
The welfare gains from trade display a similar pattern. The bottom panel of Figure 7 shows that the welfare gains from trade increase with the value of delta, holding all else fixed. Again, however, larger values of delta generate larger calibrated values of chi, which offset to keep the welfare gains from trade largely unchanged. Re-calibrating the model while holding fixed different values for delta verifies this observation: welfare increases only slightly as delta increases.
### <a name="dcomp"></a> Connection to the Welfare Decomposition
This calls Julia to perform the same welfare decomposition exercise as that done in ``section_7-3.ipynb``.
```
!jupyter nbconvert --to script ChiUpsilonDelta.ipynb
julia_command = '''ChiUpsilonDelta.jl'''
!{julia_path} {julia_command}
```
The decomposition from Section 7.3 shows how different values of $\chi$ affect the welfare gains from trade. We find that the different values of $\chi$ associated with different values of $\upsilon^{2}$ affect the welfare gains from trade almost entirely through a change in the sensitivity of growth to the trade cost $\left(\frac{\mathrm{d} f_{g}}{\mathrm{d} d}\right)$ (i.e., the semi-elasticity of growth changes substantially) and not through different levels of inefficiency $\left(U_1 \frac{\partial f_{c}}{\partial g} + U_2\right)$ (which are relatively similar across specifications).
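To fix notation, a sketch of the chain-rule structure being referenced: the terms $\frac{\mathrm{d} f_g}{\mathrm{d} d}$ and $U_1 \frac{\partial f_c}{\partial g} + U_2$ are taken from the sentence above, but the exact decomposition lives in Section 7.3 of the paper, so the overall form below is an assumption.

```latex
% Assumed sketch: total welfare response to a trade-cost change d splits into
% a direct consumption channel and a growth channel; the growth channel is the
% product of the two objects discussed in the text.
\frac{\mathrm{d} U}{\mathrm{d} d}
  = \underbrace{U_1 \frac{\partial f_c}{\partial d}}_{\text{direct effect}}
  + \underbrace{\left( U_1 \frac{\partial f_c}{\partial g} + U_2 \right)}_{\text{inefficiency term}}
    \underbrace{\frac{\mathrm{d} f_g}{\mathrm{d} d}}_{\text{sensitivity of growth}}
```

Under this reading, the finding says the second factor of the growth channel varies with $\upsilon^2$ while the first stays roughly constant.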
```
import numpy as np
import pandas as pd
from CSVUtils import *
import pickle
from os import path
import matplotlib.pyplot as plt
ROOT_DIR = "./from github/Stock-Trading-Environment/"
freq_list = [
{
"freq": 1,
"training": "10k",
"DIR": "./output/200",
"prefix": "BRZ+TW+NASDAQ-Training_detailed-ModelNo-10000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 1,
"training": "50k",
"DIR": "./output/201",
"prefix": "BRZ+TW+NASDAQ-Training_detailed-ModelNo-50000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 1,
"training": "100k",
"DIR": "./output/201",
"prefix": "BRZ+TW+NASDAQ-Training_detailed-ModelNo-100000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 1,
"training": "500k",
"DIR": "./output/202",
"prefix": "BRZ+TW+NASDAQ-Training_detailed-ModelNo-500000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
}
]
bnh_ratio={
"2015-2019": 3.0693999444726088,
"2001-2004": 1.0730432708521411,
"2007-2010": 1.8942480597911275,
}
ROOT_DIR = "./from github/Stock-Trading-Environment/"
freq_list = [
{
"freq": 7,
"training": "50k",
"DIR": "./output/204",
"prefix": "BRZ+TW+NASDAQ-Training-punish_detailed-ModelNo-50000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 7,
"training": "100k",
"DIR": "./output/205",
"prefix": "BRZ+TW+NASDAQ-Training-swap-nopunish-7d_detailed-ModelNo-100000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 7,
"training": "200k",
"DIR": "./output/204",
"prefix": "BRZ+TW+NASDAQ-Training-punish_detailed-ModelNo-200000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
]
freq_list = [
{
"freq": 20,
"training": "10k",
"DIR": "./output/306",
"prefix": "BRZ_TW_NASDAQ-Selected_Trans-withleakage+RSI-_detailed-ModelNo-10000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 20,
"training": "50k",
"DIR": "./output/306",
"prefix": "BRZ_TW_NASDAQ-Selected_Trans-withleakage+RSI-_detailed-ModelNo-50000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 20,
"training": "100k",
"DIR": "./output/306",
"prefix": "BRZ_TW_NASDAQ-Selected_Trans-withleakage+RSI-_detailed-ModelNo-100000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 20,
"training": "200k",
"DIR": "./output/306",
"prefix": "BRZ_TW_NASDAQ-Selected_Trans-withleakage+RSI-_detailed-ModelNo-200000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
]
# NOTE: freq_list is redefined several times above; only this final list is used by the loop below.
freq_list = [
{
"freq": 20,
"training": "BRZ+TW+NASDAQ",
"DIR": "./output/205",
"prefix": "BRZ+TW+NASDAQ-Training-swap-nopunish-7d_detailed-ModelNo-100000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 20,
"training": "BRZ+TW+NASDAQ",
"DIR": "./output/205",
"prefix": "NASDA+QBRZ+TW-Training-swap-nopunish_detailed-ModelNo-100000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
{
"freq": 20,
"training": "BRZ+TW+NASDAQ",
"DIR": "./output/205",
"prefix": "TW+NASDAQ+BRZ-Training-swap-nopunish_detailed-ModelNo-100000-",
"start_date": pd.to_datetime("2015-01-01"),
"end_date": pd.to_datetime("2019-12-31"),
},
]
for experiment in freq_list:
nominal_rate_list = [] # Model/bnh
nominal_return_list = [] # profit/300k
actual_return_list = [] # actual profit/bnh
    DIR = path.join(ROOT_DIR, experiment["DIR"])  # reuse the root directory defined above
for i in range(10):
record = pickle.load(open(path.join(DIR,experiment["prefix"]+str(i)+".out"), "rb"))
df = pd.DataFrame(record)
final_nominal_profit = df['net_worth'].iloc[-1]-300000
bnh_profit = df['buyNhold_balance'].iloc[-1]-300000
nominal_profit_rate = (final_nominal_profit/bnh_profit) # How much better is the model compare to bnh
nominal_rate_list.append(nominal_profit_rate)
nominal_return_list.append(final_nominal_profit/300000)
actual_return_list.append(df['actual_profit'].iloc[-1]/df['buyNhold_balance'].iloc[-1])
nominal_rate_list=np.array(nominal_rate_list)
print(experiment['freq'], experiment['training'],
len(nominal_rate_list[(nominal_rate_list>=1)]),
len(nominal_rate_list[(nominal_rate_list>=0.50) & (nominal_rate_list<1)]),
len(nominal_rate_list[(nominal_rate_list>=0) & (nominal_rate_list<0.50)]),
len(nominal_rate_list[nominal_rate_list<0]),
np.mean(nominal_return_list),
np.mean(actual_return_list),
)
df
```
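The three metrics accumulated in the loop above can be illustrated on a toy record. The numbers below are hypothetical stand-ins for one pickled `record`; only the 300,000 initial balance and the column names come from the code above.

```python
import pandas as pd

# Toy stand-in for one pickled `record` (hypothetical values):
# net_worth and buyNhold_balance both start from the 300k initial balance.
record = {
    "net_worth":        [300000, 310000, 360000],
    "buyNhold_balance": [300000, 305000, 330000],
    "actual_profit":    [0, 8000, 45000],
}
df = pd.DataFrame(record)

final_nominal_profit = df["net_worth"].iloc[-1] - 300000   # model's profit over the run
bnh_profit = df["buyNhold_balance"].iloc[-1] - 300000      # buy-and-hold benchmark profit

# How much better the model did than buy-and-hold (>1 means it beat the benchmark)
nominal_profit_rate = final_nominal_profit / bnh_profit    # 2.0

# Return on the 300k initial capital
nominal_return = final_nominal_profit / 300000             # 0.2

# Realized profit relative to the final buy-and-hold balance
actual_return = df["actual_profit"].iloc[-1] / df["buyNhold_balance"].iloc[-1]

print(nominal_profit_rate, nominal_return, round(actual_return, 4))
```

With these toy numbers the model's 60k profit is twice the benchmark's 30k, which is exactly the `nominal_rate_list >= 1` bucket counted in the summary print.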