# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
#
# Since Newton and Leibniz invented calculus, differentiating functions has been essential to the advancement of science. Calculating the derivative of a function is crucial to finding its extrema and its zeros, two operations that are central to optimization (1). Often we can find a symbolic/analytical expression for the derivative; however, this becomes increasingly complex and computationally expensive as our functions and equations grow in size. Numerically solving differential equations forms a cornerstone of modern science and engineering and is intimately linked with machine learning; however, numerical differentiation suffers from rounding errors and numerical instability. Many of these issues can be avoided with Automatic Differentiation (AD), because AD can calculate the exact derivative up to machine precision (2). The logic and processes behind AD are simple and mechanical, so AD can be implemented directly in computer code, making it easily accessible to scientists and mathematicians. This Python package will implement the forward mode of AD.
#
# # Background
#
# The following mathematical concepts and background are required for understanding automatic differentiation:
#
# ### 1. Differential calculus
#
# Differential calculus is a subfield of calculus concerned with the study of the rates at which quantities change (3).
# Given the function:
# \begin{align}
# f\left(x\right) &= {x^{2}}
# \end{align}
#
# Increment x by h:
# \begin{align}
# f\left(x+h\right) &= {(x+h)^{2}}
# \end{align}
#
# Apply the finite difference approximation to calculate the slope:
# \begin{align}
# \frac{f\left(x+h\right) - f\left(x\right) }{h}
# \end{align}
#
# Simplify the equation:
# \begin{align}
# &= \frac{x^{2}+2xh+h^{2}-x^{2} }{h}\\
# &= \frac{2xh+h^{2}}{h}\\
# &=2x+h
# \end{align}
#
# Take the limit $h \to 0$:
# \begin{align}
# 2x +0 &= 2x
# \end{align}
#
# The derivative is then defined as:
# \begin{align}
# \lim_{h\to0} \frac{f\left(x+h\right) - f\left(x\right) }{h}
# \end{align}
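#
# The derivation above can be checked numerically. The following is a minimal, illustrative sketch (plain Python, not part of the package) showing that the finite-difference slope of $f(x) = x^2$ approaches $2x$ as $h$ shrinks, until floating-point cancellation takes over:

```python
def finite_difference(f, x, h):
    # slope of the secant line through (x, f(x)) and (x + h, f(x + h))
    return (f(x + h) - f(x)) / h

f = lambda v: v ** 2
x = 3.0
exact = 2 * x  # from the derivation: f'(x) = 2x

for h in (1e-1, 1e-4, 1e-8):
    approx = finite_difference(f, x, h)
    print(h, approx, abs(approx - exact))
```

# For $f(x) = x^2$ the quotient equals $2x + h$ exactly, so the error shrinks with $h$ at first; at $h = 10^{-8}$ rounding error dominates instead. This is precisely the instability that AD avoids.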
#
# ### 2. Elementary functions and their derivatives
#
# | Function $f(x)$ | Derivative $f^{\prime}(x)$ |
# | :-------------------: | :------------------------------------------------------------------------------: |
# | ${c}$ | $0$ |
# | ${x}$ | $1$ |
# | ${x^{n}}$ | ${nx^{n-1}}$ |
# | $\frac{1}{x}$ | $\frac{-1}{x^{2}}$ |
# | $\ln{x}$ | $\frac{1}{x}$ |
# | $\sin(x)$ | $\cos(x)$ |
# | $\cos(x)$ | $-\sin(x)$ |
# | $\tan(x)$ | $\dfrac{1}{\cos^2(x)}$ |
# | $\exp(x)$ | $\exp(x)$ |
# | ${a^{x}}$ | ${a^{x}\ln{a}}$ |
#
#
# ### 3. The chain rule$^{(1)}$
#
# For a function $h(u(t))$, the derivative of $h$ with respect to $t$ can be expressed as:
# $$\dfrac{\partial h}{\partial t} = \dfrac{\partial h}{\partial u}\dfrac{\partial u}{\partial t}.$$
# If the function is a combination of multiple variables that are each expressed in terms of $t$, i.e. $h(u(t), v(t))$, then the derivative of $h$ with respect to $t$ can be expressed as:
# $$\frac{\partial h}{\partial t} = \frac{\partial h}{\partial u}\frac{\partial u}{\partial t} + \frac{\partial h}{\partial v}\frac{\partial v}{\partial t}$$
#
# Note that we are only looking at scalar variables in this case, but this idea can be extended to vector variables as well.
#
# For any $h = h\left(y\left(x\right)\right)$ where $y\in\mathbb{R}^{n}$ and $x\in\mathbb{R}^{m}$,
#
# \begin{align}
# \nabla_{x}h = \sum_{i=1}^{n}{\frac{\partial h}{\partial y_{i}}\nabla y_{i}\left(x\right)}.
# \end{align}
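#
# As a quick illustrative check (not from the original document), the scalar chain rule above can be verified numerically for $h(u(t), v(t)) = u\,v$ with $u(t) = t^2$ and $v(t) = \sin(t)$:

```python
import math

def h(t):
    # h(u(t), v(t)) = u * v with u = t**2, v = sin(t)
    return t ** 2 * math.sin(t)

t = 0.7
# chain rule: dh/dt = (dh/du)(du/dt) + (dh/dv)(dv/dt) = v*2t + u*cos(t)
chain_rule = math.sin(t) * 2 * t + t ** 2 * math.cos(t)

eps = 1e-6
numeric = (h(t + eps) - h(t - eps)) / (2 * eps)  # central difference
print(chain_rule, numeric)
```

# The two values agree to roughly the truncation error of the central difference, $O(\epsilon^2)$.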
#
#
# ### 4. The graph structure of calculations and forward accumulation
#
# Forward accumulation computes the derivative using the chain rule, starting from the innermost derivative and working out to the outermost derivative, where we assume the most basic variables have seed values. A graph helps visualize forward accumulation (4). For example,
#
# \begin{align}
# f\left(x,y\right) &= \frac{x}{y} + \cos(x)\sin(y)\\
# x &= y = 1
# \end{align}
#
# 
#
# | Trace | Elementary Function | Current Value | Elementary Function Derivative | $\nabla_{x}$ Value | $\nabla_{y}$ Value |
# | :---: | :-----------------: | :-----------: | :----------------------------: | :-----------------: | :-----------------: |
# | $w_{1}$ | $x$ | $1$ | $\dot{w_1}$ | $1$ | $0$ |
# | $w_{2}$ | $y$ | $1$ | $\dot{w_2}$ | $0$ | $1$ |
# | $w_{3}$ | $\cos(w_1)$ | $\cos(1)$ | $-\sin(w_1)\dot{w_1}$ | $-\sin(1)$ | $0$ |
# | $w_{4}$ | $\sin(w_2)$ | $\sin(1)$ | $\cos(w_2)\dot{w_2}$ | $0$ | $\cos(1)$ |
# | $w_{5}$ | $w_3 w_4$ | $\sin(1)\cos(1)$ | $w_4\dot{w_3} + w_3\dot{w_4}$ | $-\sin^2(1)$ | $\cos^2(1)$ |
# | $w_{6}$ | $w_1 / w_2$ | $1$ | $\dot{w_1}/w_2 - w_1\dot{w_2}/w_2^2$ | $1$ | $-1$ |
# | $w_{7}$ | $w_5 + w_6$ | $\sin(1)\cos(1) + 1$ | $\dot{w_5} + \dot{w_6}$ | $1 - \sin^2(1)$ | $\cos^2(1) - 1$ |
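#
# The trace table can be reproduced with a hand-rolled forward pass (an illustrative sketch only, not the package's API): each line computes one $w_i$ together with its directional derivative $\dot{w_i}$, seeded with $(\dot{x}, \dot{y}) = (1, 0)$ for $\nabla_x$ and $(0, 1)$ for $\nabla_y$.

```python
import math

def f_forward(x, y, dx, dy):
    # forward pass for f(x, y) = x/y + cos(x)*sin(y),
    # carrying (value, derivative) for each trace entry
    w1, dw1 = x, dx
    w2, dw2 = y, dy
    w3, dw3 = math.cos(w1), -math.sin(w1) * dw1
    w4, dw4 = math.sin(w2), math.cos(w2) * dw2
    w5, dw5 = w3 * w4, w4 * dw3 + w3 * dw4
    w6, dw6 = w1 / w2, dw1 / w2 - w1 * dw2 / w2 ** 2
    return w5 + w6, dw5 + dw6

value, grad_x = f_forward(1, 1, dx=1, dy=0)  # seed x
_, grad_y = f_forward(1, 1, dx=0, dy=1)      # seed y
print(value)   # sin(1)*cos(1) + 1
print(grad_x)  # 1 - sin(1)**2
print(grad_y)  # cos(1)**2 - 1
```

# One pass per seed direction yields one column of the gradient, exactly as in the table.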
#
#
#
# # How to Use *autodiff*
#
#
# Ideally, a user should not be overwhelmed by our package. They would only need to import our package and instantiate the basic variables or vectors (i.e. $x_1, x_2, x_3, \vec{x}$, etc.). Using the AD objects, the users will be able to easily call operations such as addition, subtraction, sine, exponential, etc. with both other AD objects and also normal numbers (*int* and *float*), forming the expressions that they want. The AD objects will feel perfectly integrated, and the users can get the value/derivatives of the expressions with basic get calls.
#
# For example, in the case of scalar functions/values, the user will type something like this:
#
# ```python
#
# import autodiff as ad
#
# x = ad.Scalar('x', 2)
# x.get_value()
# >>>2
# x.get_deriv()
# >>>{'x': 1}
# y = ad.Scalar('y', 5)
# y.get_value()
# >>>5
# y.get_deriv()
# >>>{'y': 1}
# z = x + y
# z.get_value()
# >>> 7
# z.get_deriv()
# >>> {'x': 1, 'y': 1}
# z2 = x*y
# z2.get_value()
# >>>10
# z2.get_deriv()
# >>> {'x': 5 , 'y': 2 }
# z3 = y ** 2
# z3.get_value()
# >>> 25
# z3.get_deriv()
# >>>{'y': 10}
# z4 = ad.sin(ad.Scalar('x1', 0))
# z4.get_value()
# >>> 0
# z4.get_deriv()
# >>> {'x1': 1}
# ```
#
# This idea can be extended to the cases of vectors.
# ```python
# #do not need to follow this naming convention
# x1 = ad.Scalar('x1', 10)
# x2 = ad.Scalar('x2', 4)
# x = ad.Vector([x1, x2, x1 + x2, x1 * x2])
# x.get_values()
# >>> np.array([10, 4, 14, 40])
# #There can be a function to return it as matrix
# x.get_derivs()
# >>> [{'x1': 1, 'x2': 0}, {'x1': 0, 'x2': 1}, {'x1': 1, 'x2': 1}, {'x1': 4, 'x2': 10}]
# ```
#
#
# # Software Organization
#
# * The directory structure will look like this:
# ```
# autodiff\
#     autodiff\
#         __init__.py
#         variables.py (scalar and vector)
#         functions.py
#     test\
#         __init__.py
#         test_variables.py
#         test_functions.py
#     README.md
#     setup.py
#     LICENSE
#
# ```
#
# The plans on organizing our software package are:
#
# * We are planning to use *numpy* to conduct most of our calculations, vector operations and definition of elementary functions.
# * The test suite will live in the `test` directory shown above. We will use `TravisCI` for continuous integration and `Coveralls` for code coverage.
# * We want to release our package on `PyPI`
#
#
# # Implementation
#
# Our core data structure for implementing the forward mode of automatic differentiation will be based on the *Scalar* and *Vector* classes. To initialize a *Scalar* object, the user passes in a string that represents the variable (i.e. 'x', 'y', 'x1', etc.) and the value of the variable (the seed value). The *Scalar* class will hold two attributes: 1) the value of the variable `val` at the current step and 2) a dictionary `deriv` containing the derivative or partial derivatives (keys are the names of the variables (i.e. *x* and *y*) and values are the derivative value with respect to each variable). This lets us easily compute derivatives with respect to each variable when performing operations with multiple variables, since we can update each partial derivative individually. When a *Scalar* object is initialized, `deriv` defaults to a dictionary whose only key is the string the user passed in, with value 1. A user can access the value of a *Scalar* object using the *get_value()* method and access the derivative (or partial derivatives) through the *get_deriv()* method. The dunder methods `__add__`, `__sub__`, `__mul__`, `__truediv__`, `__pow__`, `__iadd__`, `__isub__`, `__imul__`, `__itruediv__`, `__ipow__` (and the right-hand equivalents where they exist) will all be overridden so that they return a new *Scalar* object with an updated value and derivatives. By overriding these methods, we are implementing forward accumulation, as the order of operations allows us to traverse the chain rule starting from the inside.
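#
# A minimal sketch (illustrative only, not the final implementation) of how overriding `__add__` and `__mul__` on such a *Scalar* class propagates the `deriv` dictionary:

```python
class Scalar:
    """Minimal sketch: a value plus a dict of partial derivatives."""
    def __init__(self, name, val):
        self.val = val
        self.deriv = {name: 1}

    def get_value(self):
        return self.val

    def get_deriv(self):
        return dict(self.deriv)

    def __add__(self, other):
        # sum rule, applied per variable
        out = Scalar('', self.val + other.val)
        out.deriv = {}
        for var in set(self.deriv) | set(other.deriv):
            out.deriv[var] = self.deriv.get(var, 0) + other.deriv.get(var, 0)
        return out

    def __mul__(self, other):
        # product rule, applied per variable
        out = Scalar('', self.val * other.val)
        out.deriv = {}
        for var in set(self.deriv) | set(other.deriv):
            out.deriv[var] = (self.deriv.get(var, 0) * other.val
                              + other.deriv.get(var, 0) * self.val)
        return out

x = Scalar('x', 2)
y = Scalar('y', 5)
z = x * y
print(z.get_value())  # 10
print(z.get_deriv())  # {'x': 5, 'y': 2} (key order may vary)
```

# Each operation returns a fresh object carrying both value and derivatives, which is forward accumulation in miniature.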
#
# Another class called *Vector* will take in a list or array of *Scalar* objects. A *Vector* has only one attribute: a numpy array of *Scalar* objects, since each *Scalar* object tracks its own current value and derivatives. The dunder methods `__add__`, `__sub__`, `__mul__`, `__truediv__`, `__pow__`, `__iadd__`, `__isub__`, `__imul__`, `__itruediv__`, `__ipow__` (and the right-hand equivalents) will all be overridden so that they return a new array of *Scalar* objects with updated values and derivatives. As with numpy, the operations are conducted element-wise, i.e. in an addition of two *Vector* objects, the first rows are added together, the second rows are added together, etc. As a result, one vector operation becomes multiple scalar operations. To access the values in the *Vector* object, the user can use the *get_values()* method, which returns a *numpy.array* of values. To access the derivatives, the user can use the *get_derivs()* method, which returns a list of dictionaries containing the derivative or partial derivatives for each *Scalar* object in the array. We can also add a function that returns these derivatives as a matrix, which is the Jacobian. We can also add an optional argument to *get_derivs()* so that the user can get the derivatives or partial derivatives with respect to the desired variables only (i.e. with respect to 'x', with respect to 'y', etc.). The user can obtain a copy of the *numpy.array* of *Scalar* objects using the *get_vector()* method.
#
# We will implement the functions *sin, cos, tan, arcsin, arccos, arctan, exp* ($e^x$), *power* ($a^x$), and *abs* (absolute value). As in *numpy*, these functions will be module-level rather than methods of a specific class. They will be written such that if a *Scalar* object is passed in, a new *Scalar* object with an updated value and derivative is returned, depending on the function called. If a *Vector* object is passed in, a new *Vector* object with updated values and derivatives is returned. If one of the functions is not differentiable at a given value, we will raise an error explaining that the function is not differentiable at that specific value.
#
# The implementation of our classes and functions will rely heavily on *numpy*.
#
# # Citations
# 1. <NAME>. “Automatic Differentiation: The Basics.” CS207-Lecture9. Cambridge, MA. 2 October 2018.
#
# 2. Hoffman, <NAME>. “A Hitchhiker’s Guide to Automatic Differentiation.” *Numerical Algorithms*, 72, 24 October 2015, 775-811, *Springer Link*, DOI 10.1007/s11075-015-0067-6.
#
# 3. Calculus-ML Cheatsheet(2017). Retrieved October 17, 2018, from https://ml-cheatsheet.readthedocs.io/en/latest/calculus.html?fbclid=IwAR2vDAEHj1yy-4SSBTUH7Ki_D4S4uaDZJcgNXCtkVtzTrGqR-8dKmHg2L5s
#
# 4. Automatic Differentiation Or mathemagically finding derivatives. (2015, December 05). Retrieved November 16, 2018, from http://www.columbia.edu/~ahd2125/post/2015/12/5/
| docs/milestone1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import the libs
import json
import random
from math import floor
from collections import defaultdict
from shutil import copy
# # Read the data to a list
with open('/home/hector/code/datadump-lab/figure-qa/train1/annotations.json') as js:
    data = json.loads(js.read())
# # Prepare the data for training
# ## separate the pairs categorically
# +
d = defaultdict(dict)
# dot line
d['dot_line'] = [x for x in data if x['type'] == 'dot_line']
# line
d['line'] = [x for x in data if x['type'] == 'line']
# pie
d['pie'] = [x for x in data if x['type'] == 'pie']
# h_bar
d['h_bar'] = [x for x in data if x['type'] == 'hbar_categorical']
# v_bar
d['v_bar'] = [x for x in data if x['type'] == 'vbar_categorical']
# -
# ## randomly select 1000 images from 20K
# +
ran = random.sample(range(0,20000), 1000)
res = defaultdict(dict)
for i in ran:
    res[0][i] = d['v_bar'][i]['image_index']
    res[1][i] = d['h_bar'][i]['image_index']
    res[2][i] = d['line'][i]['image_index']
    res[3][i] = d['pie'][i]['image_index']
    res[4][i] = d['dot_line'][i]['image_index']
# -
# ## copy the files to data/train/{category}
for i in ran:
    copy('/home/hector/code/datadump-lab/figure-qa/train1/png/' + str(res[0][i]) + '.png', '/home/hector/code/datadump-lab/figure-qa/train1/data/train/vbar/')
    copy('/home/hector/code/datadump-lab/figure-qa/train1/png/' + str(res[1][i]) + '.png', '/home/hector/code/datadump-lab/figure-qa/train1/data/train/hbar/')
    copy('/home/hector/code/datadump-lab/figure-qa/train1/png/' + str(res[2][i]) + '.png', '/home/hector/code/datadump-lab/figure-qa/train1/data/train/line/')
    copy('/home/hector/code/datadump-lab/figure-qa/train1/png/' + str(res[3][i]) + '.png', '/home/hector/code/datadump-lab/figure-qa/train1/data/train/pie/')
    copy('/home/hector/code/datadump-lab/figure-qa/train1/png/' + str(res[4][i]) + '.png', '/home/hector/code/datadump-lab/figure-qa/train1/data/train/dot_line/')
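# The five near-identical copy() calls above can be collapsed into a loop over categories. The following sketch assumes the directory layout matches the hard-coded paths; the helper name and the dummy `res` indices below are illustrative, not part of the original script:

```python
import os

# category order matches res[0]..res[4] in the cell above
base = '/home/hector/code/datadump-lab/figure-qa/train1'
categories = ['vbar', 'hbar', 'line', 'pie', 'dot_line']

def build_copy_args(res, ran, base, categories):
    """Yield (src, dst_dir) pairs for every sampled image."""
    for i in ran:
        for k, cat in enumerate(categories):
            src = os.path.join(base, 'png', str(res[k][i]) + '.png')
            dst = os.path.join(base, 'data', 'train', cat)
            yield src, dst

# example with dummy image indices instead of the real annotation data
res = {k: {0: 100 + k} for k in range(5)}
pairs = list(build_copy_args(res, [0], base, categories))
print(pairs[0])
```

# shutil.copy(src, dst) can then be called on each yielded pair, keeping the category-to-directory mapping in one place.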
#
# # Preparing images for validation set - 1
# ## separate the pairs categorically
with open('/home/hector/code/datadump-lab/figure-qa/validation1/annotations.json') as js:
    data_val1 = json.loads(js.read())
# +
val1 = defaultdict(dict)
# dot line
val1['dot_line'] = [x for x in data_val1 if x['type'] == 'dot_line']
# line
val1['line'] = [x for x in data_val1 if x['type'] == 'line']
# pie
val1['pie'] = [x for x in data_val1 if x['type'] == 'pie']
# h_bar
val1['h_bar'] = [x for x in data_val1 if x['type'] == 'hbar_categorical']
# v_bar
val1['v_bar'] = [x for x in data_val1 if x['type'] == 'vbar_categorical']
# -
# ## randomly select 400 images
# +
ran_val1 = random.sample(range(0,1000), 400)
# res_val1 = defaultdict(dict)
# for i in ran_val1:
# res_val1[0][i] = val1['v_bar'][i]['image_index']
# res_val1[1][i] = val1['h_bar'][i]['image_index']
# res_val1[2][i] = val1['line'][i]['image_index']
# res_val1[3][i] = val1['pie'][i]['image_index']
# res_val1[4][i] = val1['dot_line'][i]['image_index']
# -
# ## copy the files to data/validation1/{category}
for i in ran_val1:
    copy('/home/hector/code/datadump-lab/figure-qa/validation1/png/' + str(val1['v_bar'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation1/data/validation1/vbar/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation1/png/' + str(val1['h_bar'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation1/data/validation1/hbar/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation1/png/' + str(val1['line'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation1/data/validation1/line/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation1/png/' + str(val1['pie'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation1/data/validation1/pie/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation1/png/' + str(val1['dot_line'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation1/data/validation1/dot_line/')
#
# # Preparing images for validation set - 2
# ## separate the pairs categorically
with open('/home/hector/code/datadump-lab/figure-qa/validation2/annotations.json') as js:
    data_val2 = json.loads(js.read())
# +
val2 = defaultdict(dict)
# dot line
val2['dot_line'] = [x for x in data_val2 if x['type'] == 'dot_line']
# line
val2['line'] = [x for x in data_val2 if x['type'] == 'line']
# pie
val2['pie'] = [x for x in data_val2 if x['type'] == 'pie']
# h_bar
val2['h_bar'] = [x for x in data_val2 if x['type'] == 'hbar_categorical']
# v_bar
val2['v_bar'] = [x for x in data_val2 if x['type'] == 'vbar_categorical']
# -
# ## randomly select 400 images
# +
ran_val2 = random.sample(range(0,1000), 400)
# -
# ## copy the files to data/validation2/{category}
for i in ran_val2:
    copy('/home/hector/code/datadump-lab/figure-qa/validation2/png/' + str(val2['v_bar'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation2/data/validation2/vbar/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation2/png/' + str(val2['h_bar'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation2/data/validation2/hbar/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation2/png/' + str(val2['line'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation2/data/validation2/line/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation2/png/' + str(val2['pie'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation2/data/validation2/pie/')
    copy('/home/hector/code/datadump-lab/figure-qa/validation2/png/' + str(val2['dot_line'][i]['image_index']) + '.png', '/home/hector/code/datadump-lab/figure-qa/validation2/data/validation2/dot_line/')
| ML/data/random_sample_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
import pandas as pd
import matplotlib as mtl
import matplotlib.pyplot as plt
#helps to visualize
from matplotlib.animation import FuncAnimation
#loading dataset from scikit
from sklearn.datasets import load_boston
# computes the mean square error between the predicted values and the true values
from sklearn.metrics import mean_squared_error
# this function takes the dataset's features and target and splits the data into two sets: a training set and a testing set
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
#To display on screen
from IPython.display import HTML
# -
boston=load_boston()
print(boston.DESCR)
# boston
features=pd.DataFrame(boston.data,columns=boston.feature_names)
features.head(10)
target=pd.DataFrame(boston.target,columns=['target'])
target.head(10)
max(target['target'])
# if a cell has multiple statements, wrap each in print() to display both
min(target['target'])
target.describe()
df=pd.concat([features,target],axis=1)
df.head()
# # data visualization
df.describe()
# # Correlation between Target and Attribute
corr=df.corr('pearson')  # here we are using Pearson's correlation; see the pandas documentation
corr
#We need correlation only between the target value and the other attributes
corr=[abs(corr[attr]['target']) for attr in list(features)]
corr
# Make a list of pairs [(corr, feature)]
l=list(zip(corr,list(features)))
l
# +
# Sort the list of pairs in reverse (descending) order
# With the correlation value as the key for sorting
l.sort(reverse =True)
l
# l.sort(key=lambda x: x[0], reverse=True)
# +
# l.reverse()
# l
# l
# -
# unzip it again
corrs,labels=list(zip(*l))
# corrs
corrs
labels
# +
# Plot the bar graph
plt.figure(figsize=(15,5))
index=np.arange(len(labels))
plt.bar(index,corrs,width=0.5)
plt.xlabel('Attributes')
plt.ylabel('Correlation with the target variable')
plt.xticks(index,labels)
plt.show()  # in a Jupyter notebook this is not required
# -
# # Normalization of data
# +
# Normalize the data to bring all values to a common scale
X=df['LSTAT'].values
Y=df['target'].values
# -
print(Y[0:10])
print(X[0:10])
# X.describe()
#MinMaxScaler. For each value in a feature, MinMaxScaler
#subtracts the minimum value in the feature and then divides
#by the range. The range is the difference between the original
#maximum and original minimum.
#MinMaxScaler preserves the shape of the original distribution.
x_scaler=MinMaxScaler()
# -1 in reshape means that we want numpy to figure out the dimension
X=x_scaler.fit_transform(X.reshape(-1,1)) # this function expects the vertical values so reshape
X=X[:,-1] # take the single column back out as a 1-D array
# slicing reminder: x[start:end:step]
y_scaler=MinMaxScaler()
Y=y_scaler.fit_transform(Y.reshape(-1,1)) # fit (compute min/max) and then transform
Y=Y[:,-1]
X[:4]
Y[:4]
# +
# we will use MSE because it makes it easier to calculate the gradients
#Gradient -> slope
# -
# # Data Splitting
len(X)
Xtrain,Xtest,Ytrain,Ytest=train_test_split(X,Y,test_size=0.2)
len(Xtrain) #It is 80% of the total length
Xtrain[:5]
# # Linear Regression
# +
# Gradient descent uses three functions:
# 1) Update function
# 2) Error function
# 3) Gradient descent function
# +
def update(m, x, c, t, learning_rate):
    # gradients of the (unaveraged) squared error with respect to m and c
    grad_m = sum(2 * ((m * x + c) - t) * x)
    grad_c = sum(2 * ((m * x + c) - t))
    m = m - grad_m * learning_rate
    c = c - grad_c * learning_rate
    return m, c
# -
def error(m, x, c, t):
    N = x.size
    e = sum(((m * x + c) - t) ** 2)
    return e / (2 * N)
def gradient_descent(init_m, init_c, x, t, learning_rate, iterations, error_threshold):
    m = init_m
    c = init_c
    error_values = list()
    mc_values = list()
    for i in range(iterations):
        e = error(m, x, c, t)
        if e < error_threshold:
            print("Error less than the threshold. Stopping Gradient Descent")
            break
        error_values.append(e)
        m, c = update(m, x, c, t, learning_rate)
        mc_values.append((m, c))
    return m, c, error_values, mc_values
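# A quick, illustrative sanity check (not part of the original notebook): the expressions grad_m and grad_c used in the update step are the exact gradients of the unaveraged squared error E(m, c) = sum(((m*x + c) - t)**2), which a central difference confirms. (The error() function above reports E/(2N); the constant factor only rescales the effective learning rate.)

```python
import numpy as np

def E(m, x, c, t):
    # unaveraged squared error whose gradient the update step uses
    return np.sum(((m * x + c) - t) ** 2)

rng = np.random.RandomState(0)
x = rng.rand(20)
t = 0.5 * x + 0.1 + 0.01 * rng.randn(20)
m, c = 0.9, 0.0

# analytic gradients, as in the update step
grad_m = np.sum(2 * ((m * x + c) - t) * x)
grad_c = np.sum(2 * ((m * x + c) - t))

# central-difference gradients
eps = 1e-6
num_m = (E(m + eps, x, c, t) - E(m - eps, x, c, t)) / (2 * eps)
num_c = (E(m, x, c + eps, t) - E(m, x, c - eps, t)) / (2 * eps)
print(grad_m, num_m)
print(grad_c, num_c)
```

# The synthetic x and t here are dummy data for the check only, not the Boston housing columns.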
# +
# %%time
init_m=0.9
init_c=0
learning_rate=0.001
iterations=250
error_threshold=0.001
m,c,error_values,mc_values =gradient_descent(init_m, init_c, Xtrain, Ytrain, learning_rate, iterations, error_threshold)
# -
# # Prediction
# calculate the predictions on the test set as a vectorized operation
predicted=(m * Xtest)+c
#Compute MSE for the predicted values on the testing set
mean_squared_error(Ytest,predicted)
# +
# Put xtest , ytest and predicted values into a single DataFrame so that we
# can see the predicted values alongside the testing set
p=pd.DataFrame(list(zip(Xtest,Ytest,predicted)) ,columns=['x','target_y','predicted'])
# -
p.head()
# # Plot predicted values against the target values
plt.scatter(Xtest,Ytest,color='b')
plt.plot(Xtest,predicted,color='r')
# # Revert normalization to obtain the predicted price of the houses in $1000s
# +
# Reshape to change the shape that is required by the scaler
predicted= np.array(predicted).reshape(-1,1)
Xtest=Xtest.reshape(-1,1)
Ytest=Ytest.reshape(-1,1)
Xtest_scaled=x_scaler.inverse_transform(Xtest)
Ytest_scaled=y_scaler.inverse_transform(Ytest)
predicted_scaled=y_scaler.inverse_transform(predicted)
#This is to remove the extra dimensions
Xtest_scaled= Xtest_scaled[:,-1]
Ytest_scaled= Ytest_scaled[:,-1]
predicted_scaled= predicted_scaled[:,-1]
p=pd.DataFrame(list(zip(Xtest_scaled,Ytest_scaled,predicted_scaled)), columns=['x','target_y','predicted'])
p=p.round(decimals=2)
p.head()
# -
| booston housing/Linear_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/creamcheesesteak/test_deeplearning/blob/master/single_perceptron.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="NbiB5oEVPevW"
import tensorflow as tf
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="8yBBIbhXQV9C" outputId="848d303e-b494-4aaa-89a0-583f8841526d"
tf.__version__
# + colab={"base_uri": "https://localhost:8080/"} id="8Ojl0X7pQWBu" outputId="0c659629-af60-45ad-c842-0a511b472ca3"
x_data = [[0,0],
[1,0],
[0,1],
[1,1]]
y_data = [[0],
[1],
[1],
[1]]
type(x_data), type(y_data)
# + colab={"base_uri": "https://localhost:8080/"} id="ftOURKd2rdJu" outputId="1b7e854d-aa91-4472-c746-a0e2ebadf656"
import numpy as np
x_train = np.array(x_data)
y_train = np.array(y_data)
x_train.shape, y_train.shape
# + colab={"base_uri": "https://localhost:8080/"} id="dlbPVBp_QWQF" outputId="563efc54-9d73-4661-dcf3-431660d1b21d"
tf.keras.models.Sequential()
# + id="LJt3iXsVQWTU"
model = tf.keras.models.Sequential()
# + colab={"base_uri": "https://localhost:8080/"} id="wIzp6Hs-QWXk" outputId="c59c7ebe-bbb8-4f0a-b102-b82f14174713"
model.add(tf.keras.Input(shape=(2,)))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='sgd', loss='mse')
# + colab={"base_uri": "https://localhost:8080/"} id="j0XBD34MQWap" outputId="eef50850-c932-46ae-ab3f-bdc6b32c9420"
model.fit(x_train, y_train, epochs=500)
# + colab={"base_uri": "https://localhost:8080/"} id="hKdotu2UM5BD" outputId="9e23a5b7-fafd-46fe-f02a-b9f21bb04334"
model.predict([[0,1]])
# + colab={"base_uri": "https://localhost:8080/"} id="Qq9rkk44VbwW" outputId="512e2cd7-d727-4843-fc1d-3bb1f3e5670d"
model.predict([[0,1]])
# + colab={"base_uri": "https://localhost:8080/"} id="K_dfk8yuQWe4" outputId="f0c97148-70f3-428d-f097-469ea01afbd5"
model.get_weights()
# + [markdown] id="8CGUoCXYMqqD"
# ## The trained unit computes y = w1*x1 + w2*x2 + b
# ## For this run: y = -0.11855409*x1 + 0.3499968*x2 + 0.01285337
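# To sanity-check the learned mapping, we can apply the weights quoted above by hand; a Dense(1) layer computes y = x @ W + b. (These weight values are specific to this training run and will differ between runs.)

```python
import numpy as np

# run-specific weights quoted above
W = np.array([[-0.11855409], [0.3499968]])
b = np.array([0.01285337])

x = np.array([[0, 1]])
y = x @ W + b  # what model.predict([[0, 1]]) computes
print(y)
```

# For input [0, 1] this gives roughly 0.3628, which should match the earlier predict() output.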
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="LvD_Ys-3QWjY" outputId="87a660d4-b09a-47bf-ddb7-222ce9ead8b2"
tf.keras.utils.plot_model(model, show_shapes=True)
| single_perceptron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Field visualisation using `k3d`
#
# **Note:** If you experience any problems in plotting with `k3d`, please make sure you run the Jupyter notebook in Google Chrome.
#
# There are two ways a field can be visualised, using:
# - `matplotlib`
# - `k3d`
#
# `k3d` provides three-dimensional interactive plots of fields inside Jupyter notebook.
#
# Let us say we have a sample, which is an ellipsoid
#
# $$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} \le 1$$
#
# with $a=5\,\text{nm}$, $b=3\,\text{nm}$, and $c=2\,\text{nm}$. The space is discretised into cells with dimensions $(0.5\,\text{nm}, 0.5\,\text{nm}, 0.5\,\text{nm})$. The value of the field at point $(x, y, z)$ is $(-cy, cx, cz)$, with $c=10^{9}$. The norm of the field inside the ellipsoid is $10^{6}$.
#
# Let us first build that field.
# +
import discretisedfield as df
a, b, c = 5e-9, 3e-9, 2e-9
cell = (0.5e-9, 0.5e-9, 0.5e-9)
mesh = df.Mesh(p1=(-a, -b, -c), p2=(a, b, c), cell=cell)
def norm_fun(pos):
    x, y, z = pos
    if (x/a)**2 + (y/b)**2 + (z/c)**2 <= 1:
        return 1e6
    else:
        return 0

def value_fun(pos):
    x, y, z = pos
    c = 1e9
    return (-c*y, c*x, c*z)
field = df.Field(mesh, dim=3, value=value_fun, norm=norm_fun)
# -
# The most basic plot shows all the cells where the value is non-zero. This can be useful for inspecting the created domain by plotting the norm.
# NBVAL_IGNORE_OUTPUT
field.norm.k3d_nonzero()
# The plot is interactive, so it can be manipulated using a mouse. To change the colour of the voxels, we can pass a new colour via the `color` argument.
# NBVAL_IGNORE_OUTPUT
field.norm.k3d_nonzero(color=0x27ae60)
# Next, we can plot a scalar field. For plotting a scalar field, we are using the `discretisedfield.Field.k3d_scalar()` method.
# NBVAL_IGNORE_OUTPUT
try:
    field.k3d_scalar()
except ValueError:
    print('Exception raised.')
# An exception was raised because we attempted to plot a three-dimensional vector field as a scalar field. Therefore, we first need to extract a component of the field. Let us plot the $x$ component.
# NBVAL_IGNORE_OUTPUT
field.x.k3d_scalar()
# However, we can see that the points which we consider to be outside the sample are also plotted. This is because the `discretisedfield.Field.k3d_scalar()` method cannot determine the points where the norm is zero from the passed scalar field alone. Therefore, we need to pass the norm via the `filter_field` argument.
# NBVAL_IGNORE_OUTPUT
field.x.k3d_scalar(filter_field=field.norm, multiplier=1e-6)
# By cascading operations, we can similarly plot the slice of the ellipsoid at $z=0$.
field.plane('x').mesh.region.edges
# NBVAL_IGNORE_OUTPUT
field.x.plane('z').k3d_scalar(filter_field=field.plane('z').norm)
# To further modify the plot, keyword arguments for the `k3d.voxels()` function are accepted. Please refer to its [documentation](https://k3d-jupyter.readthedocs.io/en/latest/k3d.html#k3d.k3d.voxels).
#
# Next, we can plot the vector field itself.
# NBVAL_IGNORE_OUTPUT
field.k3d_vector()
# By default, points at the discretisation cell centres are plotted together with vectors to help understand the structure of the field. However, they can be deactivated by passing `points=False`.
# NBVAL_IGNORE_OUTPUT
field.k3d_vector(points=False)
# It is difficult to understand the vector field from this plot. By cascading, we can plot its slice at $x=0$.
# NBVAL_IGNORE_OUTPUT
field.plane(x=0).k3d_vector()
# To improve the understanding of the plot, we can now colour the vectors plotted. For that, we need to pass a scalar field, according to which the vectors will be coloured.
# NBVAL_IGNORE_OUTPUT
field.plane(x=0).k3d_vector(color_field=field.x)
# To further modify the plot, keyword arguments for the `k3d.vectors()` function are accepted. Please refer to its [documentation](https://k3d-jupyter.readthedocs.io/en/latest/k3d.html#k3d.k3d.vectors).
#
# ### Multiple visualisation on the same plot
#
# Sometimes it is necessary to show, for example, multiple planes of the sample on the same plot. This can be done by exposing the `k3d.plot` object and passing it to the different plotting methods. For instance:
# NBVAL_IGNORE_OUTPUT
import k3d
plot = k3d.plot()
field.plane(x=-3e-9).k3d_vector(plot=plot, color_field=field.z)
field.plane(x=0).k3d_vector(plot=plot, color_field=field.z, cmap='hsv')
field.plane(x=3e-9).k3d_vector(plot=plot, color_field=field.z)
plot.display()
# ### Plotting regions in the mesh
#
# Different regions can be defined in the mesh. Sometimes it is necessary to visualise those regions in order to make sure they are defined properly.
# +
# NBVAL_IGNORE_OUTPUT
p1 = (0, 0, 0)
p2 = (50, 50, 10)
cell = (5, 5, 5)
mesh = df.Mesh(p1=p1, p2=p2, cell=cell)
region1 = df.Region(p1=(0, 0, 0), p2=(50, 50, 5))
region2 = df.Region(p1=(0, 0, 5), p2=(50, 50, 10))
mesh.subregions = {'region_with_interesting_properties': region1, 'region_with_funny_properties': region2}
mesh.k3d_subregions()
| docs/ipynb/field-k3d-visualisation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# # %matplotlib notebook
# %load_ext autoreload
# %autoreload 2
import initdirs
# +
import cv2
import numpy as np
from matplotlib import pyplot as plt
import networkx as nx
import nxpd
import os
nxpd.nxpdParams['show'] = 'ipynb'
# +
from epypes.compgraph import CompGraph, CompGraphRunner
from epypes.pipeline import Pipeline
from visionfuncs.io import open_image
from visionfuncs import geometry
from visiongraph.features import create_extended_feature_matching_cg, METHOD_PARAMS
# -
im_gray_1 = open_image(os.path.join(initdirs.DATA_DIR, 'robotmac/left_im0.png'), color_transform=cv2.COLOR_BGR2GRAY)
im_gray_2 = open_image(os.path.join(initdirs.DATA_DIR, 'robotmac/right_im0.png'), color_transform=cv2.COLOR_BGR2GRAY)
CHOSEN_METHOD = 'orb'
cg_match_ext = create_extended_feature_matching_cg(CHOSEN_METHOD)
# +
ft = {p: None for p in METHOD_PARAMS[CHOSEN_METHOD]}
ft['mask_1'] = None
ft['mask_2'] = None
ft['normType'] = cv2.NORM_HAMMING
ft['crossCheck'] = True
runner_match = CompGraphRunner(cg_match_ext, frozen_tokens=ft)
# -
# +
runner_match.run(image_1=im_gray_1, image_2=im_gray_2)
runner_match['keypoints_paired']
# +
_ = plt.figure(figsize=(15, 15))
N_KEYPOINTS = 100
_ = plt.subplot(1, 2, 1)
_ = plt.axis('off')
_ = plt.imshow(im_gray_1, cmap='gray')
_ = plt.plot( runner_match['keypoints_paired'][:N_KEYPOINTS, 0], runner_match['keypoints_paired'][:N_KEYPOINTS, 1], 'mo' )
_ = plt.subplot(1, 2, 2)
_ = plt.axis('off')
_ = plt.imshow(im_gray_2, cmap='gray')
_ = plt.plot( runner_match['keypoints_paired'][:N_KEYPOINTS, 2], runner_match['keypoints_paired'][:N_KEYPOINTS, 3], 'mo' )
# -
# +
# Projection matrices are not from the cameras actually used; they are stand-in values for demonstration.
P1 = np.array([[534.9331565 , 0. , 341.16282272, 0. ],
[ 0. , 534.9331565 , 243.19185257, 0. ],
[ 0. , 0. , 1. , 0. ]])
P2 = np.array([[ 5.34933156e+02, 0.00000000e+00, 3.41162823e+02,
-1.78041986e+04],
[ 0.00000000e+00, 5.34933156e+02, 2.43191853e+02,
0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
0.00000000e+00]])
points1 = runner_match['keypoints_paired'][:N_KEYPOINTS, :2]
points2 = runner_match['keypoints_paired'][:N_KEYPOINTS, 2:]
ptcloud = geometry.triangulate_points(P1, P2, points1, points2)
# -
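# The triangulation above can be illustrated with a minimal DLT (direct linear transform) sketch — an assumption about what `geometry.triangulate_points` does internally, shown here only to make the math concrete:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two pixel observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null-space vector = homogeneous 3D point
    return X[:3] / X[3]   # dehomogenize

# round-trip check with a synthetic stereo pair
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1_demo = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2_demo = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])
X_true = np.array([1.0, 2.0, 10.0, 1.0])
x1 = (P1_demo @ X_true)[:2] / (P1_demo @ X_true)[2]
x2 = (P2_demo @ X_true)[:2] / (P2_demo @ X_true)[2]
X_est = triangulate_dlt(P1_demo, P2_demo, x1, x2)
```

# On noiseless synthetic data the round trip recovers the point up to numerical precision.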
# +
def plot_depth():
fig = plt.figure()
plt.imshow(im_gray_1, cmap='gray')
points = plt.scatter(
runner_match['keypoints_paired'][:N_KEYPOINTS, 0],
runner_match['keypoints_paired'][:N_KEYPOINTS, 1],
c=ptcloud[:, 2],
cmap='rainbow'
)
fig.colorbar(points)
plot_depth()
# -
| notebooks/demo_triangulate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Object Following - Live Demo
#
# +
import time
from jetbot import Camera, Robot, ObjectDetector, bgr8_to_jpeg
from IPython.display import display
import ipywidgets.widgets as widgets
import outsourcing_nechl as myhelp
import cv2
import numpy as np
from importlib import reload
from ipywidgets import TwoByTwoLayout
import nechlBot
reload(nechlBot)
#only for the obstacle avoidance
import torch
import torchvision
import torch.nn.functional as F
# -
model_ob = torchvision.models.alexnet(pretrained=False)
model_ob.classifier[6] = torch.nn.Linear(model_ob.classifier[6].in_features, 2)
model_ob.load_state_dict(torch.load('best_model.pth'))
device = torch.device('cuda')
model_ob = model_ob.to(device)
# +
mean = 255.0 * np.array([0.485, 0.456, 0.406])
stdev = 255.0 * np.array([0.229, 0.224, 0.225])
normalize = torchvision.transforms.Normalize(mean, stdev)
def preprocess(camera_value):
global device, normalize
x = camera_value
x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
x = x.transpose((2, 0, 1))
x = torch.from_numpy(x).float()
x = normalize(x)
x = x.to(device)
x = x[None, ...]
return x
# -
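# The shape handling in `preprocess` can be sanity-checked without a camera, GPU, or torch, by mimicking the same steps with NumPy alone (the 300×300 size matches the camera used in this notebook):

```python
import numpy as np

# hypothetical BGR camera frame, H x W x C, uint8
frame = np.random.randint(0, 256, (300, 300, 3), dtype=np.uint8)

mean = 255.0 * np.array([0.485, 0.456, 0.406])
stdev = 255.0 * np.array([0.229, 0.224, 0.225])

x = frame[..., ::-1].astype(np.float32)               # BGR -> RGB
x = x.transpose((2, 0, 1))                            # HWC -> CHW
x = (x - mean[:, None, None]) / stdev[:, None, None]  # per-channel normalize
x = x[None, ...]                                      # add batch dimension
```

# The result has the `(1, 3, 300, 300)` layout the model expects.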
return_load_things = myhelp.load_things()
# unpack the helper's return values in one step
items, font, color, fontScale, thickness = return_load_things[:5]
# ### Compute detections on single camera image
# + slideshow={"slide_type": "-"}
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
# -
camera = Camera.instance(width=300, height=300, fps=5)
robot = nechlBot.NechlBot()
detections = myhelp.detectt(model,camera)
# +
reload(myhelp)
image_widget = widgets.Image(format='jpeg', width=300, height=300)
blocked_slider = widgets.FloatSlider(description='blocked', min=0.0, max=1.0, orientation='vertical')
return_create_widgets=myhelp.create_widgets(detections)
button_box = return_create_widgets[0]
detections_widget = return_create_widgets[1]
button_stop_stream = return_create_widgets[2]
button_start_stream = return_create_widgets[3]
label_widget = return_create_widgets[4]
search_for_widget = return_create_widgets[5]
button_search_box = return_create_widgets[6]
button_start_search = return_create_widgets[7]
button_stop_search = return_create_widgets[8]
def cam_stop(change):
camera.unobserve_all()
def cam_start(change):
camera.unobserve_all()
camera.observe(execute, names="value")
def start_moving(change):
robot.searching = True
def restart_searching(change):
robot.searching_status = False
robot.item_found = False
robot.round_search = 0
def force_stop(change):
robot.searching_status = False
robot.item_found = True
button_stop_stream.on_click(cam_stop)
button_start_stream.on_click(cam_start)
button_start_search.on_click(restart_searching)
button_stop_search.on_click(force_stop)
# -
image_number = 0
object_number = 0
try:
det = detections[image_number][object_number]
for i in range(len(items)):
if str(det["label"]) == items[str(i)][1]:
#print("it is a", items[str(i)][2])
item_detected = items[str(i)][2]
except IndexError as e:
pass
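# The linear scans over `items` in this notebook can be replaced by a one-off reverse lookup table. A sketch with a hypothetical `items` layout mirroring the `(raw, label_id, display_name)` tuples assumed by the loops above:

```python
# hypothetical items table: {index_str: (raw, label_id, display_name)}
items_demo = {'0': (None, '1', 'person'), '1': (None, '17', 'cat')}

# build label -> name once, instead of scanning per detection
label_to_name = {entry[1]: entry[2] for entry in items_demo.values()}

det = {'label': 17}
item_detected = label_to_name.get(str(det['label']))  # 'cat'
```

# Unknown labels simply return `None` instead of silently keeping a stale value.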
#
# ### Loop through the process for a moving image
# +
myhelp.fill_label(items, label_widget)
myhelp.fill_label(items, search_for_widget)
left_col = widgets.HBox([image_widget, blocked_slider])
right_col = widgets.VBox([label_widget,button_box, search_for_widget, button_search_box])
t2x2 = TwoByTwoLayout(
top_left = left_col,
top_right = right_col
)
# +
robot.searching_status = False
display(t2x2)
width = int(image_widget.width)
height = int(image_widget.height)
def execute(change):
image = change['new']
offset_text = 10
x = preprocess(image)
y = model_ob(x)
y = F.softmax(y, dim=1)
prob_blocked = float(y.flatten()[0])
blocked_slider.value = prob_blocked
detections = model(image)
detection_names = myhelp.get_names(detections, items)
try:
first_detection=detections[0][0]
for i in range(len(items)):
if str(first_detection["label"]) == items[str(i)][1]:
item_detected = items[str(i)][2]
except IndexError as e:
pass
for det in detections[0]:
for i in range(len(items)):
if str(det["label"]) == items[str(i)][1]:
item_detected = items[str(i)][2]
item_chance = str(round((det["confidence"]*100),1))
bbox = det['bbox']
if (item_detected != search_for_widget.value) and (item_detected != label_widget.value):
new_img = cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (255, 0, 0), 2)
new_img = cv2.rectangle(new_img,(int(width * bbox[0]), int(height* bbox[1])), (int(width*bbox[2]), int(height * bbox[1]+12)), (255, 0, 0), -1)
cv2.putText(new_img, item_detected+" "+item_chance+"%",(int(width*bbox[0]+5),int(height*bbox[1]+offset_text)), font, 0.3, color,1)
else:
pass
return_check_selected_label = myhelp.check_selected_label(items, label_widget.value, detections)
if return_check_selected_label is not None:
bbox = return_check_selected_label['bbox']
offset_text = 10
item_detected = label_widget.value
item_chance = str(round((return_check_selected_label["confidence"]*100),1))
new_img=cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (0, 255, 0), 2)
new_img=cv2.rectangle(new_img,(int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[1]+12)), (0, 255, 0),-1)
cv2.putText(new_img,item_detected+" "+item_chance+"%" ,(int(width* bbox[0])+2,int(height*bbox[1])+offset_text), font,0.3,(0,0,0),1)
else:
    new_img = image
###this part draws the box around the item that has to be detected
return_get_names = myhelp.get_names(detections, items)  # reconverts the numeric labels back to strings, which are displayed later
return_get_names_bbox = myhelp.check_selected_label(items, search_for_widget.value, detections)  # check whether the item in the search field is currently available
if return_get_names_bbox is None and search_for_widget.value != "background":
pass
if return_get_names_bbox is not None:
bbox = return_get_names_bbox['bbox']
item_detected = search_for_widget.value
item_info = [item_detected,return_get_names_bbox['label']]
item_chance = str(round((return_get_names_bbox['confidence']*100),1))
new_img=cv2.rectangle(new_img, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (0, 0, 255), 2)
new_img=cv2.rectangle(new_img,(int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[1]+12)), (0, 0, 255),-1)
cv2.putText(new_img,item_detected+" "+item_chance+"%" ,(int(width* bbox[0])+2,int(height*bbox[1])+offset_text), font,0.3,(0,0,0),1)
### thanks to item_found, a single frame with a positive detection is enough to stop the robot
if (search_for_widget.value not in detection_names) and (not robot.item_found) and (search_for_widget.value != 'background'):
if prob_blocked > 0.51:
robot.backward(0.3)
time.sleep(0.1)
robot.rotate_left(22.5)
else:  # path is clear enough: keep searching (closes the 0.50-0.51 dead zone where the robot previously did nothing)
robot.searching_around()
robot.searching_status = True
elif (search_for_widget.value in detection_names) or (robot.item_found == True):
robot.searching_status = False
robot.item_found = True
# update image widget
image_widget.value = bgr8_to_jpeg(new_img)
time.sleep(0.1)
execute({'new': camera.value})
| stable_detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/Hongchenglong/colab/blob/main/CertifiableBayesianInference/FCN_Experiments/analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="6TGbK9oIw9gl" outputId="808a16e4-001f-4a59-faac-393a160243a8"
from google.colab import drive
drive.mount('/content/drive')
# + id="G_zNvxljPoFO"
import sys, os
from pathlib import Path
path = Path(os.getcwd())
sys.path.append(str(path.parent))
sys.path.append('/content/drive/MyDrive/ColabNotebooks/CertifiableBayesianInference')
experiment = '/content/drive/MyDrive/ColabNotebooks/CertifiableBayesianInference/FCN_Experiments/'
# + id="SFu_d8cP3aa5"
# python3 MNIST_runner.py --eps 0.11 --lam 0.25 --rob 0 --opt HMC --gpu 0 &
# python3 MNIST_runner.py --eps 0.11 --lam 0.25 --rob 2 --opt HMC --gpu 1 &
# avoid shadowing the built-in `dict`
args = {'eps': 0.11, 'lam': 0.25, 'rob': 0, 'opt': 'HMC', 'gpu': '0'}
rob = args['rob']
opt = args['opt']
inference = opt
# + colab={"base_uri": "https://localhost:8080/"} id="hN6AkNue3sR7" outputId="5bd7f9d7-7e1e-4528-d406-24b015cd5440"
import BayesKeras
from BayesKeras import PosteriorModel
from BayesKeras import analyzers
import tensorflow as tf
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import numpy as np
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train/255.
X_test = X_test/255.
X_train = X_train.astype("float32").reshape(-1, 28*28)
X_test = X_test.astype("float32").reshape(-1, 28*28)
model = PosteriorModel(experiment + "%s_FCN_Posterior_%s"%(inference, rob))
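# The MNIST scaling and flattening above can be verified on a synthetic batch, with no download required:

```python
import numpy as np

# synthetic stand-in for tf.keras.datasets.mnist images
X = np.random.randint(0, 256, (5, 28, 28)).astype("float64")
X = X / 255.
X = X.astype("float32").reshape(-1, 28 * 28)
```

# Each image becomes a flat 784-vector with values in [0, 1].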
# + colab={"base_uri": "https://localhost:8080/", "height": 122} id="NmkAoIy0tMTb" outputId="498e3e6c-e75d-46c9-f89f-92b50b7ec0bc"
from tqdm import trange
loss = tf.keras.losses.SparseCategoricalCrossentropy()
num_images = 500
accuracy = tf.keras.metrics.Accuracy()
preds = model.predict(X_test[0:500])
accuracy.update_state(np.argmax(preds, axis=1), y_test[0:500])
print("%s Accuracy: " % (inference), accuracy.result())
accuracy = tf.keras.metrics.Accuracy()
# note: the attack below is PGD, so label the result accordingly
adv = analyzers.PGD(model, X_test[0:500], eps=0.1, loss_fn=loss, num_models=10)
preds = model.predict(adv)
accuracy.update_state(np.argmax(preds, axis=1), y_test[0:500])
print("PGD Robustness: ", accuracy.result())
accuracy = tf.keras.metrics.Accuracy()
preds = analyzers.chernoff_bound_verification(model, X_test[0:100], 0.1, y_test[0:100], confidence=0.80)
#print(preds.shape)
#print(np.argmax(preds, axis=1).shape)
accuracy.update_state(np.argmax(preds, axis=1), y_test[0:100])
print("Chernoff Lower Bound (IBP): ", accuracy.result())
"""
p = 0
for i in trange(100, desc="Computing FGSM Robustness"):
this_p = analyzers.massart_bound_check(model, np.asarray([X_test[i]]), 0.075, y_test[i])
print(this_p)
p += this_p
print("Massart Lower Bound (IBP): ", p/100.0)
"""
# + id="aCY7sYQV3y4i"
| CertifiableBayesianInference/FCN_Experiments/analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Kinetic Modeling with an ODE Solver
# Description
# ### Imports
# Import packages and set global variables used in this notebook
import os # operating system to work with directories and files
import matplotlib.pyplot as plt # plot data and results
import seaborn as sns # prettier visualization
import pandas as pd # convert excel to dataframe
import numpy as np # convert dataframe to nparray for solver
from scipy.integrate import odeint # solve ode
from lmfit import minimize, Parameters, Parameter, report_fit # fitting
from pyenzyme.enzymeml.tools import EnzymeMLReader # EnzymeML document functionalities
# ## Select EnzymeML document
# Select the EnzymeML document created with BioCatHub by changing the path variable accordingly. <br>
# The whole EnzymeML document is stored in the enzmldoc variable. <br>
# Prints a short overview.
# +
path = 'datasets/Hathaway/Experiment Hathaway.omex'
# check for correct file path and file extension:
if os.path.isfile(path) and os.path.basename(path).lower().endswith('.omex'):
enzmldoc = EnzymeMLReader().readFromFile(path)
print(enzmldoc)
else:
print('Incorrect file path or file extension.')
# -
# ## Visualization of timecourse data
# A short visualisation to get a first impression of the data. <br>
# First select the reaction to visualize by changing the reaction_id accordingly, see overview above for selection options.
# +
#basic/general settings
sns.set_theme(style="whitegrid", palette ='bright',color_codes=True, context = 'notebook')
# set reaction id to 'r0' or 'r1'
reaction_id = 'r0'
reaction = enzmldoc.getReaction(reaction_id)
reaction_name = reaction.getName()
educts = reaction.getEducts() # list of tuples: (Reactant ID, stoichiometry, Constant, Replicate, Initial-Concentration)
products = reaction.getProducts()
# -
# Visualize educts, if the EnzymeML document contains time course data for educts
# Educts:
for reactant_id, stoich, _, replicates, init_conc in educts:
if len(replicates) > 0:
df = reaction.exportReplicates(reactant_id)
time_val = df.index.tolist()
time = df.index.name.split('/')[0]
time_unit_name = df.index.name.split('/')[1]
time_unit = enzmldoc.getUnitDict()[time_unit_name].getName()
f, ax = plt.subplots(figsize=(7,3.5))
# Visualization
for col in df.columns:
name = col.split('/')[1]+': '+enzmldoc.getReactant(col.split('/')[1]).getName()
unit_name = enzmldoc.getReactant(col.split('/')[1]).getSubstanceUnits()
unit = enzmldoc.getUnitDict()[unit_name].getName()
sns.lineplot( x=time_val, y=df[col], label = col.split('/')[0] )
#set graph title, legend, axes
ax.set_title(reaction_name, fontsize = 12)
ax.legend(fontsize = 10, \
bbox_to_anchor= (1, 0.75), \
title= name, \
title_fontsize = 10, \
shadow = True, \
facecolor = 'white');
xlabel = f"{time} [{time_unit}]"
ylabel = f"{'concentration'} [{unit}]"
ax.set_xlabel(xlabel , fontsize=10)
ax.set_ylabel(ylabel, fontsize=10)
# Visualize products, if the EnzymeML document contains time course data for products <br>
# The example data does not contain measurements of products.
for reactant_id, stoich, _, replicates, init_conc in products:
if len(replicates) > 0:
df = reaction.exportReplicates(reactant_id)
time_val = df.index.tolist()
time = df.index.name.split('/')[0]
time_unit_name = df.index.name.split('/')[1]
time_unit = enzmldoc.getUnitDict()[time_unit_name].getName()
f, ax = plt.subplots(figsize=(7,3.5))
# Visualization
for col in df.columns:
name = enzmldoc.getReactant(col.split('/')[1]).getName()
unit_name = enzmldoc.getReactant(col.split('/')[1]).getSubstanceUnits()
unit = enzmldoc.getUnitDict()[unit_name].getName()
sns.lineplot( x=time_val, y=df[col], label = col.split('/')[0] )
#set graph title, legend, axes
ax.set_title(reaction_name, fontsize = 12)
ax.legend(fontsize = 10, \
bbox_to_anchor= (1, 0.75), \
title= name, \
title_fontsize = 10, \
shadow = True, \
facecolor = 'white');
xlabel = f"{time} [{time_unit}]"
ylabel = f"{'concentration'} [{unit}]"
ax.set_xlabel(xlabel , fontsize=10)
ax.set_ylabel(ylabel, fontsize=10)
# ## Parameter Estimation and Modeling
# #### Data preparation
# Convert pandas dataframe from EnzymeML data to numpy arrays. <br>
# First select the reactant to model by changing the reactant_id accordingly, see overview above for selection options.<br>
# In this example substrate 's0' will be modeled.<br>
# ##### Choose against which timecourse you want to fit.
reactant_id = 's0'
is_product = False
lag = 5
replicates = reaction.exportReplicates(reactant_id)
# time:
data_time = replicates.index.values # numpy array shape (9,)
data_time = data_time[lag:]
# substrate data (absorption):
data_s = np.transpose(replicates.iloc[lag:,:].to_numpy(np.float64)) # shape: (4, 9)
#data_s = np.transpose(replicates.iloc[:,:-1].to_numpy(np.float64)) # shape: (3, 9)
#data_s = np.transpose(replicates.iloc[:,:-2].to_numpy(np.float64)) # shape: (2, 9)
#data_s = np.transpose(replicates.iloc[lag:,0].to_numpy(np.float64)) # shape: (1, 9)
#print(data_s.shape)
# if product "cheating"
if is_product:
for i in range(data_s.shape[0]):
grr = np.transpose(replicates.iloc[lag:,:].to_numpy(np.float64))
init = np.max(grr[i])
temp = np.full(data_s[i].shape,init)
data_s[i]= temp-data_s[i]
#print(data_s)
# ### Fit data to a system of ODEs
# #### Define the ODE functions
# not used
def michaelis_menten_with_lag(w, t, params):
'''
System of differential equations
Arguments:
w: vector of state variables: w = [v,S]
t: time
params: parameters
'''
v, s = w
a = params['a'].value
vmax = params['vmax'].value
km = params['Km'].value
# f(v',s'):
f0 = a*(vmax-v) # v'
f1 = -v*s/(km+s) # S'
return [f0,f1]
def michaelis_menten(w, t, params):
'''
Differential equations
Arguments:
w: vector of state variables, here only one: w = [S]
t: time
params: parameters
'''
s = w
vmax = params['vmax'].value
km = params['Km'].value
# f(s'):
f1 = -vmax*s/(km+s) # S'
return f1
def hill_equation(w, t, params):
'''
Differential equations
Arguments:
w: vector of state variables, here only one: w = [S]
t: time
params: parameters
'''
s = w
vmax = params['vmax'].value
km = params['Km'].value
n = params['n'].value
# f(s'):
f1 = -vmax*(s**n)/(km+(s**n)) # S'
return f1
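# Before fitting, the rate law can be exercised without scipy or lmfit: a forward-Euler integration of the same Michaelis-Menten equation, with parameters mimicked via `SimpleNamespace` (which provides the `.value` attribute that lmfit `Parameters` expose):

```python
from types import SimpleNamespace

params = {'vmax': SimpleNamespace(value=1.0), 'Km': SimpleNamespace(value=0.5)}

def mm_rate(s, params):
    # same rate law as michaelis_menten above: S' = -vmax*S/(Km+S)
    return -params['vmax'].value * s / (params['Km'].value + s)

s, dt = 2.0, 0.01
trajectory = [s]
for _ in range(1000):
    s = s + dt * mm_rate(s, params)
    trajectory.append(s)
```

# The substrate should decay monotonically toward zero and never go negative.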
# #### Solve ODE
# not used
def solve_MM_with_lag(t, w0, params):
'''
Solution to the ODE w'(t)=f(t,w,p) with initial condition w(0)= w0 (= [S0])
'''
w = odeint(michaelis_menten_with_lag, w0, t, args=(params,))
return w
def solve_MM(t, w0, params):
'''
Solution to the ODE w'(t)=f(t,w,p) with initial condition w(0)= w0 (= [S0])
'''
w = odeint(michaelis_menten, w0, t, args=(params,))
return w
def solve_Hill(t, w0, params):
'''
Solution to the ODE w'(t)=f(t,w,p) with initial condition w(0)= w0 (= [S0])
'''
w = odeint(hill_equation, w0, t, args=(params,))
return w
# #### Compute residual between actual data (S) and fitted data
# In this model we assume that the data contains a bias on the y-axis. <br>
# Therefore we compute the distance between the modeled substrate + bias and the actually measured substrate
# not used
def residual_with_lag_and_bias(params, t, data_s):
ndata, nt = data_s.shape # get dimensions of data (here we fit against 3 measurements => ndata = 3)
resid = 0.0*data_s[:] # initialize the residual vector
# compute residual per data set
for i in range(ndata):
w0 = params['v0'].value, params['S0'].value
model = solve_MM_with_lag(t, w0, params) # solve the ODE with the given parameters
# get modeled substrate
s_model = model[:,1]
s_model_b = s_model + params['b'].value # adding bias
resid[i,:]=data_s[i,:]-s_model_b # compute distance to measured data
return resid.flatten()
def residual_MM(params, t, data_s):
ndata, nt = data_s.shape # get dimensions of data (here we fit against 4 measurements => ndata = 4)
resid = 0.0*data_s[:] # initialize the residual vector
# compute residual per data set
for i in range(ndata):
w0 = data_s[i,0]
model = solve_MM(t, w0, params) # solve the ODE with the given parameters
# get modeled substrate
s_model = model[:,0]
resid[i,:]=data_s[i,:]-s_model # compute distance to measured data
return resid.flatten()
def residual_MM_single(params, t, data_s):
w0 = data_s[0]
model = solve_MM(t, w0, params)
# only have data for s not v
s_model = model[:,0]
return (s_model - data_s).ravel()
def residual_Hill(params, t, data_s):
ndata, nt = data_s.shape # get dimensions of data (here we fit against 4 measurements => ndata = 4)
resid = 0.0*data_s[:] # initialize the residual vector
# compute residual per data set
for i in range(ndata):
w0 = data_s[i,0]
model = solve_Hill(t, w0, params) # solve the ODE with the given parameters
# get modeled substrate
s_model = model[:,0]
resid[i,:]=data_s[i,:]-s_model # compute distance to measured data
return resid.flatten()
def residual_Hill_single(params, t, data_s):
w0 = data_s[0]
model = solve_Hill(t, w0, params)
# only have data for s not v
s_model = model[:,0]
return (s_model - data_s).ravel()
# #### Functions to compute initial value for vmax and Km
# To get a good guess for vmax, v is computed for each time step. <br>
# For Km, the mean of s values at approximately vmax/2 is taken.
def get_v(time, data_s):
v_all = 0.0*data_s[:] # initialize velocity vector
if len(data_s.shape)>1:
for i in range(data_s.shape[0]):
prev_value = data_s[i,0]
prev_time = 0.0
for j in range(data_s.shape[1]):
if time[j] == 0:
delta = prev_value - data_s[i,j]
else:
delta = abs( (prev_value - data_s[i,j])/(time[j]-prev_time))
v_all[i,j] = delta
prev_value = data_s[i,j]
prev_time = time[j]
v = np.max(v_all, axis=0)
else:
prev_value = data_s[0]
prev_time = 0.0
for j in range(data_s.shape[0]):
if time[j] == 0:
delta = prev_value - data_s[j]
else:
delta = abs( (prev_value - data_s[j])/(time[j]-prev_time))
v_all[j] = delta
prev_value = data_s[j]
prev_time = time[j]
v = v_all
return v
def get_initial_vmax(time, data_s):
v = get_v(time,data_s)
return np.max(v)
def get_initial_Km(time, data_s):
v = get_v(time,data_s)
idx_max = np.where(v == np.max(v))[0][0]
idx_Km = (np.abs(v[idx_max:]-np.max(v)/2)).argmin()
if len(data_s.shape)>1:
km = np.mean(data_s,axis=0)[idx_max+idx_Km]
else:
km = data_s[idx_max+idx_Km]
return km
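# The heuristics above can be illustrated on a synthetic progress curve where the true parameters are known — a self-contained re-implementation of the same finite-difference idea, for illustration only:

```python
import numpy as np

vmax_true, km_true = 1.0, 0.5
t = np.linspace(0, 10, 101)
dt = t[1] - t[0]

# synthetic substrate progress curve (forward Euler)
s = np.empty_like(t)
s[0] = 2.0
for j in range(1, len(t)):
    s[j] = s[j-1] - dt * vmax_true * s[j-1] / (km_true + s[j-1])

# finite-difference rate, as in get_v above
v = np.abs(np.diff(s) / dt)
vmax_guess = v.max()          # max observed rate, a lower bound on vmax
idx = np.abs(v - vmax_guess / 2).argmin()
km_guess = s[idx]             # substrate level where the rate is ~ vmax/2
```

# The guess underestimates the true vmax (the observed rate never reaches it), but it is in the right range for an initial value.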
# #### Bringing everything together
# Initialize parameters:
# - $v_0$ is fixed on 0.
# - bias is estimated by taking the mean of the last data point for all measured data.
# - for $S_0$ the mean of the first data point for all measured data is taken, and the estimated bias is subtracted.
# - functions to get initial values for $v_{max}$ and $K_m$ are called.
# - initial value for a is set to 1.
# +
# time
t_measured = data_time[:]
# initial conditions:
#v0 = 0
if len(data_s.shape)>1:
s0 = np.max(data_s,axis=0)[0]
else:
s0 = data_s[0]
# Set parameters including bounds
#bias = np.min(data_s,axis=0)[-1]
vmax = get_initial_vmax(t_measured, data_s)
km = get_initial_Km(t_measured, data_s)
# -
# Parameters for different Models
# 'standard' Michaelis-Menten
params_MM = Parameters()
params_MM.add('vmax', value=vmax, min=0.0)
params_MM.add('Km', value=km, min=0.0001)
# +
# Hill equation
n = 2
params_Hill = Parameters()
params_Hill.add('vmax', value=vmax, min=0.0)
params_Hill.add('Km', value=km, min=0.0001)
params_Hill.add('n', value=n, min=0.99, max=3)
# -
# #### Fit model and visualize results
# Statistics for the fit and the parameters are printed. <br>
# In the graph the red line shows the result of the model. <br>
# The dotted curves are the measured data sets.
# 'standard' Michaelis-Menten
if len(data_s.shape)>1:
result = minimize(residual_MM , params_MM, args=(t_measured, data_s), method='leastsq')
report_fit(result) # access values of fitted parameters: result.params['Km'].value
# plot the data sets and fits
plt.figure()
for i in range(data_s.shape[0]):
plt.plot(t_measured, data_s[i, :], 'o')
#w0 = params['v0'].value, data_s[i,0]
w0 = data_s[i,0]
data_fitted = solve_MM(t_measured, w0, result.params)
plt.plot(t_measured, data_fitted[:, 0], '-', linewidth=2, label='fitted data')
plt.show()
else:
result = minimize(residual_MM_single , params_MM, args=(t_measured, data_s), method='leastsq')
report_fit(result) # access values of fitted parameters: result.params['Km'].value
# plot the data sets and fits
plt.figure()
plt.plot(t_measured, data_s[:], 'o')
w0 = data_s[0]
data_fitted = solve_MM(t_measured, w0, result.params)
plt.plot(t_measured, data_fitted[:, 0], '-', linewidth=2, label='fitted data')
plt.show()
# $v_{max} = k_{cat} * E_0$
# Hill equation
if len(data_s.shape)>1:
result = minimize(residual_Hill , params_Hill, args=(t_measured, data_s), method='leastsq')
report_fit(result) # access values of fitted parameters: result.params['Km'].value
# plot the data sets and fits
plt.figure()
for i in range(data_s.shape[0]):
plt.plot(t_measured, data_s[i, :], 'o')
#w0 = params['v0'].value, data_s[i,0]
w0 = data_s[i,0]
data_fitted = solve_Hill(t_measured, w0, result.params)
plt.plot(t_measured, data_fitted[:, 0], '-', linewidth=2, label='fitted data')
plt.show()
else:
result = minimize(residual_Hill_single , params_Hill, args=(t_measured, data_s), method='leastsq')
report_fit(result) # access values of fitted parameters: result.params['Km'].value
# plot the data sets and fits
plt.figure()
plt.plot(t_measured, data_s[:], 'o')
w0 = data_s[0]
data_fitted = solve_Hill(t_measured, w0, result.params)
plt.plot(t_measured, data_fitted[:, 0], '-', linewidth=2, label='fitted data')
plt.show()
| Model_MM_and_Hill.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # UK research networks with HoloViews+Bokeh+Datashader
#
# [Datashader](http://datashader.readthedocs.org) makes it possible to plot very large datasets in a web browser, while [Bokeh](http://bokeh.pydata.org) makes those plots interactive, and [HoloViews](http://holoviews.org) provides a convenient interface for building these plots.
# Here, let's use these three programs to visualize an example dataset of 600,000 collaborations between 15000 UK research institutions, previously laid out using a force-directed algorithm by [Ian Calvert](https://www.digital-science.com/people/ian-calvert).
#
# First, we'll import the packages we are using and set up some defaults.
# +
import pandas as pd
import holoviews as hv
import fastparquet as fp
from colorcet import fire
from datashader.bundling import directly_connect_edges, hammer_bundle
from holoviews.operation.datashader import datashade, dynspread
from holoviews.operation import decimate
from dask.distributed import Client
client = Client()
hv.notebook_extension('bokeh','matplotlib')
decimate.max_samples=20000
dynspread.threshold=0.01
datashade.cmap=fire[40:]
sz = dict(width=150,height=150)
# %opts RGB [xaxis=None yaxis=None show_grid=False bgcolor="black"]
# -
# The files are stored in the efficient Parquet format:
# +
r_nodes_file = '../data/calvert_uk_research2017_nodes.snappy.parq'
r_edges_file = '../data/calvert_uk_research2017_edges.snappy.parq'
r_nodes = hv.Points(fp.ParquetFile(r_nodes_file).to_pandas(index='id'), label="Nodes")
r_edges = hv.Curve( fp.ParquetFile(r_edges_file).to_pandas(index='id'), label="Edges")
len(r_nodes),len(r_edges)
# -
# We can render each collaboration as a single-line direct connection, but the result is a dense tangle:
# +
# %%opts RGB [tools=["hover"] width=400 height=400]
# %time r_direct = hv.Curve(directly_connect_edges(r_nodes.data, r_edges.data),label="Direct")
dynspread(datashade(r_nodes,cmap=["cyan"])) + \
datashade(r_direct)
# -
# Detailed substructure of this graph becomes visible after bundling edges using a variant of [Hurter, Ersoy, & Telea (ECV-2012)](http://www.cs.rug.nl/~alext/PAPERS/EuroVis12/kdeeb.pdf), which takes several minutes even using multiple cores with [Dask](https://dask.pydata.org):
# %time r_bundled = hv.Curve(hammer_bundle(r_nodes.data, r_edges.data),label="Bundled")
# +
# %%opts RGB [tools=["hover"] width=400 height=400]
dynspread(datashade(r_nodes,cmap=["cyan"])) + datashade(r_bundled)
# -
# Zooming into these plots reveals interesting patterns (if you are running a live Python server), but immediately one then wants to ask what the various groupings of nodes might represent. With a small number of nodes or a small number of categories one could color-code the dots (using datashader's categorical color coding support), but here we just have thousands of indistinguishable dots. Instead, let's use hover information so the viewer can at least see the identity of each node on inspection.
#
# To do that, we'll first need to pull in something useful to hover, so let's load the names of each institution in the researcher list and merge that with our existing layout data:
# +
node_names = pd.read_csv("../data/calvert_uk_research2017_nodes.csv", index_col="node_id", usecols=["node_id","name"])
node_names = node_names.rename(columns={"name": "Institution"})
node_names
r_nodes_named = pd.merge(r_nodes.data, node_names, left_index=True, right_index=True)
r_nodes_named.tail()
# -
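# The index-aligned merge used above can be demonstrated on a tiny synthetic pair of frames (hypothetical node ids):

```python
import pandas as pd

layout = pd.DataFrame({'x': [0.1, 0.5], 'y': [0.9, 0.2]}, index=[10, 11])
names = pd.DataFrame({'Institution': ['Uni A', 'Uni B']}, index=[10, 11])

# align the two frames on their shared node-id index
merged = pd.merge(layout, names, left_index=True, right_index=True)
```

# Each node keeps its layout coordinates and gains its institution name.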
# We can now overlay a set of points on top of the datashaded edges, which will provide hover information for each node. Here, the entire set of 15000 nodes would be reasonably feasible to plot, but to show how to work with larger datasets we wrap the `hv.Points()` call with `decimate` so that only a finite subset of the points will be shown at any one time. If a node of interest is not visible in a particular zoom, then you can simply zoom in on that region; at some point the number of visible points will be below the specified decimate limit and the required point should be revealed.
# %%opts Points (color="cyan") [tools=["hover"] width=900 height=650]
datashade(r_bundled, width=900, height=650) * \
decimate( hv.Points(r_nodes_named),max_samples=10000)
# If you click around and hover, you should see interesting groups of nodes, and can then set up further interactive tools using [HoloViews' stream support](http://holoviews.org/user_guide/Responding_to_Events.html) to reveal aspects relevant to your research interests or questions.
#
# As you can see, datashader lets you work with very large graph datasets. There are a number of decisions to make by trial and error, computationally expensive operations like edge bundling require care, and interactive information is only available for a limited subset of the data at any one time, due to data-size limitations of current web browsers.
| examples/topics/uk_researchers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:conda-env]
# language: python
# name: conda-env-conda-env-py
# ---
# +
# Copyright (c) 2019 ETH Zurich, <NAME>, <NAME>, <NAME>
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.gridspec
plt.rc('axes', axisbelow=True)
import pandas as pd
from reporting import readTable, parseTable
# -
# provide the list of result files to analyze:
fnames = [
'results/results-190727-015659.pkl',
'results/results-190727-045529.pkl',
'results/results-190727-064751.pkl',
'results/results-190727-132915.pkl',
'results/results-190727-182855.pkl',
]
dfs = [pd.read_pickle(fname) for fname in fnames]
df = pd.concat(dfs, axis=0, join='outer', ignore_index=False)
# +
df2 = df.loc[(df['dataDescr'] == 'outputs') & (df['comprName'] == 'ours')]
totalOverLayers = df2.groupby(['modelName','quantMethod','intraBatchIdx']).agg({'comprSize': 'sum', 'comprSizeBaseline': 'sum'})
totalOverLayers.insert(2, 'comprRatio', totalOverLayers.comprSizeBaseline/totalOverLayers.comprSize)
totalOverLayers.boxplot(column='comprRatio', by=['modelName', 'quantMethod'], rot=90)
plt.ylim(bottom=1)
df2 = df.loc[(df['dataDescr'] == 'gradients')]
totalOverLayers = df2.groupby(['quantMethod', 'modelName','comprName','intraBatchIdx']).agg({'comprSize': 'sum', 'comprSizeBaseline': 'sum'})
totalOverLayers.insert(2, 'comprRatio', totalOverLayers.comprSizeBaseline/totalOverLayers.comprSize)
totalOverLayers.boxplot(column='comprRatio', by=['comprName', 'quantMethod', 'modelName'], rot=90)
plt.ylim(bottom=1, top=12)
#by layer analysis
df2 = df.loc[(df['comprName'] == 'ours') &
(df['dataDescr'] == 'outputs') &
(df['quantMethod'] == 'fixed8')]
totalOverLayers = df2.groupby(['layerIdx', 'modelName', 'intraBatchIdx']).agg({'comprSize': 'sum', 'comprSizeBaseline': 'sum'})
totalOverLayers.insert(2, 'comprRatio', totalOverLayers.comprSizeBaseline/totalOverLayers.comprSize)
totalOverLayers.boxplot(column='comprRatio', by=['modelName', 'layerIdx'], rot=90, figsize=(15,5))
plt.ylim(bottom=1)
# +
df1 = df.loc[df['dataDescr'] == 'outputs']
# df1 = df.loc[df['dataDescr'] == 'gradients']
# modelNames = df1['modelName'].unique()
# modelNames = dict(zip(modelNames, modelNames))
modelNames = {'alexnet': 'AlexNet',
'vgg16': 'VGG-16',
'resnet34': 'ResNet-34',
'squeezenet': 'SqueezeNet',
'mobilenet2': 'MobileNetV2'
}
quantMethods = df1['quantMethod'].unique()
comprNames = ['zero-RLE', 'ZVC', 'BPC', 'ours']
new_colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728',
'#9467bd', '#8c564b', '#e377c2', '#7f7f7f',
'#bcbd22', '#17becf']
fig = plt.figure(figsize=(18,4))
gridOuter = matplotlib.gridspec.GridSpec(nrows=1, ncols=len(modelNames), wspace=0.2, hspace=0.2)
for io, (go, modelName) in enumerate(zip(gridOuter, modelNames.keys())):
gridInner = matplotlib.gridspec.GridSpecFromSubplotSpec(nrows=1, ncols=len(quantMethods),
subplot_spec=go, wspace=0.0, hspace=0.0)
df2 = df1.loc[df1['modelName'] == modelName]
for ii, (gi, quantMethod) in enumerate(zip(gridInner, quantMethods)):
df3 = df2.loc[df2['quantMethod'] == quantMethod]
df4 = df3.groupby(['comprName', 'intraBatchIdx']).agg({'comprSize': 'sum', 'comprSizeBaseline': 'sum'})
df4.insert(2, 'comprRatio', df4.comprSizeBaseline/df4.comprSize)
df4 = df4.reset_index()
if ii == 0:
ax = fig.add_subplot(gi)
axmain = ax
else:
ax = fig.add_subplot(gi, sharey=axmain)
plt.sca(ax)
df4.loc[df4['comprSize'] == 0] = 1
data = [df4.loc[(df4['comprName'] == comprName)]['comprRatio'].tolist() for comprName in comprNames]
bp = plt.boxplot(data,#df3.groupby('comprName')['comprRatio'].apply(list).tolist(),
notch=True, patch_artist=True, showfliers=False,
showmeans=True, meanline=True, whis=[1,99])#5,95])
def setBoxColors(bp, idx, color):
bp['boxes'][idx].set(color=color)
bp['caps'][2*idx].set(color=color)
bp['caps'][2*idx+1].set(color=color)
bp['whiskers'][2*idx].set(color=color)
bp['whiskers'][2*idx+1].set(color=color)
bp['medians'][idx].set(color=color)
bp['means'][idx].set(color='black')
for idx, color in zip(range(len(bp['boxes'])), new_colors):
setBoxColors(bp, idx, color)
plt.grid(axis='y')
plt.xlabel(quantMethod, rotation=30)
plt.ylim(bottom=0.8)
bottom, top = plt.ylim()
if top > 20:
plt.ylim(top=20)
ax.tick_params(axis='x', labelbottom=False, length=0)
if ii > 0:
ax.tick_params(axis='y', labelleft=False)
if (ii == 0) & (io == 0):
plt.ylabel('compr. ratio')
if ii == len(quantMethods)-1:
plt.title(modelNames[modelName], loc='right')
if (io == len(modelNames)-1) & (ii == len(quantMethods)-1):
hs = [plt.plot([1,1], color)[0] for color, _ in zip(new_colors, comprNames)]
plt.legend(hs, comprNames, loc='center left', bbox_to_anchor=(1, 0.5))
for h in hs:
h.set_visible(False)
plt.savefig('figs/totalComprRate-v2.pdf', bbox_inches='tight', pad_inches=0.0)
# +
df1 = df.loc[(df['dataDescr'] == 'outputs') & (df['quantMethod'] == 'fixed8') & (df['comprName'] == 'ours')]
# df1 = df.loc[df['dataDescr'] == 'gradients']
# modelNames = df1['modelName'].unique()
# modelNames = dict(zip(modelNames, modelNames))
modelNames = {'alexnet': 'AlexNet',
'resnet34': 'ResNet-34',
'mobilenet2': 'MobileNetV2',
# 'mobilenetV2-cust': 'mobilenetV2-cust'
}
quantMethods = df1['quantMethod'].unique()
comprNames = ['zero-RLE', 'ZVC', 'BPC', 'ours']#df['comprName'].unique()
new_colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728',
'#9467bd', '#8c564b', '#e377c2', '#7f7f7f',
'#bcbd22', '#17becf']
fig, axarr = plt.subplots(nrows=1, ncols=len(modelNames),
sharey=False, figsize=(18,4),
# gridspec_kw={'width_ratios':[1,3,3]}, # only for final plot
squeeze=True)
for ii, (ax, modelName) in enumerate(zip(axarr, modelNames.keys())):
df2 = df1.loc[df1['modelName'] == modelName]
plt.sca(ax)
numLayers = df2['layerIdx'].max()+1
layerIdxs = list(range(numLayers))
data = [df2.loc[(df2['layerIdx'] == layerIdx)]['comprRatio'].tolist() for layerIdx in layerIdxs]
bp = plt.boxplot(data,
notch=True, patch_artist=True, showfliers=False,
showmeans=True, meanline=True, whis=[1,99])#5,95])
plt.grid(axis='y')
plt.xlabel('layer')
plt.xticks([i+1 for i in layerIdxs], rotation=90)
    plt.ylim(bottom=0)
    bottom, top = plt.ylim()
    if top > 10:
        plt.ylim(bottom=0, top=10)
if ii == 0:
plt.ylabel('compr. ratio')
plt.title(modelNames[modelName], loc='right')
plt.savefig('figs/perLayerComprRate-v2.pdf', bbox_inches='tight', pad_inches=0.0)
| algoEvals/totalComprRatio_plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
from matplotlib import pyplot as plt
import settings
import numpy as np
## Model Data: 2014-2018
df = pd.read_csv(settings.ASSEMBLED_DIR + "\\Model_data.csv")
## TEST Data : Actuals 2019
df2 = pd.read_csv(settings.ASSEMBLED_DIR + "\\Test_data1.csv")
# +
# Let us build model for Victoria
def series_data(df):
demand = df.loc[df['STATE_VIC']==1, ['DATE_x', 'DEMAND']]
demand.set_index(demand['DATE_x'], inplace = True)
demand.drop(['DATE_x'], axis = 'columns', inplace = True)
demand.sort_index(inplace=True)
return demand
# Accuracy metrics
def forecast_accuracy(forecast, actual):
mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE
me = np.mean(forecast - actual) # ME
mae = np.mean(np.abs(forecast - actual)) # MAE
mpe = np.mean((forecast - actual)/actual) # MPE
rmse = np.mean((forecast - actual)**2)**.5 # RMSE
corr = np.corrcoef(forecast, actual)[0,1] # corr
return({'mape':mape, 'me':me, 'mae': mae,
'mpe': mpe, 'rmse':rmse,
'corr':corr})
# -
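# As a quick, hypothetical sanity check of the metric formulas above (made-up numbers, not the demand data): a forecast that is uniformly 10% too high should give MAPE = MPE = 0.1.

```python
import numpy as np

# Synthetic actuals and a forecast that is uniformly 10% too high
actual = np.array([100.0, 200.0, 400.0])
forecast = 1.1 * actual

mape = np.mean(np.abs(forecast - actual) / np.abs(actual))  # mean absolute percentage error
mpe = np.mean((forecast - actual) / actual)                 # mean percentage error
rmse = np.mean((forecast - actual) ** 2) ** 0.5             # root mean squared error
```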
# Training
demand = series_data(df)
# Test
Y_test =series_data(df2)
# # ARIMA MODEL
# +
from statsmodels.tsa.seasonal import seasonal_decompose
from matplotlib import pyplot as plt
decomp = seasonal_decompose( x = demand, model = 'additive', period= 365, extrapolate_trend = 365)
trends = decomp.trend
seasonals = decomp.seasonal
resids = decomp.resid
fig, axes = plt.subplots(4,1, figsize= (18,10))
demand.plot(ax=axes[0])
trends.plot(ax=axes[1])
seasonals.plot(ax=axes[2])
resids.plot(ax=axes[3])
# -
# ##### We can spot a decreasing trend over time; the mean is not constant over time
# Checking ACF and PACFs for data
fig, ax = plt.subplots(2, 1, figsize = (20,5))
import statsmodels.graphics.tsaplots as tsa
tsa.plot_acf(demand, alpha=0.05, ax = ax[0])
tsa.plot_pacf(demand, alpha=0.05, ax = ax[1])
plt.show()
# ###### Interpretation
# Seasonality = 7 (weekly), and the series is non-stationary: the ACF values stay large and positive.
#
# We will use seasonal differencing with a first difference.
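# On a tiny synthetic series (illustrative only, not the demand data), `diff(periods=7)` is the lag-7 seasonal difference, i.e. `s - s.shift(7)`:

```python
import numpy as np
import pandas as pd

# 14 days of a toy series; the seasonal difference compares each day
# with the same weekday one week earlier
s = pd.Series(np.arange(14, dtype=float))
seasonal_diff = s.diff(periods=7)   # first 7 entries are NaN
manual = s - s.shift(7)             # equivalent lag-7 difference
```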
diff_1 = demand.diff(periods = 7)
diff_1 = diff_1.dropna()
fig, ax = plt.subplots(2, 1, figsize = (20,5))
tsa.plot_acf(diff_1, alpha=0.05, ax = ax[0])
tsa.plot_pacf(diff_1, alpha=0.05, ax = ax[1])
plt.show()
diff_2 = diff_1.diff()
diff_2 = diff_2.dropna()
fig, ax = plt.subplots(2, 1, figsize = (20,5))
tsa.plot_acf(diff_2, alpha=0.05, ax = ax[0])
tsa.plot_pacf(diff_2, alpha=0.05, ax = ax[1])
plt.show()
# +
## Let us identify the p, q, d and P, Q, D terms for the model
# There is significant autocorrelation at 7, 14 and 21 days (weekly seasonality).
# There are 3 significant autocorrelations, and the partial autocorrelation decays in a damped sine-wave manner
# (p=3, q=3, d=1).
# The seasonal ACF shows a spike at lag 7, but no other significant spikes.
# The seasonal PACF shows exponential decay over the seasonal lags (P=1, Q=1, D=1).
# +
from pmdarima.pipeline import Pipeline
from pmdarima.preprocessing import BoxCoxEndogTransformer
import pmdarima as pm
# # Seasonal - fit stepwise auto-ARIMA
pipeline = Pipeline([
("boxcox", BoxCoxEndogTransformer()),
("model", pm.AutoARIMA(test='adf',
max_p=3, max_q=3, m=7,
start_P=1,start_Q=1, seasonal=True,
d=1, D=1, trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=False))
])
pipeline.fit(demand)
# -
# parameters from model:
Sample_predictions = pipeline.predict_in_sample(exogenous=None, return_conf_int=False, alpha=0.05, inverse_transform=True)
Actuals = pd.Series.to_numpy(demand['DEMAND'])
residuals = pd.Series(Actuals - Sample_predictions)
fig, ax = plt.subplots(1,2, figsize = (20,5))
residuals.plot(title="Residuals", ax=ax[0])
residuals.plot(kind='kde', title='Density', ax=ax[1])
plt.show()
# Residuals ACF and PACF look like random noise
tsa.plot_acf(residuals, alpha=0.05)
tsa.plot_pacf(residuals, alpha=0.05)
plt.show()
# ### Model Evaluation & Performance
forecast_2019, conf_2019 = pipeline.predict(n_periods=365, return_conf_int=True,alpha=0.05, inverse_transform=True)
forecast_2019
Actuals_2019 = pd.Series.to_numpy(Y_test['DEMAND'])
forecast_accuracy(forecast_2019, Actuals_2019)
# ###### Interpretation
# MAPE is 9.52%. Explanatory variables may have more predictive power than the time-series model; let us explore more.
#Visualization
df2 = pd.DataFrame(conf_2019, columns=['min', 'max'], index = Y_test.index)
df2['Mean'] = pd.DataFrame(forecast_2019, columns = ['mean'], index =Y_test.index )
df3 = df2[1:90]
plt.figure(figsize=(20,5))
predicted, = plt.plot(Y_test[1:90].index, df3['Mean'], 'go-', label='Predicted')
actual, = plt.plot(Y_test[1:90].index, Y_test[1:90], 'ro-', label='Actual')
lower, = plt.plot(Y_test[1:90].index, df3['min'], color='#990099', marker='.', linestyle=':', label='Lower 95%')
upper, = plt.plot(Y_test[1:90].index, df3['max'], color='#0000cc', marker='.', linestyle=':', label='Upper 95%')
plt.fill_between(Y_test[1:90].index, df3['min'], df3['max'], color = 'b', alpha = 0.2)
plt.legend(handles=[predicted, actual, lower, upper])
plt.xticks(rotation=90)
plt.show()
# # REGRESSION
#
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error
# +
from sklearn.preprocessing import StandardScaler
def preprocess(df):
''' Keeps the columns needed to run the model
Divide the data into labels and predictors'''
df1 = df[["DEMAND","MAX_TEMP", "MIN_TEMP", "Holiday_Flag", "Weekened_Flag", "STATE_VIC"]]
Data = df1.loc[df['STATE_VIC']==1, ["DEMAND","MAX_TEMP", "MIN_TEMP", "Holiday_Flag", "Weekened_Flag"]]
Data.reset_index(inplace = True)
X = Data.loc[:, ["MAX_TEMP", "MIN_TEMP", "Holiday_Flag", "Weekened_Flag"]]
Y = Data["DEMAND"]
scale = StandardScaler()
X_scale = scale.fit_transform(X)
return X_scale, Y
def rmse_score(X_test, Y_test):
Y_pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(Y_test, Y_pred))
return rmse
def cross_val(model):
scores = cross_val_score(model, X_train, Y_train, scoring ="neg_mean_squared_error", cv =10)
rmse_scores = np.sqrt(-scores)
m = rmse_scores.mean()
std = rmse_scores.std()
return m, std
# -
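# The `StandardScaler` step in `preprocess` standardizes each column to zero mean and unit variance; a minimal numpy sketch of the same transform on made-up data (note scikit-learn uses the biased standard deviation, `ddof=0`):

```python
import numpy as np

# Two toy feature columns; z-score each column independently,
# which is what StandardScaler.fit_transform does
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_scale = (X - X.mean(axis=0)) / X.std(axis=0)
```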
X_train, Y_train = preprocess(df)
X_test, Y_test = preprocess(df2)
#Linear Model
lin_reg = LinearRegression()
model = lin_reg.fit(X_train, Y_train)
# Rsquare Linear Model
model.score(X_train, Y_train)
## Very poor predictor
## Let us check rmse
rmse_score(X_test, Y_test)
## Let us try Decision Tree
tree_reg = DecisionTreeRegressor()
model = tree_reg.fit(X_train, Y_train)
model.score(X_train, Y_train)
## R square looks good, let us check rmse on test data
rmse_score(X_test, Y_test)
## There is slight improvement in the rmse
## We can try ensemble methods
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
model = forest_reg.fit(X_train, Y_train)
cross_val(tree_reg)
cross_val(forest_reg)
## There is improvement using random forest
## Let us use grid search to optimize the model
param_grid = [
{'n_estimators': [3, 7], 'max_features':[2, 3, 4]},
{'bootstrap':[False], 'n_estimators':[3, 10], 'max_features':[1, 2, 3, 4]},
]
grid_search = GridSearchCV(forest_reg, param_grid, cv=10,
scoring = 'neg_mean_squared_error',
return_train_score= True)
grid_search.fit(X_train, Y_train)
grid_search.best_params_
grid_search.best_estimator_
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
    print(np.sqrt(-mean_score), params)
feature_importances = grid_search.best_estimator_.feature_importances_
cols = ["MAX_TEMP", "MIN_TEMP", "Holiday_Flag", "Weekened_Flag"]
sorted(zip(feature_importances, cols), reverse=True)
model = grid_search.best_estimator_
rmse_score(X_test,Y_test)
model.score(X_train, Y_train)
plt.figure(figsize=(20,5))
plt.plot(Y_test[0:59])
plt.plot(model.predict(X_test)[0:59])
forecast_accuracy(model.predict(X_test), Y_test)
# ### Arima with Regression variables
# ###### Exogenous variables are used as additional features in the regression operation.
#
# +
from pmdarima.pipeline import Pipeline
from pmdarima.preprocessing import BoxCoxEndogTransformer
import pmdarima as pm
# # Seasonal - fit stepwise auto-ARIMA
pipeline = Pipeline([
("boxcox", BoxCoxEndogTransformer()),
("model", pm.AutoARIMA(test='adf',
max_p=3, max_q=3, m=7,
start_P=1,start_Q=1, seasonal=True,
d=1, D=1, trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=False))
])
pipeline.fit(Y_train, exogenous=X_train)
# -
Sample_predictions= pipeline.predict_in_sample(exogenous=X_train, return_conf_int=False, alpha=0.05, inverse_transform=True)
Actuals = pd.Series.to_numpy(Y_train)
residuals = Actuals-Sample_predictions
Residuals = pd.Series(residuals)
fig, ax = plt.subplots(1,2, figsize = (20,5))
Residuals.plot(title="Residuals", ax=ax[0])
Residuals.plot(kind='kde', title='Density', ax=ax[1])
plt.show()
forecast_2019, conf_2019 = pipeline.predict(exogenous=X_test, n_periods=365, return_conf_int=True,alpha=0.05, inverse_transform=True)
forecast_2019
Actuals_2019 = pd.Series.to_numpy(Y_test['DEMAND'])
forecast_accuracy(forecast_2019, Actuals_2019)
# +
# For productionizing the model:
# Out of the five models (ARIMA, ARIMA with regressors, Linear Regression, Decision Tree, Random Forest),
# Random Forest performed best with 92% R-squared and an RMSE of 19063 on the test data, so we will productionize the
# Random Forest model.
model = grid_search.best_estimator_
import pickle
# Serialize with Pickle
with open('RandomForest.pkl', 'wb') as pkl:
pickle.dump(model, pkl)
# -
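# An illustrative round trip (with a plain dict standing in for the fitted model): pickling to an in-memory buffer and loading it back yields an equal object, which is what makes the saved `RandomForest.pkl` reusable for serving.

```python
import io
import pickle

# Stand-in for a trained model object
obj = {'model': 'RandomForest', 'n_estimators': 10}

# Serialize into a buffer instead of a file, then deserialize
buf = io.BytesIO()
pickle.dump(obj, buf)
buf.seek(0)
restored = pickle.load(buf)
```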
| model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tensorizing Interpolators
#
# This notebook introduces some tensor algebra concepts for converting calculations inside for-loops into a single calculation over an entire tensor. It is assumed that you have some familiarity with what interpolation functions are used for in `pyhf`.
#
# To get started, we'll load up some functions we wrote whose job is to generate sets of histograms and alphas that we will compute interpolations for. This allows us to generate random, structured input data that we can use to test the tensorized form of the interpolation function against the original one we wrote. For now, we will consider only the `numpy` backend for simplicity, but `np` can be replaced with `pyhf.tensorlib` to achieve identical functionality.
#
# The function `random_histosets_alphasets_pair` will produce a pair `(histogramsets, alphasets)` of histograms and alphas for those histograms that represents the type of input we wish to interpolate on.
# +
import numpy as np
def random_histosets_alphasets_pair(nsysts = 150, nhistos_per_syst_upto = 300, nalphas = 1, nbins_upto = 1):
def generate_shapes(histogramssets,alphasets):
h_shape = [len(histogramssets),0,0,0]
a_shape = (len(alphasets),max(map(len,alphasets)))
for hs in histogramssets:
h_shape[1] = max(h_shape[1],len(hs))
for h in hs:
h_shape[2] = max(h_shape[2],len(h))
for sh in h:
h_shape[3] = max(h_shape[3],len(sh))
return tuple(h_shape),a_shape
def filled_shapes(histogramssets,alphasets):
# pad our shapes with NaNs
histos, alphas = generate_shapes(histogramssets,alphasets)
histos, alphas = np.ones(histos) * np.nan, np.ones(alphas) * np.nan
for i,syst in enumerate(histogramssets):
for j,sample in enumerate(syst):
for k,variation in enumerate(sample):
histos[i,j,k,:len(variation)] = variation
for i,alphaset in enumerate(alphasets):
alphas[i,:len(alphaset)] = alphaset
return histos,alphas
nsyst_histos = np.random.randint(1, 1+nhistos_per_syst_upto, size=nsysts)
nhistograms = [np.random.randint(1, nbins_upto+1, size=n) for n in nsyst_histos]
random_alphas = [np.random.uniform(-1, 1,size=nalphas) for n in nsyst_histos]
random_histogramssets = [
[# all histos affected by systematic $nh
[# sample $i, systematic $nh
np.random.uniform(10*i+j,10*i+j+1, size = nbin).tolist() for j in range(3)
] for i,nbin in enumerate(nh)
] for nh in nhistograms
]
h,a = filled_shapes(random_histogramssets,random_alphas)
return h,a
# -
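# The NaN-padding idea inside `filled_shapes` can be seen on a minimal ragged example (illustrative data): shorter rows are padded with NaN so everything fits a single rectangular tensor.

```python
import numpy as np

# Two rows of different lengths, padded into one rectangular array
ragged = [[1.0, 2.0, 3.0], [4.0]]
padded = np.full((len(ragged), max(map(len, ragged))), np.nan)
for i, row in enumerate(ragged):
    padded[i, :len(row)] = row   # missing entries stay NaN
```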
# ## The (slow) interpolations
#
# In all cases, the way we do interpolations is as follows:
#
# 1. Loop over both the `histogramssets` and `alphasets` simultaneously (e.g. using python's `zip()`)
# 2. Loop over all histogram sets in the set of histogram sets that correspond to the histograms affected by a given systematic
# 3. Loop over all of the alphas in the set of alphas
# 4. Loop over all the bins in the histogram sets simultaneously (e.g. using python's `zip()`)
# 5. Apply the interpolation across the same bin index
#
# This is already exhausting to think about, so let's put this in code form. Depending on the kind of interpolation being done, we'll pass in `func` as an argument to the top-level interpolation loop to switch between linear (`interpcode=0`) and non-linear (`interpcode=1`).
def interpolation_looper(histogramssets, alphasets, func):
all_results = []
for histoset, alphaset in zip(histogramssets, alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
for down,nom,up in zip(histo[0],histo[1],histo[2]):
v = func(down, nom, up, alpha)
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
# And we can also define our linear and non-linear interpolations we'll consider in this notebook that we wish to tensorize.
# +
def interpolation_linear(histogramssets,alphasets):
def summand(down, nom, up, alpha):
delta_up = up - nom
delta_down = nom - down
if alpha > 0:
delta = delta_up*alpha
else:
delta = delta_down*alpha
return nom + delta
return interpolation_looper(histogramssets, alphasets, summand)
def interpolation_nonlinear(histogramssets,alphasets):
def product(down, nom, up, alpha):
delta_up = up/nom
delta_down = down/nom
if alpha > 0:
delta = delta_up**alpha
else:
delta = delta_down**(-alpha)
return nom*delta
return interpolation_looper(histogramssets, alphasets, product)
# -
# We will also define a helper function that allows us to pass in two functions we wish to compare the outputs for:
def compare_fns(func1, func2):
h,a = random_histosets_alphasets_pair()
def _func_runner(func, histssets, alphasets):
return np.asarray(func(histssets,alphasets))
old = _func_runner(func1, h, a)
new = _func_runner(func2, h, a)
return (np.all(old[~np.isnan(old)] == new[~np.isnan(new)]), (h,a))
# For the rest of the notebook, we will detail in explicit form how the linear interpolator gets tensorized, step-by-step. The same sequence of steps will be shown for the non-linear interpolator -- but it is left up to the reader to understand the steps.
# ## Tensorizing the Linear Interpolator
#
# ### Step 0
#
# Step 0 requires converting the innermost conditional check on `alpha > 0` into something tensorizable. This also means the calculation itself is going to become tensorized. So we will convert from
#
# ```python
# if alpha > 0:
# delta = delta_up*alpha
# else:
# delta = delta_down*alpha
# ```
#
# to
#
# ```python
# delta = np.where(alpha > 0, delta_up*alpha, delta_down*alpha)
# ```
#
# Let's make that change now, and let's check to make sure we still do the calculation correctly.
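# As a standalone sketch first (scalar `alpha` and made-up deltas): `np.where` selects the same branch as the if/else.

```python
import numpy as np

# Made-up scalar deltas; for each sign of alpha, np.where picks the
# same branch as the original conditional
delta_up, delta_down = 2.0, 5.0
for alpha in (-0.5, 0.5):
    branched = delta_up * alpha if alpha > 0 else delta_down * alpha
    tensorized = np.where(alpha > 0, delta_up * alpha, delta_down * alpha)
    assert np.isclose(branched, tensorized)
```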
# get the internal calculation to use tensorlib backend
def new_interpolation_linear_step0(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
for down,nom,up in zip(histo[0],histo[1],histo[2]):
delta_up = up - nom
delta_down = nom - down
delta = np.where(alpha > 0, delta_up*alpha, delta_down*alpha)
v = nom + delta
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
# And does the calculation still match?
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step0)
print(result)
# %%timeit
interpolation_linear(h,a)
# %%timeit
new_interpolation_linear_step0(h,a)
# Great! We're a little bit slower right now, but that's expected. We're just getting started.
#
# ### Step 1
#
# In this step, we would like to remove the innermost `zip()` call over the histogram bins by calculating the interpolation between the histograms in one fell swoop. This means, instead of writing something like
#
# ```python
# for down,nom,up in zip(histo[0],histo[1],histo[2]):
# delta_up = up - nom
# ...
# ```
#
# one can instead write
#
# ```python
# delta_up = histo[2] - histo[1]
# ...
# ```
#
# taking advantage of the automatic broadcasting of operations on input tensors. This sort of feature of the tensor backends allows us to speed up code, such as interpolation.
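# A minimal standalone illustration of that broadcasting (toy arrays): subtracting whole arrays reproduces the per-bin `zip()` loop in one expression.

```python
import numpy as np

# Toy up/nominal histograms with three bins each
up = np.array([11.0, 22.0, 33.0])
nom = np.array([10.0, 20.0, 30.0])

looped = [u - n for u, n in zip(up, nom)]  # per-bin loop
vectorized = up - nom                      # same result, one array op
```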
# update the delta variations to remove the zip() call and remove most-nested loop
def new_interpolation_linear_step1(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
deltas_up = histo[2]-histo[1]
deltas_dn = histo[1]-histo[0]
calc_deltas = np.where(alpha > 0, deltas_up*alpha, deltas_dn*alpha)
v = histo[1] + calc_deltas
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
# And does the calculation still match?
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step1)
print(result)
# %%timeit
interpolation_linear(h,a)
# %%timeit
new_interpolation_linear_step1(h,a)
# Great!
#
# ### Step 2
#
# In this step, we would like to move the giant array of the deltas calculated to the beginning -- outside of all loops -- and then only take a subset of it for the calculation itself. This allows us to figure out the entire structure of the input for the rest of the calculations as we slowly move towards including `einsum()` calls (einstein summation). This means we would like to go from
#
#
# ```python
# for histo in histoset:
# delta_up = histo[2] - histo[1]
# ...
# ```
#
# to
#
# ```python
# all_deltas = ...
# for nh, histo in enumerate(histoset):
# deltas = all_deltas[nh]
# ...
# ```
#
# Again, we are taking advantage of the automatic broadcasting of operations on input tensors to calculate all the deltas in a single action.
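# On a tiny made-up `(set, histogram, variation, bin)` tensor, the slicing above computes every up/down delta in one shot:

```python
import numpy as np

# 2 sets x 2 histograms x 3 variations (down/nom/up) x 2 bins
hsets = np.arange(2 * 2 * 3 * 2, dtype=float).reshape(2, 2, 3, 2)

all_deltas_up = hsets[:, :, 2] - hsets[:, :, 1]  # up minus nominal, all at once
all_deltas_dn = hsets[:, :, 1] - hsets[:, :, 0]  # nominal minus down
```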
# figure out the giant array of all deltas at the beginning and only take subsets of it for the calculation
def new_interpolation_linear_step2(histogramssets,alphasets):
all_results = []
allset_all_histo_deltas_up = histogramssets[:,:,2] - histogramssets[:,:,1]
allset_all_histo_deltas_dn = histogramssets[:,:,1] - histogramssets[:,:,0]
for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
set_result = []
all_histo_deltas_up = allset_all_histo_deltas_up[nset]
all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
for nh,histo in enumerate(histoset):
alpha_deltas = []
for alpha in alphaset:
alpha_result = []
deltas_up = all_histo_deltas_up[nh]
deltas_dn = all_histo_deltas_dn[nh]
calc_deltas = np.where(alpha > 0, deltas_up*alpha, deltas_dn*alpha)
alpha_deltas.append(calc_deltas)
set_result.append([histo[1]+ d for d in alpha_deltas])
all_results.append(set_result)
return all_results
# And does the calculation still match?
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step2)
print(result)
# %%timeit
interpolation_linear(h,a)
# %%timeit
new_interpolation_linear_step2(h,a)
# Great!
#
# ### Step 3
#
# In this step, we get to introduce einstein summation to generalize the calculations we perform across many dimensions in a more concise, straightforward way. See [this blog post](https://rockt.github.io/2018/04/30/einsum) for some more details on einstein summation notation. In short, it allows us to write
#
# $$
# c_i = \sum_j \sum_k A_{ij} B_{jk} \qquad \rightarrow \qquad \texttt{einsum("ij,jk->i", A, B)}
# $$
#
# in a much more elegant way to express many kinds of common tensor operations such as dot products, transposes, outer products, and so on. This step is generally the hardest as one needs to figure out the corresponding `einsum` that keeps the calculation preserved (and matching). To some extent it requires a lot of trial and error until you get a feel for how einstein summation notation works.
#
# As a concrete example of a conversion, we wish to go from something like
#
# ```python
# for nh,histo in enumerate(histoset):
# for alpha in alphaset:
# deltas_up = all_histo_deltas_up[nh]
# deltas_dn = all_histo_deltas_dn[nh]
# calc_deltas = np.where(alpha > 0, deltas_up*alpha, deltas_dn*alpha)
# ...
# ```
#
# to get rid of the loop over `alpha`
#
# ```python
# for nh,histo in enumerate(histoset):
# alphas_times_deltas_up = np.einsum('i,j->ij',alphaset,all_histo_deltas_up[nh])
# alphas_times_deltas_dn = np.einsum('i,j->ij',alphaset,all_histo_deltas_dn[nh])
# masks = np.einsum('i,j->ij',alphaset > 0,np.ones_like(all_histo_deltas_dn[nh]))
#
# alpha_deltas = np.where(masks,alphas_times_deltas_up, alphas_times_deltas_dn)
# ...
# ```
#
# In this particular case, we need an outer product that multiplies across the `alphaset` to the corresponding `histoset` for the up/down variations. Then we just need to select from either the up variation calculation or the down variation calculation based on the sign of alpha. Try to convince yourself that the einstein summation does what the for-loop does, but a little bit more concisely, and perhaps more clearly! How does the function look now?
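# As a standalone check on tiny made-up arrays, the `'i,j->ij'` einsum used above is exactly an outer product:

```python
import numpy as np

# Toy alphas and per-bin deltas
alphaset = np.array([-1.0, 0.5])
deltas_up = np.array([1.0, 2.0, 3.0])

# Outer product: result[a, b] = alphaset[a] * deltas_up[b]
outer = np.einsum('i,j->ij', alphaset, deltas_up)
```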
# remove the loop over alphas, starts using einsum to help generalize to more dimensions
def new_interpolation_linear_step3(histogramssets,alphasets):
all_results = []
allset_all_histo_deltas_up = histogramssets[:,:,2] - histogramssets[:,:,1]
allset_all_histo_deltas_dn = histogramssets[:,:,1] - histogramssets[:,:,0]
for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
set_result = []
all_histo_deltas_up = allset_all_histo_deltas_up[nset]
all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
for nh,histo in enumerate(histoset):
alphas_times_deltas_up = np.einsum('i,j->ij',alphaset,all_histo_deltas_up[nh])
alphas_times_deltas_dn = np.einsum('i,j->ij',alphaset,all_histo_deltas_dn[nh])
masks = np.einsum('i,j->ij',alphaset > 0,np.ones_like(all_histo_deltas_dn[nh]))
alpha_deltas = np.where(masks,alphas_times_deltas_up, alphas_times_deltas_dn)
set_result.append([histo[1]+ d for d in alpha_deltas])
all_results.append(set_result)
return all_results
# And does the calculation still match?
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step3)
print(result)
# %%timeit
interpolation_linear(h,a)
# %%timeit
new_interpolation_linear_step3(h,a)
# Great! Note that we've been getting a little bit slower during these steps. It will all pay off in the end when we're fully tensorized! A lot of the internal steps are overkill with the heavy einstein summation and broadcasting at the moment, especially for how many loops in we are.
#
# ### Step 4
#
# Now in this step, we will move the einstein summations to the outer loop, so that we're calculating it once! This is the big step, but a little bit easier because all we're doing is adding extra dimensions into the calculation. The underlying calculation won't have changed. At this point, we'll also rename from `i` and `j` to `a` and `b` for `alpha` and `bin` (as in the bin in the histogram). To continue the notation as well, here's a summary of the dimensions involved:
#
# - `s` will be for the set under consideration (e.g. the modifier)
# - `a` will be for the alpha variation
# - `h` will be for the histogram affected by the modifier
# - `b` will be for the bin of the histogram
#
# So we wish to move the `einsum` code from
#
# ```python
# for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
# ...
#
# for nh,histo in enumerate(histoset):
# alphas_times_deltas_up = np.einsum('i,j->ij',alphaset,all_histo_deltas_up[nh])
# ...
# ```
#
# to
#
# ```python
# all_alphas_times_deltas_up = np.einsum('...',alphaset,all_histo_deltas_up)
# for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
# ...
#
# for nh,histo in enumerate(histoset):
# ...
# ```
#
# So how does this new function look?
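# Before looking at it, here is a tiny standalone check (random toy tensors) that the batched `'sa,shb->shab'` einsum matches the explicit loops it replaces:

```python
import numpy as np

# Toy shapes: 2 sets, 2 alphas per set, 2 histograms, 3 bins
alphasets = np.random.rand(2, 2)       # (s, a)
deltas = np.random.rand(2, 2, 3)       # (s, h, b)

batched = np.einsum('sa,shb->shab', alphasets, deltas)

# The same thing with explicit loops: result[s, h, a, b] = alpha[s, a] * delta[s, h, b]
looped = np.empty_like(batched)
for s in range(2):
    for h in range(2):
        for a in range(2):
            looped[s, h, a] = alphasets[s, a] * deltas[s, h]
```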
# move the einsums to outer loops to get ready to get rid of all loops
def new_interpolation_linear_step4(histogramssets,alphasets):
allset_all_histo_deltas_up = histogramssets[:,:,2] - histogramssets[:,:,1]
allset_all_histo_deltas_dn = histogramssets[:,:,1] - histogramssets[:,:,0]
allset_all_histo_nom = histogramssets[:,:,1]
allsets_all_histos_alphas_times_deltas_up = np.einsum('sa,shb->shab',alphasets,allset_all_histo_deltas_up)
allsets_all_histos_alphas_times_deltas_dn = np.einsum('sa,shb->shab',alphasets,allset_all_histo_deltas_dn)
allsets_all_histos_masks = np.einsum('sa,s...u->s...au',alphasets > 0,np.ones_like(allset_all_histo_deltas_dn))
allsets_all_histos_deltas = np.where(allsets_all_histos_masks,allsets_all_histos_alphas_times_deltas_up, allsets_all_histos_alphas_times_deltas_dn)
all_results = []
for nset,histoset in enumerate(histogramssets):
all_histos_deltas = allsets_all_histos_deltas[nset]
set_result = []
for nh,histo in enumerate(histoset):
set_result.append([d + histoset[nh,1] for d in all_histos_deltas[nh]])
all_results.append(set_result)
return all_results
# And does the calculation still match?
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step4)
print(result)
# %%timeit
interpolation_linear(h,a)
# %%timeit
new_interpolation_linear_step4(h,a)
# Great! And look at that huge speed up in time already, just from moving the multiple, heavy einstein summation calculations up through the loops. We still have some more optimizing to do as we still have explicit loops in our code. Let's keep at it, we're almost there!
#
# ### Step 5
#
# The hard part is mostly over. We have to now think about the nominal variations. Recall that we were trying to add the nominals to the deltas in order to compute the new value. In practice, we'll return the delta variation only, but we'll show you how to get rid of this last loop. In this case, we want to figure out how to change code like
#
# ```python
# all_results = []
# for nset,histoset in enumerate(histogramssets):
# all_histos_deltas = allsets_all_histos_deltas[nset]
# set_result = []
# for nh,histo in enumerate(histoset):
# set_result.append([d + histoset[nh,1] for d in all_histos_deltas[nh]])
# all_results.append(set_result)
# ```
#
# to get rid of that innermost loop
#
# ```python
# all_results = []
# for nset,histoset in enumerate(histogramssets):
# # look ma, no more loops inside!
# ```
#
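# One way to drop that inner loop is to tile the nominal histograms across the alpha axis with an einsum against a vector of ones, so the addition broadcasts. A minimal sketch with made-up shapes:

```python
import numpy as np

# hypothetical: 3 alphas, 4 histograms, 5 bins per histogram
alphaset = np.random.rand(3)
noms = np.random.rand(4, 5)

# repeat each nominal histogram once per alpha: (h, n) -> (h, a, n)
noms_repeated = np.einsum('a,hn->han', np.ones_like(alphaset), noms)
print(noms_repeated.shape)  # (4, 3, 5)

# every alpha slice is just a copy of the nominals
assert np.allclose(noms_repeated[:, 0, :], noms)
assert np.allclose(noms_repeated[:, 2, :], noms)
```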
# So how does this look?
# slowly getting rid of our loops to build the right output tensor -- gotta think about nominals
def new_interpolation_linear_step5(histogramssets,alphasets):
allset_all_histo_deltas_up = histogramssets[:,:,2] - histogramssets[:,:,1]
allset_all_histo_deltas_dn = histogramssets[:,:,1] - histogramssets[:,:,0]
allset_all_histo_nom = histogramssets[:,:,1]
allsets_all_histos_alphas_times_deltas_up = np.einsum('sa,shb->shab',alphasets,allset_all_histo_deltas_up)
allsets_all_histos_alphas_times_deltas_dn = np.einsum('sa,shb->shab',alphasets,allset_all_histo_deltas_dn)
allsets_all_histos_masks = np.einsum('sa,s...u->s...au',alphasets > 0,np.ones_like(allset_all_histo_deltas_dn))
allsets_all_histos_deltas = np.where(allsets_all_histos_masks,allsets_all_histos_alphas_times_deltas_up, allsets_all_histos_alphas_times_deltas_dn)
all_results = []
for nset,(_,alphaset) in enumerate(zip(histogramssets,alphasets)):
all_histos_deltas = allsets_all_histos_deltas[nset]
noms = histogramssets[nset,:,1]
all_histos_noms_repeated = np.einsum('a,hn->han',np.ones_like(alphaset),noms)
set_result = all_histos_deltas + all_histos_noms_repeated
all_results.append(set_result)
return all_results
# And does the calculation still match?
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step5)
print(result)
# %%timeit
interpolation_linear(h,a)
# %%timeit
new_interpolation_linear_step5(h,a)
# Fantastic! And look at the speed up. We're already faster than the for-loop and we're not even done yet.
#
# ### Step 6
#
# The final frontier. Also probably the best Star Wars episode. In any case, we have one more for-loop that needs to die in a slab of carbonite. This should be much easier now that you're more comfortable with tensor broadcasting and Einstein summations.
#
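# The only remaining trick is the sign mask: broadcast `alphas > 0` to the full output shape with an einsum against ones, then use `np.where` to pick the up or down variation per entry. A minimal sketch with toy shapes and values:

```python
import numpy as np

# toy shapes: 2 sets, 3 alphas, 4 histograms, 5 bins
alphasets = np.array([[-1.0, 0.5, 2.0],
                      [0.1, -0.2, 0.3]])
ones = np.ones((2, 4, 5))

# broadcast the per-alpha sign to shape (s, h, a, b); x is a dummy index
masks = np.einsum('sa,sxu->sxau', alphasets > 0, ones)
print(masks.shape)  # (2, 4, 3, 5)

up = np.full((2, 4, 3, 5), 10.0)
dn = np.full((2, 4, 3, 5), -10.0)
picked = np.where(masks, up, dn)
# alpha = -1.0 (set 0, alpha index 0) selects the "down" value everywhere
assert np.all(picked[0, :, 0, :] == -10.0)
# alpha = 0.5 (set 0, alpha index 1) selects the "up" value everywhere
assert np.all(picked[0, :, 1, :] == 10.0)
```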
# What does the function look like now?
def new_interpolation_linear_step6(histogramssets,alphasets):
allset_allhisto_deltas_up = histogramssets[:,:,2] - histogramssets[:,:,1]
allset_allhisto_deltas_dn = histogramssets[:,:,1] - histogramssets[:,:,0]
allset_allhisto_nom = histogramssets[:,:,1]
    # x is a dummy index in the einsum below
allsets_allhistos_alphas_times_deltas_up = np.einsum('sa,shb->shab',alphasets,allset_allhisto_deltas_up)
allsets_allhistos_alphas_times_deltas_dn = np.einsum('sa,shb->shab',alphasets,allset_allhisto_deltas_dn)
allsets_allhistos_masks = np.einsum('sa,sxu->sxau',np.where(alphasets > 0, np.ones(alphasets.shape), np.zeros(alphasets.shape)),np.ones(allset_allhisto_deltas_dn.shape))
allsets_allhistos_deltas = np.where(allsets_allhistos_masks,allsets_allhistos_alphas_times_deltas_up, allsets_allhistos_alphas_times_deltas_dn)
allsets_allhistos_noms_repeated = np.einsum('sa,shb->shab',np.ones(alphasets.shape),allset_allhisto_nom)
set_results = allsets_allhistos_deltas + allsets_allhistos_noms_repeated
return set_results
# And does the calculation still match?
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step6)
print(result)
# %%timeit
interpolation_linear(h,a)
# %%timeit
new_interpolation_linear_step6(h,a)
# And we're done tensorizing it. There are some more improvements that could be made to make this interpolation calculation even more robust -- but for now we're done.
#
# ## Tensorizing the Non-Linear Interpolator
#
# This is very, very similar to what we've done for the case of the linear interpolator. As such, we will provide the resulting functions for each step, and you can see how things perform all the way at the bottom. Enjoy and learn at your own pace!
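# As a refresher before the code, the nonlinear (multiplicative) interpolation scales each nominal bin by a ratio raised to the power |alpha|: the up ratio for alpha > 0, the down ratio otherwise. A one-bin sketch with made-up numbers:

```python
import numpy as np

down, nom, up = 8.0, 10.0, 15.0

def nonlinear_interp(alpha):
    # up-ratio to the power alpha for alpha > 0, down-ratio to |alpha| otherwise
    delta = np.where(alpha > 0, (up / nom) ** alpha, (down / nom) ** abs(alpha))
    return nom * delta

print(nonlinear_interp(1.0))   # recovers the up histogram bin
print(nonlinear_interp(-1.0))  # recovers the down histogram bin
print(nonlinear_interp(0.0))   # recovers the nominal
```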
# +
def interpolation_nonlinear(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
for down,nom,up in zip(histo[0],histo[1],histo[2]):
delta_up = up/nom
delta_down = down/nom
if alpha > 0:
delta = delta_up**alpha
else:
delta = delta_down**(-alpha)
v = nom*delta
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
def new_interpolation_nonlinear_step0(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
for down,nom,up in zip(histo[0],histo[1],histo[2]):
delta_up = up/nom
delta_down = down/nom
delta = np.where(alpha > 0, np.power(delta_up, alpha), np.power(delta_down, np.abs(alpha)))
v = nom*delta
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
def new_interpolation_nonlinear_step1(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
deltas_up = np.divide(histo[2], histo[1])
deltas_down = np.divide(histo[0], histo[1])
bases = np.where(alpha > 0, deltas_up, deltas_down)
exponents = np.abs(alpha)
calc_deltas = np.power(bases, exponents)
v = histo[1] * calc_deltas
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
def new_interpolation_nonlinear_step2(histogramssets,alphasets):
all_results = []
allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], histogramssets[:,:,1])
allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], histogramssets[:,:,1])
for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
set_result = []
all_histo_deltas_up = allset_all_histo_deltas_up[nset]
all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
for nh,histo in enumerate(histoset):
alpha_deltas = []
for alpha in alphaset:
alpha_result = []
deltas_up = all_histo_deltas_up[nh]
deltas_down = all_histo_deltas_dn[nh]
bases = np.where(alpha > 0, deltas_up, deltas_down)
exponents = np.abs(alpha)
calc_deltas = np.power(bases, exponents)
alpha_deltas.append(calc_deltas)
set_result.append([histo[1]*d for d in alpha_deltas])
all_results.append(set_result)
return all_results
def new_interpolation_nonlinear_step3(histogramssets,alphasets):
all_results = []
allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], histogramssets[:,:,1])
allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], histogramssets[:,:,1])
for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
set_result = []
all_histo_deltas_up = allset_all_histo_deltas_up[nset]
all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
for nh,histo in enumerate(histoset):
            # bases and exponents need to have an outer product, to essentially tile or repeat over rows/cols
bases_up = np.einsum('a,b->ab', np.ones(alphaset.shape), all_histo_deltas_up[nh])
bases_dn = np.einsum('a,b->ab', np.ones(alphaset.shape), all_histo_deltas_dn[nh])
exponents = np.einsum('a,b->ab', np.abs(alphaset), np.ones(all_histo_deltas_up[nh].shape))
masks = np.einsum('a,b->ab',alphaset > 0,np.ones(all_histo_deltas_dn[nh].shape))
bases = np.where(masks, bases_up, bases_dn)
alpha_deltas = np.power(bases, exponents)
set_result.append([histo[1]*d for d in alpha_deltas])
all_results.append(set_result)
return all_results
def new_interpolation_nonlinear_step4(histogramssets,alphasets):
all_results = []
allset_all_histo_nom = histogramssets[:,:,1]
allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], allset_all_histo_nom)
allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], allset_all_histo_nom)
bases_up = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_up)
bases_dn = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_dn)
exponents = np.einsum('sa,shb->shab', np.abs(alphasets), np.ones(allset_all_histo_deltas_up.shape))
masks = np.einsum('sa,shb->shab',alphasets > 0,np.ones(allset_all_histo_deltas_up.shape))
bases = np.where(masks, bases_up, bases_dn)
allsets_all_histos_deltas = np.power(bases, exponents)
all_results = []
for nset,histoset in enumerate(histogramssets):
all_histos_deltas = allsets_all_histos_deltas[nset]
set_result = []
for nh,histo in enumerate(histoset):
set_result.append([histoset[nh,1]*d for d in all_histos_deltas[nh]])
all_results.append(set_result)
return all_results
def new_interpolation_nonlinear_step5(histogramssets,alphasets):
all_results = []
allset_all_histo_nom = histogramssets[:,:,1]
allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], allset_all_histo_nom)
allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], allset_all_histo_nom)
bases_up = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_up)
bases_dn = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_dn)
exponents = np.einsum('sa,shb->shab', np.abs(alphasets), np.ones(allset_all_histo_deltas_up.shape))
masks = np.einsum('sa,shb->shab',alphasets > 0,np.ones(allset_all_histo_deltas_up.shape))
bases = np.where(masks, bases_up, bases_dn)
allsets_all_histos_deltas = np.power(bases, exponents)
all_results = []
for nset,(_,alphaset) in enumerate(zip(histogramssets,alphasets)):
all_histos_deltas = allsets_all_histos_deltas[nset]
noms = allset_all_histo_nom[nset]
all_histos_noms_repeated = np.einsum('a,hn->han',np.ones_like(alphaset),noms)
set_result = all_histos_deltas * all_histos_noms_repeated
all_results.append(set_result)
return all_results
def new_interpolation_nonlinear_step6(histogramssets,alphasets):
all_results = []
allset_all_histo_nom = histogramssets[:,:,1]
allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], allset_all_histo_nom)
allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], allset_all_histo_nom)
bases_up = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_up)
bases_dn = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_dn)
exponents = np.einsum('sa,shb->shab', np.abs(alphasets), np.ones(allset_all_histo_deltas_up.shape))
masks = np.einsum('sa,shb->shab',alphasets > 0,np.ones(allset_all_histo_deltas_up.shape))
bases = np.where(masks, bases_up, bases_dn)
allsets_all_histos_deltas = np.power(bases, exponents)
allsets_allhistos_noms_repeated = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_nom)
set_results = allsets_all_histos_deltas * allsets_allhistos_noms_repeated
return set_results
# -
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step0)
print(result)
# %%timeit
interpolation_nonlinear(h,a)
# %%timeit
new_interpolation_nonlinear_step0(h,a)
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step1)
print(result)
# %%timeit
interpolation_nonlinear(h,a)
# %%timeit
new_interpolation_nonlinear_step1(h,a)
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step2)
print(result)
# %%timeit
interpolation_nonlinear(h,a)
# %%timeit
new_interpolation_nonlinear_step2(h,a)
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step3)
print(result)
# %%timeit
interpolation_nonlinear(h,a)
# %%timeit
new_interpolation_nonlinear_step3(h,a)
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step4)
print(result)
# %%timeit
interpolation_nonlinear(h,a)
# %%timeit
new_interpolation_nonlinear_step4(h,a)
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step5)
print(result)
# %%timeit
interpolation_nonlinear(h,a)
# %%timeit
new_interpolation_nonlinear_step5(h,a)
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step6)
print(result)
# %%timeit
interpolation_nonlinear(h,a)
# %%timeit
new_interpolation_nonlinear_step6(h,a)
# + [markdown] id="view-in-github"
# <a href="https://colab.research.google.com/github/7201krap/PYTHON_applied_data_science/blob/main/search_dep_score.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="noZ-BsUFikmK"
from sklearn import datasets
# How are we going to evaluate the performance?
# 1. accuracy
from sklearn import metrics
# 2. f1 score
from sklearn.metrics import f1_score
# Machine learning models
# Linear Regression
# url : https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
from sklearn.linear_model import LinearRegression
# SVM
# url: https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
from sklearn import svm
# KNN
# url: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
from sklearn.neighbors import KNeighborsClassifier
# Decision Tree
# url: https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
from sklearn.tree import DecisionTreeClassifier
# Random Forest
# url: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
from sklearn.ensemble import RandomForestClassifier
# Logistic Classifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import learning_curve, RandomizedSearchCV, GridSearchCV
from sklearn.model_selection import train_test_split, KFold
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import numpy as np
import matplotlib.pyplot as plt
# PCA
from sklearn.decomposition import PCA
# Linear Regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.model_selection import learning_curve
from sklearn.metrics import classification_report
# + [markdown] id="q-1HtfE0ikmY"
# # Preprocessing
# + id="WPB84Blaikmg"
url = '../../data/0&1/oversampling/oversampled_panic_score.csv'
sampled_panic_score = pd.read_csv(url)
X_s = sampled_panic_score.copy()
del X_s['panic_score']
y_s = sampled_panic_score['panic_score']
# + colab={"base_uri": "https://localhost:8080/"} id="pNGVcPBgikmg" outputId="2bc1e7b1-90af-4144-a34a-cc4d303448c8"
y_s.value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="tH7UnO0qikmh" outputId="f40a56a4-803b-434b-fbb0-fc0549e90a18"
print(X_s)
print(y_s)
# + [markdown] id="6HHxtbMGiknD"
# # 2. sampled
# + [markdown] id="9vMBdAsPiknD"
# # SVM
# + [markdown] id="EM5IQ0d7iknD"
# ## Seed 100
# + id="t1YaC2mWiknF"
seed = 100
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="IR23YcWqiknF" outputId="b35de200-aa67-4995-e9e9-9610cb7df411"
svm_hyper_params = [
{
'gamma': np.logspace(-4, -1, 4),
'C': np.logspace(-3, 1, 5),
'kernel': ['linear', 'poly', 'rbf', 'sigmoid']
}
]
# specify model
svm_model = svm.SVC()
# set up GridSearchCV()
svm_model_cv = GridSearchCV(estimator = svm_model,
param_grid = svm_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
svm_model_cv.fit(X_train, y_train)
print("best hyper parameters", svm_model_cv.best_params_)
svm_y_pred = svm_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, svm_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, svm_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, svm_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, svm_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="1azGgQuhiknF" outputId="25452506-8a87-4106-ae1d-8285dfe6f99f"
# ------ HERE ------
# plug in suitable hyper-parameters
# (only this part needs to change)
plot_model = svm.SVC(kernel='rbf', C=10, gamma=0.1)
train_sizes_seed100, train_scores_seed100, val_scores_seed100 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
                                                    n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed100 = np.mean(train_scores_seed100, axis = 1)
train_std_seed100 = np.std(train_scores_seed100, axis=1)
val_mean_seed100 = np.mean(val_scores_seed100, axis=1)
val_std_seed100 = np.std(val_scores_seed100, axis=1)
plt.plot(train_sizes_seed100, train_mean_seed100, label='Training accuracy')
plt.plot(train_sizes_seed100, val_mean_seed100, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="Hb98EDSUiknF"
# ## Seed 1234
# + id="vL3KK29_iknH"
seed = 1234
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="b8LQsEtHiknH" outputId="58301671-74b0-44e3-fbf3-c0fd46c098e6"
svm_hyper_params = [
{
'gamma': np.logspace(-4, -1, 4),
'C': np.logspace(-3, 1, 5),
'kernel': ['linear', 'poly', 'rbf', 'sigmoid']
}
]
# specify model
svm_model = svm.SVC()
# set up GridSearchCV()
svm_model_cv = GridSearchCV(estimator = svm_model,
param_grid = svm_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
svm_model_cv.fit(X_train, y_train)
print("best hyper parameters", svm_model_cv.best_params_)
svm_y_pred = svm_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, svm_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, svm_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, svm_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, svm_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="vWLKatggiknH" outputId="387b6628-99e6-465a-a071-cbe4ee832660"
plot_model = svm.SVC(kernel='rbf', C=10, gamma=0.1)
train_sizes_seed1234, train_scores_seed1234, val_scores_seed1234 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
                                                    n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed1234 = np.mean(train_scores_seed1234, axis = 1)
train_std_seed1234 = np.std(train_scores_seed1234, axis=1)
val_mean_seed1234 = np.mean(val_scores_seed1234, axis=1)
val_std_seed1234 = np.std(val_scores_seed1234, axis=1)
plt.plot(train_sizes_seed1234, train_mean_seed1234, label='Training accuracy')
plt.plot(train_sizes_seed1234, val_mean_seed1234, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="UyRKzIf2iknI"
# ## Seed 500
# + id="8FksC80_iknJ"
seed = 500
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="pZ3Yebc1iknJ" outputId="76cb7352-ffea-40be-c474-1b2166c65580"
svm_hyper_params = [
{
'gamma': np.logspace(-4, -1, 4),
'C': np.logspace(-3, 1, 5),
'kernel': ['linear', 'poly', 'rbf', 'sigmoid']
}
]
# specify model
svm_model = svm.SVC()
# set up GridSearchCV()
svm_model_cv = GridSearchCV(estimator = svm_model,
param_grid = svm_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
svm_model_cv.fit(X_train, y_train)
print("best hyper parameters", svm_model_cv.best_params_)
svm_y_pred = svm_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, svm_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, svm_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, svm_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, svm_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="WImF0iypiknL" outputId="708f0e37-9519-4b1a-8b74-a02cf6114aa0"
plot_model = svm.SVC(kernel='rbf', C=10, gamma=0.1)
train_sizes_seed500, train_scores_seed500, val_scores_seed500 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
                                                    n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed500 = np.mean(train_scores_seed500, axis = 1)
train_std_seed500 = np.std(train_scores_seed500, axis=1)
val_mean_seed500 = np.mean(val_scores_seed500, axis=1)
val_std_seed500 = np.std(val_scores_seed500, axis=1)
plt.plot(train_sizes_seed500, train_mean_seed500, label='Training accuracy')
plt.plot(train_sizes_seed500, val_mean_seed500, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + id="zD8THOEDiknL"
# learning curve considering different seeds
# + id="F2a5wjMgiknL"
acc_avg = list()
acc_se = list()
val_avg = list()
val_se = list()
for i in range(len(train_sizes_seed100)):
acc_avg.append(np.mean([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]))
acc_se .append(np.std ([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]) / np.sqrt(3))
val_avg.append(np.mean([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]))
val_se .append(np.std ([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]) / np.sqrt(3))
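# The per-point loop above can equivalently be vectorized by stacking the per-seed curves into one array and reducing over the seed axis. A sketch with three hypothetical accuracy curves (the numbers are made up):

```python
import numpy as np

# three hypothetical accuracy curves (one per seed), 5 training sizes each
curves = np.array([
    [0.70, 0.74, 0.78, 0.80, 0.81],
    [0.68, 0.73, 0.77, 0.79, 0.82],
    [0.72, 0.75, 0.79, 0.81, 0.83],
])
avg = curves.mean(axis=0)                        # mean across seeds, per training size
se = curves.std(axis=0) / np.sqrt(len(curves))   # standard error across seeds
print(avg.shape, se.shape)  # (5,) (5,)
```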
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="pBVfsNM1iknM" outputId="84d7c9b3-bf2b-47c4-9f9b-944fc97e82db"
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(train_sizes_seed100, acc_avg, c='darkred', label='Training accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(acc_avg, acc_se), np.add(acc_avg, acc_se), color='lightcoral', alpha=0.5)
ax.plot(train_sizes_seed100, val_avg, c='darkgreen', label='Validation accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(val_avg, val_se), np.add(val_avg, val_se), color='lime', alpha=0.5)
ax.set_xlabel('Training Size')
ax.set_ylabel('Accuracy')
ax.set_title('SVM Oversampled panic_score')
ax.axhline(y=0.5, color='blue', linestyle='-', label='Random Guess: Always predict no depression')
ax.legend()
ax.grid()
# + [markdown] id="ifJGt2Teu9t3"
# The learning algorithm suffers from high variance. The training and validation curves appear to be converging, so training with more data is likely to help. Even where they converge, though, the error is quite high, which suggests there is unlikely to be a strong relationship between screen time and mental health.
# + [markdown] id="32hidP7eiknM"
# # Logistic Regression
# + [markdown] id="CC4ev6ebiknM"
# ## Seed 100
# + id="oNF-xwx1iknN"
seed = 100
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="Q4r_kXoDiknN" outputId="b2921971-6340-4dbb-aab8-1ed3d65f8d60"
log_hyper_params = [
{
'C': np.logspace(-4, 2, 7),
'solver' : ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'penalty' : ['l1', 'l2', 'elasticnet', 'none'],
'multi_class' : ['auto', 'ovr', 'multinomial']
}
]
# specify model
log_model = LogisticRegression()
# set up GridSearchCV()
log_model_cv = GridSearchCV(estimator = log_model,
param_grid = log_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
log_model_cv.fit(X_train, y_train)
print("best hyper parameters", log_model_cv.best_params_)
log_y_pred = log_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, log_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, log_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, log_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, log_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="MOVeuxdOiknN" outputId="fdd16cac-419e-4ec4-8b1e-2cac2307ac5b"
# ------ HERE ------
# plug in suitable hyper-parameters
# (only this part needs to change)
plot_model = LogisticRegression(C=0.1, multi_class='multinomial', penalty='l2', solver='newton-cg')
train_sizes_seed100, train_scores_seed100, val_scores_seed100 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
                                                    n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed100 = np.mean(train_scores_seed100, axis = 1)
train_std_seed100 = np.std(train_scores_seed100, axis=1)
val_mean_seed100 = np.mean(val_scores_seed100, axis=1)
val_std_seed100 = np.std(val_scores_seed100, axis=1)
plt.plot(train_sizes_seed100, train_mean_seed100, label='Training accuracy')
plt.plot(train_sizes_seed100, val_mean_seed100, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="ccwNeeBLiknO"
# ## Seed 1234
# + id="UROMdP05iknP"
seed = 1234
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="ExoQNZ6FiknP" outputId="9a8410ce-1ce5-46c2-f97c-b519644620ac"
log_hyper_params = [
{
'C': np.logspace(-4, 2, 7),
'solver' : ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'penalty' : ['l1', 'l2', 'elasticnet', 'none'],
'multi_class' : ['auto', 'ovr', 'multinomial']
}
]
# specify model
log_model = LogisticRegression()
# set up GridSearchCV()
log_model_cv = GridSearchCV(estimator = log_model,
param_grid = log_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
log_model_cv.fit(X_train, y_train)
print("best hyper parameters", log_model_cv.best_params_)
log_y_pred = log_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, log_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, log_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, log_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, log_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="A1li0aC2iknQ" outputId="6d16ca3d-20f4-425c-a318-c56e58f84ff1"
plot_model = LogisticRegression(C=0.01, multi_class='auto', penalty='l2', solver='newton-cg')
train_sizes_seed1234, train_scores_seed1234, val_scores_seed1234 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
                                                    n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed1234 = np.mean(train_scores_seed1234, axis = 1)
train_std_seed1234 = np.std(train_scores_seed1234, axis=1)
val_mean_seed1234 = np.mean(val_scores_seed1234, axis=1)
val_std_seed1234 = np.std(val_scores_seed1234, axis=1)
plt.plot(train_sizes_seed1234, train_mean_seed1234, label='Training accuracy')
plt.plot(train_sizes_seed1234, val_mean_seed1234, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="VKJU2QtiiknS"
# ## Seed 500
# + id="m2JG45HoiknV"
seed = 500
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="v5efwtjMiknW" outputId="4c8b9d58-f893-4063-bd6d-5a4d7159a021"
log_hyper_params = [
{
'C': np.logspace(-4, 2, 7),
'solver' : ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'penalty' : ['l1', 'l2', 'elasticnet', 'none'],
'multi_class' : ['auto', 'ovr', 'multinomial']
}
]
# specify model
log_model = LogisticRegression()
# set up GridSearchCV()
log_model_cv = GridSearchCV(estimator = log_model,
param_grid = log_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
log_model_cv.fit(X_train, y_train)
print("best hyper parameters", log_model_cv.best_params_)
log_y_pred = log_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, log_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, log_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, log_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, log_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="xVknib0WiknW" outputId="d8b0baae-8d3f-4725-84d6-b3db0f11b314"
plot_model = LogisticRegression(C=0.1, multi_class='auto', penalty='l2', solver='liblinear')
train_sizes_seed500, train_scores_seed500, val_scores_seed500 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
                                                    n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed500 = np.mean(train_scores_seed500, axis = 1)
train_std_seed500 = np.std(train_scores_seed500, axis=1)
val_mean_seed500 = np.mean(val_scores_seed500, axis=1)
val_std_seed500 = np.std(val_scores_seed500, axis=1)
plt.plot(train_sizes_seed500, train_mean_seed500, label='Training accuracy')
plt.plot(train_sizes_seed500, val_mean_seed500, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + id="vu1Uhn9ViknX"
# learning curve considering different seeds
# + id="D4iB49w5iknX"
acc_avg = list()
acc_se = list()
val_avg = list()
val_se = list()
for i in range(len(train_sizes_seed100)):
acc_avg.append(np.mean([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]))
acc_se .append(np.std ([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]) / np.sqrt(3))
val_avg.append(np.mean([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]))
val_se .append(np.std ([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]) / np.sqrt(3))
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="HBORt27SiknX" outputId="4962eb6b-f3a2-4ee1-809c-2cd09282270b"
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(train_sizes_seed100, acc_avg, c='darkred', label='Training accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(acc_avg, acc_se), np.add(acc_avg, acc_se), color='lightcoral', alpha=0.5)
ax.plot(train_sizes_seed100, val_avg, c='darkgreen', label='Validation accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(val_avg, val_se), np.add(val_avg, val_se), color='lime', alpha=0.5)
ax.set_xlabel('Training Size')
ax.set_ylabel('Accuracy')
ax.set_title('Logistic Regression Oversampled panic_score')
ax.axhline(y=0.5, color='blue', linestyle='-', label='Random Guess: Always predict no depression')
ax.legend()
ax.grid()
# + [markdown] id="ccMnWj65u9t8"
# The training and validation curves appear to be converging, but even where they converge the error is quite high. This suggests there is unlikely to be a strong relationship between screen time and mental health; perhaps a logistic regression model is not appropriate for capturing it.
# + [markdown] id="NGUlv63WiknX"
# # KNN
# + [markdown] id="iiggWIdgiknX"
# ## Seed 100
# + id="HQvaOtGNiknY"
seed = 100
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="piT8c5bLiknY" outputId="eb34624f-2062-42eb-99df-27939a32cfbd"
knn_hyper_params = [
{
'weights' : ['uniform', 'distance'],
'algorithm' : ['auto', 'ball_tree', 'kd_tree', 'brute'],
'leaf_size' : np.linspace(2, 100, 10, dtype=int),
'n_neighbors' : [int(x) for x in np.linspace(2, 50, 10)]
}
]
# specify model
# NOTE: template cell; n_neighbors may need a different value depending on the target variable.
knn_model = KNeighborsClassifier()
# set up GridSearchCV()
knn_model_cv = GridSearchCV(estimator = knn_model,
param_grid = knn_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
knn_model_cv.fit(X_train, y_train)
print("best hyper parameters", knn_model_cv.best_params_)
knn_y_pred = knn_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, knn_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, knn_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, knn_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, knn_y_pred))
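Rather than reading the best hyperparameters off the printout and retyping them into the `plot_model` below, the fitted search already exposes the refit model via `best_estimator_`. A sketch of the idea on synthetic data (the notebook would use the fitted `knn_model_cv` directly):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, learning_curve
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for X_train / y_train.
X, y = make_classification(n_samples=200, random_state=100)

search = GridSearchCV(KNeighborsClassifier(),
                      {'n_neighbors': [3, 5, 7]},
                      scoring='accuracy', cv=3)
search.fit(X, y)

# best_estimator_ is the model refit on the full training data with
# best_params_ -- no manual copying of hyperparameters needed.
plot_model = search.best_estimator_
train_sizes, train_scores, val_scores = learning_curve(
    plot_model, X, y, cv=3, scoring='accuracy',
    train_sizes=np.linspace(0.2, 1, 5))
```

This removes the "plug in suitable hyper-parameters" step and keeps the learning curve in sync with whatever the grid search found.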
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="gQ-ciWVIiknZ" outputId="48a84d25-1b75-471c-d053-9c35a7a948d3"
# ------ HERE ------
# plug in suitable hyper-parameters
# only this part needs to be changed
plot_model = KNeighborsClassifier(algorithm='auto', leaf_size=34, weights='distance', n_neighbors=18)
train_sizes_seed100, train_scores_seed100, val_scores_seed100 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed100 = np.mean(train_scores_seed100, axis = 1)
train_std_seed100 = np.std(train_scores_seed100, axis=1)
val_mean_seed100 = np.mean(val_scores_seed100, axis=1)
val_std_seed100 = np.std(val_scores_seed100, axis=1)
plt.plot(train_sizes_seed100, train_mean_seed100, label='Training accuracy')
plt.plot(train_sizes_seed100, val_mean_seed100, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="-YvozWatikna"
# ## Seed 1234
# + id="VV6ihyyLikna"
seed = 1234
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="bETJZcxHikna" outputId="fb966a90-c8c2-4697-83d6-6337b5827a39"
knn_hyper_params = [
{
'weights' : ['uniform', 'distance'],
'algorithm' : ['auto', 'ball_tree', 'kd_tree', 'brute'],
'leaf_size' : np.linspace(2, 100, 10, dtype=int),
'n_neighbors' : [int(x) for x in np.linspace(2, 50, 10)]
}
]
# specify model
# NOTE: template cell; n_neighbors may need a different value depending on the target variable.
knn_model = KNeighborsClassifier()
# set up GridSearchCV()
knn_model_cv = GridSearchCV(estimator = knn_model,
param_grid = knn_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
knn_model_cv.fit(X_train, y_train)
print("best hyper parameters", knn_model_cv.best_params_)
knn_y_pred = knn_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, knn_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, knn_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, knn_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, knn_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="rPVd4M8Mikna" outputId="62479c31-2dce-45e4-9e51-7780bedc1c70"
plot_model = KNeighborsClassifier(algorithm='brute', leaf_size=2, weights='distance', n_neighbors=23)
train_sizes_seed1234, train_scores_seed1234, val_scores_seed1234 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed1234 = np.mean(train_scores_seed1234, axis = 1)
train_std_seed1234 = np.std(train_scores_seed1234, axis=1)
val_mean_seed1234 = np.mean(val_scores_seed1234, axis=1)
val_std_seed1234 = np.std(val_scores_seed1234, axis=1)
plt.plot(train_sizes_seed1234, train_mean_seed1234, label='Training accuracy')
plt.plot(train_sizes_seed1234, val_mean_seed1234, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="t5htSu2zikna"
# ## Seed 500
# + id="CdDMzagvikna"
seed = 500
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="_YUwJnEKikna" outputId="3f9d152f-a742-4ddd-a204-4b3dbb8157bb"
knn_hyper_params = [
{
'weights' : ['uniform', 'distance'],
'algorithm' : ['auto', 'ball_tree', 'kd_tree', 'brute'],
'leaf_size' : np.linspace(2, 100, 10, dtype=int),
'n_neighbors' : [int(x) for x in np.linspace(2, 50, 10)]
}
]
# specify model
# NOTE: template cell; n_neighbors may need a different value depending on the target variable.
knn_model = KNeighborsClassifier()
# set up GridSearchCV()
knn_model_cv = GridSearchCV(estimator = knn_model,
param_grid = knn_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
knn_model_cv.fit(X_train, y_train)
print("best hyper parameters", knn_model_cv.best_params_)
knn_y_pred = knn_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, knn_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, knn_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, knn_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, knn_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="Yymvz_wViknb" outputId="f94255b3-5fd9-4497-bb12-3ef8fd2be3d2"
plot_model = KNeighborsClassifier(algorithm='ball_tree', leaf_size=12, weights='distance', n_neighbors=7)
train_sizes_seed500, train_scores_seed500, val_scores_seed500 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed500 = np.mean(train_scores_seed500, axis = 1)
train_std_seed500 = np.std(train_scores_seed500, axis=1)
val_mean_seed500 = np.mean(val_scores_seed500, axis=1)
val_std_seed500 = np.std(val_scores_seed500, axis=1)
plt.plot(train_sizes_seed500, train_mean_seed500, label='Training accuracy')
plt.plot(train_sizes_seed500, val_mean_seed500, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + id="IHBu_U0Qiknc"
# learning curve considering different seeds
# + id="PYUqbPNpiknc"
acc_avg = list()
acc_se = list()
val_avg = list()
val_se = list()
for i in range(len(train_sizes_seed100)):
acc_avg.append(np.mean([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]))
acc_se .append(np.std ([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]) / np.sqrt(3))
val_avg.append(np.mean([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]))
val_se .append(np.std ([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]) / np.sqrt(3))
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="0ziXdCRviknc" outputId="538e8b5d-7b64-4f7c-932b-e145f80e39ef"
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(train_sizes_seed100, acc_avg, c='darkred', label='Training accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(acc_avg, acc_se), np.add(acc_avg, acc_se), color='lightcoral', alpha=0.5)
ax.plot(train_sizes_seed100, val_avg, c='darkgreen', label='Validation accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(val_avg, val_se), np.add(val_avg, val_se), color='lime', alpha=0.5)
ax.set_xlabel('Training Size')
ax.set_ylabel('Accuracy')
ax.set_title('KNN Oversampled panic_score')
ax.axhline(y=0.5, color='blue', linestyle='-', label='Random Guess: Always predict no depression')
ax.legend()
ax.grid()
# + [markdown] id="TYUIsK99u9t_"
# The error is high, and the training and validation curves do not appear to converge. KNN may therefore not be a suitable model for detecting a relationship between screen time and mental health.
# + [markdown] id="IM0k9-Omiknc"
# # Random Forest
# + [markdown] id="GRKqzoQ-iknc"
# ## Seed 100
# + id="67iZ5NsQikne"
seed = 100
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="446VbQq2ikne" outputId="107ac093-2963-4762-c316-88ddee51d042"
rf_hyper_params = [
{
'n_estimators' : [int(x) for x in np.linspace(5, 50, 5)],
'criterion' : ['gini', 'entropy'],
'max_depth' : [int(x) for x in np.linspace(2, 50, 5)],
'min_samples_split' : [int(x) for x in np.linspace(2, 50, 5)],
'min_samples_leaf' : [int(x) for x in np.linspace(2, 50, 5)],
'max_features' : ['sqrt', 'log2'],  # 'auto' was removed in scikit-learn 1.3
'bootstrap' : [True, False]
}
]
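This grid enumerates 5 × 2 × 5 × 5 × 5 × 3 × 2 = 7,500 candidates, i.e. 37,500 fits with 5-fold CV. `RandomizedSearchCV` samples a fixed budget from the same space, which is often a cheaper first pass. A sketch on synthetic data with a reduced parameter space (the names here mirror the notebook's but the data are stand-ins):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the oversampled X_train / y_train.
X, y = make_classification(n_samples=200, random_state=100)

param_space = {
    'n_estimators': [int(x) for x in np.linspace(5, 50, 5)],
    'criterion': ['gini', 'entropy'],
    'max_depth': [int(x) for x in np.linspace(2, 50, 5)],
}

# n_iter caps the number of sampled candidates regardless of grid size.
search = RandomizedSearchCV(RandomForestClassifier(random_state=100),
                            param_space, n_iter=10, cv=3,
                            scoring='accuracy', random_state=100)
search.fit(X, y)
print(search.best_params_)
```

The randomized result can then seed a narrower exhaustive grid around the best region if more precision is needed.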
# specify model
# NOTE: template cell; the hyperparameter grid may need adjusting depending on the target variable.
rf_model = RandomForestClassifier()
# set up GridSearchCV()
rf_model_cv = GridSearchCV(estimator = rf_model,
param_grid = rf_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
rf_model_cv.fit(X_train, y_train)
print("best hyper parameters", rf_model_cv.best_params_)
rf_y_pred = rf_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, rf_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, rf_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, rf_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, rf_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="IUiOHZzNikne" outputId="d25b3463-3d6d-4d01-d9bb-f57a4c6955c7"
# ------ HERE ------
# plug in suitable hyper-parameters
# only this part needs to be changed
plot_model = RandomForestClassifier(bootstrap=False, criterion='gini', max_depth=14, max_features='auto', min_samples_leaf=2, min_samples_split=2, n_estimators=38)
train_sizes_seed100, train_scores_seed100, val_scores_seed100 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed100 = np.mean(train_scores_seed100, axis = 1)
train_std_seed100 = np.std(train_scores_seed100, axis=1)
val_mean_seed100 = np.mean(val_scores_seed100, axis=1)
val_std_seed100 = np.std(val_scores_seed100, axis=1)
plt.plot(train_sizes_seed100, train_mean_seed100, label='Training accuracy')
plt.plot(train_sizes_seed100, val_mean_seed100, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="d0yJhrSQiknf"
# ## Seed 1234
# + id="ohfojc9Mikng"
seed = 1234
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="BSnesUiSikng" outputId="618ff9c2-1f36-471c-d865-aca608d297bb"
rf_hyper_params = [
{
'n_estimators' : [int(x) for x in np.linspace(5, 50, 5)],
'criterion' : ['gini', 'entropy'],
'max_depth' : [int(x) for x in np.linspace(2, 50, 5)],
'min_samples_split' : [int(x) for x in np.linspace(2, 50, 5)],
'min_samples_leaf' : [int(x) for x in np.linspace(2, 50, 5)],
'max_features' : ['sqrt', 'log2'],  # 'auto' was removed in scikit-learn 1.3
'bootstrap' : [True, False]
}
]
# specify model
# NOTE: template cell; the hyperparameter grid may need adjusting depending on the target variable.
rf_model = RandomForestClassifier()
# set up GridSearchCV()
rf_model_cv = GridSearchCV(estimator = rf_model,
param_grid = rf_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
rf_model_cv.fit(X_train, y_train)
print("best hyper parameters", rf_model_cv.best_params_)
rf_y_pred = rf_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, rf_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, rf_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, rf_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, rf_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="XnYVGyseikng" outputId="7e99f458-2385-462a-c206-82cd308625d3"
plot_model = RandomForestClassifier(bootstrap=False, criterion='entropy', max_depth=14, max_features='auto', min_samples_leaf=2, min_samples_split=14, n_estimators=16)
train_sizes_seed1234, train_scores_seed1234, val_scores_seed1234 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed1234 = np.mean(train_scores_seed1234, axis = 1)
train_std_seed1234 = np.std(train_scores_seed1234, axis=1)
val_mean_seed1234 = np.mean(val_scores_seed1234, axis=1)
val_std_seed1234 = np.std(val_scores_seed1234, axis=1)
plt.plot(train_sizes_seed1234, train_mean_seed1234, label='Training accuracy')
plt.plot(train_sizes_seed1234, val_mean_seed1234, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + [markdown] id="BDeD7WMiiknh"
# ## Seed 500
# + id="BxEwjhFSiknh"
seed = 500
X_train, X_test, y_train, y_test = train_test_split(X_s, y_s, test_size=0.3, random_state=seed)
folds = KFold(n_splits = 5, shuffle = True, random_state=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="8CydNuqZikni" outputId="a89cd918-31d1-4bfc-fcf7-5f82f0b98905"
rf_hyper_params = [
{
'n_estimators' : [int(x) for x in np.linspace(5, 50, 5)],
'criterion' : ['gini', 'entropy'],
'max_depth' : [int(x) for x in np.linspace(2, 50, 5)],
'min_samples_split' : [int(x) for x in np.linspace(2, 50, 5)],
'min_samples_leaf' : [int(x) for x in np.linspace(2, 50, 5)],
'max_features' : ['sqrt', 'log2'],  # 'auto' was removed in scikit-learn 1.3
'bootstrap' : [True, False]
}
]
# specify model
# NOTE: template cell; the hyperparameter grid may need adjusting depending on the target variable.
rf_model = RandomForestClassifier()
# set up GridSearchCV()
rf_model_cv = GridSearchCV(estimator = rf_model,
param_grid = rf_hyper_params,
scoring= 'accuracy',
cv = folds,
verbose = 2,
return_train_score=True,
n_jobs=-1)
# fit the model
rf_model_cv.fit(X_train, y_train)
print("best hyper parameters", rf_model_cv.best_params_)
rf_y_pred = rf_model_cv.predict(X_test)
# accuracy
print("Accuracy:", metrics.accuracy_score(y_test, rf_y_pred))
# f1 score
print("F1 score micro:", f1_score(y_test, rf_y_pred, average='micro'))
print("F1 score binary:", f1_score(y_test, rf_y_pred, average='binary'))
print("\nclassification report:\n", classification_report(y_test, rf_y_pred))
# + colab={"base_uri": "https://localhost:8080/", "height": 379} id="wNDCALcLikni" outputId="e25b1536-e2d9-49e1-eef6-81ccdf4437d0"
plot_model = RandomForestClassifier(bootstrap=True, criterion='entropy', max_depth=50, max_features='auto', min_samples_leaf=2, min_samples_split=2, n_estimators=38)
train_sizes_seed500, train_scores_seed500, val_scores_seed500 = learning_curve(plot_model,
X_train,
y_train,
cv=5,
scoring='accuracy',
n_jobs=-1, # change this if you want
train_sizes=np.linspace(0.2, 1, 20),
verbose=2)
train_mean_seed500 = np.mean(train_scores_seed500, axis = 1)
train_std_seed500 = np.std(train_scores_seed500, axis=1)
val_mean_seed500 = np.mean(val_scores_seed500, axis=1)
val_std_seed500 = np.std(val_scores_seed500, axis=1)
plt.plot(train_sizes_seed500, train_mean_seed500, label='Training accuracy')
plt.plot(train_sizes_seed500, val_mean_seed500, label='Cross-validation accuracy')
plt.title ('Learning curve')
plt.xlabel('Training Size')
plt.ylabel('Accuracy score')
plt.legend(loc='best')
plt.show()
# + id="a4W4FwITikni"
# learning curve considering different seeds
# + id="7K9Mjatciknj"
acc_avg = list()
acc_se = list()
val_avg = list()
val_se = list()
for i in range(len(train_sizes_seed100)):
acc_avg.append(np.mean([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]))
acc_se .append(np.std ([train_mean_seed100[i], train_mean_seed500[i], train_mean_seed1234[i]]) / np.sqrt(3))
val_avg.append(np.mean([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]))
val_se .append(np.std ([val_mean_seed100[i], val_mean_seed500[i], val_mean_seed1234[i]]) / np.sqrt(3))
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="_isL6QLMiknj" outputId="dc7c276e-f7be-467e-da7d-9242d37c68bd"
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(train_sizes_seed100, acc_avg, c='darkred', label='Training accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(acc_avg, acc_se), np.add(acc_avg, acc_se), color='lightcoral', alpha=0.5)
ax.plot(train_sizes_seed100, val_avg, c='darkgreen', label='Validation accuracy')
ax.fill_between(train_sizes_seed100, np.subtract(val_avg, val_se), np.add(val_avg, val_se), color='lime', alpha=0.5)
ax.set_xlabel('Training Size')
ax.set_ylabel('Accuracy')
ax.set_title('Random Forest Oversampled panic_score')
ax.axhline(y=0.5, color='blue', linestyle='-', label='Random Guess: Always predict no depression')
ax.legend()
ax.grid()
# + [markdown] id="SNE9VWBsu9uD"
# The error is high, and the training and validation curves do not appear to converge. Random Forest may therefore not be a suitable model for detecting a relationship between screen time and mental health.
# + id="3Te5qVzeu9uD"
| coursework/binary_classification/oversampling/search_panic_score.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="2a24SCVVo7W_"
import os
import cv2
import tensorflow as tf
import numpy as np
from tensorflow.keras import layers, optimizers
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, Dropout
from tensorflow.keras.models import Model, load_model
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint, LearningRateScheduler
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# + id="517J1xWZo8Bm"
TRAIN_DIR = "/content/drive/MyDrive/Glaucoma/Fundus_Train_Val_Data/Fundus_Scanes_Sorted/Train"
TEST_DIR = "/content/drive/MyDrive/Glaucoma/Fundus_Train_Val_Data/Fundus_Scanes_Sorted/Validation"
# + id="e563Hjz-pHmE"
HEIGHT = 255
WIDTH = 255
# + id="PMjffK8cpSsq"
basemodel = ResNet50(weights = "imagenet", include_top = False, input_tensor = Input(shape = (HEIGHT, WIDTH, 3)))
# + colab={"base_uri": "https://localhost:8080/"} id="ug51gXSIplJ6" outputId="b3f52a6f-fd58-47dc-d0c1-949bfde70b06"
basemodel.summary()
# + id="IxQDUoDuppZg"
for layer in basemodel.layers[:-10]:
layer.trainable = False
# + id="sBT3-mtypyF2"
headmodel = basemodel.output
headmodel = AveragePooling2D(pool_size=(2,2))(headmodel)
headmodel = Flatten(name = 'flatten')(headmodel)
headmodel = Dense(1024, activation = 'relu')(headmodel)
headmodel = Dropout(0.4)(headmodel)
headmodel = Dense(512, activation = 'relu')(headmodel)
headmodel = Dropout(0.35)(headmodel)
headmodel = Dense(256, activation = 'relu')(headmodel)
headmodel = Dropout(0.3)(headmodel)
headmodel = Dense(128, activation = 'relu')(headmodel)
headmodel = Dropout(0.2)(headmodel)
headmodel = Dense(2, activation = 'softmax')(headmodel)
model = Model(inputs = basemodel.input , outputs = headmodel)
# + id="LVhgAZZap4f3"
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4), loss='categorical_crossentropy', metrics=['accuracy'])  # categorical_crossentropy matches the 2-unit softmax head with the one-hot labels that flow_from_directory produces by default
# + id="4Q5UW5miqXnW"
tgen = ImageDataGenerator(preprocessing_function = preprocess_input,
rotation_range = 90,
horizontal_flip = True,
vertical_flip = True,
width_shift_range=0.2,
height_shift_range=0.2,
zoom_range=0.1,)
vgen = ImageDataGenerator(preprocessing_function = preprocess_input,
rotation_range = 90,
horizontal_flip = True,
vertical_flip = False)
# + colab={"base_uri": "https://localhost:8080/"} id="hhD0rXtUp_dl" outputId="0b664979-62a4-42b1-e0d1-093f052dd10a"
train_generator = tgen.flow_from_directory(batch_size=4, directory = TRAIN_DIR, shuffle = True, target_size=(HEIGHT, WIDTH))
val_generator = vgen.flow_from_directory(batch_size=4, directory = TEST_DIR, shuffle = True, target_size=(HEIGHT, WIDTH))
# + id="6OHwFpyCulF8"
earlystopping = EarlyStopping(monitor = 'val_loss', mode = 'min', verbose = 1, patience = 20)
checkpointer = ModelCheckpoint(filepath = "weights.hdf5", verbose = 1, save_best_only=True)
# + colab={"base_uri": "https://localhost:8080/"} id="kjmhdzCVqRDk" outputId="e3ef0314-8c61-4a61-c9e1-b01f6bb8c245"
history = model.fit(train_generator, steps_per_epoch=train_generator.n//4, epochs = 200,
                    validation_data = val_generator, validation_steps = val_generator.n // 4,
                    callbacks = [checkpointer, earlystopping])  # fit_generator is deprecated; model.fit accepts generators
# + id="3bgKiAkSsaPe" colab={"base_uri": "https://localhost:8080/"} outputId="89714134-b10f-4f66-c954-33d6259f2e75"
history.history.keys()
# + id="O8U8dpSo0h9H" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="4ad888f7-9905-4973-8658-1bb1dd575de4"
plt.plot(history.history['accuracy'])
plt.plot(history.history['loss'])
plt.title("Training loss and accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy / loss")
plt.legend(["Training accuracy", "Training loss"])
# + id="qNK44Pbh0ke-" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="c4e06fa8-6bc1-4790-bf9e-aa669a6ffa03"
plt.plot(history.history['val_loss'])
plt.title("Validation loss")
plt.xlabel("Epoch")
plt.ylabel("Validation loss")
plt.legend(["Validation loss"])
# + id="yegp4Kv00obX" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="c38af5c8-54a6-4409-c3a8-cc0089b2e54b"
plt.plot(history.history['val_accuracy'])
plt.title("Validation accuracy")
plt.xlabel("Epoch")
plt.ylabel("Validation accuracy")
plt.legend(["Validation accuracy"])
# + id="wAng15dI0q69"
test_directory = "/content/drive/MyDrive/test"
# + id="wHYjR1Ax2OxF" colab={"base_uri": "https://localhost:8080/", "height": 231} outputId="0bc170ee-f1b1-4b2a-ee51-5dc6baa13f06"
test_gen = ImageDataGenerator(preprocessing_function = preprocess_input)  # match the training preprocessing; rescale=1./255 would skew evaluation
test_generator = test_gen.flow_from_directory(batch_size=4, directory = TEST_DIR, shuffle = True, target_size=(HEIGHT, WIDTH))
evaluate = model.evaluate(test_generator, steps = test_generator.n // 4, verbose = 1)  # evaluate_generator is deprecated
print("Test accuracy: {}".format(evaluate[1]))
# + [markdown] id="sYKcyIfR30U3"
# **CONFUSION MATRIX**
# + id="_XPNEgBo2iTK" colab={"base_uri": "https://localhost:8080/", "height": 231} outputId="97748adf-aae4-403d-acf5-a572b3000d75"
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
prediction = []
original = []
image = []
for i in range(len(os.listdir(test_directory))):
for item in os.listdir(os.path.join(test_directory, str(i))):
print(item)
img = cv2.imread(os.path.join(test_directory, str(i), item))
img = cv2.resize(img, (HEIGHT,WIDTH))
image.append(img)
img = img/255
img = img.reshape(-1, HEIGHT, WIDTH, 3)
predict = model.predict(img)
predict = np.argmax(predict)
prediction.append(predict)
original.append(i)
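With the `prediction` and `original` lists collected above, the confusion matrix this section promises can be computed directly with the imports already in place. A minimal sketch using stand-in labels in place of the lists built in the loop:

```python
from sklearn.metrics import confusion_matrix, classification_report

# Stand-in labels; in the notebook these are the `original` (ground truth)
# and `prediction` lists built in the loop above.
original = [0, 0, 1, 1, 1, 0]
prediction = [0, 1, 1, 1, 0, 0]

cm = confusion_matrix(original, prediction)
print(cm)  # rows = true class, columns = predicted class
print(classification_report(original, prediction))
```

The matrix can then be rendered with `sns.heatmap(cm, annot=True)` for the visual form.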
# + id="_2KTsIxpCk-p"
os.chdir("/content/drive/My Drive/test")
# !ls
# + id="66GYqF9O34Wp"
label_names = sorted(os.listdir(test_directory))  # class-index -> folder name; needed by set_title below, otherwise undefined
L = 8
W = 5
fig, axes = plt.subplots(L, W, figsize = (12,12))
axes = axes.ravel()
for i in np.arange(0, L*W):
axes[i].imshow(image[i])
axes[i].set_title("Pred={}\nVerd={}".format(str(label_names[prediction[i]]), str(label_names[original[i]])))
axes[i].axis('off')
plt.subplots_adjust(wspace = 1.2, hspace=1)
# + id="bBeMU1AH4ui8"
| GlaucomaAIV3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# %matplotlib inline
# importing required libraries
import os
import subprocess
import stat
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import datetime
sns.set(style="white")
# absolute path till parent folder
abs_path = os.getcwd()
path_array = abs_path.split("/")
path_array = path_array[:len(path_array)-1]
homefolder_path = ""
for i in path_array[1:]:
homefolder_path = homefolder_path + "/" + i
# +
# path to clean data
clean_data_path = homefolder_path + "/CleanData/CleanedDataSet/cleaned_autos.csv"
# reading csv into raw dataframe
df = pd.read_csv(clean_data_path,encoding="latin-1")
# -
trial = pd.DataFrame()
for b in list(df["brand"].unique()):
    for v in list(df["vehicleType"].unique()):
        z = df[(df["brand"] == b) & (df["vehicleType"] == v)]["price"].mean()
        # DataFrame.append was removed in pandas 2.0; use pd.concat instead
        trial = pd.concat([trial, pd.DataFrame({'brand': b, 'vehicleType': v, 'avgPrice': z}, index=[0])])
trial = trial.reset_index()
del trial["index"]
trial["avgPrice"].fillna(0,inplace=True)
trial["avgPrice"].isnull().value_counts()
trial["avgPrice"] = trial["avgPrice"].astype(int)
trial.head(5)
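The nested loop above can be collapsed into a single `groupby`. A sketch on a toy frame with the same column names (note one difference: `groupby` only emits brand/type pairs that actually occur, whereas the loop also produces NaN rows for absent combinations):

```python
import pandas as pd

# Toy stand-in for the cleaned autos data.
df = pd.DataFrame({
    'brand': ['audi', 'audi', 'bmw', 'bmw'],
    'vehicleType': ['coupe', 'suv', 'coupe', 'suv'],
    'price': [20000, 30000, 25000, 35000],
})

# Mean price per (brand, vehicleType) in one pass.
trial = (df.groupby(['brand', 'vehicleType'])['price']
           .mean()
           .reset_index(name='avgPrice'))
trial['avgPrice'] = trial['avgPrice'].fillna(0).astype(int)
print(trial)
```

Besides being shorter, this avoids the quadratic filtering of `df` inside the double loop.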
# ## Average price of a vehicle by brand as well as vehicle type
# Heatmap to show average prices of vehicles by brand and type together
tri = trial.pivot(index="brand", columns="vehicleType", values="avgPrice")  # pivot arguments are keyword-only in pandas 2.x
fig, ax = plt.subplots(figsize=(15,20))
sns.heatmap(tri,linewidths=1,cmap="YlGnBu",annot=True, ax=ax, fmt="d")
ax.set_title("Average price of vehicles by vehicle type and brand",fontdict={'size':20})
ax.xaxis.set_label_text("Type Of Vehicle",fontdict= {'size':20})
ax.yaxis.set_label_text("Brand",fontdict= {'size':20})
plt.show()
fig.savefig((abs_path + "/Plots/heatmap-price-brand-vehicleType.png"))
df.head(5)
| Analysis4/Analysis4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img width="700px" src="../img/logoUPSayPlusCDS_990.png">
#
# <p style="margin-top: 3em; margin-bottom: 2em;"><b><big><big><big><big>Introduction to Pandas</big></big></big></big></b></p>
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8
# -
# # 1. Let's start with a showcase
#
# #### Case 1: titanic survival data
df = pd.read_csv("data/titanic.csv")
df.head()
# Starting from reading this dataset, to answering questions about this data in a few lines of code:
# **What is the age distribution of the passengers?**
df['Age'].hist()
# **How does the survival rate of the passengers differ between sexes?**
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
# **Or how does it differ between the different classes?**
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
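Since `Survived` is a 0/1 column, the `x.sum() / len(x)` aggregate above is just the column mean (assuming no missing values, where `mean()` would skip NaNs but `len(x)` would count them). A minimal sketch on a toy frame:

```python
import pandas as pd

# Toy stand-in for the titanic data.
df = pd.DataFrame({'Sex': ['male', 'female', 'female', 'male'],
                   'Survived': [0, 1, 1, 1]})

# x.sum() / len(x) over a 0/1 column is simply the mean.
rate = df.groupby('Sex')['Survived'].mean()
print(rate)  # female 1.0, male 0.5
```

The same shortcut applies to the per-class bar plot: `df.groupby('Pclass')['Survived'].mean().plot(kind='bar')`.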
# All the needed functionality for the above examples will be explained throughout this tutorial.
# #### Case 2: air quality measurement timeseries
# + [markdown] slideshow={"slide_type": "subslide"}
# AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe
#
# Starting from these hourly data for different stations:
# -
data = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
data.head()
# + [markdown] slideshow={"slide_type": "subslide"}
# to answering questions about this data in a few lines of code:
#
# **Does the air pollution show a decreasing trend over the years?**
# -
data['1999':].resample('M').mean().plot(ylim=[0,120])
data['1999':].resample('A').mean().plot(ylim=[0,100])
# + [markdown] slideshow={"slide_type": "subslide"}
# **What is the difference in diurnal profile between weekdays and weekend?**
# -
data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['BASCH'].mean().unstack(level=0)
data_weekend.plot()
# We will come back to these example, and build them up step by step.
# # 2. Pandas: data analysis in python
#
# For data-intensive work in Python the [Pandas](http://pandas.pydata.org) library has become essential.
#
# What is `pandas`?
#
# * Pandas can be thought of as *NumPy arrays with labels* for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.
# * Pandas can also be thought of as `R`'s `data.frame` in Python.
# * Powerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data, ...
#
# Its documentation: http://pandas.pydata.org/pandas-docs/stable/
#
#
# **When do you need pandas?**
#
# When working with **tabular or structured data** (like R dataframe, SQL table, Excel spreadsheet, ...):
#
# - Import data
# - Clean up messy data
# - Explore data, gain insight into data
# - Process and prepare your data for analysis
# - Analyse your data (together with scikit-learn, statsmodels, ...)
#
# <div class="alert alert-warning">
# <b>ATTENTION!</b>: <br><br>
#
# Pandas is great for working with heterogeneous and tabular 1D/2D data, but not all types of data fit in such structures!
# <ul>
# <li>When working with array data (e.g. images, numerical algorithms): just stick with numpy</li>
# <li>When working with multidimensional labeled data (e.g. climate data): have a look at <a href="http://xarray.pydata.org/en/stable/">xarray</a></li>
# </ul>
# </div>
# # 2. The pandas data structures: `DataFrame` and `Series`
#
# A `DataFrame` is a **tabular data structure** (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index.
#
#
# <img align="left" width=50% src="../img/schema-dataframe.svg">
df
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Attributes of the DataFrame
#
# A DataFrame has besides a `index` attribute, also a `columns` attribute:
# -
df.index
df.columns
# + [markdown] slideshow={"slide_type": "subslide"}
# To check the data types of the different columns:
# -
df.dtypes
# + [markdown] slideshow={"slide_type": "subslide"}
# An overview of that information can be given with the `info()` method:
# -
df.info()
# + [markdown] slideshow={"slide_type": "subslide"}
# Also a DataFrame has a `values` attribute, but attention: when you have heterogeneous data, all values will be upcasted:
# -
df.values
# + [markdown] slideshow={"slide_type": "subslide"}
# Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
#
# Note that in the IPython notebook, the dataframe will display in a rich HTML view:
# -
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data)
df_countries
# + [markdown] slideshow={"slide_type": "subslide"}
# ### One-dimensional data: `Series` (a column of a DataFrame)
#
# A Series is a basic holder for **one-dimensional labeled data**.
# -
df['Age']
age = df['Age']
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Attributes of a Series: `index` and `values`
#
# A Series also has an `index` and a `values` attribute, but no `columns`:
# -
age.index
# You can access the underlying numpy array representation with the `.values` attribute:
age.values[:10]
# + [markdown] slideshow={"slide_type": "subslide"}
# We can access series values via the index, just like for NumPy arrays:
# -
age[0]
# + [markdown] slideshow={"slide_type": "subslide"}
# Unlike the NumPy array, though, this index can be something other than integers:
# -
df = df.set_index('Name')
df
age = df['Age']
age
age['Dooley, Mr. Patrick']
# + [markdown] slideshow={"slide_type": "fragment"}
# but with the power of numpy arrays: many things you can do with numpy arrays can also be applied to DataFrames / Series.
#
# E.g. element-wise operations:
# -
age * 1000
# A range of methods:
age.mean()
# Fancy indexing, like indexing with a list or boolean indexing:
age[age > 70]
# But there are also a lot of pandas-specific methods, e.g.
df['Embarked'].value_counts()
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>What is the maximum Fare that was paid? And the median?</li>
# </ul>
# </div>
# + clear_cell=true
# # %load snippets/02-pandas_introduction31.py
# + clear_cell=true
# # %load snippets/02-pandas_introduction32.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Calculate the average survival ratio for all passengers (note: the 'Survived' column indicates whether someone survived (1) or not (0)).</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load snippets/02-pandas_introduction33.py
# -
# # 3. Data import and export
# + [markdown] slideshow={"slide_type": "subslide"}
# A wide range of input/output formats are natively supported by pandas:
#
# * CSV, text
# * SQL database
# * Excel
# * HDF5
# * json
# * html
# * pickle
# * sas, stata
# * (parquet)
# * ...
# +
#pd.read
# +
#df.to
# -
# Very powerful csv reader:
# +
# pd.read_csv?
# -
# Luckily, if we have a well formed csv file, we don't need many of those arguments:
df = pd.read_csv("data/titanic.csv")
df.head()
# <div class="alert alert-success">
#
# <b>EXERCISE</b>: Read the `data/20000101_20161231-NO2.csv` file into a DataFrame `no2`
# <br><br>
# Some aspects about the file:
# <ul>
# <li>Which separator is used in the file?</li>
# <li>The second row includes unit information and should be skipped (check `skiprows` keyword)</li>
# <li>For missing values, it uses the `'n/d'` notation (check `na_values` keyword)</li>
# <li>We want to parse the 'timestamp' column as datetimes (check the `parse_dates` keyword)</li>
# </ul>
# </div>
# + clear_cell=true
# # %load snippets/02-pandas_introduction39.py
# + clear_cell=false
no2
# -
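A sketch of how the `read_csv` keywords from the exercise combine, shown on a small in-memory file mimicking the structure described above (the exact arguments needed for the real data file are left to the exercise):

```python
import io
import pandas as pd

# A made-up file: ';' separator, a unit row, and 'n/d' for missing values
csv = io.StringIO(
    "timestamp;BASCH\n"
    "-;ug/m3\n"                   # unit row, to be skipped
    "2010-01-01 00:00;22.5\n"
    "2010-01-01 01:00;n/d\n"      # missing value notation
)

demo = pd.read_csv(csv, sep=';', skiprows=[1], na_values=['n/d'],
                   parse_dates=['timestamp'])
print(demo.dtypes)
```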
# # 4. Exploration
# + [markdown] slideshow={"slide_type": "subslide"}
# Some useful methods:
#
# `head` and `tail`
# + slideshow={"slide_type": "-"}
no2.head(3)
# -
no2.tail()
# + [markdown] slideshow={"slide_type": "subslide"}
# `info()`
# -
no2.info()
# + [markdown] slideshow={"slide_type": "subslide"}
# Getting some basic summary statistics about the data with `describe`:
# -
no2.describe()
# + [markdown] slideshow={"slide_type": "subslide"}
# Quickly visualizing the data
# + slideshow={"slide_type": "-"}
no2.plot(kind='box', ylim=[0,250])
# + slideshow={"slide_type": "subslide"}
no2['BASCH'].plot(kind='hist', bins=50)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Plot the age distribution of the titanic passengers</li>
# </ul>
# </div>
# + clear_cell=true
# # %load snippets/02-pandas_introduction47.py
# -
# The default plot (when not specifying `kind`) is a line plot of all columns:
# + slideshow={"slide_type": "subslide"}
no2.plot(figsize=(12,6))
# -
# This does not tell us much ..
# + [markdown] slideshow={"slide_type": "subslide"}
# We can select part of the data (eg the latest 500 data points):
# -
no2[-500:].plot(figsize=(12,6))
# Or we can use some more advanced time series features -> see further in this notebook!
# # 5. Selecting and filtering data
# <div class="alert alert-warning">
# <b>ATTENTION!</b>: <br><br>
#
# One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. <br><br> We now have to distinguish between:
#
# <ul>
# <li>selection by **label**</li>
# <li>selection by **position**</li>
# </ul>
# </div>
df = pd.read_csv("data/titanic.csv")
# ### `df[]` provides some convenience shortcuts
# + [markdown] slideshow={"slide_type": "subslide"}
# For a DataFrame, basic indexing selects the columns.
#
# Selecting a single column:
# -
df['Age']
# + [markdown] slideshow={"slide_type": "subslide"}
# or multiple columns:
# -
df[['Age', 'Fare']]
# + [markdown] slideshow={"slide_type": "subslide"}
# But, slicing accesses the rows:
# -
df[10:15]
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Systematic indexing with `loc` and `iloc`
#
# When using `[]` like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
#
# * `loc`: selection by label
# * `iloc`: selection by position
# -
df = df.set_index('Name')
df.loc['<NAME>', 'Fare']
df.loc['<NAME>':'Andersson, Mr. <NAME>', :]
# + [markdown] slideshow={"slide_type": "subslide"}
# Selecting by position with `iloc` works similarly to indexing numpy arrays:
# -
df.iloc[0:2,1:3]
# + [markdown] slideshow={"slide_type": "subslide"}
# The different indexing methods can also be used to assign data:
# -
df.loc['Braund, Mr. <NAME>', 'Survived'] = 100
df
# ### Boolean indexing (filtering)
# Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL), and is comparable to numpy.
#
# The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
# + run_control={"frozen": false, "read_only": false}
df['Fare'] > 50
# + run_control={"frozen": false, "read_only": false}
df[df['Fare'] > 50]
# -
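One gotcha worth noting (sketched with made-up data): multiple conditions must be combined element-wise with `&` / `|`, each wrapped in parentheses; Python's `and` / `or` will not work on Series.

```python
import pandas as pd

toy = pd.DataFrame({'Fare': [10, 60, 80], 'Age': [25, 40, 8]})

# Element-wise AND: use `&` with parentheses around each condition
subset = toy[(toy['Fare'] > 50) & (toy['Age'] > 18)]
print(len(subset))  # 1
```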
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Based on the titanic data set, select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers</li>
# </ul>
# </div>
df = pd.read_csv("data/titanic.csv")
# + clear_cell=true
# # %load snippets/02-pandas_introduction64.py
# + clear_cell=true
# # %load snippets/02-pandas_introduction65.py
# + clear_cell=true
# # %load snippets/02-pandas_introduction66.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Based on the titanic data set, how many passengers older than 70 were on the Titanic?</li>
# </ul>
# </div>
# + clear_cell=true
# # %load snippets/02-pandas_introduction67.py
# + clear_cell=true
# # %load snippets/02-pandas_introduction68.py
# -
# # 6. The group-by operation
# ### Some 'theory': the groupby operation (split-apply-combine)
# + run_control={"frozen": false, "read_only": false}
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
# -
# ### Recap: aggregating functions
# When analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:
# + run_control={"frozen": false, "read_only": false}
df['data'].sum()
# -
# However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
#
# For example, in the above dataframe `df`, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:
# + run_control={"frozen": false, "read_only": false}
for key in ['A', 'B', 'C']:
print(key, df[df['key'] == key]['data'].sum())
# -
# This becomes very verbose when there are many groups. The loop above already iterates over the different values, but it is still not very convenient to work with.
#
# What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this.
# ### Groupby: applying functions per group
# + [markdown] slideshow={"slide_type": "subslide"}
# The "group by" concept: we want to **apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets**
#
# This operation is also referred to as the "split-apply-combine" operation, involving the following steps:
#
# * **Splitting** the data into groups based on some criteria
# * **Applying** a function to each group independently
# * **Combining** the results into a data structure
#
# <img src="../img/splitApplyCombine.png">
#
# Similar to SQL `GROUP BY`
# -
# Instead of doing the manual filtering as above
#
#
# df[df['key'] == "A"].sum()
# df[df['key'] == "B"].sum()
# ...
#
# pandas provides the `groupby` method to do exactly this:
# + run_control={"frozen": false, "read_only": false}
df.groupby('key').sum()
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "subslide"}
df.groupby('key').aggregate(np.sum) # 'sum'
# -
# And many more methods are available.
# + run_control={"frozen": false, "read_only": false}
df.groupby('key')['data'].sum()
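Beyond a single statistic, `agg` accepts several functions at once; a sketch on the same toy frame (recreated here so the snippet stands alone):

```python
import pandas as pd

toy = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'],
                    'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})

# One row per group, one column per aggregation
stats = toy.groupby('key')['data'].agg(['sum', 'mean', 'max'])
print(stats)
```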
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Application of the groupby concept on the titanic data
# -
# We go back to the titanic passengers survival data:
# + run_control={"frozen": false, "read_only": false}
df = pd.read_csv("data/titanic.csv")
# + run_control={"frozen": false, "read_only": false}
df.head()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Using groupby(), calculate the average age for each sex.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load snippets/02-pandas_introduction77.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Calculate the average survival ratio for all passengers.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load snippets/02-pandas_introduction78.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load snippets/02-pandas_introduction79.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>What is the difference in the survival ratio between the sexes?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load snippets/02-pandas_introduction80.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Or how does it differ between the different classes? Make a bar plot visualizing the survival ratio for the 3 classes.</li>
# </ul>
# </div>
# + clear_cell=true
# # %load snippets/02-pandas_introduction81.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Make a bar plot to visualize the average Fare paid by people depending on their age. The age column is divided into separate classes using the `pd.cut` function as provided below.</li>
# </ul>
# </div>
# + clear_cell=false run_control={"frozen": false, "read_only": false}
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load snippets/02-pandas_introduction83.py
# -
# # 7. Working with time series data
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
# + [markdown] slideshow={"slide_type": "fragment"}
# When we ensure the DataFrame has a `DatetimeIndex`, time-series related functionality becomes available:
# -
no2.index
# + [markdown] slideshow={"slide_type": "subslide"}
# Indexing a time series works with strings:
# -
no2["2010-01-01 09:00": "2010-01-01 12:00"]
# + [markdown] slideshow={"slide_type": "subslide"}
# A nice feature is "partial string" indexing, so you don't need to provide the full datetime string.
# + [markdown] slideshow={"slide_type": "-"}
# E.g. all data of January up to March 2012:
# -
no2['2012-01':'2012-03']
# + [markdown] slideshow={"slide_type": "subslide"}
# Time and date components can be accessed from the index:
# -
no2.index.hour
no2.index.year
# + [markdown] slideshow={"slide_type": "subslide"}
# ## The power of pandas: `resample`
# -
# A very powerful method is **`resample`: converting the frequency of the time series** (e.g. from hourly to daily data).
#
# Remember the air quality data:
no2.plot()
# The time series has a frequency of 1 hour. I want to change this to daily:
no2.head()
no2.resample('D').mean().head()
# + [markdown] slideshow={"slide_type": "subslide"}
# Above I take the mean, but as with `groupby` I can also specify other methods:
# -
no2.resample('D').max().head()
# + [markdown] slideshow={"slide_type": "skip"}
# The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases
# These strings can also be combined with numbers, e.g. `'10D'`.
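A sketch of such a combined alias, on a synthetic hourly series (made-up data): resampling to ten-day means.

```python
import numpy as np
import pandas as pd

# Synthetic hourly series covering 40 days
idx = pd.date_range('2012-01-01', periods=24 * 40, freq='h')
ts = pd.Series(np.arange(len(idx), dtype=float), index=idx)

# '10D' = bins of ten days; 40 days of hourly data gives 4 bins
ten_daily = ts.resample('10D').mean()
print(len(ten_daily))  # 4
```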
# + [markdown] slideshow={"slide_type": "subslide"}
# Further exploring the data:
# -
no2.resample('M').mean().plot() # 'A'
# +
# no2['2012'].resample('D').plot()
# + clear_cell=true slideshow={"slide_type": "subslide"}
# # %load snippets/02-pandas_introduction96.py
# + [markdown] slideshow={"slide_type": "subslide"}
# <div class="alert alert-success">
#
# <b>EXERCISE</b>: The evolution of the yearly averages, and the overall mean of all stations
#
# <ul>
# <li>Use `resample` and `plot` to plot the yearly averages for the different stations.</li>
# <li>The overall mean of all stations can be calculated by taking the mean of the different columns (`.mean(axis=1)`).</li>
# </ul>
# </div>
# + clear_cell=true
# # %load snippets/02-pandas_introduction97.py
# + [markdown] slideshow={"slide_type": "subslide"}
# <div class="alert alert-success">
#
# <b>EXERCISE</b>: what does the *typical monthly profile* look like for the different stations?
#
# <ul>
# <li>Add a 'month' column to the dataframe.</li>
# <li>Group by the month to obtain the typical monthly averages over the different years.</li>
# </ul>
# </div>
# -
# First, we add a column to the dataframe that indicates the month (integer value of 1 to 12):
# + clear_cell=true
# # %load snippets/02-pandas_introduction98.py
# + [markdown] slideshow={"slide_type": "subslide"}
# Now, we can calculate the mean of each month over the different years:
# + clear_cell=true
# # %load snippets/02-pandas_introduction99.py
# + clear_cell=true slideshow={"slide_type": "subslide"}
# # %load snippets/02-pandas_introduction100.py
# + [markdown] slideshow={"slide_type": "subslide"}
# <div class="alert alert-success">
#
# <b>EXERCISE</b>: The typical diurnal profile for the different stations
#
# <ul>
# <li>Similar as for the month, you can now group by the hour of the day.</li>
# </ul>
# </div>
# + clear_cell=true slideshow={"slide_type": "fragment"}
# # %load snippets/02-pandas_introduction101.py
# + [markdown] slideshow={"slide_type": "subslide"}
# <div class="alert alert-success">
#
# <b>EXERCISE</b>: What is the difference in the typical diurnal profile between week and weekend days for the 'BASCH' station?
#
# <ul>
# <li>Add a column 'weekday' defining the different days in the week.</li>
# <li>Add a column 'weekend' defining if a day is in the weekend (i.e. days 5 and 6) or not (True/False).</li>
# <li>You can groupby on multiple items at the same time. In this case you would need to group by both weekend/weekday and hour of the day.</li>
# </ul>
# </div>
# -
# Add a column indicating the weekday:
# +
# no2.index.weekday?
# + clear_cell=true
# # %load snippets/02-pandas_introduction103.py
# + [markdown] slideshow={"slide_type": "subslide"}
# Add a column indicating week/weekend
# + clear_cell=true
# # %load snippets/02-pandas_introduction104.py
# -
# Now we can groupby the hour of the day and the weekend (or use `pivot_table`):
# + clear_cell=true
# # %load snippets/02-pandas_introduction105.py
# + clear_cell=true slideshow={"slide_type": "subslide"}
# # %load snippets/02-pandas_introduction106.py
# + clear_cell=true slideshow={"slide_type": "subslide"}
# # %load snippets/02-pandas_introduction107.py
# + clear_cell=true
# # %load snippets/02-pandas_introduction108.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>: What is the number of exceedances of hourly values above the European limit of 200 µg/m3?
#
# Count the number of exceedances of hourly values above the European limit of 200 µg/m3 for each year and station after 2005. Make a bar plot of the counts, and add a horizontal line indicating the maximum number of exceedances allowed per year (which is 18).
# <br><br>
#
# Hints:
#
# <ul>
# <li>Create a new DataFrame, called `exceedances`, (with boolean values) indicating if the threshold is exceeded or not</li>
# <li>Remember that the sum of True values can be used to count elements. Do this using groupby for each year.</li>
# <li>Adding a horizontal line can be done with the matplotlib function `ax.axhline`.</li>
# </ul>
# </div>
# + clear_cell=true
# # %load snippets/02-pandas_introduction109.py
# + clear_cell=true
# # %load snippets/02-pandas_introduction110.py
# + clear_cell=true
# # %load snippets/02-pandas_introduction111.py
# -
# # 9. What I didn't talk about
# - Concatenating data: `pd.concat`
# - Merging and joining data: `pd.merge`
# - Reshaping data: `pivot_table`, `melt`, `stack`, `unstack`
# - Working with missing data: `isnull`, `dropna`, `interpolate`, ...
# - ...
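As a tiny taste of the merge functionality listed above (made-up frames, not part of this tutorial's data):

```python
import pandas as pd

left = pd.DataFrame({'key': ['A', 'B'], 'x': [1, 2]})
right = pd.DataFrame({'key': ['B', 'C'], 'y': [3, 4]})

# Inner join on 'key': only the overlapping rows survive
merged = pd.merge(left, right, on='key')
print(merged)
```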
#
# ## Further reading
#
# * Pandas documentation: http://pandas.pydata.org/pandas-docs/stable/
#
# * Books
#
# * "Python for Data Analysis" by <NAME>
# * "Python Data Science Handbook" by <NAME>
#
# * Tutorials (many good online tutorials!)
#
# * https://github.com/jorisvandenbossche/pandas-tutorial
# * https://github.com/brandon-rhodes/pycon-pandas-tutorial
#
# * <NAME>'s blog
#
# * https://tomaugspurger.github.io/modern-1.html
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PCA and kernel PCA explained
#
# This notebook is the companion to our tutorial on PCA and __[kernel PCA available here](https://nirpyresearch.com/pca-kernel-pca-explained/)__
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, KernelCenterer
from sklearn.decomposition import PCA, KernelPCA
from sklearn.utils import extmath
from sklearn.metrics.pairwise import euclidean_distances
# -
def pca(X, n_components=2):
# Preprocessing - Standard Scaler
X_std = StandardScaler().fit_transform(X)
#Calculate covariance matrix
cov_mat = np.cov(X_std.T)
# Get eigenvalues and eigenvectors
eig_vals, eig_vecs = np.linalg.eigh(cov_mat)
# flip eigenvectors' sign to enforce deterministic output
eig_vecs, _ = extmath.svd_flip(eig_vecs, np.empty_like(eig_vecs).T)
# Concatenate the eigenvectors corresponding to the highest n_components eigenvalues
matrix_w = np.column_stack([eig_vecs[:,-i] for i in range(1,n_components+1)])
# Get the PCA reduced data
Xpca = X_std.dot(matrix_w)
return Xpca
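As a quick sanity check (on random data, independent of the plums set used below), the steps of `pca` can be compared against scikit-learn applied to the standardized data. The component scores may differ per column only by a sign flip, so absolute values are compared. The snippet mirrors the function above rather than calling it, so it stands alone:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))

# Same steps as the pca() function above
X_std = StandardScaler().fit_transform(X)
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eigh(cov_mat)   # eigenvalues in ascending order
matrix_w = np.column_stack([eig_vecs[:, -i] for i in range(1, 3)])
X_ours = X_std.dot(matrix_w)

# Reference: scikit-learn PCA on the standardized data
X_ref = PCA(n_components=2).fit_transform(X_std)

# Scores agree up to a per-component sign flip
assert np.allclose(np.abs(X_ours), np.abs(X_ref))
```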
data = pd.read_csv('../data/plums.csv')
X = data.values[:,1:]
Xstd = StandardScaler().fit_transform(X)
# +
# Scikit-learn PCA
pca1 = PCA(n_components=2)
Xpca1 = pca1.fit_transform(X)
# Our implementation
Xpca2 = pca(X, n_components=2)
with plt.style.context(('ggplot')):
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
#plt.figure(figsize=(8,6))
ax[0].scatter(Xpca1[:,0], Xpca1[:,1], s=100, edgecolors='k')
ax[0].set_xlabel('PC 1')
ax[0].set_ylabel('PC 2')
ax[0].set_title('Scikit learn')
ax[1].scatter(Xpca2[:,0], Xpca2[:,1], s=100, facecolor = 'b', edgecolors='k')
ax[1].set_xlabel('PC 1')
ax[1].set_ylabel('PC 2')
ax[1].set_title('Our implementation')
plt.show()
# -
def ker_pca(X, n_components=3, gamma = 0.01):
# Calculate euclidean distances of each pair of points in the data set
dist = euclidean_distances(X, X, squared=True)
# Calculate Kernel matrix
K = np.exp(-gamma * dist)
Kc = KernelCenterer().fit_transform(K)
# Get eigenvalues and eigenvectors of the kernel matrix
eig_vals, eig_vecs = np.linalg.eigh(Kc)
# flip eigenvectors' sign to enforce deterministic output
eig_vecs, _ = extmath.svd_flip(eig_vecs, np.empty_like(eig_vecs).T)
# Concatenate the eigenvectors corresponding to the highest n_components eigenvalues
Xkpca = np.column_stack([eig_vecs[:,-i] for i in range(1,n_components+1)])
return Xkpca
# +
kpca1 = KernelPCA(n_components=3, kernel='rbf', gamma=0.01)
Xkpca1 = kpca1.fit_transform(Xstd)
Xkpca2 = ker_pca(Xstd)
with plt.style.context(('ggplot')):
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
#plt.figure(figsize=(8,6))
ax[0].scatter(Xkpca1[:,0], Xkpca1[:,1], s=100, edgecolors='k')
ax[0].set_xlabel('PC 1')
ax[0].set_ylabel('PC 2')
ax[0].set_title('Scikit learn')
ax[1].scatter(Xkpca2[:,0], Xkpca2[:,1], s=100, facecolor = 'b', edgecolors='k')
ax[1].set_xlabel('PC 1')
ax[1].set_ylabel('PC 2')
ax[1].set_title('Our implementation')
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import time
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pybind11
import numpy.linalg as la
from scipy.stats import multivariate_normal
# # 1. Vectorize using naive `numpy`
#
# We stick to the example as in univariate normal simulation, where
#
# $$X|\mu\sim N(\mu,1), \qquad \mu\sim N(0,1)$$
#
# and we use different methods to calculate the gradient and seek possible ways of code optimization.
n = int(1e5)
nbatch= 500
data = np.random.normal(1, size = n)
idx = np.random.choice(len(data), nbatch)
batch = data[idx]
# +
# 1. compare different ways of iterating over data.
# list comprehension
gradU_list = lambda mu, batch: mu + sum([mu-x for x in batch]) * len(data) / len(batch)
# for loop
gradUi = lambda mu, x: mu-x
def gradU_for(mu, batch):
"""
Using forloop to calculate gradient.
"""
gradU = 0
for x in batch:
gradU += gradUi(mu, x)
gradU *= len(data) / len(batch)
gradU += mu
return gradU
# np.array vectorization
gradU_array = lambda mu, batch: mu + np.sum(mu-batch) * len(data) / len(batch)
# -
# %timeit gradU_for(1, batch)
# %timeit gradU_list(1, batch)
# %timeit gradU_array(1, batch)
# +
# time comparison
ls = (10 ** np.linspace(2, 5, 50)).astype(int)
T = np.zeros((len(ls), 3, 100))
f_list = [gradU_for, gradU_list, gradU_array]
for i, nbatch in enumerate(ls) :
idx = np.random.choice(len(data), nbatch)
batch = data[idx]
for j, f in enumerate(f_list):
for k in range(100):
start = time.time()
f(1, batch)
elapsed = time.time() - start
T[i, j, k] = elapsed
print((i+1)/len(ls), end='\r')
# +
T_mean = T.mean(2)
T_sd = np.sqrt(((T-T.mean(2)[:,:,np.newaxis]) ** 2).mean(2))
T_log_mean = np.log(T).mean(2)
plt.figure(figsize=(16,4.5))
plt.subplot(121)
plt.plot(ls, T_mean[:,0], label = 'for loop')
plt.plot(ls, T_mean[:,1], label = 'list comprehension')
plt.plot(ls, ls*1e-6, label = 'linear')
plt.plot(ls, T_mean[:,2], label = 'numpy array vectorization')
plt.legend()
plt.title('Runtime by mini-batch size')
plt.subplot(122)
plt.plot(np.log10(ls), T_log_mean[:,0], label = 'for loop')
plt.plot(np.log10(ls), T_log_mean[:,1], label = 'list comprehension')
plt.plot(np.log10(ls), np.log(ls*1e-6), label = 'linear')
plt.plot(np.log10(ls), T_log_mean[:,2], label = 'numpy array vectorization')
plt.title('Runtime by mini-batch size (log-log scale)')
plt.legend();
# plt.savefig('runtime1.png');
# -
# # 2. Precompute invariant/common quantities
# # 3. Use the easier version of sampler
# As noted by the authors in the paper, there are two equivalent ways of updating our SGHMC sampler:
#
# + As in equation (13)
#
# + As in equation (15)
#
# These two are obviously equivalent and we can use the second update rule and borrow experience from parameter settings of stochastic gradient descent with momentum. The $\beta$ term corresponds to the estimation of noise that comes from the gradient. One simple choice is to ignore the gradient noise by setting $\hat\beta$ = 0 and relying on small $\epsilon$. We can also set $\hat\beta = \eta\hat V/2$, where $\hat V$ is estimated using empirical Fisher information as in (Ahn et al., 2012).
# # 4. Using Cython to detect the bottleneck
# %load_ext cython
# + magic_args="-a" language="cython"
#
# import numpy as np
# import scipy.linalg as la
# nbatch = 500
# np.random.seed(2019)
# mean_or = np.array([1,-1])
# sig_or = np.array([[1,0.75],[0.75,1]])
# sig_or_i = la.inv(sig_or)
# data = np.random.normal(1, size = 10000)
# gradU = lambda mu, batch: mu - sig_or_i.dot((batch-mu).T).sum(1) / len(batch) * len(data)
# Vhat = lambda mu, batch: np.cov(sig_or_i.dot((batch-mu).T))
#
# def SGHMC(gradU, p, r, alpha, eta, beta = 0, eps = 0.01, L = 100):
# """
#     Using leapfrog to discretize
#
# Args:
# gradU: gradient of potential energy (posterior)
# p: position (parameters)
# r: momentum (auxiliary)
# eps: stepsize
# L: # of steps
# M_i: inversion of preconditioned mass matrix
# """
#
# v = eps * r
# for i in range(L):
# p += v
# idx = np.random.choice(len(data), nbatch)
# batch = data[idx]
# V = Vhat(p, batch)
# grad = gradU(p, batch)
# rnd = np.random.normal(0, 2*alpha*eta, 2)
# v = v - eta * grad - alpha * v + rnd
# return p, v
# -
# As the results show, the calculation of the gradient and of V costs a huge amount of time, and these are the crucial parts of our sampler. So we chose to code them in `C++` and use `pybind11` to wrap them.
# +
# %%file SGHMC_utils.cpp
<%
cfg["compiler_args"] = ["-std=c++11"]
cfg["include_dirs"] = ["../notebook/eigen"]
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
double U(double mu, Eigen::VectorXd batch) {
return mu*mu/2 + (mu-batch.array()).square().sum()/2;
}
double gradU(double mu, Eigen::VectorXd batch, int ndata) {
return mu + (mu-batch.array()).sum() * ndata/ batch.size();
}
double Vhat(Eigen::VectorXd batch) {
return (batch.array() - batch.mean()).square().sum()/(batch.size()-1);
}
PYBIND11_MODULE(SGHMC_utils, m) {
    m.doc() = "module to calculate basic quantities for updating, based on pybind11";
    m.def("U", &U, "Potential energy evaluated based on the whole dataset");
    m.def("gradU", &gradU, "estimated gradient of U based on minibatch");
    m.def("Vhat", &Vhat, "empirical Fisher Information");
}
# -
import cppimport
cppimport.force_rebuild()
SGHMC_utils=cppimport.imp("SGHMC_utils")
U_array = lambda mu, batch: mu**2/2 + np.sum((mu-batch)**2/2)
gradU_array = lambda mu, batch, ndata: mu + np.sum(mu-batch) * ndata / len(batch)
Vhat_array = lambda batch: np.cov(batch)
print(np.isclose(gradU_array(1, batch, len(data)), SGHMC_utils.gradU(1, batch, len(data))))
print(np.isclose(U_array(1, batch), SGHMC_utils.U(1, batch)))
print(np.isclose(Vhat_array(batch), SGHMC_utils.Vhat(batch)))
# %timeit gradU_array(1, batch, len(data))
# %timeit SGHMC_utils.gradU(1, batch, len(data))
# %timeit Vhat_array(batch)
# %timeit SGHMC_utils.Vhat(batch)
def SGHMC_update(Vhat, gradU, p, r, nbatch = 50, eps = 0.01, L = 100, M_i = 1):
"""
    Using leapfrog to discretize
Args:
Vhat: empirical fisher info matrix
gradU: gradient of potential energy (posterior)
p: position (parameters)
r: momentum (auxiliary)
eps: stepsize
L: # of steps
M_i: inversion of preconditioned mass matrix
"""
for i in range(L):
p = p + eps*M_i * r
idx = np.random.choice(len(data), nbatch)
batch = data[idx]
V = Vhat(batch)
B = 1/2 * eps * V
C = 3
r = r - eps*gradU(p, batch, len(data)) - eps*C*M_i*r + np.random.normal(0, np.sqrt(2*(C-B)*eps))
return p, r
p, r0 = 0, 0
# %timeit SGHMC_update(Vhat, gradU_array, p, r0, eps = 0.01, L = 100, M_i = 1)
# %timeit SGHMC_update(SGHMC_utils.Vhat, SGHMC_utils.gradU, p, r0, eps = 0.01, L = 100, M_i = 1)
# +
data = np.random.normal(1, size = int(1e5))
ls = (10 ** np.linspace(2, 5, 10)).astype(int)
T2 = np.zeros((len(ls), 100, 2))
for i, nbatch in enumerate(ls):
for j in range(100):
t1 = time.time()
SGHMC_update(Vhat, gradU_array, p, r0, nbatch, eps = 0.01, L = 100, M_i = 1)
t2 = time.time()
        SGHMC_update(SGHMC_utils.Vhat, SGHMC_utils.gradU, p, r0, nbatch, eps = 0.01, L = 100, M_i = 1)
t3 = time.time()
T2[i, j, 0] = t2 - t1
T2[i, j, 1] = t3 - t2
print((i+1)/len(ls), end='\r')
# -
Tpy = T2.mean(1)[:,0]
Tc = T2.mean(1)[:,1]
print(Tpy)
print(Tc)
import pandas as pd
T2l = np.log10(T2)
df1 = pd.melt(pd.DataFrame(T2l[:,:,0].T, columns=ls), col_level=0)
df2 = pd.melt(pd.DataFrame(T2l[:,:,1].T, columns=ls), col_level=0)
plt.figure(figsize=(16,9))
sns.boxplot(y="value", x= "variable", data = df1, palette = sns.color_palette("Blues", n_colors = 10))
sns.boxplot(y="value", x= "variable", data = df2, palette = sns.color_palette("Greens", n_colors = 10))
plt.xlabel('batch size')
plt.ylabel('log-avg. runtime')
plt.title("Runtime by batch size (naive Python vs pybind11)");
# plt.savefig("py_vs_cpp.png");
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pastas Noise model
#
# *Developed by <NAME> and <NAME>*
#
# This Notebook contains a number of examples and tests with synthetic data. The purpose of this notebook is to demonstrate the noise model of Pastas.
#
# In this Notebook, heads are generated with a known response function. Next, Pastas is used to solve for the parameters of the model, and it is verified that Pastas recovers the correct parameters. Several different types of errors are introduced in the generated heads and it is tested whether the confidence intervals computed by Pastas are reasonable.
#
# The first step is to import all the required python packages.
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gammainc, gammaincinv
import pandas as pd
import pastas as ps
# ## Load data and define functions
# The rainfall and reference evaporation are read from file and truncated for the period 1980 - 2000. The rainfall and evaporation series are taken from KNMI station De Bilt. The reading of the data is done using Pastas.
#
# Heads are generated with a Gamma response function which is defined below.
rain = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='RH').series
evap = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='EV24').series
rain = rain['1980':'1999']
evap = evap['1980':'1999']
# +
def gamma_tmax(A, n, a, cutoff=0.99):
return gammaincinv(n, cutoff) * a
def gamma_step(A, n, a, cutoff=0.99):
tmax = gamma_tmax(A, n, a, cutoff)
t = np.arange(0, tmax, 1)
s = A * gammainc(n, t / a)
return s
def gamma_block(A, n, a, cutoff=0.99):
# returns the gamma block response starting at t=0 with intervals of delt = 1
s = gamma_step(A, n, a, cutoff)
return np.append(s[0], s[1:] - s[:-1])
# -
# The Gamma response function requires 3 input arguments; A, n and a. The values for these parameters are defined along with the parameter d, the base groundwater level. The response function is created using the functions defined above.
Atrue = 800
ntrue = 1.1
atrue = 200
dtrue = 20
h = gamma_block(Atrue, ntrue, atrue) * 0.001
tmax = gamma_tmax(Atrue, ntrue, atrue)
plt.plot(h)
plt.xlabel('Time (days)')
plt.ylabel('Head response (m) due to 1 mm of rain in day 1')
plt.title('Gamma block response with tmax=' + str(int(tmax)));
# ### Create synthetic observations
# Rainfall is used as input series for this example. No errors are introduced. A Pastas model is created to test whether Pastas is able to recover the true parameters. The generated head series is purposely not generated with convolution.
# Heads are computed for the period 1990 - 2000. Computations start in 1980 as a warm-up period. Convolution is not used so that it is clear how the head is computed. The computed head at day 1 is the head at the end of day 1 due to rainfall during day 1. No errors are introduced.
# +
step = gamma_block(Atrue, ntrue, atrue)[1:]
lenstep = len(step)
h = dtrue * np.ones(len(rain) + lenstep)
for i in range(len(rain)):
h[i:i + lenstep] += rain[i] * step
head = pd.DataFrame(index=rain.index, data=h[:len(rain)],)
head = head['1990':'1999']
plt.figure(figsize=(12,5))
plt.plot(head,'k.', label='head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Time (years)');
# -
# ### Create Pastas model
# The next step is to create a Pastas model. The head generated using the Gamma response function is used as input for the Pastas model.
#
# A `StressModel` instance is created and added to the Pastas model. The `StressModel` instance takes the rainfall series as input, as well as the type of response function, in this case the Gamma response function (`ps.Gamma`).
#
# The Pastas model is solved without a noise model since there is no noise present in the data. The results of the Pastas model are plotted.
ml = ps.Model(head)
sm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm)
ml.solve(noise=False)
ml.plots.results();
# The results of the Pastas model show the calibrated parameters for the Gamma response function. The parameters calibrated using Pastas are equal to the `Atrue`, `ntrue`, `atrue` and `dtrue` parameters defined above. The Explained Variance Percentage for this example model is 100%.
#
# The results plots show that the Pastas simulation is identical to the observed groundwater. The residuals of the simulation are shown in the plot together with the response function and the contribution for each stress.
#
# Below the Pastas block response and the true Gamma response function are plotted.
plt.plot(gamma_block(Atrue, ntrue, atrue), label='Synthetic response')
plt.plot(ml.get_block_response('recharge'), '-.', label='Pastas response')
plt.legend(loc=0)
plt.ylabel('Head response (m) due to 1 m of rain in day 1')
plt.xlabel('Time (days)');
# ### Test 1: Adding noise
# In the next test example noise is added to the observations of the groundwater head. The noise is normally distributed noise with a mean of 0 and a standard deviation of 1 and is scaled with the standard deviation of the head.
#
# The noise series is added to the head series created in the previous example.
# +
random_seed = np.random.RandomState(15892)
noise = random_seed.normal(0,1,len(head)) * np.std(head.values) * 0.5
head_noise = head[0] + noise
# -
# ### Create Pastas model
#
# A pastas model is created using the head with noise. A stress model is added to the Pastas model and the model is solved.
ml2 = ps.Model(head_noise)
sm2 = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml2.add_stressmodel(sm2)
ml2.solve(noise=True)
ml2.plots.results();
# The results of the simulation show that Pastas is able to filter the noise from the observed groundwater head. The simulated groundwater head and the generated synthetic head are plotted below. The parameters found with the Pastas optimization are similar to the original parameters of the Gamma response function.
plt.figure(figsize=(12,5))
plt.plot(head_noise, '.k', alpha=0.1, label='Head with noise')
plt.plot(head, '.k', label='Head true')
plt.plot(ml2.simulate(), label='Pastas simulation')
plt.title('Simulated Pastas head compared with synthetic head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Date (years)');
# ### Test 2: Adding correlated noise
# In this example correlated noise is added to the observed head. The correlated noise is generated using the noise series created in the previous example. The correlated noise is implemented as exponential decay using the following formula:
#
# $$ n_{c}(t) = e^{-1/\alpha} \cdot n_{c}(t-1) + n(t)$$
#
# where $n_{c}$ is the correlated noise, $\alpha$ is the noise decay parameter and $n$ is the uncorrelated noise. The noise series that is created is added to the observed groundwater head.
# +
noise_corr = np.zeros(len(noise))
noise_corr[0] = noise[0]
alphatrue = 2
for i in range(1, len(noise_corr)):
noise_corr[i] = np.exp(-1/alphatrue) * noise_corr[i - 1] + noise[i]
head_noise_corr = head[0] + noise_corr
# -
# ### Create Pastas model
# A Pastas model is created using the head with correlated noise as input. A stressmodel is added to the model and the Pastas model is solved. The results of the model are plotted.
ml3 = ps.Model(head_noise_corr)
sm3 = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml3.add_stressmodel(sm3)
ml3.solve(noise=True)
ml3.plots.results();
# The Pastas model is able to calibrate the model parameters fairly well. The calibrated parameters are close to the true values defined above. The `noise_alpha` parameter calibrated by Pastas is close to the `alphatrue` parameter defined for the correlated noise series.
#
# Below the head simulated with the Pastas model is plotted together with the head series and the head series with the correlated noise.
plt.figure(figsize=(12,5))
plt.plot(head_noise_corr, '.k', alpha=0.1, label='Head with correlated noise')
plt.plot(head, '.k', label='Head true')
plt.plot(ml3.simulate(), label='Pastas simulation')
plt.title('Simulated Pastas head compared with synthetic head')
plt.legend(loc=0)
plt.ylabel('Head (m)')
plt.xlabel('Date (years)');
| examples/notebooks/8_pastas_synthetic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# https://arxiv.org/pdf/1601.01754.pdf
import numpy as np
from math import *
import matplotlib.pyplot as plt
# %matplotlib inline
# +
from math import *
class ACDual:
def __init__(self, real, dual):
self.real = real
self.dual = dual
@classmethod
def FromPt(cls,pt):
C = pt[0]+pt[1]*complex('j')
return cls(complex(1.0),C)
@classmethod
def RotateAngle(cls,theta):
''' returns a ACDual for the rotation by theta
        system is cos(theta/2) + i*sin(theta/2) because it's a double cover
'''
c = cos(theta/2)
s = sin(theta/2)
return cls(c+s*complex('j'),0)
@classmethod
def Translate(cls,D):
''' returns a ACDual for the translation by D
        system is 1 + (D[0]/2)*eps + (D[1]/2)*i*eps because it's a double cover
'''
return cls(complex(1.0),D[0]/2+D[1]/2*complex('j'))
@classmethod
def RotateAroundPoint(cls,theta,D):
        """Performs T(D)*R*T(D)^-1 to move the rotation point to the origin, rotate at the origin, and then translate back"""
T = cls.Translate(D)
Tinv = cls.Translate([-D[0],-D[1]])
R = cls.RotateAngle(theta)
return T*R*Tinv
def __toDual(self,other):
        '''Private helper to convert plain numbers into ACDuals.'''
if isinstance(other, ACDual): return other
if isinstance(other,complex):return ACDual(other,0)
return ACDual(complex(other),0)
def __add__(self, other1):
other = self.__toDual(other1)
return ACDual(self.real + other.real,
self.dual + other.dual)
__radd__ = __add__
def __sub__(self, other1):
other = self.__toDual(other1)
return ACDual(self.real - other.real,
self.dual - other.dual)
def __rsub__(self, other):
return ACDual(other, 0) - self
def __truediv__(self, other1):
other = self.__toDual(other1)
return ACDual(self.real/other.real,
(self.dual*other.real - self.real*other.dual)/(other.real**2))
def __rtruediv__(self, other1):
other = self.__toDual(other1)
return other/self
def __pow__(self, other):
return ACDual(self.real**other,
self.dual * other * self.real**(other - 1))
def involution(self):
return ACDual(self.real.conjugate(),self.dual)
def __mul__(self, other1):
other = self.__toDual(other1)
return ACDual(self.real * other.real,
self.dual * other.real.conjugate() + self.real * other.dual)
__rmul__ = __mul__
def mag(self):
        return sqrt((self.real*self.real.conjugate()).real)  # real*conjugate is real-valued; take .real so math.sqrt accepts it
def __matmul__(self,other):
return self*other*self.involution()
    def __rmatmul__(self,other):
return other*self*other.involution()
def asPt(self):
'''returns the XY point the dual component represents'''
C = self.dual
return C.real,C.imag
def __repr__(self):
return repr(self.real) + ' + ' + repr(self.dual) + '*'+'\u03B5'
# -
# 
# +
P_start = (10,0)
P = ACDual.FromPt(P_start)
print(P)
# +
theta = np.pi
R = ACDual.RotateAngle(theta)
print(R)
# -
P_rot = (15,0)
D = np.array(P_rot)
T = ACDual.Translate(D)
print(T)
# +
theta = np.pi/2
P_start = (10,0)
P_rot = (0,0)
P = ACDual.FromPt(P_start)
print(P)
D = np.array(P_rot)
T = ACDual.Translate(D)
Tinv = ACDual.Translate(-D)
R = ACDual.RotateAngle(theta)
print((T@(R@(Tinv@P))).asPt())
AboutD = ACDual.RotateAroundPoint(theta,D)
P_end = (AboutD@P).asPt()
print(P_end)
# -
fig, ax = plt.subplots()
ax.plot(P_start[0],P_start[1],"ko")
ax.plot(P_rot[0],P_rot[1],"go")
ax.plot(P_end[0],P_end[1],"bo")
#plt.axis('equal')
ax.grid()
ax.set_xlim(-10,20)
ax.set_ylim(-10,10)
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.show()
| ACDuals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from joblib import Memory  # sklearn.externals.joblib was removed in scikit-learn 0.23
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split
from numpy import *
import numpy as np
import pylab as pl
mem = Memory("./mycache")
# Load the data
@mem.cache
def get_data():
data = load_svmlight_file("/home/picher/workSpace/ML_exp2/a9a.txt",123,dtype=float64)
return data[0], data[1]
def ini_para(feature_num):
    np.random.seed(1)
    # all zeros
    #w = zeros((1,feature_num), dtype = float)
    # uniform random
    w1 = np.random.random([1,feature_num])
    # chi-square distribution
    #w2 = np.random.chisquare(1,size=(1,feature_num))
    # normal distribution
    #w3 = np.random.randn(1,feature_num)
    # print the parameters (for testing)
    #print (w)
    #print(w1.dtype)
return w1
def likelihood(y,model):
result = y*model.transpose(1,0)
return result
def logistic_regression(x,y,w,compute_times,type):
eta = 0.00233
gama = 0.9
times = 0
v = 0
G = 0
error = 1
right = 0
shape = y.shape[0]
predit = np.ones((shape,1))
if type ==1:
while times < 500:
model = model_compute(w, x)
for i in range(0, shape):
                predit[i] = 1 / (1 + np.exp(-model[0][i]))  # sigmoid; assign rather than accumulate across iterations
right = right_percent(y, predit)
RIGHT_NAG.append(right)
like = likelihood(y, model)
error = cross_entropy_error(like, shape)
error_NAG.append(error)
#print('error is ', error)
w, v = NAG(w, v, x, y,like, gama, eta,shape)
times = times + 1
compute_times.append(times)
if times == 1:
print('first error percent is: ', error)
print('loss is: ', error)
print('right percent is: ', right)
return w
elif type ==2:
while times < 500:
model = model_compute(w, x)
for i in range(0, shape):
                predit[i] = 1 / (1 + np.exp(-model[0][i]))  # sigmoid; assign rather than accumulate across iterations
right = right_percent(y, predit)
RIGHT_RMSProp.append(right)
like = likelihood(y, model)
error = cross_entropy_error(like, shape)
error_RMSProp.append(error)
# print('error is ', error)
w, G = RMSProp(w, G, x, y, like, gama, eta,shape)
times = times + 1
compute_times.append(times)
if times == 1:
print('first error percent is: ', error)
print('loss is: ', error)
print('right percent is: ', right)
return w
def NAG(w_old, v, x, y,like, gama, eta,shape):
likelihood = np.sum(like)
denominator = 1 + np.exp(likelihood)
y1 = y.transpose(1,0)
numerator = np.dot(y1,x)
g_t = (numerator/denominator)/shape
v_t = gama*v + eta*g_t
res = w_old + v_t
return res, v_t
def RMSProp(w_old, G, x, y,like, gama, eta,shape):
likelihood = np.sum(like)
denominator = 1 + np.exp(likelihood)
small = 1e-8
y1 = y.transpose(1, 0)
numerator = np.dot(y1, x)
g_t = (numerator / denominator) /shape
G_t = gama*G + (1-gama)*(g_t*g_t)
result = w_old + (eta/(pow(G_t+small,0.5)))*g_t
return result, G_t
def model_compute(w,x):
result = np.dot(w,x.transpose(1,0))
return result
def cross_entropy_error(like,shape):
result = np.sum(np.log(1 + exp(-like)))/shape
return result
def right_percent(y,y1):
right = 0
shape = y.shape[0]
compute = zeros((shape, 1))
for i in range(0, shape):
if y1[i]>=0.7:
compute[i]=1
else:
compute[i]=-1
for k in range(0, shape):
if compute[k]==y[k]:
right = right+1
result = right/shape
return result
# Process the data format, etc.
w = ini_para(124)
x, y = get_data()
x = x.toarray()
b = np.ones((32561,1))
x = np.c_[x,b]
size = x.shape[0]
y = np.reshape(y,(32561,1))
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.06)
# Arrays to store the training and test results, used for plotting
# iteration counts
compute_times_NAG = []
compute_times_RMSProp = []
RIGHT_NAG = []
RIGHT_RMSProp = []
# error rates
error_RMSProp = []
error_NAG = []
# loss
loss_pic_train = []
loss_pic_test = []
# Train the classifiers
logistic_regression(x_test,y_test,w,compute_times_NAG,1)
logistic_regression(x_test,y_test,w,compute_times_RMSProp,2)
# Plotting section
# Figure 1 shows how the loss changes with the number of training iterations
pl.figure(1)
pl.plot(compute_times_NAG, error_NAG)  # use pylab to plot x and y
pl.plot(compute_times_RMSProp, error_RMSProp)  # use pylab to plot x and y
pl.title('logistic_regression')  # give plot a title
pl.xlabel('times')  # make axis labels
pl.ylabel('loss')
# Figure 2 shows how the accuracy changes with the number of training iterations
pl.figure(2)
pl.plot(compute_times_NAG, RIGHT_NAG)  # use pylab to plot x and y
pl.plot(compute_times_RMSProp, RIGHT_RMSProp)  # use pylab to plot x and y
pl.title('logistic_regression')  # give plot a title
pl.xlabel('times')  # make axis labels
pl.ylabel('rightpercent')
pl.show()# show the plot on the screen
# -
| RegressionExperiment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Jupyter notebook DeepArray
# This just includes a basic notebook template that uses the most common libraries for data i/o and analysis. It also sets up some basic directory calls for data and figures.
# +
### system-level libs
import os, sys
### analysis libs
import numpy as np
import pandas as pd
import scipy as sp
### home-made libs
# sys.path.append('../scripts/')
# import utils
# import vis
### plotting libs
import matplotlib.pyplot as plt
### some convenient magics
# %load_ext autoreload
# %autoreload 2
### Directory setup
cwd = os.getcwd()
datapath = os.path.abspath(os.path.join(cwd, '../../data/'))
figpath = os.path.abspath(os.path.join(cwd, '../figs/'))
| DeepArray/notebooks/notebook_template_DATE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python385jvsc74a57bd0c39885171911825c85dd2872b53a1f517ccf6205e737fb46b3c547492811485f
# ---
import matplotlib.pyplot as plt
import numpy as np
# +
figure1 = plt.figure(figsize=(10, 10))  # changed here
ax1 = figure1.add_subplot(1,1,1)
ax1.set_title("y = 2x +1")
ax1.set_xlabel("X")
ax1.set_xlim([0, 20])
ax1.set_ylabel("Y")
ax1.set_ylim([0, 20])
x_ranges = np.arange(0, 10)  # define the x values before plotting (also defined in a later cell)
ax1.plot(x_ranges, x_ranges * 2 + 1)
plt.show()
# -
x_ranges = np.arange(0, 10)
x_ranges
# +
figure1 = plt.figure(figsize=(10, 10))  # changed here
ax1 = figure1.add_subplot(1,1,1)
plt.title("y = 2x +1")
plt.xlabel("X")
plt.xlim([0, 20])
plt.ylabel("Y")
plt.ylim([0, 20])
plt.plot(x_ranges, x_ranges * 2 + 1)
plt.show()
# +
figure2 = plt.figure(figsize=[18,9])
# First plot, placed in the bottom-right corner, i.e. the fourth position
ax1 = figure2.add_subplot(2,2, 4)
plt.title("fig1")
# Second plot, placed in the third position, i.e. row 2, column 1
ax2 = figure2.add_subplot(2,2,3)
plt.title("fig2")
# Third plot, placed in the second position, i.e. row 1, column 2
ax3 = figure2.add_subplot(2,2,2)
plt.title("fig3")
# Fourth plot, placed in the first position, i.e. row 1, column 1
ax4 = figure2.add_subplot(2,2,1)
plt.title("fig4")
plt.title("fig4")
plt.show()
# +
# Set the canvas color via the facecolor parameter of the figure function
figure1 = plt.figure(figsize=(10, 10), facecolor=[1, 0, 0])  # changed here
# Set the subplot color via the facecolor parameter of add_subplot
ax1 = figure1.add_subplot(1,1,1, facecolor = [0, 0, 1])
plt.title("y = 2x +1")
plt.xlabel("X")
plt.xlim([0, 20])
plt.ylabel("Y")
plt.ylim([0, 20])
# Set the line width via the linewidth parameter of plot
# and the line color via the color parameter
plt.plot(x_ranges, x_ranges * 2 + 1, linewidth = 3, color=[0,1,0])
plt.show()
# +
ax1.set_facecolor([1,0,0])
# -
np.arange(0,10)
import pandas as pd
df_rating = pd.read_csv("tv_rating.csv")
df_rating
df_rating["rating_num"] = df_rating.rating.apply(lambda x:float(x.replace("分","")))
df_rating
df_head_100 = df_rating[:100]
df_head_100
# +
# Create the figure
figure = plt.figure(figsize=[18,9])
# Create a 1x1 grid and select the first cell to create the subplot
ax1 = figure.add_subplot(1,1,1)
# A font must be specified to display Chinese characters
plt.rcParams["font.sans-serif"] = "SimHei"
# Set the title and axis properties
plt.title("TV series rating line chart")
plt.xlabel("Index")
plt.xlim([0, 100])
plt.ylabel("Rating")
plt.ylim([0,5])
# Plot: the x axis is the row number, so the DataFrame index can be used directly;
# the y axis is the rating, taken from the rating_num column we prepared
plt.plot(df_head_100.index, df_head_100.rating_num)
plt.show()
# -
df_head_100.rating_num = pd.to_numeric(df_head_100.rating_num)
df_head_100.rating_num
df_head_100_200 = df_rating[100:200]
df_head_100_200
# +
figure = plt.figure(figsize=[18,9])
ax1 = figure.add_subplot(2,1,1)
plt.rcParams["font.sans-serif"] = "SimHei"
plt.title("TV series rating line chart (0-100)")
plt.xlabel("Index")
plt.xlim([0, 100])
plt.ylabel("Rating")
plt.ylim([0,5])
plt.plot(df_head_100.index, df_head_100.rating_num)
ax1 = figure.add_subplot(2,1,2)
plt.rcParams["font.sans-serif"] = "SimHei"
plt.title("TV series rating line chart (100-200)")
plt.xlabel("Index")
plt.xlim([100, 200])
plt.ylabel("Rating")
plt.ylim([0,5])
plt.plot(df_head_100_200.index, df_head_100_200.rating_num)
plt.show()
# -
| py-course/chapter21/chapter21.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Validating models (e.g., CellML, SBML files)
# This tutorial illustrates how to check whether model files are consistent with the specifications of their associated formats.
#
# BioSimulators currently supports several languages including
# * [BioNetGen Language (BNGL)](https://bionetgen.org)
# * [CellML](https://cellml.org): 1.0 and 2.0 (validation for 1.1 is not available)
# * [NeuroML](https://neuroml.org/)
# * [Low Entropy Model Specification (LEMS)](https://lems.github.io/LEMS/)
# * [Smoldyn simulation configurations](http://www.smoldyn.org/)
# * [Systems Biology Markup Language (SBML)](http://sbml.org), including all packages and versions
# * [XML format for Resource Balance Analysis (RBA) models](https://github.com/SysBioInra/RBApy/blob/master/docs/XML_format%20(RBApy.xml).pdf)
# * [XPP ODE format](http://www.math.pitt.edu/~bard/xpp/help/xppodes.html)
#
# <div class="alert alert-block alert-info">
# BioSimulators integrates community-contributed validators for each model language. For some model languages, these validators provide limited validation and/or limited reports of errors. We welcome contributions of improved validation tools.
# </div>
# ## 1. Validate a model online
# The easiest way to validate models is to use the web interface at https://run.biosimulations.org. An HTTP API for validating models is also available at [https://combine.api.biosimulations.org](https://combine.api.biosimulations.org/).
# + [markdown] tags=[]
# ## 2. Validate a model with the BioSimulators command-line application
# -
# First, install [BioSimulators-utils](https://github.com/biosimulators/Biosimulators_utils). Installation instructions are available at [https://docs.biosimulators.org](https://docs.biosimulators.org/Biosimulators_utils). Note, BioSimulators-utils must be installed with the installation options for the model languages that you wish to validate. A Docker image with BioSimulators utils and all dependencies is also available ([`ghcr.io/biosimulators/biosimulators`](https://github.com/biosimulators/Biosimulators/pkgs/container/biosimulators)).
# Inline help for the `biosimulators-utils` command-line program is available by running the program with the `--help` option.
# + tags=[]
# !biosimulators-utils --help
# + tags=[]
# !biosimulators-utils validate-model --help
# -
# Next, use the command-line program to validate the [model](../_data/Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-continuous.xml).
# + tags=[]
# !biosimulators-utils validate-model SBML ../_data/Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-continuous.xml
# -
# If the model is invalid, a list of errors will be printed to your console.
# ## 3. Validate a model programmatically with Python
# First, install [BioSimulators-utils](https://github.com/biosimulators/Biosimulators_utils). Installation instructions are available at [https://docs.biosimulators.org](https://docs.biosimulators.org/Biosimulators_utils). Note, BioSimulators-utils must be installed with the installation options for the model languages that you wish to validate. A Docker image with BioSimulators utils and all dependencies is also available ([`ghcr.io/biosimulators/biosimulators`](https://github.com/biosimulators/Biosimulators/pkgs/container/biosimulators)).
# Next, import BioSimulators-utils' enumeration of model languages and model validation method.
from biosimulators_utils.sedml.data_model import ModelLanguage
from biosimulators_utils.sedml.validation import validate_model_with_language
# This enumeration can be inspected to determine the key for each model language.
# + tags=[]
print('\n'.join(sorted('ModelLanguage.' + lang for lang in ModelLanguage.__members__.keys())))
# -
# Next, use the `validate_model_with_language` method to check the validity of a model file and retrieve list of errors and warnings and information about the model.
model_filename = '../_data/Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-continuous.xml'
model_language = ModelLanguage.SBML
errors, warnings, model = validate_model_with_language(model_filename, model_language)
# The first and second outputs (`errors` and `warnings`) are nested lists of error and warning messages. Next, use the `flatten_nested_list_of_strings` method to print out human-readable messages.
# + tags=[]
from biosimulators_utils.utils.core import flatten_nested_list_of_strings
from warnings import warn
if warnings:
warn(flatten_nested_list_of_strings(warnings), UserWarning)
if errors:
raise ValueError(flatten_nested_list_of_strings(errors))
# -
# The third output of `validate_model_with_language` (`model`) contains information about the model. The type of this output depends on the model language. For SBML, this output is an instance of `libsbml.SBMLDocument`.
model.__class__
# `get_parameters_variables_outputs_for_simulation` uses this third output to identify the inputs (e.g., constants, initiation conditions) and outputs (observables, such as concentrations of species and velocities of reactions, that could be recorded from simulations) of models. See the [model introspection tutorial](../1.%20Introspecting%20models/Introspecting%20models.ipynb) for more information.
| tutorials/3. Validating models and simulations/1. Validating models (e.g., CellML, SBML files).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''venv'': venv)'
# name: python3
# ---
# +
import datetime as dt
from matplotlib import pyplot as plt
from sklearn import model_selection
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from pathlib import Path
current_path = Path.cwd()
# +
start = dt.datetime(2010,4,23) ####added 60 days
end = dt.datetime(2018,12,31)
stk_data = pd.read_csv(current_path / 'Data/1Day_only_trading_time_std_60.csv',delimiter=',')
print((stk_data['Date'][2]))
stk_data['Date'] = pd.to_datetime(stk_data['Date'], format="%Y-%m-%d")
whole_data = stk_data.set_index('Date')
stk_data = whole_data[start:end]
test_start = dt.datetime(2019,1,1)
test_end = dt.datetime(2019,9,2)
test_data = whole_data[test_start: test_end]
# -
plt.figure(figsize=(14,14))
plt.plot(stk_data['Close'])
plt.title('Historical Future Value')
plt.xlabel('Date')
plt.ylabel('Future Price')
plt.show()
stk_data['Date'] = stk_data.index
data2 = pd.DataFrame(columns = ['Date', 'Open', 'High', 'Low', 'Close'])
data2['Date'] = stk_data['Date']
data2['Open'] = stk_data['Open']
data2['High'] = stk_data['High']
data2['Low'] = stk_data['Low']
data2['Close'] = stk_data['Close']
train_set = data2.iloc[:, 1:2].values
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(train_set)
X_train = []
y_train = []
for i in range(60, len(stk_data)):
X_train.append(training_set_scaled[i-60:i, 0])
y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
regressor = Sequential()
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
regressor.add(Dense(units = 1))
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
regressor.fit(X_train, y_train, epochs = 15, batch_size = 32)
testdataframe= test_data
testdataframe['Date'] = testdataframe.index
testdata = pd.DataFrame(columns = ['Date', 'Open', 'High', 'Low', 'Close'])
testdata['Date'] = testdataframe['Date']
testdata['Open'] = testdataframe['Open']
testdata['High'] = testdataframe['High']
testdata['Low'] = testdataframe['Low']
testdata['Close'] = testdataframe['Close']
real_stock_price = testdata.iloc[:, 1:2].values
dataset_total = pd.concat((data2['Open'], testdata['Open']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(testdata) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, len(inputs)):
X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
plt.figure(figsize=(20,10))
plt.plot(real_stock_price, color = 'green', label = 'SBI Stock Price')
plt.plot(predicted_stock_price, color = 'red', label = 'Predicted SBI Stock Price')
plt.title('SBI Stock Price Prediction')
plt.xlabel('Trading Day')
plt.ylabel('SBI Stock Price')
plt.legend()
plt.show()
# +
predicted_stock_price = regressor.predict(X_train)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
testdataframe= stk_data[60:]
testdataframe['Date'] = testdataframe.index
testdata = pd.DataFrame(columns = ['Date', 'Open', 'High', 'Low', 'Close'])
testdata['Date'] = testdataframe['Date']
testdata['Open'] = testdataframe['Open']
testdata['High'] = testdataframe['High']
testdata['Low'] = testdataframe['Low']
testdata['Close'] = testdataframe['Close']
real_stock_price = testdata.iloc[:, 1:2].values
# -
plt.figure(figsize=(50,40))
plt.plot(real_stock_price, color = 'green', label = 'SBI Stock Price')
plt.plot(predicted_stock_price, color = 'red', label = 'Predicted SBI Stock Price')
plt.title('SBI Stock Price Prediction')
plt.xlabel('Trading Day')
plt.ylabel('SBI Stock Price')
plt.legend()
plt.show()
| predict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# RMinimum : Phase 2 - Test
import random
import queue
import math
# Test case:
# +
n = 20
k = 5
X = [i for i in range(n)]
cnt = [0 for _ in range(n)]
# -
# Algorithm: Phase 2
# +
def phase2(L, k, cnt):
random.shuffle(L)
res = [L[i * k:(i + 1) * k] for i in range((len(L) + k - 1) // k)]
minele = [0 for _ in range(len(res))]
var = list(res)
for i in range(len(var)):
q = queue.Queue()
for item in var[i]:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
minele[i] = q.get()
return minele, res, cnt
# Test case
me, res, cnt = phase2(X, k, cnt)
# -
# Result:
# +
def test(X, k, res, me, cnt):
n = len(X)
r = len(res)
rs = len(res[0])
m = len(me)
mx = max(cnt)
print('')
    print('Test case n / k:', n, '/', k)
print('====================================')
print('# L_i :', r)
print('|L_i| :', rs)
print('# min :', m)
print('max(cnt) :', mx)
print('log(k) :', math.ceil(math.log(k)/math.log(2)))
print('====================================')
return
# Test case
test(X, k, res, me, cnt)
# -
| jupyter/.ipynb_checkpoints/phase2_test-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # scipy, matplotlib.pyplot, pandas (and datetime)
#
# Note the **```pyplot```** module is imported directoy from **```matplotlib```** and is shortened to **```plt```**. Pyplot is the main tool you will need to plot on screen and save figures using Matplotlib.
#
# The best way to learn about scipy is through its [official tutorial](https://docs.scipy.org/doc/scipy/reference/tutorial/).
#
# ## Look at the official scipy doc [here](https://docs.scipy.org/doc/scipy-1.0.0/reference/). It has a TON of goodies:
# * ```scipy.stats```
#
# * ```scipy.integrate```
#
# * ```scipy.optimize```
#
# * ```scipy.interpolate```
#
# * ```scipy.fftpack```
#
# * ```scipy.signal```
#
# * ```scipy.linalg```
#
# * ```scipy.io```
# +
import numpy
import scipy
import scipy.stats
import matplotlib.pyplot as plt # note, this is often imported as "plt"
import pandas # for 2D tables like csv and text files
import datetime # for time series data
# special code for Jupyter Notebook; allows in-line plotting (may not be needed on your machine)
# %matplotlib inline
# -
# * Now let's create a "noisy" array of data. Add in noise by using **```numpy.random.normal()```**, which draws random samples from a Gaussian distribution and takes 3 arguments as input (location/mean, scale/stdev, and size)
# * ```numpy.random.normal()``` documentation is [here](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html#numpy.random.normal)
# * Other random sampling options from numpy are [here](https://docs.scipy.org/doc/numpy/reference/routines.random.html)
N = 1000
xvals = numpy.linspace(1,100,N)
a_signal = numpy.linspace(1,100,N)
a_noise = numpy.random.normal(loc=0, scale=5, size=N)
a = a_signal+a_noise
scipy.stats.describe(a)
b_signal = numpy.linspace(1,100,N)
b_noise = numpy.random.normal(loc=0, scale=15, size=N)
b = b_signal+b_noise
plt.scatter(xvals, b) # blue
plt.scatter(xvals, a) # orange
# ## ==========> NOW YOU TRY <==========
#
# * Create an array ```c``` with twice the spread of ```b```, then add it to the plot above
# +
#plt.scatter(xvals, c)
#plt.scatter(xvals, b)
#plt.scatter(xvals, a)
# -
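# One possible solution to the exercise above (a sketch, assuming "twice the spread" means doubling the noise scale from 15 to 30; the names `c_signal` and `c_noise` are introduced here, not part of the original):

```python
import numpy

N = 1000
xvals = numpy.linspace(1, 100, N)

# same underlying signal as b, but noise drawn with twice the scale (stdev 30 instead of 15)
c_signal = numpy.linspace(1, 100, N)
c_noise = numpy.random.normal(loc=0, scale=30, size=N)
c = c_signal + c_noise
```

# Uncommenting the `plt.scatter` lines in the cell above then draws `c` behind `b` and `a`.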
# ## Standard deviation
#
# * Center a and b by their means:
#
# ```a_ctd = a - a.mean()```
a_ctd = a - a.mean()
b_ctd = b - b.mean()
# * Compute the standard deviation of a and b. The following lines of code are equivalent to the standard deviation formula:
#
# $$ \sigma_a = \sqrt{ \frac{1}{N-1} \sum^N_{i=1}(a_i - \bar{a})^2 } $$
# +
a_stdev = numpy.std(a, ddof=1) # ensures 1/(N-1), default is (1/N)
#a_stdev = ( (1./(N-1)) * numpy.sum(a_ctd**2.) ) ** 0.5
#a_stdev = numpy.sqrt( (1./(N-1)) * numpy.sum(a_ctd**2.) )
# -
b_stdev = numpy.sqrt( (1./(N-1)) * numpy.sum(b_ctd**2.) )
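As a sanity check, the three formulations of the sample standard deviation above agree to floating-point precision:

```python
import numpy

a = numpy.random.normal(loc=0, scale=5, size=1000)
a_ctd = a - a.mean()
N = a.size

s1 = numpy.std(a, ddof=1)                                  # ddof=1 gives 1/(N-1)
s2 = ((1. / (N - 1)) * numpy.sum(a_ctd ** 2.)) ** 0.5      # formula with ** 0.5
s3 = numpy.sqrt((1. / (N - 1)) * numpy.sum(a_ctd ** 2.))   # formula with numpy.sqrt

assert numpy.allclose(s1, s2) and numpy.allclose(s2, s3)
```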
# ## Pearson correlation using ```scipy.stats.pearsonr()```
#
# * Compute the correlation between a and b. You can do this using **```scipy.stats.pearsonr()```**.
#
# * Note that this function outputs a tuple with the correlation value and the p-value. See the documentation [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html).
#
# * Also beware that the **```scipy.stats.pearsonr()```** function does not use a conservative $\frac{1}{(N-1)}$ estimate of standard deviation.
#
# $$ \mathrm{corr} = \frac{1}{N} \frac{ \sum^N_{i=1} (a_i-\bar{a})(b_i-\bar{b})}{\sigma_a \sigma_b} $$
ab_corr = scipy.stats.pearsonr(a,b)
print(ab_corr) # returns corr coeff. and p-value
# * You can also calculate the correlation by hand (you're on your own for the p-value, though...)
a_stdev, b_stdev = numpy.std(a), numpy.std(b) # note multiple assignments per line, NON-conservative estimate
ab_corr = numpy.mean( (b_ctd*a_ctd) / (a_stdev*b_stdev) )
print(ab_corr)
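The hand-rolled value agrees with `scipy.stats.pearsonr()` because both use the non-conservative 1/N (ddof=0) standard deviation. A quick self-contained check:

```python
import numpy
import scipy.stats

x = numpy.random.normal(size=500)
y = x + numpy.random.normal(scale=0.5, size=500)

# corr = mean of the centered products divided by the product of (ddof=0) stdevs
by_hand = numpy.mean((x - x.mean()) * (y - y.mean())) / (numpy.std(x) * numpy.std(y))
by_scipy = scipy.stats.pearsonr(x, y)[0]

assert abs(by_hand - by_scipy) < 1e-8
```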
# ## Linear regression using ```scipy.stats.linregress()```
#
# * Now calculate a simple linear regression on the a and b arrays. Note **```scipy.stats.linregress()```** outputs 5 different variables. See its documentation [here](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html).
a_slope, a_intercept, a_rval, a_pval, a_stderr = scipy.stats.linregress(xvals,a)
b_slope, b_intercept, b_rval, b_pval, b_stderr = scipy.stats.linregress(xvals,b)
print(a_slope, b_slope)
# Calculate a line of best fit using the linear regression info:
a_fit = a_slope * xvals + a_intercept
b_fit = b_slope * xvals + b_intercept
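An equivalent route to the same fit line is `numpy.polyfit()` with degree 1, which returns the slope and intercept directly:

```python
import numpy
import scipy.stats

xvals = numpy.linspace(1, 100, 1000)
a = xvals + numpy.random.normal(scale=5, size=1000)

slope_lr, intercept_lr = scipy.stats.linregress(xvals, a)[:2]   # least-squares fit
slope_pf, intercept_pf = numpy.polyfit(xvals, a, 1)             # same fit, degree-1 polynomial

assert numpy.allclose([slope_lr, intercept_lr], [slope_pf, intercept_pf])
```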
# # matplotlib.pyplot
#
# If/when you have the time, the official [Pyplot tutorial](https://matplotlib.org/users/pyplot_tutorial.html) is a good place to start.
# ## Simple plotting
#
# * Now plot the a and b data along with a best-fit line.
# * There are a few different ways of creating a figure.
# * One way is using **```plt.plot()```** directly:
plt.scatter(xvals, a, label='a')
plt.plot(xvals, a_fit, label='a fit', c='red')
plt.legend()
# Another way is calling **```plt.subplot()```**, which will allow you to plot panels using a (row,col,plot_number) syntax:
ax1 = plt.subplot(1,1,1) # (rows, cols, plot), (111) also works --> commas not necessary
ax1.scatter(xvals, a, label='a')
ax1.plot(xvals, a_fit, label='a fit', c='red')
ax1.legend()
# The __most flexible__ way of creating a figure is to create it using **```fig = plt.figure()```** and *adding* subplots one by one using **```fig.add_subplot()```**.
#
# * **The advantage of this method is that axes can be adjusted individually and there is a LOT of flexibility here.**
# * **If you plan to be creating publication-ready figures, this is a great place to start.**
# * Note this figure is saved as a PDF using the **```plt.savefig()```** function: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig
# +
fig = plt.figure(figsize=(8,6)) # size is optional
my_font_size = 12
ax1 = fig.add_subplot(2,1,1) # (rows, cols, plot)
ax1.scatter(xvals, a, label='a', color='black')
ax1.plot(xvals, a_fit, color='red', label='a fit', lw=3)
ax1.set_ylim(-50,150)
ax1.set_ylabel('a label', fontsize = my_font_size)
ax1.tick_params(labelsize = my_font_size)
ax1.legend(loc=0)
ax2 = fig.add_subplot(2,1,2)
ax2.scatter(xvals, b, label='b', color='black')
ax2.plot(xvals, b_fit, color='red', label='b fit', lw=3)
ax2.set_ylim(-50,150)
ax2.set_ylabel('b label', fontsize = my_font_size)
ax2.tick_params(labelsize = my_font_size)
ax2.legend(loc=0)
plt.tight_layout() # helpful for stretching axes to the "figsize" chosen in line 1
plt.savefig('ab_trends.pdf', transparent=True)
# -
# # Plotting the Niño 3.4 index (with ```pandas``` and ```datetime```)
#
# * Download MONTHLY Niño index data (.txt file) from the Climate Prediction Center website:
# http://www.cpc.ncep.noaa.gov/data/indices/ersst4.nino.mth.81-10.ascii
#
# * This file is available in the week2 folder on the seminar webpage
# * Read a txt or csv file using the ```pandas.read_table()``` function
filename = 'ersst4.nino.mth.81-10.ascii.txt'
data_file = pandas.read_table(filename, delim_whitespace=True)
data = data_file.values
type(data_file)
data_file
#data_file.describe()
# the first column [index 0] is the year of the data set
# the ninth column [index 8] is the Nino3.4 index
print(data.shape)
years = data[:,0]
months = data[:,1]
nino34 = data[:,8]
nino34_centered = nino34 - nino34.mean()
# ### Use the ```datetime``` module in Python to handle dates and time series
#
# This file contains monthly averages of ENSO indices. The time is given as separate year and month columns, so we must convert these to ```datetime``` objects to get a proper time axis
today = datetime.date(2018,3,7)
now = datetime.datetime(2018,3,7,13,45,0)
print(today)
print(now)
# +
ntime = years.size # length of time series array
# TWO WAYS to create a list of datetime objects
# here, looping
year_month_list = []
for i in range(ntime):
year_month_list.append(datetime.date( int(years[i]), int(months[i]) ,15))
# -
# * [List comprehensions](http://www.secnetix.de/olli/Python/list_comprehensions.hawk) are a fast way to create a list that has a "built-in" for loop:
# here, list comprehension (kind of like a backwards list, all tucked into brackets)
year_month_list = [datetime.date(int(years[i]), int(months[i]), 15) for i in range(ntime)]
# * Now create a figure of the monthly, centered Niño 3.4 index
# +
fig = plt.figure(figsize=(10,3)) # figsize=(inches wide, inches tall) --> not necessary
ax = fig.add_subplot(1,1,1)
ax.plot(year_month_list, nino34_centered, color='red', lw=2, zorder=2) # a higher zorder means the line will sit over others
ax.set_xlabel('Year')
ax.set_ylabel('SST anomaly')
ax.set_title('Nino 3.4 index (monthly)')
ax.axhline(y=0, color='black', ls='-', lw=2, zorder=1)
# -
# **Create a moving or rolling average using the ```pandas``` module, which comes with the Anaconda distribution but can be installed separately**
#
# * Note pandas is the Python Data Analysis Library and is *distinct* from NumPy and SciPy but provides a lot of complementary functions. Read about it [here](http://pandas.pydata.org/).
nino34_centered_rolling_mean = pandas.Series(nino34_centered).rolling(window=12,center=True).mean()
fig = plt.figure(figsize=(10,3))
ax = fig.add_subplot(1,1,1)
ax.plot(year_month_list,nino34_centered_rolling_mean, color='red', lw=2, zorder=2)
ax.set_xlabel('Year')
ax.set_ylabel('SST anomaly')
ax.set_title('Nino 3.4 index (rolling mean)')
ax.axhline(y=0, color='black', ls='-', lw=2, zorder=1)
# ## ==========> NOW YOU TRY <==========
#
# * Change the rolling mean above to be a 3-year mean, and re-run the cells to plot
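One possible answer (a sketch on stand-in data; with monthly samples, a 3-year mean corresponds to `window=36`):

```python
import numpy
import pandas

monthly = pandas.Series(numpy.arange(120, dtype=float))  # stand-in for nino34_centered
rolling_3yr = monthly.rolling(window=36, center=True).mean()

# With the default min_periods, a full 36-month window is required,
# so window - 1 = 35 entries come out as NaN.
assert rolling_3yr.isna().sum() == 35
```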
| materials/week2/3_scipy_matplotlib_pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SMP and snow pit profile matching
# An example of how SMP profiles at snow pit locations are scaled to account for differences
# in the target snowpack structure. Because the SMP and density cutter profiles are physically
# displaced, we use a brute-force approach to match them as closely as possible with a five-step
# procedure:
#
# 1. Make a first guess at the density from the SMP using the P15
# 2. Break up the SMP profile into L_RESAMPLE sized layers
# 3. Randomly scale each layer according to MAX_STRETCH_LAYER
# 4. Compare against density profile
# 5. Select best fit scaling where RMSE and R are optimized
#
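The `smpfunc` helpers used below are local to this repository, but the core of steps 2-3 (random per-layer thickness scaling) can be sketched with plain numpy. The names here are illustrative stand-ins, not the `smpfunc` implementation:

```python
import numpy as np

np.random.seed(2019)

L_RESAMPLE = 50           # nominal layer thickness in mm
MAX_STRETCH_LAYER = 0.75  # each layer may shrink or grow by up to 75% of its height
num_sections = 6

# One random candidate: an independent stretch factor per layer
stretch = np.random.uniform(-MAX_STRETCH_LAYER, MAX_STRETCH_LAYER, num_sections)
layer_thickness_scaled = L_RESAMPLE + stretch * L_RESAMPLE
layer_bottoms = layer_thickness_scaled.cumsum()  # scaled layer boundaries

# Thicknesses stay positive because |stretch| < 1
assert (layer_thickness_scaled > 0).all()
```

Thousands of such candidates are generated, each scaled profile is compared against the density cutter measurements, and the best-fitting candidate is kept.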
# +
# Community packages
import os
import numpy as np
np.random.seed(2019)
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib.patches import ConnectionPatch
from scipy import stats
from statsmodels.formula.api import ols
import pickle
# Local packages
import smpfunc #SMP helper functions
# Import SLF SMP Package
from snowmicropyn import Profile, proksch2015, loewe2012
# Import data
pit_summary = pd.read_csv("./data/Pit/pit_summary.csv")
pit_desnity = pd.read_csv("./data/Pit/pit_density.csv")
input_data = os.path.abspath("./data/SMP/Calibration")
# Set constants
CUTTER_SIZE = 15 # Half the height of the density cutter in mm
WINDOW_SIZE = 5 # SMP analysis window in mm
H_RESAMPLE = 1 # delta height in mm for standardized SMP profiles
L_RESAMPLE = 50 # layer unit height in mm for SMP matching
MAX_STRETCH_LAYER = 0.75 # Max layer change in % of height
MAX_STRETCH_OVERALL = 0.15 # Max profile change in % of total height
NUM_TESTS = 10000
axis_value_size = 12
axis_label_size = 14
coeffs = pickle.load(open('./output/density_k20b_coeffs.pkl', 'rb'))
# +
# Load the SMP calibration profiles, should be 25 for the ECCC case
def load_smp(smp_file):
p = Profile.load(smp_file)
p = smpfunc.preprocess(p, smoothing = 0)
ground = p.detect_ground()
surface = p.detect_surface()
return p
file_list = [
os.path.join(input_data, f)
for f in sorted(os.listdir(input_data))
if f.endswith(".pnt")]
smp_data = [load_smp(file) for file in file_list]
# +
smp = smp_data[11]
smp_file_num = int(smp.name[-4:])
pit_df = pit_summary[pit_summary['SMPF'] == smp_file_num] # Select the matching pit
density_df = pit_desnity[pit_desnity['ID'] == pit_df['ID'].values[0]]
density_df = density_df.assign(relative_height=np.abs(((density_df['TOP']*10) - CUTTER_SIZE) - density_df['TOP'].max()*10).values)
# Make first guess at microstructure based on original profile
l2012 = loewe2012.calc(smp.samples_within_snowpack(), window=WINDOW_SIZE)
p2015 = proksch2015.calc(smp.samples_within_snowpack(), window=WINDOW_SIZE)
# Estimate offset of the snow depth and SMP profile
smp_profile_height = p2015.distance.max()
smp_height_diff = pit_df.MPD.values*1000 - smp_profile_height
# Create new SMP resampled arrays and determine the number of layers
depth_array = np.arange(0, p2015.distance.max() + smp_height_diff, H_RESAMPLE)
density_array = np.interp(depth_array,p2015.distance,p2015.P2015_density)
force_array = np.interp(depth_array,p2015.distance,l2012.force_median)
l_array = np.interp(depth_array,p2015.distance,l2012.L2012_L)
smp_df = pd.DataFrame({'distance': depth_array,
'density': density_array,
'force_median': force_array,
'l': l_array})
num_sections = np.ceil(len(smp_df.index)/L_RESAMPLE).astype(int)
random_tests = [smpfunc.random_stretch(x, MAX_STRETCH_OVERALL, MAX_STRETCH_LAYER) for x in np.repeat(num_sections, NUM_TESTS)]
scaled_profiles = [smpfunc.scale_profile(test, smp_df.distance.values, smp_df.density.values, L_RESAMPLE, H_RESAMPLE) for test in random_tests]
compare_profiles = [smpfunc.extract_samples(dist, rho, density_df.relative_height.values, CUTTER_SIZE) for dist, rho in scaled_profiles]
compare_profiles = [pd.concat([profile, density_df.reset_index()], axis=1, sort=False) for profile in compare_profiles]
retrieved_skill = [smpfunc.calc_skill(profile, CUTTER_SIZE) for profile in compare_profiles]
retrieved_skill = pd.DataFrame(retrieved_skill,columns = ['r','rmse','rmse_corr','mae'])
# +
min_scaling_idx = retrieved_skill.sort_values(['r', 'rmse_corr'], ascending=[False, True]).head(1).index.values
min_scaling_coeff = random_tests[int(min_scaling_idx)]
dist, scaled_l = smpfunc.scale_profile(min_scaling_coeff, smp_df.distance.values, smp_df.l.values, L_RESAMPLE, H_RESAMPLE)
dist, scaled_force_median = smpfunc.scale_profile(min_scaling_coeff, smp_df.distance.values, smp_df.force_median.values, L_RESAMPLE, H_RESAMPLE)
result = compare_profiles[int(min_scaling_idx)].assign(l=smpfunc.extract_samples(dist, scaled_l, density_df.relative_height.values, CUTTER_SIZE).mean_samp,
force_median=smpfunc.extract_samples(dist, scaled_force_median, density_df.relative_height.values, CUTTER_SIZE).mean_samp)
# +
layer_thickness_scaled = L_RESAMPLE + (min_scaling_coeff * L_RESAMPLE)
layer_height_scalled = layer_thickness_scaled.cumsum()
layer_thickness = np.repeat(L_RESAMPLE, num_sections)
layer_height = layer_thickness.cumsum()
# -
# Change in thickness
print((depth_array.max() - layer_thickness_scaled.sum())/depth_array.max())
density_k2020 = coeffs[0] + coeffs[1] * np.log(scaled_force_median) \
+ coeffs[2] * np.log(scaled_force_median) * scaled_l \
+ coeffs[3] * scaled_l
# #### Figure 3 with caption
#
# <img src="./output/figures/Fig03_matching_lowres.png" alt="Figure 3" style="width: 500px;"/>
#
# #### Example of the SMP processing workflow to align first-guess estimates of ρ_smp (black lines) and snow pit measurements (red lines). Profiles are divided into arbitrary layers of 5 cm and randomly scaled in thickness. A best-fit candidate is selected where the RMSE between the snow density estimates and observations is minimized. The matching process accounts for differences in the target snowpack between the two methods. The example shown is for Eureka site 5 on MYI.
# +
f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(10,8))
ax1.tick_params(axis='both', which='major', labelsize=axis_value_size)
ax2.tick_params(axis='both', which='major', labelsize=axis_value_size)
xmax = 500
xmin = 100
for l in layer_height:
ax1.axhline(y=l, color = 'k', alpha = 0.5, ls = 'dashed')
ax1.step(result.RHO, result.relative_height-15, color = 'r')
ax2.step(result.RHO, result.relative_height-15, color = 'r')
ax3.step(result.RHO, result.relative_height-15, color = 'r',
label = r'$\rho_{\mathrm{pit}}$')
ax1.plot(density_array, depth_array, color = 'k')
for l in layer_height_scalled:
ax2.axhline(y=l, color = 'k', alpha = 0.5, ls = 'dashed')
ax3.axhline(y=l, color = 'k', alpha = 0.5, ls = 'dashed')
ax2.plot(scaled_profiles[int(min_scaling_idx)][1],
scaled_profiles[int(min_scaling_idx)][0], color = 'k')
for i in np.arange(0, len(layer_height)-1):
xy = (xmin, layer_height_scalled[i])
xy1 = (xmax,layer_height[i])
con = ConnectionPatch(xyA=xy, xyB=xy1, coordsA="data", coordsB="data",
axesA=ax2, axesB=ax1, color="k", alpha = 0.5, ls = 'dashed')
ax2.add_artist(con)
ax3.plot(density_k2020 ,scaled_profiles[int(min_scaling_idx)][0],
color = 'k', label = r'$\rho_{\mathrm{smp}}$')
ax1.set_ylim(0,600)
ax1.set_xlim(xmin,xmax)
ax2.set_xlim(xmin,xmax)
ax3.set_xlim(xmin,xmax)
ax3.axhline(y=l, color = 'k', alpha = 0.5, ls = 'dashed', label = 'Layer')
ax1.set_ylabel('Depth below air-snow interface [mm]', fontsize=axis_label_size)
ax2.set_xlabel('Snow density [kg m$\mathregular{^{-3}}$]', fontsize=axis_label_size)
ax1.set_title('(a) First guess')
ax2.set_title('(b) Layer scaled')
ax3.set_title('(c) Calibrated')
ax1.invert_yaxis()
ax2.invert_yaxis()
ax3.invert_yaxis()
ax3.legend(fontsize=12, facecolor='white', framealpha=1)
f.savefig('./output/figures/Fig03_matching_lowres.png', format='png')
f.savefig('./output/figures/Fig03_matching_production.pdf', format='pdf', dpi = 300)
# -
# Correlation after alignment
np.corrcoef(result.RHO, result.mean_samp)[1][0]
# RMSE after alignment
np.sqrt(np.mean((result.RHO-result.mean_samp)**2))
| Annex_Matching.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="sR4CPheu1U0y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 292} outputId="21c23e14-0629-42ce-8a77-58022c799dcc"
# !pip install --upgrade pyswarm
# !pip install pymc3
# !pip install --upgrade pactools
# + id="j4uN6iwZ1jZk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 92} outputId="d2e276d6-6971-4273-c483-8d0f2af77173"
from sklearn.model_selection import train_test_split
from pyswarm import pso
from os import path
import os
import requests
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy
import sys
import numpy as np
from numpy import loadtxt
from numpy import array
from numpy.random import choice
import pandas as pd
import time
import random
import statistics
import pandas
import math
import csv
import random
import logging
from pymc3 import *
import pymc3 as pm
from functools import reduce
from operator import add
from tqdm import tqdm
import geopy.distance
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
from theano import shared
from sklearn import preprocessing
print('Running on PyMC3 v{}'.format(pm.__version__))
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Conv1D
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Activation
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow
from tensorflow.keras import datasets, layers, models
from keras.utils import np_utils
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.ticker import MaxNLocator
#TESNORFOW
import tensorflow as tf
from tensorflow import keras
from keras import datasets, layers, models
#KERAS LIBRARIES
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential,Model
from keras.layers import Dense, Dropout , Flatten,BatchNormalization,Conv2D,MaxPooling2D, Activation,LSTM,Embedding,Input,GlobalAveragePooling2D
from keras.regularizers import l1, l2, l1_l2
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import backend
from keras.utils import np_utils
from keras.utils import to_categorical
from numpy import savetxt
# + id="H4SEXPN_1meX" colab_type="code" colab={}
def data1():
train1 = np.load('/content/drive/My Drive/NumpyArrayCovidx/train.npy',allow_pickle=True)
train_labels1 = np.load('/content/drive/My Drive/NumpyArrayCovidx/train_labels.npy',allow_pickle=True)
train2,test1, train_labels2,test_labels1 = train_test_split(train1, train_labels1, test_size=0.2,random_state=42)
    x_train = train2/255.0  # normalize 8-bit pixel values to [0, 1]
    y_train = pd.get_dummies(train_labels2)
    x_test = test1/255.0
    y_test = pd.get_dummies(test_labels1)
return x_train,y_train,x_test,y_test
# + id="bXJdbfa21p1G" colab_type="code" colab={}
x_train, y_train, x_test, y_test = data1()
# + id="vXjAS_W94t3_" colab_type="code" colab={}
IMG_SHAPE1=(64,64,3) # we will adapt this later
vgg19 = keras.applications.vgg19.VGG19(input_shape=IMG_SHAPE1,
include_top=False,
weights='imagenet')
# + id="03YkeZPiCivC" colab_type="code" colab={}
#fine_tuning,lstm_units,dropout,learning_rate
lb=[0,1,0.0,0.001]
ub=[19,10,0.6,0.2]
# + id="2NyzfLZC-EZk" colab_type="code" colab={}
def create_model_lstm_newvgg(x):
print(x[0],x[1],x[2],x[3])
    IMG_SHAPE1=(64,64,3) # we will adapt this later
vgg19 = keras.applications.vgg19.VGG19(input_shape=IMG_SHAPE1,
include_top=False,
weights='imagenet')
tempmod=vgg19
for layer in tempmod.layers[:(-1)*int(round(x[0]))]:
layer.trainable = False
model = tf.keras.Sequential()
model.add(tempmod)
layer_2 =layers.Flatten()
model.add(layer_2)
model.add(layers.Reshape((layer_2.output_shape[1],1)))
model.add(layers.LSTM(int(round(x[1])),return_sequences=True))
model.add(layers.Dropout(x[2]))
model.add(layers.Flatten())
model.add(keras.layers.Dense(3,activation="softmax"))
if x[3]< 0.003:
learning_rate = 0.001
elif x[3]< 0.0075:
learning_rate = 0.005
elif x[3]< 0.015:
learning_rate = 0.01
elif x[3]< 0.035:
learning_rate = 0.02
elif x[3]< 0.075:
learning_rate = 0.05
elif x[3]< 0.125:
learning_rate = 0.1
elif x[3]< 0.175:
learning_rate = 0.15
else:
learning_rate = 0.2
opt = keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])
return model
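The if/elif ladder above snaps a continuous PSO coordinate onto a discrete menu of learning rates. An equivalent, more compact formulation (a sketch, not the code used in this notebook) uses `numpy.digitize`:

```python
import numpy as np

edges = [0.003, 0.0075, 0.015, 0.035, 0.075, 0.125, 0.175]
rates = [0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.15, 0.2]

def pick_rate(x3):
    # digitize returns how many bin edges lie at or below x3,
    # which is exactly the index into the rates menu
    return rates[np.digitize(x3, edges)]

assert pick_rate(0.002) == 0.001   # below the first edge
assert pick_rate(0.05) == 0.05     # mid-range bucket
assert pick_rate(0.19) == 0.2      # above the last edge
```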
# + id="jswZ25s7Irzx" colab_type="code" colab={}
EarlyStopper = EarlyStopping(patience=4, monitor='val_loss', mode='min')
count = 0
# + id="vSsRgXBuIvNB" colab_type="code" colab={}
def apple(x):
model = create_model_lstm_newvgg(x)
model.fit(x_train, y_train, epochs=20, batch_size=1000, verbose=1,validation_data=(x_test, y_test),callbacks=[EarlyStopper])
loss, acc = model.evaluate(x_test, y_test, verbose=1)
if acc>0.9:
global count
count = count+1
model.save(f"/content/drive/My Drive/saved_models/pso_vgg_lstm/model-{count}-{round(acc, 3)}-{round(loss, 3)}")
savetxt(f"/content/drive/My Drive/saved_models/pso_vgg_lstm/data-{count}.csv", x, delimiter=',')
return loss
# + id="hxPygje5I8wS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1247fe28-b9ae-4b08-81e2-02a226531146"
xopt, fopt = pso(apple, lb, ub, swarmsize=10, omega=0.5, phip=0.5, phig=1.0, maxiter=30, minstep=1)
print ("Best position"+str(xopt))
print ("Loss:" + str(fopt))
| Transfer_Learning/ParticleSwarmOpt LSTM/pso_vgg_lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PythonData]
# language: python
# name: conda-env-PythonData-py
# ---
# +
# Dependencies
import csv
import matplotlib.pyplot as plt
import requests
import pandas as pd
#import api_key
from config import api_key
# -
# Save config information.
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "metric"
# Build partial query URL
query_url = f"{url}appid={api_key}&units={units}&q="
# +
# list of cities to query
cities = ["London", "Paris", "Las Vegas", "Stockholm", "Sydney", "Hong Kong"]
# list for response results
lon = []
pressure = []
# loop through cities, make API request, and append desired results
for city in cities:
response = requests.get(query_url + city).json()
lon.append(response['coord']['lon'])
pressure.append(response['main']['pressure'])
print(f"Longitude: {lon}")
print(f"Pressure: {pressure}")
# +
# build a dataframe from the cities, lon,and pressure lists
weather_data = {"city": cities, "pressure": pressure, "lon": lon}
weather_data = pd.DataFrame(weather_data)
weather_data
# +
# Build a scatter plot for each data type
plt.scatter(weather_data["lon"], weather_data["pressure"], marker="o")
# Incorporate the other graph properties
plt.title("Pressure in World Cities")
plt.ylabel("Pressure (Celsius)")
plt.xlabel("Longitude")
plt.grid(True)
# Save the figure
plt.savefig("PressureInWorldCities.png")
# Show plot
plt.show()
| 2/Extra_Content/Stu_CityPressure/Solved/Stu_CityPressure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
PATH = r'..\data\raw\dialect_dataset.csv'
PATH_TO_SAVE = r'..\data\processed\01_preprocessed.csv'
URL = r'https://recruitment.aimtechnologies.co/ai-tasks'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import requests
import string
import re
df = pd.read_csv(PATH)
df.head()
id_,text = [],[]
for i in range(0,(len(df)//1000)+1):
myobj = df.iloc[i*1000:(i+1)*1000,:1].values
x = requests.post(URL, json = list(map(str,list(myobj[:,0]))))
    response_dict = x.json()  # parse the JSON response instead of calling eval() on raw text
    id_.extend(response_dict.keys())
    text.extend(response_dict.values())
df_tmp = pd.DataFrame({'id':id_, 'text':text})
df_tmp['id'] = df_tmp['id'].apply(int)
df_tmp = pd.merge(df,df_tmp, on='id')
df_tmp.head()
df_tmp.to_csv(PATH_TO_SAVE)
| notebooks/00_Data_fetching.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regular Expressions
# ## Lesson Objectives
# By the end of this lesson, you will be able to:
# - Understand the relationship between regular expressions and Python
# - Compile regular expressions using the `re` module
# - Use regular expressions to find pattern matches in strings
# - Use regular expressions to transform strings based on pattern matches
# ## Table of Contents
# - [Regular Expressions](#regular expressions)
# - [The `re` Module](#re)
# - [Compiling Patterns and Querying Match Objects](#compiling)
# - [The Regular Expression Syntax](#syntax)
# - [Special Metacharacters](#meta)
# - [Special Sequences](#special)
# - [Quantifiers](#quant)
# - [Modifying Strings](#mod)
# - [Takeaways](#takeaways)
# - [Applications](#applications)
# <a id='regular expressions' ></a>
# ## Regular Expressions
# Regular expressions are a powerful language for matching text patterns. They provide a lot of flexibility to help you find and/or replace a sequence of characters. Mastering regular expressions is a stepping stone to [Natural Language Processing](https://en.wikipedia.org/wiki/Natural-language_processing).
#
# This lesson will introduce you to regular expressions as well as the Python `re` module, which provides regular expression support.
#
# I also want to quickly point you to https://pythex.org/, a quick and easy way to test your Python regular expressions.
# <a id='re' ></a>
# ## The `re` Module
# Regular expressions are made available through the `re` module in Python. Think of the module as access to another language through Python.
#
# You use regular expressions to specify patterns that you want to find within a string. Your pattern can be just about anything, like proper names or e-mail addresses, and the `re` module has a lot of tools that will let you find these patterns and even modify strings based on these patterns.
#
# > Behind the scenes, regular expression patterns are compiled into a series of bytecodes and then executed in C. These details won't be covered in this lesson (but maybe in a future one!).
# <a id='compiling' ></a>
# ## Compiling Patterns and Querying Match Objects
# Before we dive into the regular expression language, we need to see how the `re` module works. We'll see here how to compile some basic patterns and then query the *match objects* that they return after searching a string.
import re
# ### Compiling Regular Expressions
# The `re` module is an interface to the regular expression language. Since regular expressions aren't baked into Python's syntax, you often have to use the `compile()` method to compile regular expressions into *pattern objects*. These objects then have methods for various operations, such as searching for matches or performing string substitutions.
#
# Here's how we compile a pattern for a simple character match (looking for the single character `"a"`):
a = re.compile('a') #the single character `a` is our pattern
a
# Above, we passed our regular expression (the single character `"a"`) to `re.compile()` as a string. We had to pass our regular expression as a string because regular expressions aren’t a part of the core Python language. This keeps the Python language simpler for all other purposes, but creates a little headache when it comes to compiling the backslash character.
#
# #### A quick note on escape characters and Python's raw string notation
# > Regular expressions use the backslash character (`\`) to indicate special forms or to escape special characters. This collides with Python’s usage of the same character for the same purpose in string literals. For example, to match a literal backslash with a regular expression, one would have to write `\\\\` as the pattern string, because the regular expression must be `\\`, and each backslash must be expressed as `\\` inside a regular Python string literal.
#
# > The solution is to use Python’s raw string notation for regular expression patterns; backslashes are not handled in any special way in a string literal prefixed with `'r'`. So `r"\n"` is a two-character string containing `'\'` and `'n'`, while `"\n"` is a one-character string containing a newline. **Thus it's best to type regular expression patterns using this raw string notation.**
#
# So let's re-compile our character using Python's raw string notation.
a = re.compile(r'a') # Good practice to use the raw string notation
a
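A short demonstration of the difference raw string notation makes:

```python
# An ordinary string literal interprets the escape; a raw string keeps both characters
assert len("\n") == 1     # one newline character
assert len(r"\n") == 2    # a backslash followed by 'n'
assert r"\n" == "\\n"     # the raw form equals the doubly-escaped form
```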
# ### Finding Matches
# Now that we have an object representing a *compiled* regular expression, there's an array of object methods and attributes at our disposal. Here we'll cover just the most common/useful ones:
#
# <table border="1">
# <colgroup>
# <col width="28%" />
# <col width="72%" />
# </colgroup>
# <thead valign="bottom">
# <tr class="row-odd"><th>Method</th>
# <th class="head">Purpose</th>
# </tr>
# </thead>
# <tbody valign="top">
# <tr><td><code><span>match()</span></code></td>
# <td>Determines if the regular expression matches at the beginning
# of the string, returning a match object.</td>
# </tr>
# <tr><td><code><span>search()</span></code></td>
# <td>Scans through a string, looking for any
# location where the regular expression matches.</td>
# </tr>
# <tr><td><code><span>findall()</span></code></td>
# <td>Finds all substrings where the regular expression matches,
# returning them as a list.</td>
# </tr>
# <tr><td><code><span>finditer()</span></code></td>
# <td>Finds all substrings where the regular expression matches,
# returning them as an <a href="https://docs.python.org/3.4/glossary.html#term-iterator"><span>iterator</span></a>.</td>
# </tr>
# </tbody>
# </table>
# #### `match()`
# > Determines if the regular expression matches at *the beginning of the string*, returning a `match object`.
apple = a.match('apple')
banana = a.match('banana')
type(apple)
type(banana)
# > #### Match Objects return `None` if there's no match.
# > Since the `match()` method didn't find an `"a"` at the beginning of `"banana"`, it returned `None`. This is handy for conditional tests, as `None` always evaluates to the boolean `False`:
if banana:
print("banana!")
else:
print("no banana :(")
# > Above, we used `match()` to search the strings `apple` and `banana` using the single character `"a"` as our regular expression. We assigned the results to two match objects, one of which (`apple`) actually contained a match. We can now query that match object using some methods. A few of the most common ones are:
#
# <table border="1">
# <colgroup>
# <col width="29%" />
# <col width="71%" />
# </colgroup>
# <thead valign="bottom">
# <tr><th>Method/Attribute</th>
# <th>Purpose</th>
# </tr>
# </thead>
# <tbody>
# <tr><td><code><span>group()</span></code></td>
# <td>Return the string matched by the RE</td>
# </tr>
# <tr><td><code><span>start()</span></code></td>
# <td>Return the starting position of the match</td>
# </tr>
# <tr><td><code><span>end()</span></code></td>
# <td>Return the ending position of the match</td>
# </tr>
# <tr><td><code><span>span()</span></code></td>
# <td>Return a tuple containing the (start, end)
# positions of the match</td>
# </tr>
# </tbody>
# </table>
#
# Let's try these out!
apple.group()
apple.start()
apple.end()
apple.span()
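The method table above also listed `findall()` and `finditer()`; here is what they return for our single-character pattern:

```python
import re

a = re.compile(r'a')

# findall() returns every non-overlapping match as a list of strings
assert a.findall('banana') == ['a', 'a', 'a']

# finditer() yields full match objects, so positions are available too
assert [m.start() for m in a.finditer('banana')] == [1, 3, 5]
```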
# <a id='syntax' ></a>
# ## The Regular Expression Syntax
# Regular Expression Syntax can be roughly divided into 3 categories:
# 1. Special Metacharacters
# 2. Special Sequences
# 3. Quantifiers
#
# We'll run through several, but not all, techniques within each of these categories. Full documentation can be found [here](https://docs.python.org/3.4/library/re.html#regular-expression-syntax).
# <a id='meta' ></a>
# ### Special Metacharacters
# As we saw with the letter `"a"` above, most letters and characters match themselves. Naturally, however, there are exceptions to this rule. Some characters are **special metacharacters** and so they don’t match themselves. Instead, they signal that something special should happen.
#
# Here's a table listing some of the most common special metacharacters.
# <table>
# <tr><th>Special Metacharacters</th>
# <th>Purpose</th>
# </tr>
# <tr>
# <td><code>[ ]</code></td>
# <td>matches any chars placed within the brackets</td>
# </tr>
# <tr>
# <td ><code>\</code></td>
# <td>escape special characters</td>
# </tr>
# <tr>
# <td><code>.</code></td>
# <td>matches any character</td>
# </tr>
# <tr>
# <td><code>^</code></td>
# <td>matches beginning of string</td>
# </tr>
# <tr>
# <td><code>$</code></td>
# <td>matches end of string</td>
# </tr>
# <tr>
# <td><code>|</code></td>
# <td>the OR operator matches either the left or right operand</td>
# </tr>
# <tr>
# <td><code>()</code></td>
# <td>creates capture groups and indicates precedence</td>
# </tr>
# </table>
# #### Character Classes: [ ]
# > Square brackets are used to specify a *character class*, which is a set of characters that you wish to match.
# +
a = re.compile(r'[a]')
apple = a.match('apple')
# Using a conditional check to reinforce this practice
if apple:
    print(apple.group())
else:
    print("No match.")
# -
# > Inside the class, you can either list characters individually or specify a range of characters using a dash (`'-'`). For example, `[abc]` will match any of the characters `a`, `b`, or `c`. If you were to use the range approach, you would have typed `[a-c]`. These character classes are case sensitive, so if you wanted to match only uppercase letters, your regular expression would be `[A-Z]`.
# +
az = re.compile(r'[a-z]') #any lowercase letters
alpha_num = az.search('01234w56789')
if alpha_num:
    print(alpha_num.group())
else:
    print("No match.")
# -
# > The special metacharacters - except the caret ('^') - do not work inside classes. For example, `[abc*]` will match any of the characters `'a'`, `'b'`, `'c'`, or `'*'`.
# +
abc_star = re.compile(r'[abc*]')
star = abc_star.search('01234*56789')
if star:
    print(star.group())
else:
    print("No match.")
# -
# #### The Escape Character: `\`
# > As with Python string literals, the backslash escapes special characters (including itself!).
# +
slash = re.compile(r'\\') #the first backslash is to escape the second backslash, which itself is a metacharacter
slash_match = slash.search(r"....path\file")
if slash_match:
    print(slash_match.group())
else:
    print("No match.")
# -
# #### The Wildcard: `.`
# > Another useful metacharacter is the dot (`.`). It'll match anything except a newline character, unless you use the `re.DOTALL` flag, in which case it will also match a newline.
# +
after_dash = re.compile(r'[-].')
after_dash_match = after_dash.search(r"a-b")
if after_dash_match:
    print(after_dash_match.group())
else:
    print("No match.")
# -
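# > The `re.DOTALL` flag mentioned above can be sketched like this (a small illustrative example, not from the tutorial's original cells):

```python
import re

# Without re.DOTALL, '.' refuses to match the newline, so there is no match.
no_flag = re.search(r'a.b', 'a\nb')
print(no_flag)  # None

# With re.DOTALL, '.' matches the '\n' as well.
with_flag = re.search(r'a.b', 'a\nb', re.DOTALL)
print(with_flag.group())
```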
# #### The Complementing Caret: `^`
# > You can match the characters *not* listed within a character class by complementing the set with the caret character (`^`) as the first character of the class. For example, `[^a]` will match any character *except* 'a'.
# +
not_a = re.compile(r'[^a]')
non_a = not_a.search('aaaabaaaaa')
if non_a:
    print(non_a.group())
else:
    print("No match.")
# -
# > When `^` is inside `[]` but not at the start, it means the actual `^` character. When it's escaped with the backslash (`\^`), it also means the actual `^` character.
# +
caret = re.compile(r'[\^]')
c = caret.search('aaaa^aaaaa')
if c:
    print(c.group())
else:
    print("No match.")
# -
# > In all other cases, the caret `'^'` matches the start of the string. If your string spans multiple lines, you can use the `MULTILINE` flag, which also matches immediately after each newline, which the `match()` method doesn't do (we'll see this further down).
s1 = "Kate is the most common spelling of that name"
s2 = "Another spelling is Cate, but that's rarer."
print(re.search(r"^[CK]ate", s1))
print(re.search(r"^[CK]ate", s2)) #this will return None because Cate isn't at the
#beginning of the string
# > If we concatenate the two strings `s2` and `s1`, the resulting string won't start with `Kate` or `Cate`. But, the name will be following a newline character. In this case, we can use the `re.MULTILINE` flag.
s = s2 + "\n" + s1
print(re.search(r"^[CK]ate", s, re.MULTILINE))
print(re.match(r"^[CK]ate", s, re.MULTILINE))
# > The last example using `match()` shows that the MULTILINE flag doesn't work with the `match()` method since `match()` only checks the beginning of a string for a match.
# #### The Dollar Sign: `$`
# > The dollar sign ($) matches at the end of a line, which is defined as either the end of the string, or any location followed by a newline character.
#
# > Here's it in action:
endsWithExclamation = re.compile(r'!$')
m = endsWithExclamation.search("Find the end of this sentence!")
m.group()
# > And note how it won't find the exclamation point that's in the middle of a word.
endsWithExclamation = re.compile(r'!$')
m = endsWithExclamation.search("Find the end of this sent!ence")
print(m) #no match, so search() returns None - calling m.group() here would raise an AttributeError
# > You can use the `^` and `$` together to indicate that the entire string must match the regex:
number = re.compile(r'^\d+$')
n = number.search('1234567890')
n.group()
#this won't work!
number = re.compile(r'^\d+$')
n = number.search('1234a567890')
print(n) #None - calling n.group() here would raise an AttributeError
# #### The Pipe: `|`
# > You use the pipe character when you want to match one of many expressions. For example, if your expression is `A|B`, where `A` and `B` are regular expressions themselves, you'll have a regular expression that will match either `A` or `B`. The regular expressions separated by `'|'` are tried from left to right. When one of the patterns completely matches, that pattern is accepted and the search is over. This means that once `A` matches, `B` will not be tested further, even if it would produce a longer overall match (although you can use the `findall()` method to circumvent this). Practically speaking, this means the `'|'` operator is never *greedy*, which is a concept we'll cover later.
country = re.compile (r'America|France')
m = country.search('America and France are both countries.')
m.group()
m_all = country.findall('America and France are both countries.')
m_all
# #### Grouping with Parentheses: `( )`
# > Regular expressions are useful for dissecting strings into several subgroups based on whether or not they match different regular expressions. Adding parentheses to your regex will create *groups* in the expression, which then allow you to use the `group()` and `groups()` match object methods to return the matches from just one group or all of the groups.
g = re.compile(r'(ab)-(cd)')
g.match('ab-cd').groups() #return all the groups as a tuple
# > By passing integers to `group()`, you can return different groups of the match. Passing 0 or nothing to `group()` returns the whole match as a single string.
# +
g = re.compile(r'(ab)-(cd)')
for i in range(3):
    print(g.match('ab-cd').group(i))
# -
# <a id='special' ></a>
# ### Special Sequences
# As we saw above, the backslash escapes special characters. But it can also be followed by certain characters to signal a special sequence. These special sequences represent some pretty useful predefined sets of characters, such as the set of digits, the set of letters, or the set of anything that isn’t whitespace.
#
# Here's a list of some of the special sequences:
#
# <table>
# <tr><th>Special Sequence</th>
# <th>Purpose</th>
# </tr>
# <tbody>
# <tr><td><code><span>\d</span></code></td>
# <td>Matches any decimal digit; this is equivalent to the class [0-9]</td>
# </tr>
# <tr><td><code><span>\D</span></code></td>
# <td>Matches any non-digit character; this is equivalent to the class [^0-9]</td>
# </tr>
# <tr><td><code><span>\s</span></code></td>
# <td>Matches any whitespace character; this is equivalent to the class [ \t\n\r\f\v]</td>
# </tr>
# <tr><td><code><span>\S</span></code></td>
# <td>Matches any non-whitespace character; this is equivalent to the class [^ \t\n\r\f\v]</td>
# </tr>
# <tr><td><code><span>\w</span></code></td>
# <td>Matches any alphanumeric character and the underscore; this is equivalent to the class [a-zA-Z0-9_]</td>
# </tr>
# <tr><td><code><span>\W</span></code></td>
# <td>Matches any character that is not alphanumeric or the underscore; this is equivalent to the class [^a-zA-Z0-9_]</td>
# </tr>
# </tbody>
# </table>
#
# Additionally, these special sequences can be included inside a character class. For example, `[\s,.]` is a character class that will match any whitespace character, or ',' or '.'.
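# > As a quick sketch of a special sequence inside a character class (an illustrative example, not from the original cells):

```python
import re

# [\s,.] matches any single whitespace character, ',' or '.'
separators = re.compile(r'[\s,.]')
print(separators.findall('one, two.three'))  # [',', ' ', '.']
```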
# <a id='quant' ></a>
# ### Quantifiers
#
# <table>
# <tr><th>Quantifiers</th>
# <th>Purpose</th>
# </tr>
# <tr>
# <td><code>*</code></td>
# <td>0 or more (append <code>?</code> for non-greedy)</td>
# </tr>
# <tr>
# <td><code>+</code></td>
# <td>1 or more (append <code>?</code> for non-greedy)</td>
# </tr>
# <tr>
# <td><code>?</code></td>
# <td>0 or 1; i.e. to mark something as being optional (append <code>?</code> for non-greedy)</td>
# </tr>
# <tr>
# <td><code>{m}</code></td>
# <td>exactly <code>m</code> occurrences</td>
# </tr>
# <tr>
# <td><code>{m,n}</code></td>
# <td>from <code>m</code> to <code>n</code> occurrences. <code>m</code> defaults to 0, <code>n</code> to infinity (note: no spaces inside the braces, or Python treats them literally)</td>
# </tr>
# <tr>
# <td><code>{m,n}?</code></td>
# <td>from <code>m</code> to <code>n</code>, as few as possible</td>
# </tr>
# </table>
# #### Matching Zero or More with `*`
# > The asterisk matches the group (or character) that precedes it zero or more times.
ab_infinite = re.compile(r'(ab)*')
m = ab_infinite.search('abababababababababababababababababababababababababababab')
m.group()
# #### Optional Matching with `?`
# > The question mark indicates that the group that precedes it is an optional part of the pattern.
person = re.compile(r'(wo)?man')
m = person.search('man')
m.group()
person = re.compile(r'(wo)?man')
m = person.search('woman')
m.group()
# #### Matching One or More with `+`
# > Unlike the asterisk, which does not require its group to appear in the matched string, the group preceding a plus must appear at least once.
person = re.compile(r'(wo)+man')
m = person.search('woman')
m.group()
person = re.compile(r'(wo)+man')
m = person.search('man')
m.group()
# #### Matching Specific Repetitions with {}
# > If you have a group that you want to repeat a specific number of times, you can append the group in your regex with a number in curly brackets. For example, the regex `(ab){2}` will match the string `'abab'` and nothing else since it has only two repetitions of the `(ab)` group.
two_matches = re.compile(r'(ab){2}')
m = two_matches.search('abababababababababababab')
m.group()
# > You can also specify a range by placing a minimum and maximum within the curly brackets. For example, the `(ab){2,5}` will match `'abab'`, `'ababab'`, `'abababab'`, and `'ababababab'`.
two_matches = re.compile(r'(ab){2,5}')
m = two_matches.search('abababababababababababab')
m.group()
# > You can also leave out the first or second number to make the minimum or maximum unbounded. The following example will find at least two but up to infinite repetitions of `'ab'`.
two_matches = re.compile(r'(ab){2,}')
m = two_matches.search('abababababababababababab')
m.group()
# #### Greedy vs Nongreedy Matching
# Above, we saw how `(ab){2,5}` could match two, three, four, or five instances of `'ab'`. But then why did `group()` return `'ababababab'` - the maximum match - instead of all of the shorter matches?
#
# This happened because Python’s regular expressions are **greedy** by default. This means regex patterns involving repetition will match the longest string possible. To make patterns non-greedy - i.e. to match the shortest string possible - you can append a question mark.
two_matches = re.compile(r'(ab){2,5}?')
m = two_matches.search('abababababababababababab')
m.group()
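# > A classic way to see greediness is with the wildcard: `<.*>` swallows as much as it can, while the non-greedy `<.*?>` stops at the first possible match. (The HTML-like string here is just an illustrative example.)

```python
import re

html = '<b>bold</b> and <i>italic</i>'

# Greedy: matches from the first '<' all the way to the last '>'
print(re.search(r'<.*>', html).group())   # '<b>bold</b> and <i>italic</i>'

# Non-greedy: stops at the first '>'
print(re.search(r'<.*?>', html).group())  # '<b>'
```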
# <a id='mod' ></a>
# ## Modifying Strings
# So far we’ve just been searching for patterns in a string. But we can also use regular expressions to modify strings. Here's a few methods that'll help us with this:
#
# <table>
# <colgroup>
# <col width="28%" />
# <col width="72%" />
# </colgroup>
# <thead valign="bottom">
# <tr><th>Method/Attribute</th>
# <th>Purpose</th>
# </tr>
# </thead>
# <tbody valign="top">
# <tr><td><code><span>split()</span></code></td>
# <td>Split the string into a list, splitting it
# wherever the RE matches</td>
# </tr>
# <tr><td><code><span>sub()</span></code></td>
# <td>Find all substrings where the RE matches, and
# replace them with a different string</td>
# </tr>
# <tr><td><code><span>subn()</span></code></td>
# <td>Does the same thing as <code><span>sub()</span></code>, but
# returns the new string and the number of
# replacements</td>
# </tr>
# </tbody>
# </table>
# #### Splitting Strings
# > `split(string[, maxsplit=0])`
#
# > Split string by the matches of the regular expression. If capturing parentheses are used in the regex, then their contents will also be returned as part of the resulting list. If maxsplit is nonzero, at most maxsplit splits are performed.
p = re.compile(r'\W+')
p.split('This is a test, short and sweet, of split().')
p.split('This is a test, short and sweet, of split().', 3)
#returns ['This', 'is', 'a', 'test, short and sweet, of split().']
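# > The note above says that capturing parentheses keep the separators in the resulting list; a quick sketch of the difference (an illustrative example):

```python
import re

# Without capture groups the separators are discarded...
print(re.split(r'\W+', 'Words, words, words.'))
# ['Words', 'words', 'words', '']

# ...with capture groups the matched separators are kept in the list.
print(re.split(r'(\W+)', 'Words, words, words.'))
# ['Words', ', ', 'words', ', ', 'words', '.', '']
```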
# #### Search and Replace
# Another common task is to find all the matches for a pattern, and replace them with a different string. The `sub()` method takes a replacement value, which can be either a string or a function, and the string to be processed.
# > `sub(replacement, string[, count=0])`
#
# >Returns the string obtained by replacing the leftmost non-overlapping occurrences of the regex in `string` by the replacement `replacement`. If the pattern isn’t found, `string` is returned unchanged.
#
# >The optional argument `count` is the maximum number of pattern occurrences to be replaced; `count` must be a non-negative integer. The default value of 0 means to replace all occurrences.
#
# >Here’s a simple example of using the `sub()` method. It replaces colour names with the word colour:
# +
p = re.compile('(blue|white|red)')
for i in range(3):
    print(p.sub('colour', 'blue socks, red shoes, and white pants', count = i))
# -
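# > The table above also lists `subn()`, which we haven't demonstrated; as a quick sketch, it behaves like `sub()` but additionally reports how many replacements were made:

```python
import re

p = re.compile(r'(blue|white|red)')
# subn() returns a tuple: (new string, number of substitutions made)
new_string, n = p.subn('colour', 'blue socks and red shoes')
print(new_string)  # 'colour socks and colour shoes'
print(n)           # 2
```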
# <a id='takeaways' ></a>
# ## Takeaways
# Regular expressions let you specify and find character patterns. They're super helpful in Python but are also ubiquitous across and beyond programming languages. For example, Google Sheets has a regular expression find-and-replace feature that allows you to search using regular expressions:
# <img src="assets/gs_regex.png">
# You can read more about regular expressions in general [here](http://www.regular-expressions.info/).
# <a id='applications' ></a>
# ## Applications
# [Phone Number Extractor](https://stackoverflow.com/questions/16699007/regular-expression-to-match-standard-10-digit-phone-number)
#
# [Email Address Extractor](http://www.regular-expressions.info/email.html)
| Learn Python/4. Advanced Topics/1. Regular Expressions/Regular Expressions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Alpha - Beta calculation
# This notebook allows you to calculate the basic dimensionless alpha - beta parameters for a set of fireball data after [Gritsevich 2012](https://doi.org/???). This uses the exponential atmosphere simplification. To use a complete atmosphere model for your fireball, please see [Lyytinen et al. 2016](https://doi.org/10.1016/j.pss.2015.10.012).
# ### Inputs:
# csv file with following column headers:
# + velocity (or as indicated below)
# + height (or as indicated below)
#
# ### Outputs:
# ecsv file with:
# + normalised height
# + normalised velocity
# + alpha and beta in metadata
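# As a rough sketch, the exponential atmosphere simplification models air density as decaying with a fixed scale height. The density and scale-height values below are illustrative assumptions for this sketch, not outputs of the notebook:

```python
import numpy as np

rho0 = 1.29    # assumed sea-level air density, kg/m^3
h0 = 7160.0    # assumed scale height of the homogeneous atmosphere, m

def air_density(height_m):
    """Exponential-atmosphere air density at a given altitude (metres)."""
    return rho0 * np.exp(-height_m / h0)

print(air_density(0.0))   # sea level: rho0
print(air_density(h0))    # one scale height up: rho0 / e
```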
# ###########################################################
# ## DO NOT change this section
# Please just run the cells as they are. To run code cells, select and press shift + enter
# ## Code imports
# Let's start with code imports. To run code cells, select and press shift + enter
import astropy #needed below for the astropy.table.Table call when writing the output file
import scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from astropy.table import Table
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import FileLinks, FileLink, DisplayObject
# %matplotlib inline
plt.rcParams['figure.dpi'] = 100
# ## Function definitions
def Q4_min(Vvalues, Yvalues):
    """ initiates and calls the Q4 minimisation given in Gritsevich 2007 -
    'Validity of the photometric formula for estimating the mass of a fireball projectile'
    """
    params = np.vstack((Vvalues, Yvalues))
    b0 = 1.
    a0 = np.exp(Yvalues[-1])/(2. * b0)
    x0 = [a0, b0]
    xmin = [0.001, 0.00001]
    xmax = [10000., 50.]
    bnds = ((xmin[0], xmax[0]), (xmin[1], xmax[1]))
    res = minimize(min_fun, x0, args=(Vvalues, Yvalues), bounds=bnds)
    return res.x
def min_fun(x, vvals, yvals):
    """minimises equation 7 using Q4 minimisation given in equation 10 of
    Gritsevich 2007 - 'Validity of the photometric formula for estimating
    the mass of a fireball projectile'
    """
    res = 0.
    for i in range(len(vvals)):
        res += pow(2 * x[0] * np.exp(-yvals[i]) - (scipy.special.expi(x[1]) - scipy.special.expi(x[1]* vvals[i]**2) ) * np.exp(-x[1]) , 2)
        # #sum...alpha*e^-y*2 |__________________-del______________________________________| *e^-beta
        # res += (np.log(2 * x[0]) -yvals[i] - np.log(scipy.special.expi(x[1]) - scipy.special.expi(x[1]* vvals[i]**2) ) -x[1]) * 2
    return res
# ###########################################################
# ## Upload data
# Now provide the path to the csv file
# @interact
# def show_files(dir=os.listdir('/home/ellie/Desktop/Data')):
# f =FileLinks(dir, included_suffixes=".csv")
f = 'DN150417.csv'
slope = 15.17
# If you would like to define an initial velocity, insert below. Otherwise, an average of the first 10 data points will be used.
v0 = []
# If you would like to change the default header names, insert here:
vel_col = "D_DT_geo"
h_col = "height"
# ######################################################
# ## Just run the below!
data = Table.read(f, format='ascii.csv', guess=False, delimiter=',')
slope = np.deg2rad(slope)
# ## Normalising data
# This is where we create the dimensionless data. We create separate columns to mask nan/zero values.
#
# Height is normalised using the scale height of the homogeneous atmosphere (h0 = 7160 m). Velocity is normalised using the initial velocity. Here we crudely use the average of the first 10 data points. For more sophisticated v0 determination, you may hardcode in the v0 value you wish to use above:
# +
alt = []#np.asarray(data['height'])
vel = []#np.asarray(data['D_DT_geo'])
# remove any nan values
for v in range(len(data[vel_col])):
    if data[vel_col][v] > 1.:
        vel.append(data[vel_col][v])
        alt.append(data[h_col][v])
# define initial velocity, if not already
if v0 == []:
    v0 = np.nanmean(vel[0:10])
# normalise velocity
vel = np.asarray(vel)
alt = np.asarray(alt)
Vvalues = vel/v0 #creates a matrix of V/Ve to give a dimensionless parameter for velocity
# normalise height - if statement accounts for km vs. metres data values.
if alt[0] < 1000:
    h0 = 7.160 # km
else:
    h0 = 7160. # metres
Yvalues = alt/h0
# -
# ## Calculate alpha and beta
# This calls the Q4_min function defined above. Make sure you have run the function-definition cells first.
# +
Gparams= Q4_min(Vvalues, Yvalues)
alpha = Gparams[0]
beta = Gparams[1]
# -
# Alpha and Beta values are (respectively):
print(alpha, beta)
# ## Plotting
plt.close()
# plt.rcParams['figure.dpi'] = 10
plt.rcParams['figure.figsize'] = [5, 5]
x = np.arange(0.1,1, 0.00005); #create a matrix of x values
fun = lambda x:np.log(alpha) + beta - np.log((scipy.special.expi(beta) - scipy.special.expi(beta* x**2) )/2)
y = [fun(i) for i in x]
plt.scatter(Vvalues, Yvalues,marker='x', label=None)
plt.xlabel("normalised velocity") #scatter plots Vvalues on x and Yvalues on y
plt.ylabel("normalised height")
plt.plot(x, y, color='r')
# plt.xlim(0.4, 1.23)
# plt.ylim(6, 12)
plt.show()
# ## Using alpha and beta to estimate masses
# if your point is:
# + right of the _grey_ line --> unlikely meteorite
# + left of the _black_ line --> likely meteorite
# + in between the two lines --> possible meteorite
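# These criteria can be wrapped in a small helper. This is a hypothetical sketch, not part of the original analysis; it reuses the 50 g boundary-line expressions from the plotting code below (13.2 - 3x for mu = 0, the grey line, and 4.4 - x for mu = 2/3, the black line):

```python
import numpy as np

def classify_fall(alpha, beta, slope_rad):
    """Rough classification of a fireball in (ln(alpha*sin(slope)), ln(beta)) space."""
    x = np.log(alpha * np.sin(slope_rad))
    y = np.log(beta)
    # right of the grey line (y above ln(13.2 - 3x), or past its end)
    if 13.2 - 3 * x <= 0 or y > np.log(13.2 - 3 * x):
        return "unlikely meteorite"
    # left of the black line (y below ln(4.4 - x))
    if 4.4 - x > 0 and y < np.log(4.4 - x):
        return "likely meteorite"
    return "possible meteorite"

print(classify_fall(2.0, 1.0, np.pi / 2))  # likely meteorite
```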
plt.close()
plt.rcParams['figure.figsize'] = [7, 7]
# +
# define x values
x_mu = np.arange(0,10, 0.00005)
# function for mu = 0, 50 g possible meteorite:
fun_mu0 = lambda x_mu:np.log(13.2 - 3*x_mu)
y_mu0 = [fun_mu0(i) for i in x_mu]
# function for mu = 2/3, 50 g possible meteorite:
fun_mu23 = lambda x_mu:np.log(4.4 - x_mu)
y_mu23 = [fun_mu23(i) for i in x_mu]
# plot mu0, mu2/3 lines and your point:
plt.plot(x_mu, y_mu0, color='grey')
plt.plot(x_mu, y_mu23, color='k')
plt.scatter([np.log(alpha * np.sin(slope))], [np.log(beta)], color='r')
# define plot parameters
plt.xlim((-1, 7))
plt.ylim((-3, 4))
plt.xlabel("ln(alpha x sin(slope))")
plt.ylabel("ln(beta)")
plt.gca().set_aspect('equal') #gca() keeps the current axes; plt.axes() would create a new one
plt.show()
# -
# ## Have a play with parameters!
# masses are in grams
plt.close()
plt.rcParams['figure.figsize'] = [7, 7]
# +
plt.close()
def f(mf, mu, cd, rho, A):
    rho = float(rho)
    A = float(A)
    mf = mf/1000.
    m0 = (0.5 * cd * 1.29 * 7160. * A / pow(rho, 2/3))**3.
    x = np.arange(0,10, 0.00005)
    y = [np.log((mu - 1) * (np.log(mf/m0) + 3 * i)) for i in x]
    plt.plot(x, y, color='k')
    plt.scatter([np.log(alpha * np.sin(slope))], [np.log(beta)], color='r')
    m_txt = pow(0.5 * cd * 1.29 * 7160. * A / (alpha * np.sin(slope) *rho**(2/3.0)), 3.0) *np.exp(-beta/(1-mu))
    print(m_txt)
    plt.xlim((-1, 7))
    plt.ylim((-3, 4))
    plt.gca().set_aspect('equal') #gca() keeps the current axes; plt.axes() would create a new one
    plt.text(0, 3.5, "mass given above (slider) parameters: %.1f g" %(m_txt * 1000) )#, ha='center', va='center', transform=ax.transAxes)
    plt.xlabel("ln(alpha x sin(slope))")
    plt.ylabel("ln(beta)")
    plt.show()
interact(f, mf=(0, 2000, 500), mu=(0, 2/3., 1/3.), cd=(0.9, 1.5), rho=[1500,2700,3500,7000], A=[1.21, 1.5, 2.0, 3.0])
# +
## Assumed values:
# atmospheric density at sea level
sea_level_rho = 1.29
# AERODYNAMIC drag coefficient (not Gamma)
cd = 1.3
# Possible shape coefficients
A = [1.21, 1.3, 1.55]
# possible meteoroid densities
m_rho = [2700, 3500, 7000]
# trajectory slope
gamma = slope
sin_gamma = np.sin(gamma)
# shape change coefficient
mu = 2./3.
me_sphere = [pow(0.5 * cd * 1.29 * 7160 * A[0] / (alpha * sin_gamma *i**(2/3.0)), 3.0) for i in m_rho]
me_round_brick = [pow(0.5 * cd * 1.29 * 7160 * A[1] / (alpha * sin_gamma *i**(2/3.0)), 3.0) for i in m_rho]
me_brick = [pow(0.5 * cd * 1.29 * 7160 * A[2] / (alpha * sin_gamma * i**(2/3.0)), 3.0) for i in m_rho]
mf_sphere = [i * np.exp(-beta / (1-mu) *(1-Vvalues[-1]**2)) for i in me_sphere]
mf_round_brick = [i * np.exp(-beta / (1-mu) *(1-Vvalues[-1]**2)) for i in me_round_brick]
mf_brick = [i * np.exp(-beta / (1-mu) *(1-Vvalues[-1]**2)) for i in me_brick]
# -
# ### Spherical body:
print("Entry mass of spherical body with 3500 density =\n", me_sphere[1])
print("\n")
print("Final mass of spherical body with 3500 density =\n",mf_sphere[1])
# ### Rounded brick body (typical):
print("Entry mass of typical shape with 3500 density =\n", me_round_brick[1])
print("\n")
print("Final mass of typical shape with 3500 density =\n",mf_round_brick[1])
# ### Brick body:
print("Entry mass of brick shape with 3500 density =\n", me_brick[1])
print("\n")
print("Final mass of brick shape with 3500 density =\n",mf_brick[1])
out = astropy.table.Table(names=['alt', 'vels'], data=[alt, vel])
out.write('/tmp/test.csv', format='csv', delimiter=',')
# ################################
| alpha_beta_fun.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cartopy.crs as ccrs
import cosima_cookbook as cc
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cmocean as cm
from dask.distributed import Client
import matplotlib.path as mpath
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
# Avoid the Runtime errors in true_divide encountered when trying to divide by zero
import warnings
warnings.filterwarnings('ignore', category = RuntimeWarning)
# matplotlib stuff:
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib
from mpl_toolkits.mplot3d import axes3d
# %matplotlib inline
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
matplotlib.rcParams['lines.linewidth'] = 2.0
# +
db = '/scratch/x77/db6174/access-om2/archive/databases/1deg_jra55_ryf/cc_database_control.db'
session_1deg = cc.database.create_session(db)
db = '/g/data/x77/db6174/access-om2/archive/databases/cc_database_paramKPP.db'
session_025deg = cc.database.create_session(db)
expt = ['1deg_jra55_ryf_control', '1deg_jra55_ryf_param_KPP', '025deg_jra55_ryf_nostress_cont_kpp', '025deg_jra55_ryf_param_kpp3']
session = [session_1deg, session_1deg, session_025deg, session_025deg]
name = ['Control_1deg', 'Param_1deg', 'Control_025deg', 'Param_025deg']
# -
from dask.distributed import Client
client = Client()
client
# +
ncoarse = 12
basin = ['NA', 'NP', 'SA', 'SP']
x_min = [-100, -250, -70, -250]
x_max = [ 10 , -100, 20, -80 ]
y_min = [ 20 , 20 , -80, -80 ]
y_max = [ 75 , 75 , -55, -55 ]
start_time = '1900-01-01'
end_time = '1914-12-31'
# -
# ## KPP Mixing Layer
# +
fig, axes = plt.subplots(nrows = 2, ncols = 2, figsize = (20, 12))
for i, j in enumerate(session):
    hblt = cc.querying.getvar(expt = expt[i], session = j, variable = 'hblt', frequency = '1 monthly').sel(time = slice(start_time, end_time))
    finite_variable = xr.ufuncs.isfinite(hblt)
    for k, l in enumerate(basin):
        area_t = cc.querying.getvar(expt = expt[i], variable = 'area_t', session = j, frequency = 'static', n = 1)
        area_t = area_t.sel(xt_ocean = slice(x_min[k], x_max[k])).sel(yt_ocean = slice(y_min[k], y_max[k]))
        area_t_basin = (finite_variable * area_t).mean('time')
        hblt_basin = (hblt*area_t_basin).sum(dim = ['yt_ocean','xt_ocean'])/area_t_basin.sum(dim = ['yt_ocean','xt_ocean'])
        hblt_basin = hblt_basin.coarsen({"time": ncoarse}, boundary = "trim").mean()
        hblt_basin.sel(time = slice(start_time, end_time)).plot(ax = axes[k // 2][k % 2], label = name[i])
        del area_t
        axes[k // 2][k % 2].legend()
        axes[k // 2][k % 2].set_title('KPP mixing layer - ' + basin[k])
# -
# ## Animations
hblt_cont = cc.querying.getvar(expt = expt[0], session = session_1deg, variable = 'hblt', frequency = '1 monthly').sel(time = slice(start_time, end_time))
hblt_pram = cc.querying.getvar(expt = expt[1], session = session_1deg, variable = 'hblt', frequency = '1 monthly').sel(time = slice(start_time, end_time))
# +
import netCDF4 as nc
import datetime, time, os, sys
from glob import glob
import matplotlib.gridspec as gridspec
import matplotlib.animation as animation
from matplotlib.collections import LineCollection
# matplotlib stuff:
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib
from mpl_toolkits.mplot3d import axes3d
# %matplotlib inline
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
matplotlib.rcParams['lines.linewidth'] = 2.0
# -
dir = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_control/output002/ocean'
file = os.path.join(dir,'ocean-2d-hblt-1-daily-mean-ym_1914_01.nc')
data = xr.open_dataset(file)
filename = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_control/output002/ocean/ocean-2d-hblt-1-daily-mean-ym_1914_01.nc'
particles = nc.Dataset(filename)
y = particles.variables['yt_ocean'][:]
x = particles.variables['xt_ocean'][:]
time1 = particles.variables['time'][:]
X,Y = np.meshgrid(x,y)
iter1 = 'ocean-2d-hblt-1-daily-mean-ym_1914_01.nc'
dir1 = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_control/output002/ocean'
dir2 = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_param_KPP/output002/ocean'
# +
iter1 = 'ocean-2d-hblt-1-daily-mean-ym_1914_01.nc'
dir1 = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_control/output002/ocean'
dir2 = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_param_KPP/output002/ocean'
file1 = os.path.join(dir1,iter1)
file2 = os.path.join(dir2,iter1)
data1 = xr.open_dataset(file1)
data2 = xr.open_dataset(file2)
data1.hblt.time[364]
# +
nframes = 364
startframe = 1
nt = 0
fig = plt.figure(figsize=(12,6))
rho = 1025
def updatefig(nt):
    plt.clf()
    currentframe = startframe + nt
    year = 1914
    day = nt
    iter1 = 'ocean-2d-hblt-1-daily-mean-ym_1914_01.nc'
    dir1 = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_control/output002/ocean'
    dir2 = '/scratch/x77/db6174/access-om2/archive/1deg_jra55_ryf_param_KPP/output002/ocean'
    file1 = os.path.join(dir1,iter1)
    file2 = os.path.join(dir2,iter1)
    data1 = xr.open_dataset(file1) #control run
    data2 = xr.open_dataset(file2) #parameterised run
    hblt_cont_day = data1.hblt.sel(time = slice(data1.hblt.time[nt].values,data1.hblt.time[nt+1].values)).mean('time')
    hblt_pram_day = data2.hblt.sel(time = slice(data1.hblt.time[nt].values,data1.hblt.time[nt+1].values)).mean('time')
    plt.title('Day %4d' % day)
    p1 = plt.contourf(X, Y, (hblt_pram_day - hblt_cont_day)/(hblt_cont_day), cmap = cm.cm.curl, levels = np.linspace(-1, 1, 21), extend = 'both')
    p1 = plt.colorbar(p1, orientation = 'vertical', shrink = 0.8)
    filestr = '/home/156/db6174/x77/1deg_test_runs/Parameterising_KPP_shear/Animations/KPP_param_comp/Figures/image%04d.png' % nt
    plt.savefig(filestr, dpi = 900)
    print(nt)
    return p1
anim = animation.FuncAnimation(fig, updatefig, frames=nframes, interval = 1, blit=False)
metadata = dict(title='MOC_Yearly', artist='G<NAME>',comment='Animation made using matplotlib and ffmpeg')
ffwriter = animation.FFMpegWriter(fps=8, codec='libx264', bitrate=4000, extra_args=['-pix_fmt','yuv420p'], metadata=metadata)
anim.save('Animations/KPP_param_comp/hblt_diff.m4v', writer=ffwriter)
plt.show()
# -
| Parameterising_KPP_shear/KPP_comparisons.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import torch
from torch.utils.data import DataLoader
from models.wgan_camel import Discriminator, Generator
from tqdm.auto import trange, tqdm
import matplotlib.pyplot as plt
# %matplotlib inline
# + pycharm={"name": "#%%\n"}
real_images = np.load("../data/camel/full_numpy_bitmap_camel.npy").reshape((-1, 1, 28, 28)).astype(np.float32) / 255
plt.imshow(real_images[0][0], cmap='gray')
plt.show()
# + pycharm={"name": "#%%\n"}
generator = Generator().cuda()
critic = Discriminator().cuda()
optimizer_G = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
optimizer_C = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
# + pycharm={"name": "#%%\n"}
N_EPOCHS = 10
BATCH_SIZE = 64
N_CRITIC_UPDATES = 5
# + pycharm={"name": "#%%\n"}
image_loader = DataLoader(real_images, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
g_losses = []
c_losses = []
for epoch in range(N_EPOCHS):
    t = tqdm(image_loader, desc=f"Epoch {epoch}. g_loss {0.0:.2f} c_loss {0.0:.2f}")
    for i, real_imgs in enumerate(t):
        z = torch.randn(BATCH_SIZE, 100, requires_grad=True)
        gen_imgs = generator(z.cuda())
        if i % N_CRITIC_UPDATES == 0:
            # train generator
            critic.eval()
            generator.train()
            optimizer_G.zero_grad()
            g_loss = -torch.mean(critic(gen_imgs))
            g_losses.append(g_loss.item())
            g_loss.backward()
            optimizer_G.step()
        # train discriminator
        generator.eval()
        critic.train()
        optimizer_C.zero_grad()
        c_loss = -torch.mean(critic(real_imgs.cuda()) - critic(gen_imgs.detach())) / 2
        c_losses.append(c_loss.item()) #store a float, not the CUDA tensor, so the loss plot below works
        c_loss.backward()
        optimizer_C.step()
        for p in critic.parameters():
            p.data.clamp_(-0.01, 0.01)
        if i % 10 == 0:
            t.set_description(f"Epoch {epoch}. g_loss {g_loss.item():.2f} c_loss {c_loss.item():.2f}")
    plt.figure()
    plt.title(f"{epoch+1} {g_loss.item():.2f} {c_loss.item():.2f}")
    plt.axis("off")
    generator.eval()
    plt.imshow(generator(torch.randn(1, 100).cuda()).cpu().detach().squeeze(), cmap='gray')
    plt.show()
    torch.save(generator.state_dict(), "models/generator_wgan_camel.pt")
    torch.save(critic.state_dict(), "models/critic_wgan_camel.pt")
# + pycharm={"name": "#%%\n"}
plt.plot(np.arange(0, len(g_losses))*5, g_losses, label='generator', alpha=0.7, linewidth=0.2)
plt.plot(c_losses, label='critic', alpha=0.7, linewidth=0.2)
leg = plt.legend()
for i in range(2):
    leg.get_lines()[i].set_linewidth(2)
plt.show()
# + pycharm={"name": "#%%\n"}
generator.eval()
plt.imshow(generator(torch.randn(1, 100).cuda()).cpu().detach().squeeze(), cmap='gray')
# + pycharm={"name": "#%%\n"}
| GAN/wgan_camel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:TF]
# language: python
# name: conda-env-TF-py
# ---
# +
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import sys
import math
import random
import numpy as np
import cv2
sys.path.append('../..')
from mrcnn.utils import command_line_parser, Paths
from mrcnn.visualize import display_images
from mrcnn.dataset import Dataset
from mrcnn.shapes import ShapesConfig
from mrcnn.datagen import load_image_gt
from mrcnn.newshapes import NewShapesConfig, NewShapesDataset
import mrcnn.visualize as visualize
import mrcnn.utils as utils
import pprint
import mrcnn.prep_notebook as prep
pp = pprint.PrettyPrinter(indent=2, width=100)
from mrcnn.newshapes import prep_newshape_dataset
np.set_printoptions(linewidth=100,precision=4,threshold=1000, suppress = True)
# input_parms +="--fcn_model /home/kbardool/models/train_fcn_adagrad/shapes20180709T1732/fcn_shapes_1167.h5"
##------------------------------------------------------------------------------------
## Parse command line arguments
##------------------------------------------------------------------------------------
parser = command_line_parser()
input_parms = " --batch_size 1 "
input_parms +=" --mrcnn_logs_dir train_mrcnn_newshapes "
input_parms +=" --mrcnn_model last "
input_parms +=" --scale_factor 1"
input_parms +=" --sysout screen "
input_parms +=" --new_log_folder "
# input_parms +="--fcn_logs_dir train_fcn8_newshapes "
# input_parms += "--epochs 2 --steps_in_epoch 32 --last_epoch 0 --lr 0.00001 --val_steps 8 "
# input_parms +="--fcn_model init "
# input_parms +="--opt adam "
# input_parms +="--fcn_arch fcn8 "
# input_parms +="--fcn_layers all "
print(input_parms)
args = parser.parse_args(input_parms.split())
mrcnn_config = prep.build_newshapes_config(model = 'mrcnn', args = args, mode = 'training', verbose= 1)
# + hideCode=false hideOutput=true
dataset_test = NewShapesDataset(mrcnn_config)
dataset_test.load_shapes(20)
dataset_test.prepare()
# test_config = NewShapesConfig()
dataset_test.image_ids
# -
import pickle
# with open('newshapes_dataset.pkl', 'wb') as outfile:
#     pickle.dump(dataset_test, outfile)
del dataset_test
with open("newshapes_dataset.pkl", 'rb') as infile:
    dataset_test = pickle.load(infile)
# +
### Display some images from dataset
image_list = list(range(0,10))
image_titles = [str(i) for i in image_list]
images = prep.get_image_batch(dataset_test, image_list)
visualize.display_images(images, titles = image_titles, cols = 8, width = 24)
# -
visualize.display_image_gt(dataset_test,mrcnn_config,8)
# ### Display Images
# + hideCode=false hideOutput=true
for image_id in range(len(dataset_test.image_ids)):
    # print('Classes (1: circle, 2: square, 3: triangle ): ', class_ids)
    image, image_meta, gt_class_ids, gt_boxes, gt_masks = \
        load_image_gt(dataset_test, mrcnn_config, image_id, augment=False, use_mini_mask=False)
    # print(dataset_test.image_info[image_id])
    # for shape, color, dims in dataset_test.image_info[image_id]['shapes']:
    #     x, y, sx, sy = dims
    #     print(' Shape : {:20s} Cntr (x,y): ({:3d} , {:3d}) Size_x: {:3d} Size_y: {:3d}'.format(shape, x, y, sx, sy))
    # print(gt_class_ids.shape, gt_boxes.shape, gt_masks.shape)
    # print(gt_boxes)
    print(gt_class_ids)
    # visualize.display_images([image], cols = 1, width = 8)
    visualize.display_instances(image, gt_boxes, gt_masks, gt_class_ids, dataset_test.class_names, figsize=(8, 8))
    # visualize.display_top_masks(image, gt_masks, gt_class_ids, dataset_test.class_names)
# -
# ### Construct a semi-random image
# + hideCode=false hideOutput=true
img_h, img_w = 128, 128    # image dimensions were not defined in this cell
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
color = tuple([random.randint(0, 255) for _ in range(3)])
image = np.ones([img_h, img_w, 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for i in range(5):
    shape, color, dims = semi_random_shape(img_h, img_w)
    image = semi_draw_shape(image, shape, dims, color)
display_images([image], cols=1, width=6)
# -
# ### Display one image
# + hideCode=false hideOutput=true
image_index = 3
image_id = dataset_test.image_ids[image_index]
image, image_meta, gt_class_ids, gt_boxes, gt_masks = \
    load_image_gt(dataset_test, mrcnn_config, image_id, augment=False, use_mini_mask=False)
print(gt_class_ids.shape, gt_boxes.shape, gt_masks.shape)
print(gt_boxes)
print(gt_class_ids)
display_images([image], cols = 1, width = 6)
# draw_boxes(image, gt_boxes)
# -
# ### Experiment building shapes
# + hideCode=false hideOutput=true
height, width = 128, 128
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
buffer = 10
image = np.ones([height, width, 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for i in range(4):
    color = tuple([random.randint(0, 255) for _ in range(3)])
    min_range_x = buffer
    max_range_x = width - buffer - 1
    min_range_y = height // 3
    max_range_y = 3 * height // 4
    x = random.randint(min_range_x, max_range_x)
    y = random.randint(min_range_y, max_range_y)
    min_height = 10
    max_height = 30
    # scale height based on vertical position; cv2 needs integer coordinates
    sy = int(np.interp(y, [min_range_y, max_range_y], [min_height, max_height]))
    # sy = random.randint(min_height, max_height)
    # sx = random.randint(5, 15)
    sx = sy // 2 + 5
    image = cv2.rectangle(image, (x - sx, y - sy), (x + sx, y + sy), color, -1)
display_images([image], cols=1, width=8)
# -
# ### Automobile
# + hideCode=false hideOutput=true
height, width = 128, 128
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
buffer = 10
image = np.ones([height, width, 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for i in range(4):
    color = tuple([random.randint(0, 255) for _ in range(3)])
    min_range_x = buffer
    max_range_x = width - buffer - 1
    min_range_y = height // 2
    max_range_y = height - buffer - 1
    x = random.randint(min_range_x, max_range_x)
    y = random.randint(min_range_y, max_range_y)
    min_width = 12
    max_width = 26
    ## scale width based on location in the image: shapes closer to the bottom
    ## will be larger
    sx = int(np.interp(y, [min_range_y, max_range_y], [min_width, max_width]))
    ## old method
    ## sx = random.randint(min_width, max_width)
    sy = sx // 2
    print('x:', x, 'y:', y, 'sx:', sx, 'sy:', sy)
    body_y = sy // 3
    wheel_x = sx // 2
    wheel_r = sx // 5
    top_x = sx // 4
    bot_x = 3 * sx // 4
    image = cv2.rectangle(image, (x - sx, y - body_y), (x + sx, y + body_y), color, -1)
    image = cv2.circle(image, (x - wheel_x, y + body_y), wheel_r, color, -1)
    image = cv2.circle(image, (x + wheel_x, y + body_y), wheel_r, color, -1)
    points = np.array([[(x - top_x, y - sy), (x + top_x, y - sy),
                        (x + bot_x, y - body_y), (x - bot_x, y - body_y)]], dtype=np.int32)
    image = cv2.fillPoly(image, points, color)
display_images([image], cols=1, width=8)
# -
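The `np.interp` call in the cells above maps a shape's vertical position to its drawn size, so shapes lower in the frame (closer to the viewer) come out larger, a crude perspective effect. A minimal sketch of that mapping (the function name and the 128-pixel ranges are illustrative assumptions, not the notebook's code):

```python
import numpy as np

# Hypothetical helper: linearly map a vertical position y to an object size,
# mimicking perspective. Same idea as the notebook's
# np.interp(y, [min_range_y, max_range_y], [min_width, max_width]).
def perspective_size(y, y_far=64, y_near=127, size_far=12, size_near=26):
    """Objects drawn lower in the frame (larger y) get a larger size."""
    return float(np.interp(y, [y_far, y_near], [size_far, size_near]))

print(perspective_size(64))   # farthest position -> smallest size
print(perspective_size(127))  # nearest position  -> largest size
```

Outside the `[y_far, y_near]` band, `np.interp` clamps to the endpoint sizes, which is why the cells restrict `y` to that range before sizing.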
# ### Trees
# + hideCode=true hideOutput=true
height, width = 128, 128
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
buffer = 20
sin60 = math.sin(math.radians(60))
image = np.ones([height, width, 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for i in range(7):
    color = tuple([random.randint(0, 255) for _ in range(3)])
    min_range_x = buffer
    max_range_x = width - buffer - 1
    min_range_y = height // 3
    max_range_y = height - buffer - 1
    x = random.randint(min_range_x, max_range_x)
    y = random.randint(min_range_y, max_range_y)
    min_height = 8
    max_height = 24
    sy = int(np.interp(y, [min_range_y, max_range_y], [min_height, max_height]))
    # sy = random.randint(min_height, max_height)
    sx = sy
    ty = sy // 3                  # trunk length - 1/3 total length
    by = sy - ty                  # body length ~ 2/3 total length
    tx = int((by / sin60) // 5)   # trunk width
    # order of points: top, left, right
    points = np.array([[(x, y - by),
                        (x - (by / sin60), y + by),
                        (x + (by / sin60), y + by),
                        ]], dtype=np.int32)
    image = cv2.fillPoly(image, points, color)
    image = cv2.rectangle(image, (x - tx, y + by), (x + tx, y + by + ty), color, -1)
display_images([image], cols=1, width=6)
# -
# ### Airplane
# + hideCode=true hideOutput=true
height, width = 128, 128
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
buffer = 20
sin60 = math.sin(math.radians(60))
image = np.ones([height, width, 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for i in range(7):
    color = tuple([random.randint(0, 255) for _ in range(3)])
    min_range_x = buffer // 3
    max_range_x = width - (buffer // 3) - 1
    min_range_y = height // 3
    max_range_y = 2 * height // 3
    x = random.randint(min_range_x, max_range_x)
    y = random.randint(min_range_y, max_range_y)
    min_height = 8
    max_height = 24
    sy = random.randint(min_height, max_height)
    sx = sy
    ### DRAW ------------------------------------------------
    tx = sx // 3          # tail length - 1/3 of total length
    bx = sx - tx          # body length
    by = bx / sin60       # body width
    ty = int(by // 5)     # tail width
    # order of points: nose (left), top right, bottom right
    points = np.array([[(x - bx, y),
                        (x + bx, y - by),
                        (x + bx, y + by),
                        ]], dtype=np.int32)
    image = cv2.fillPoly(image, points, color)
    image = cv2.rectangle(image, (x + bx, y - ty), (x + bx + tx, y + ty), color, -1)
display_images([image], cols=1, width=6)
# -
# ### person
# + hideCode=true hideOutput=true
height, width = 128, 128
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
buffer = 10
image = np.ones([height, width, 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for i in range(7):
    color = tuple([random.randint(0, 255) for _ in range(3)])
    min_range_x = buffer
    max_range_x = width - buffer - 1
    min_range_y = height // 2
    max_range_y = height - buffer - 1
    min_height = 10
    max_height = 22
    x = random.randint(min_range_x, max_range_x)
    y = random.randint(min_range_y, max_range_y)
    # sy = random.randint(min_height, max_height)
    sy = int(np.interp(y, [min_range_y, max_range_y], [min_height, max_height]))
    sx = sy // 5          # body width
    ### DRAW ------------------------------------------------
    hy = sy // 4          # head height
    by = sy - hy          # body height
    print('x:', x, 'y:', y, 'sx:', sx, 'sy:', sy)
    # torso
    image = cv2.rectangle(image, (x - sx, y - by), (x + sx, y + by // 4), color, -1)
    # legs
    image = cv2.rectangle(image, (x - sx, y + by // 4), (x - sx + sx // 4, y + by), color, -1)
    image = cv2.rectangle(image, (x + sx - sx // 4, y + by // 4), (x + sx, y + by), color, -1)
    # head
    image = cv2.circle(image, (x, y - (by + hy)), sx, color, -1)
display_images([image], cols=1, width=6)
# -
# ### Ellipse
# + hideCode=true hideOutput=true
height, width = 128, 128
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
buffer = 10
image = np.ones([height, width, 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for i in range(3):
    color = tuple([random.randint(0, 255) for _ in range(3)])
    min_range_x = buffer // 2
    max_range_x = width - (buffer // 2) - 1
    min_range_y = buffer
    max_range_y = height // 4
    x = random.randint(min_range_x, max_range_x)
    y = random.randint(min_range_y, max_range_y)
    min_width, max_width = 15, 40
    # sx = random.randint(min_width, max_width)
    sx = int(np.interp(y, [min_range_y, max_range_y], [min_width, max_width]))
    # min_height, max_height = 10, 20
    # sy = random.randint(min_height, max_height)
    sy = sx // random.randint(3, 5)
    ### DRAW ------------------------------------------------
    print('sx:', sx, 'sy:', sy)
    image = cv2.ellipse(image, (x, y), (sx, sy), 0, 0, 360, color, -1)
display_images([image], cols=1, width=6)
# -
# ### Routines that accept shape type and dimensions as inputs
# + hideCode=false hideOutput=true
'''
-------------------------------------------------------------------------------
'''
def semi_random_image(height, width):
    '''
    Creates random specifications of an image with multiple shapes.
    Returns the background color of the image and a list of shape
    specifications that can be used to draw the image.
    '''
    # Pick random background color
    bg_color = np.array([random.randint(0, 255) for _ in range(3)])
    # Generate a few random shapes and record their bounding boxes
    shapes = []
    boxes = []
    N = random.randint(1, 4)
    for _ in range(N):
        shape, color, dims = semi_random_shape(height, width)
        shapes.append((shape, color, dims))
        x, y, sx, sy = dims
        boxes.append([y - sy, x - sx, y + sy, x + sx])
    # Suppress occlusions of more than 0.3 IoU:
    # apply non-max suppression to avoid shapes covering each other
    keep_ixs = utils.non_max_suppression(np.array(boxes), np.arange(N), 0.3)
    shapes = [s for i, s in enumerate(shapes) if i in keep_ixs]
    # print('Original number of shapes {} # after NMS {}'.format(N, len(shapes)))
    return bg_color, shapes
def semi_random_shape(height, width, shape=None, x=0, y=0, sx=0, sy=0):
    """Generates specifications of a random shape that lies within
    the given height and width boundaries.
    Returns a tuple of three values:
    * The shape name (square, circle, ...)
    * Shape color: a tuple of 3 values, RGB.
    * Shape dimensions: A tuple of values that define the shape size
      and location. Differs per shape type.
    """
    # Shape
    if shape is None:
        shape = random.choice(["square", "circle", "triangle", "rectangle"])
    # Color
    color = tuple([random.randint(0, 255) for _ in range(3)])
    # Center x, y
    buffer = 20
    if y == 0:
        y = random.randint(buffer, height - buffer - 1)
    if x == 0:
        x = random.randint(buffer, width - buffer - 1)
    # Size (don't clobber a caller-supplied sy for rectangles)
    if sx == 0:
        sx = random.randint(buffer, width // 4)
    if shape == "rectangle":
        if sy == 0:
            sy = random.randint(buffer, height // 4)
    else:
        sy = sx
    # print(' Shape : {} Cntr (x,y) : ({} , {}) Size_x: {} Size_y: {}'.format(shape, x, y, sx, sy))
    return shape, color, (x, y, sx, sy)
def semi_draw_shape(image, shape, dims, color):
    """Draws a shape from the given specs."""
    # Get the center x, y and the sizes sx, sy
    x, y, sx, sy = dims
    print(' Shape : {} Cntr (x,y) : ({} , {}) Size_x: {} Size_y: {}'.format(shape, x, y, sx, sy))
    if shape == 'square':
        image = cv2.rectangle(image, (x - sx, y - sy), (x + sx, y + sy), color, -1)
    elif shape == 'rectangle':
        image = cv2.rectangle(image, (x - sx, y - sy), (x + sx, y + sy), color, -1)
    elif shape == "circle":
        image = cv2.circle(image, (x, y), sx, color, -1)
    elif shape == "triangle":
        sin60 = math.sin(math.radians(60))
        points = np.array([[(x, y - sx),
                            (x - (sx / sin60), y + sx),
                            (x + (sx / sin60), y + sx),
                            ]], dtype=np.int32)
        # print(' points.shape is : ', points.shape)
        # print(points)
        image = cv2.fillPoly(image, points, color)
    return image
# -
# ### Non Max Suppression
# + hideCode=true hideOutput=true
from mrcnn.utils import compute_iou
def non_max_suppression(boxes, scores, threshold):
    """Performs non-maximum suppression and returns indices of kept boxes.
    boxes: [N, (y1, x1, y2, x2)]. Notice that (y2, x2) lies outside the box.
    scores: 1-D array of box scores.
    threshold: Float. IoU threshold to use for filtering.
    """
    assert boxes.shape[0] > 0
    if boxes.dtype.kind != "f":
        boxes = boxes.astype(np.float32)
    # Compute box areas
    y1 = boxes[:, 0]
    x1 = boxes[:, 1]
    y2 = boxes[:, 2]
    x2 = boxes[:, 3]
    area = (y2 - y1) * (x2 - x1)
    # Get indices of boxes sorted by scores (highest first)
    ixs = scores.argsort()[::-1]
    pick = []
    print('====> Initial ixs: ', ixs)
    while len(ixs) > 0:
        # Pick top box and add its index to the list
        i = ixs[0]
        cy = y1[i] + (y2[i] - y1[i]) // 2
        cx = x1[i] + (x2[i] - x1[i]) // 2
        print(' ixs: ', ixs, 'ctr (x,y)', cx, ' ', cy, ' box:', boxes[i], ' compare ', i, ' with ', ixs[1:])
        pick.append(i)
        # Compute IoU of the picked box with the rest
        iou = compute_iou(boxes[i], boxes[ixs[1:]], area[i], area[ixs[1:]])
        print(' ious:', iou)
        # Identify boxes with IoU over the threshold. This
        # returns indices into ixs[1:], so add 1 to get
        # indices into ixs.
        tst = np.where(iou > threshold)
        remove_ixs = np.where(iou > threshold)[0] + 1
        print(' np.where(iou > threshold): ', tst, 'tst[0] (index into ixs[1:]): ', tst[0],
              ' remove_ixs (index into ixs): ', remove_ixs)
        # Remove indices of the picked and overlapped boxes.
        ixs = np.delete(ixs, remove_ixs)
        ixs = np.delete(ixs, 0)
        print(' ending ixs (after deleting ixs[0]): ', ixs, ' picked so far: ', pick)
    print('====> Final picks: ', pick)
    return np.array(pick, dtype=np.int32)
# -
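The greedy loop above can be exercised without the mrcnn package. The sketch below swaps `mrcnn.utils.compute_iou` for a local IoU helper (an assumption about its semantics: intersection over union given precomputed areas) and runs the same pick-and-suppress logic on three toy boxes:

```python
import numpy as np

# Local stand-in for mrcnn.utils.compute_iou: IoU of one box against many.
def iou(box, boxes, area, areas):
    y1 = np.maximum(box[0], boxes[:, 0])
    x1 = np.maximum(box[1], boxes[:, 1])
    y2 = np.minimum(box[2], boxes[:, 2])
    x2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(y2 - y1, 0) * np.maximum(x2 - x1, 0)
    return inter / (area + areas - inter)

def nms(boxes, scores, threshold):
    """Greedy NMS, same structure as non_max_suppression above (minus prints)."""
    boxes = boxes.astype(np.float32)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    ixs = scores.argsort()[::-1]
    pick = []
    while len(ixs) > 0:
        i = ixs[0]
        pick.append(i)
        overlaps = iou(boxes[i], boxes[ixs[1:]], areas[i], areas[ixs[1:]])
        remove = np.where(overlaps > threshold)[0] + 1  # shift into ixs indexing
        ixs = np.delete(ixs, remove)
        ixs = np.delete(ixs, 0)
    return np.array(pick, dtype=np.int32)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores, 0.3))  # keeps boxes 0 and 2; the near-duplicate box 1 is dropped
```

Box 1 overlaps box 0 with IoU 81/119 ≈ 0.68 > 0.3, so it is suppressed by the higher-scoring box 0, while the disjoint box 2 survives.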
# ### NewShape Class Definition
# + hideCode=true
class NewShapesConfig(ShapesConfig):
    '''
    Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    '''
    # Give the configuration a recognizable name
    # NAME = "shapes"
    # Number of classes (including background)
    NUM_CLASSES = 1 + 4  # background + 4 shapes
class NewShapesDataset(Dataset):
    '''
    Generates the shapes synthetic dataset. The dataset consists of simple
    shapes (triangles, squares, circles) placed randomly on a blank surface.
    The images are generated on the fly. No file access required.
    '''
    def load_shapes(self, count, height, width):
        '''
        Generate the requested number of synthetic images.
        count: number of images to generate.
        height, width: the size of the generated images.
        '''
        # Add classes
        self.add_class("shapes", 1, "circle")     # used to be class 2
        self.add_class("shapes", 2, "square")     # used to be class 1
        self.add_class("shapes", 3, "triangle")
        self.add_class("shapes", 4, "rectangle")
        self.add_class("shapes", 5, "person")
        self.add_class("shapes", 6, "car")
        self.add_class("shapes", 7, "sun")
        self.add_class("shapes", 8, "building")
        self.add_class("shapes", 9, "tree")
        self.add_class("shapes", 10, "cloud")
        # Add images
        # Generate random specifications of images (i.e. color and
        # list of shapes sizes and locations). This is more compact than
        # actual images. Images are generated on the fly in load_image().
        for i in range(count):
            bg_color, shapes = self.random_image(height, width)
            self.add_image("shapes", image_id=i, path=None,
                           width=width, height=height,
                           bg_color=bg_color, shapes=shapes)
    def load_image(self, image_id):
        '''
        Generate an image from the specs of the given image ID.
        Typically this function loads the image from a file, but in this case it
        generates the image on the fly from the specs in image_info.
        '''
        info = self.image_info[image_id]
        bg_color = np.array(info['bg_color']).reshape([1, 1, 3])
        image = np.ones([info['height'], info['width'], 3], dtype=np.uint8)
        image = image * bg_color.astype(np.uint8)
        print(" Load Image ")
        pp.pprint(info['shapes'])
        #--------------------------------------------------------------------------------
        # rearrange the shapes by ascending y, so that items closer to the bottom of
        # the image overlay items further up
        #--------------------------------------------------------------------------------
        sort_lst = [itm[2][1] for itm in info['shapes']]
        sorted_shape_ind = np.argsort(np.array(sort_lst))
        for shape_ind in sorted_shape_ind:
            shape, color, dims = info['shapes'][shape_ind]
            # print(' shape ind :', shape_ind, 'shape', shape, ' color:', color, ' dims ', dims)
            image = self.draw_shape(image, shape, dims, color)
        return image
    def image_reference(self, image_id):
        """Return the shapes data of the image."""
        info = self.image_info[image_id]
        if info["source"] == "shapes":
            return info["shapes"]
        else:
            return super().image_reference(image_id)
    def load_mask(self, image_id):
        '''
        Generate instance masks for shapes of the given image ID.
        '''
        # print(' Loading shapes obj mask info for image_id : ', image_id)
        info = self.image_info[image_id]
        shapes = info['shapes']
        # print('\n Load Mask information (shape, (color rgb), (x_ctr, y_ctr, size) ): ')
        # pp.pprint(info['shapes'])
        count = len(shapes)
        mask = np.zeros([info['height'], info['width'], count], dtype=np.uint8)
        print(' Shapes obj mask shape is :', mask.shape)
        for i, (shape, _, dims) in enumerate(info['shapes']):
            mask[:, :, i:i + 1] = self.draw_shape(mask[:, :, i:i + 1].copy(), shape, dims, 1)
        # Handle occlusions: the last (topmost) instance keeps its full mask,
        # earlier instances lose any pixels covered by later ones
        occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
        for i in range(count - 2, -1, -1):
            mask[:, :, i] = mask[:, :, i] * occlusion
            occlusion = np.logical_and(
                occlusion, np.logical_not(mask[:, :, i]))
        # Map class names to class IDs.
        class_ids = np.array([self.class_names.index(s[0]) for s in shapes])
        return mask, class_ids.astype(np.int32)
    def draw_shape(self, image, shape, dims, color):
        """Draws a shape from the given specs."""
        # Get the center x, y and the sizes sx, sy
        x, y, sx, sy = dims
        print(' Shape : {:20s} Cntr (x,y): ({:3d} , {:3d}) Size_x: {:3d} Size_y: {:3d} {}'.format(shape, x, y, sx, sy, color))
        if shape == "square":
            image = cv2.rectangle(image, (x - sx, y - sy), (x + sx, y + sy), color, -1)
        elif shape in ["rectangle", "building"]:
            image = cv2.rectangle(image, (x - sx, y - sy), (x + sx, y + sy), color, -1)
        elif shape == "car":
            body_y = sy // 3
            wheel_x = sx // 2
            wheel_r = sx // 5
            top_x = sx // 4
            bot_x = 3 * sx // 4
            image = cv2.rectangle(image, (x - sx, y - body_y), (x + sx, y + body_y), color, -1)
            image = cv2.circle(image, (x - wheel_x, y + body_y), wheel_r, color, -1)
            image = cv2.circle(image, (x + wheel_x, y + body_y), wheel_r, color, -1)
            points = np.array([[(x - top_x, y - sy), (x + top_x, y - sy),
                                (x + bot_x, y - body_y), (x - bot_x, y - body_y)]], dtype=np.int32)
            image = cv2.fillPoly(image, points, color)
        elif shape == "person":
            hy = sy // 4      # head height
            by = sy - hy      # body height
            # torso
            image = cv2.rectangle(image, (x - sx, y - by), (x + sx, y + by // 4), color, -1)
            # legs
            image = cv2.rectangle(image, (x - sx, y + by // 4), (x - sx + sx // 4, y + by), color, -1)
            image = cv2.rectangle(image, (x + sx - sx // 4, y + by // 4), (x + sx, y + by), color, -1)
            # head
            image = cv2.circle(image, (x, y - (by + hy)), sx, color, -1)
        elif shape in ["circle", "sun"]:
            image = cv2.circle(image, (x, y), sx, color, -1)
        elif shape in ["cloud", "ellipse"]:
            image = cv2.ellipse(image, (x, y), (sx, sy), 0, 0, 360, color, -1)
        elif shape == "triangle":
            sin60 = math.sin(math.radians(60))
            # order of points: top, left, right
            points = np.array([[(x, y - sx),
                                (x - (sx / sin60), y + sx),
                                (x + (sx / sin60), y + sx),
                                ]], dtype=np.int32)
            image = cv2.fillPoly(image, points, color)
        elif shape == "tree":
            sin60 = math.sin(math.radians(60))
            ty = sy // 3                  # trunk length
            by = sy - ty                  # body length
            tx = int((by / sin60) // 5)   # trunk width
            # order of points: top, left, right
            points = np.array([[(x, y - by),
                                (x - (by / sin60), y + by),
                                (x + (by / sin60), y + by),
                                ]], dtype=np.int32)
            image = cv2.fillPoly(image, points, color)
            image = cv2.rectangle(image, (x - tx, y + by), (x + tx, y + by + ty), color, -1)
        return image
    def random_shape(self, shape, height, width):
        """Generates specifications of a random shape that lies within
        the given height and width boundaries.
        Returns a tuple of three values:
        * The shape name (square, circle, ...)
        * Shape color: a tuple of 3 values, RGB.
        * Shape dimensions: A tuple of values that define the shape size
          and location. Differs per shape type.
        """
        # Color
        color = tuple([random.randint(0, 255) for _ in range(3)])
        buffer = 20
        if shape == "person":
            min_range_x = buffer
            max_range_x = width - buffer - 1
            min_range_y = height // 2
            max_range_y = height - buffer - 1
            min_height = 10
            max_height = 22
            x = random.randint(min_range_x, max_range_x)
            y = random.randint(min_range_y, max_range_y)
            # sy = random.randint(min_height, max_height)
            sy = int(np.interp(y, [min_range_y, max_range_y], [min_height, max_height]))
            sx = sy // 5      # body width
        elif shape == "car":
            min_range_x = buffer
            max_range_x = width - buffer - 1
            min_range_y = height // 2
            max_range_y = height - buffer - 1
            min_width = 15
            max_width = 30
            x = random.randint(min_range_x, max_range_x)
            y = random.randint(min_range_y, max_range_y)
            ## scale width based on location in the image: shapes closer to
            ## the bottom will be larger
            sx = int(np.interp(y, [min_range_y, max_range_y], [min_width, max_width]))
            ## old method
            ## sx = random.randint(min_width, max_width)
            sy = sx // 2
        elif shape == "building":
            min_range_x = buffer
            max_range_x = width - buffer - 1
            min_range_y = height // 3
            max_range_y = 3 * height // 4
            x = random.randint(min_range_x, max_range_x)
            y = random.randint(min_range_y, max_range_y)
            min_height = 10
            max_height = 30
            sy = int(np.interp(y, [min_range_y, max_range_y], [min_height, max_height]))
            # sy = random.randint(min_height, max_height)
            # sx = random.randint(5, 15)
            sx = sy // 2 + 5
        elif shape == "sun":
            min_range_x = buffer // 3
            max_range_x = width - (buffer // 3) - 1
            min_range_y = buffer // 3
            max_range_y = height // 5
            x = random.randint(min_range_x, max_range_x)
            y = random.randint(min_range_y, max_range_y)
            min_height = 4
            max_height = 10
            sx = int(np.interp(y, [min_range_y, max_range_y], [min_height, max_height]))
            # sx = random.randint(min_height, max_height)
            sy = sx
        elif shape == "tree":
            min_range_x = buffer
            max_range_x = width - buffer - 1
            min_range_y = height // 3
            max_range_y = height - buffer - 1
            x = random.randint(min_range_x, max_range_x)
            y = random.randint(min_range_y, max_range_y)
            min_height = 8
            max_height = 24
            sy = int(np.interp(y, [min_range_y, max_range_y], [min_height, max_height]))
            # sy = random.randint(min_height, max_height)
            sx = sy
        elif shape == "cloud":
            min_range_x = buffer // 2
            max_range_x = width - (buffer // 2) - 1
            min_range_y = buffer
            max_range_y = height // 4
            x = random.randint(min_range_x, max_range_x)
            y = random.randint(min_range_y, max_range_y)
            min_width, max_width = 15, 40
            # sx = random.randint(min_width, max_width)
            sx = int(np.interp(y, [min_range_y, max_range_y], [min_width, max_width]))
            sy = sx // random.randint(3, 5)
        else:
            min_range_x = buffer
            min_range_y = buffer
            max_range_x = width - buffer - 1
            max_range_y = height - buffer - 1
            min_size_x = buffer
            max_size_x = width // 4
            min_size_y = buffer
            max_size_y = height // 4
            x = random.randint(min_range_x, max_range_x)
            y = random.randint(min_range_y, max_range_y)
            sx = random.randint(min_size_x, max_size_x)
            if shape == "rectangle":
                sy = random.randint(min_size_y, max_size_y)
            else:
                ## other shapes have the same sx and sy
                sy = sx
        return color, (x, y, sx, sy)
    def random_image(self, height, width):
        '''
        Creates random specifications of an image with multiple shapes.
        Returns the background color of the image and a list of shape
        specifications that can be used to draw the image.
        '''
        # Pick random background color
        bg_color = np.array([random.randint(0, 255) for _ in range(3)])
        # Generate a few random shapes and record their bounding boxes
        tmp_shapes = []
        shapes = []
        boxes = []
        N = random.randint(1, 7)
        shape_choices = ["person", "car", "sun", "building", "tree", "cloud"]
        for _ in range(N):
            shape = random.choice(shape_choices)
            color, dims = self.random_shape(shape, height, width)
            tmp_shapes.append((shape, color, dims))
            # only allow one sun per image
            if shape == "sun":
                shape_choices.remove("sun")
        #--------------------------------------------------------------------------------
        # order shape objects based on closeness to the bottom of the image:
        # this gives items closer to the viewer higher priority in NMS
        #--------------------------------------------------------------------------------
        print(" Random Image Routine ")
        pp.pprint(tmp_shapes)
        sort_lst = [itm[2][1] for itm in tmp_shapes]
        print(sort_lst)
        sorted_shape_ind = np.argsort(np.array(sort_lst))[::-1]
        print(sorted_shape_ind)
        for i in sorted_shape_ind:
            shapes.append(tmp_shapes[i])
            x, y, sx, sy = tmp_shapes[i][2]
            boxes.append([y - sy, x - sx, y + sy, x + sx])
        print('=== Shapes after sorting ===')
        pp.pprint(shapes)
        pp.pprint(boxes)
        # Suppress occlusions of more than 0.3 IoU:
        # apply non-max suppression to avoid shapes covering each other;
        # object scores (which dictate the priority) are assigned in the order they were created
        print('===== non-max-suppression =====')
        keep_ixs = non_max_suppression(np.array(boxes), np.arange(N), 0.29)
        shapes = [s for i, s in enumerate(shapes) if i in keep_ixs]
        print('===> Original number of shapes {} # after NMS {}'.format(N, len(shapes)))
        return bg_color, shapes
# -
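The occlusion handling inside `load_mask` can be illustrated on toy masks: instances are stacked in drawing order, so the last channel (drawn on top) keeps its full mask while earlier instances lose any overlapping pixels. A minimal sketch using the same loop:

```python
import numpy as np

# Two toy instance masks over a 1x5 strip; channel 1 is drawn on top.
mask = np.zeros((1, 5, 2), dtype=np.uint8)
mask[0, 0:4, 0] = 1    # instance 0 covers columns 0-3
mask[0, 2:5, 1] = 1    # instance 1 (topmost) covers columns 2-4

# Same occlusion loop as load_mask: walk from the top instance downward,
# zeroing pixels already claimed by an instance above.
occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
for i in range(mask.shape[-1] - 2, -1, -1):
    mask[:, :, i] = mask[:, :, i] * occlusion
    occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))

print(mask[0, :, 0])  # instance 0 keeps only columns 0-1: [1 1 0 0 0]
print(mask[0, :, 1])  # instance 1 untouched:              [0 0 1 1 1]
```

After the loop every pixel belongs to at most one instance, which is what the per-instance mask format expects.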
| notebooks/Shapes_NewShapes/dev - NewShapes Class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_diabetes
regression_inputs, regression_targets = load_diabetes(return_X_y=True)
inputs_train, inputs_test, targets_train, targets_test = train_test_split(regression_inputs, regression_targets, test_size=0.2, random_state=50)
# This is all duplicated from the Basic Regression package, so the confirmation that it works has been omitted
KRR_Linear = KernelRidge(alpha=1.0, kernel='linear')
KRR_Linear.fit(inputs_train, targets_train)
# Note that 'rbf' is the (Gaussian) radial basis function kernel
KRR_RBF = KernelRidge(alpha=1.0, kernel='rbf')
KRR_RBF.fit(inputs_train, targets_train)
print(KRR_Linear.score(inputs_test, targets_test))
print(KRR_RBF.score(inputs_test, targets_test))
# predict() returns a 1-D array here (single target), so index with [0:6]
print(KRR_Linear.predict(inputs_test)[0:6])
print(KRR_RBF.predict(inputs_test)[0:6])
print(targets_test[0:6])
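The fixed `alpha=1.0` above is arbitrary, and the RBF kernel is sensitive to feature scale. A common next step (a sketch, not part of this notebook) is to standardize the features and grid-search `alpha` and the kernel width `gamma`; the parameter names below follow scikit-learn's pipeline naming convention:

```python
from sklearn.datasets import load_diabetes
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=50)

# Standardize, then tune regularization strength and RBF width jointly.
model = make_pipeline(StandardScaler(), KernelRidge(kernel='rbf'))
grid = GridSearchCV(
    model,
    param_grid={'kernelridge__alpha': [0.1, 1.0, 10.0],
                'kernelridge__gamma': [0.01, 0.1, 1.0]},
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))  # R^2 of the best configuration
```

The grid values are illustrative; in practice a log-spaced grid (e.g. `np.logspace(-3, 2, 6)`) is typical.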
| 09_Kernel_Ridge_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Machine Learning Project
# --------------
# ## Step 1. Define the problem precisely
# ## Step 2. Get the data
# ## Step 3. Explore and visualize the data
# ## Step 4. Prepare the data
# ## Step 5. Select and train a model
# ## Step 6. Tune the model's hyperparameters and improve its performance
# ## Step 7. Present the solution
# ## Step 8. Deploy the model and put it into service
# -----
# ## 1. Define the problem precisely
# - What problem are we trying to solve?
# - What are the inputs and outputs?
#
# ### 1.1 Problem definition: predict the median house value `median_house_value` for blocks in California, USA
# - Block: the smallest geographical unit for which the US Census Bureau publishes sample data (typically 600-3,000 people)
#
# ## 2. Loading the data
# This is the data used to predict California house prices. It is available on [Kaggle](https://www.kaggle.com/harrywang/housing).
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# -
# Load the data into a DataFrame.
housing_data = pd.read_csv('./input/housing.csv')
housing_data.shape
# Take a look at the data.
housing_data.head()
housing_data.describe()
# 25th percentile (first quartile), median, 75th percentile (third quartile)
#
# e.g. 3, 1, 5, 3, 6, 7, 2, 9 => sorted: 1, 2, 3, 3, 5, 6, 7, 9 => Q1 = 2.5, Q3 = 6.5 (midpoint convention)
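The worked example can be checked with `np.percentile`. Note that it uses the midpoint quartile convention, whereas pandas' `describe()` uses linear interpolation, which gives slightly different quartiles for this data (the `method` keyword assumes NumPy >= 1.22; older versions call it `interpolation`):

```python
import numpy as np

data = [3, 1, 5, 3, 6, 7, 2, 9]
# Midpoint convention, matching the worked example above:
print(np.percentile(data, [25, 75], method='midpoint'))  # [2.5 6.5]
# Linear interpolation, which pandas' describe() uses:
print(np.percentile(data, [25, 75], method='linear'))    # [2.75 6.25]
```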
# Visualize the data distributions with simple histograms.
housing_data.hist(bins=50, figsize=(20, 15))
plt.show()
# - Looking at the scales, the `median_income` feature is not expressed in dollars.
# - `housing_median_age` and `median_house_value` were capped at their minimum and maximum values.
#     - A capped target value can cause serious problems (prediction becomes impossible for values above \$500,000).
#     - Option 1: obtain accurate labels for the data above \$500,000.
#     - Option 2: remove all data forcibly capped at \$500,000 before training.
# ### 2.1. Splitting the training set and test set
# Before examining the data in detail,
# - <U>Split the train set and test set</U>. To prevent **data snooping**, keep the test set strictly separate from the train set.
#     - **Data snooping** : selecting a model using a test set it has already been exposed to, which yields overly optimistic estimates and worse-than-expected real performance
# - The test set should <U>represent the categories of the important features</U> in the full data set. That is, if `median_income` is an important feature for predicting house prices, its categories should be evenly distributed across the test set.
# To sample the test set evenly across median income, we bin median income into categories 1-5.
housing_data['income_cat'] = np.ceil(housing_data['median_income'] / 1.5)
housing_data['income_cat'].where(housing_data['income_cat'] < 5, 5.0, inplace=True)
housing_data['income_cat'].hist()
# Split the train set and test set. `StratifiedShuffleSplit` splits the data set while preserving the per-category proportions of a feature.
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_idx, test_idx in split.split(housing_data, housing_data['income_cat']):
strat_train_set = housing_data.loc[train_idx]
strat_test_set = housing_data.loc[test_idx]
# Proportion of income categories in the full data set
housing_data['income_cat'].value_counts() / len(housing_data)
# Proportion of income categories in the train set
strat_train_set['income_cat'].value_counts() / len(strat_train_set)
# Proportion of income categories in the test set
strat_test_set['income_cat'].value_counts() / len(strat_test_set)
for _set in ([strat_train_set, strat_test_set]):
_set.drop('income_cat', axis=1, inplace=True)
# ## 3. Exploratory Data Analysis (EDA)
# This is data exploration and visualization (**exploratory data analysis**). If the training set is very large, you can sample a separate exploration set to keep the manipulations simple and fast.
eda_train_set = strat_train_set.copy()
# ### 3.1 Plotting
# Since we have geographic information (latitude and longitude), we visualize the data with a scatter plot.
eda_train_set.plot(kind='scatter', x='longitude', y='latitude', figsize=(8, 6), fontsize=10, )
# The shape of the scatter plot traces California well, but no particular pattern stands out.
# Highlight the dense areas.
eda_train_set.plot(kind='scatter', x='longitude', y='latitude', figsize=(8, 6), fontsize=10, alpha=0.1)
# Population is encoded as the marker radius (s), and house value as the marker color.
eda_train_set.plot(kind='scatter', x='longitude', y='latitude', figsize=(10, 8), fontsize=10, alpha=0.4,
s=eda_train_set['population']/100, label='population', c='median_house_value',
cmap=plt.get_cmap('jet'), colorbar=True, sharex=False)
plt.legend()
# House prices appear to be related to location (closer to the ocean is more expensive) and to population density.
# ### 3.2 Correlations
# We examine the **correlations (Pearson's r)**.
corr_matrix = eda_train_set.corr()
# > Let's look at the correlations numerically.
# Correlation between the median house value and the other features.
corr_matrix['median_house_value'].sort_values(ascending=False)
# `median_house_value` tends to go up as `median_income` goes up. As latitude (`latitude`) increases, i.e. going north, house prices tend to decrease.
# > Let's visualize the correlations.
# +
from pandas.plotting import scatter_matrix
attrs = ['median_house_value', 'median_income', 'total_rooms', 'housing_median_age']
scatter_matrix(eda_train_set[attrs], figsize=(12, 8))
plt.show()
# -
# We zoom in on `median_income`, which looks the most useful for prediction.
eda_train_set.plot(kind='scatter', x='median_income', y='median_house_value', alpha=0.1)
# > What the correlation analysis tells us
# - The correlation is very strong. There is a clear upward trend, and the points are not widely dispersed.
# - The horizontal line at \\$500,000 comes from the price cap and is not a problem in itself, but there are also horizontal lines around \\$450,000, \\$350,000, and \\$280,000. It is better to remove those districts so the model does not learn these artifacts.
# #### 3.2.1 Combining features and observing the correlations of the combinations
# Beyond exploring individual features, **try combinations of features**. A feature that is meaningless on its own can become meaningful in combination.
# - rooms per household (total rooms / households)
# - bedrooms per room (total bedrooms / total rooms)
# - population per household (population / households)
eda_train_set['room_per_household'] = eda_train_set['total_rooms'] / eda_train_set['households']
eda_train_set['bedrooms_per_room'] = eda_train_set['total_bedrooms'] / eda_train_set['total_rooms']
eda_train_set['population_per_household'] = eda_train_set['population'] / eda_train_set['households']
corr_matrix = eda_train_set.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
# `bedrooms_per_room` is more strongly correlated with the target than `total_bedrooms` or `total_rooms`. (Naturally, larger houses are more expensive.)
#
# This exploration step need not be perfect. <U>Use the initial insights to build a quick prototype, analyze the results to gain more insight, and then come back to this exploration step and explore the data again based on those results.</U> Iterating this loop quickly is what matters.
# ## 4. Data preparation (processing and cleaning)
train_x = strat_train_set.drop('median_house_value', axis=1)
train_y = strat_train_set['median_house_value'].copy()
test_x = strat_test_set.drop('median_house_value', axis=1)
test_y = strat_test_set['median_house_value'].copy()
# ### 4.1 Options for handling missing values
# - Remove the affected districts. (drop only the rows concerned)
# - Drop the whole feature. (do not use that feature at all)
# - Fill in some value. (0, the mean, the median, etc.)
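# For reference, the three options above map to one-line pandas operations; a sketch on a toy frame (the column names are illustrative only):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'total_bedrooms': [3.0, np.nan, 5.0],
                    'households': [2, 4, 6]})

opt1 = toy.dropna(subset=['total_bedrooms'])   # option 1: drop the affected rows
opt2 = toy.drop('total_bedrooms', axis=1)      # option 2: drop the whole feature
median = toy['total_bedrooms'].median()
opt3 = toy.fillna({'total_bedrooms': median})  # option 3: fill with the median

print(len(opt1), opt2.shape[1], opt3['total_bedrooms'].tolist())  # 2 1 [3.0, 4.0, 5.0]
```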
# Check for missing values.
for col in train_x.columns:
print(col, sum(train_x[col].isnull()))
# The `Imputer` class makes missing values easy to handle. We will replace missing values with the median. (Note: in newer scikit-learn versions this class was replaced by `sklearn.impute.SimpleImputer`.)
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy='median')
# Take the numeric data and fit the imputer on it.
num_train = train_x._get_numeric_data()
imputer.fit(num_train)
imputer.statistics_ # the medians
train_x_num = imputer.transform(num_train)
train_x_num = pd.DataFrame(train_x_num, columns=num_train.columns, index=train_x.index.values)
train_x_num.head(2)
# Apply the same imputer to the test set. Note carefully that <U>**the statistics fitted on the train set (here, the medians) are used to transform the test set**</U>.
test_x_num = pd.DataFrame(imputer.transform(test_x._get_numeric_data()),
columns=test_x._get_numeric_data().columns,
index=test_x.index.values)
test_x_num.head(2)
# ### 4.2 Handling text or categorical features
# - Text features cannot be learned by sklearn models directly; they must be converted to integers or floats. `LabelBinarizer` can be used for this.
# - Pandas' `get_dummies` is a simple alternative.
# +
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
ocean_proximity = encoder.fit_transform(strat_train_set['ocean_proximity'])
# -
# The result of using sklearn's LabelBinarizer.
ocean_proximity
# Now using pandas instead.
train_ocean_proximity_dummies = pd.get_dummies(strat_train_set['ocean_proximity'])
train_ocean_proximity_dummies.head()
# Combine the numerical data and the one-hot encoded data.
train_x = pd.concat([train_x_num, train_ocean_proximity_dummies], axis=1)
train_x.head()
# Process the test set in the same way.
test_ocean_proximity_dummies = pd.get_dummies(strat_test_set['ocean_proximity'])
test_ocean_proximity_dummies.head()
test_x = pd.concat([test_x_num, test_ocean_proximity_dummies], axis=1)
test_x.head()
# Save the list of features separately.
feature_list = train_x.columns.values
# ### 4.3 Feature scaling
# - Machine learning algorithms do not work well when the input numerical features have very different scales; a single feature can dominate the convergence to the optimum.
# - **min-max scaling**
#     - Subtract the minimum from the data and divide by the difference between the maximum and the minimum. `MinMaxScaler`
# - **standardization**
#     - Subtract the mean from the data and divide by the standard deviation, so the resulting distribution has unit variance. `StandardScaler`
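# The two scalings are easy to write out directly in numpy, which makes the formulas above concrete:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

minmax = (x - x.min()) / (x.max() - x.min())  # squeezed into [0, 1]
standard = (x - x.mean()) / x.std()           # zero mean, unit variance

print(minmax)                                 # [0.   0.25 0.5  0.75 1.  ]
print(standard.mean(), standard.std())        # ~0.0 and 1.0
```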
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(train_x)
train_x = scaler.transform(train_x)
test_x = scaler.transform(test_x)
# The steps above can be automated with a `Pipeline`. We will cover this later.
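# As a quick preview, a minimal sketch of such a pipeline (using `SimpleImputer`, the newer replacement for `Imputer`; the toy matrix is illustrative only):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# chain imputation and scaling into a single estimator
num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
])

X = np.array([[1.0, np.nan], [3.0, 10.0], [5.0, 20.0]])
Xt = num_pipeline.fit_transform(X)  # NaN filled with the median, then standardized
print(Xt.shape)                     # (3, 2)
```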
# ## 5. Model selection and training
# ### 5.1 Linear Regression
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(train_x, train_y)
x_samples = train_x[:5]
y_samples = train_y[:5]
print('Predictions: ', lin_reg.predict(x_samples))
print('Labels: ', list(y_samples))
# We evaluate the model using **RMSE**, one of the standard performance metrics for regression models.
# +
from sklearn.metrics import mean_squared_error
predictions = lin_reg.predict(train_x)
lin_mse = mean_squared_error(train_y, predictions)
lin_rmse = np.sqrt(lin_mse)
print(lin_rmse)
# -
train_y.plot(kind='hist', title='Range of home prices')
# Compared with the distribution of house prices, this RMSE is not a good value.
# #### Cross validation scores for linear regression
# > **Cross validation**
# - Cross validation is used to compare the performance of several models, to evaluate a model, or to choose model parameters.
# - The training set is split into a smaller training set and a validation set; the model is trained on the smaller training set and evaluated on the validation set.
from sklearn.model_selection import cross_val_score
lin_scores = cross_val_score(lin_reg, train_x, train_y, scoring='neg_mean_squared_error', cv=10)
lin_rmse_cv_scores = np.sqrt(-lin_scores)
# Implement a function to display the cross validation scores.
def display_scores(scores):
print('Scores', scores)
print('Mean', scores.mean())
print('Std deviation', scores.std())
display_scores(lin_rmse_cv_scores)
# For comparison, we compute the cross validation scores of a decision tree model.
# ### 5.2 DecisionTree
# #### Cross validation score of the DecisionTree
# +
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_scores = cross_val_score(tree_reg, train_x, train_y, scoring='neg_mean_squared_error', cv=10)
tree_rmse_cv_scores = np.sqrt(-tree_scores)
display_scores(tree_rmse_cv_scores)
# -
# ### 5.3 RandomForestRegressor
# #### 5.3.1 Cross validation score of the RandomForestRegressor
# +
from sklearn.ensemble import RandomForestRegressor
rf_reg = RandomForestRegressor()
rf_scores = cross_val_score(rf_reg, train_x, train_y, scoring='neg_mean_squared_error', cv=10)
rf_rmse_cv_scores = np.sqrt(-rf_scores)
# RMSE on the validation sets.
display_scores(rf_rmse_cv_scores)
# -
# The random forest model performs better than the linear regression model.
# +
rf_reg.fit(train_x, train_y)
pred = rf_reg.predict(train_x)
# RMSE on the training set.
np.sqrt(mean_squared_error(train_y, pred))
# -
# However, the RMSE on the training set is much lower than the RMSE on the validation sets. This is because the random forest model is **overfitting**. Ways to avoid overfitting include:
# - Simplify the model.
# - Constrain it, i.e. apply regularization.
# - Gather more training data.
#
# Before settling on a model, you should <U>try a variety of models</U> across several families of machine learning algorithms, without spending too much time tuning hyperparameters. The goal is to shortlist two to five promising models.
# ## 6. Fine-tuning the model
# Suppose we have shortlisted the promising models. They now need fine-tuning. Common tuning approaches include:
# - Grid search
# - Randomized search
# - Ensemble methods
# ### 6.1. Grid search
# - sklearn's `GridSearchCV`
# +
from sklearn.model_selection import GridSearchCV
# Parameter sets to search: the total number of training runs is 18 * 5 = 90.
params = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]}
]
# Model
forest_reg = RandomForestRegressor()
# Grid search
grid_search = GridSearchCV(forest_reg, params, cv=5, scoring='neg_mean_squared_error', return_train_score=True)
grid_search.fit(train_x, train_y)
# -
# Best parameters
grid_search.best_params_
# Best tree model
grid_search.best_estimator_
# Evaluation score for each parameter combination
cv_results = grid_search.cv_results_
for mean_score, params in zip(cv_results['mean_test_score'], cv_results['params']):
print(np.sqrt(-mean_score), params)
# The RMSE seen earlier averaged 52247.025765264174, while the tuned parameters give an average RMSE of 50057.411415517876. Since 8 and 30 were the largest values tried for their respective parameters, the model's performance could likely be improved further.
# ### 6.2 Randomized search
# Use sklearn's `RandomizedSearchCV`. It builds models by sampling parameters randomly from the candidate ranges.
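# A minimal sketch on toy data (the toy dataset and parameter ranges here are illustrative, not the housing setup above):

```python
import numpy as np
from sklearn.datasets import make_regression        # stands in for train_x / train_y
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint

X, y = make_regression(n_samples=200, n_features=8, random_state=42)

# sample 10 parameter combinations at random instead of exhausting a grid
param_dist = {'n_estimators': randint(3, 30), 'max_features': randint(2, 8)}
rnd_search = RandomizedSearchCV(RandomForestRegressor(random_state=42), param_dist,
                                n_iter=10, cv=3,
                                scoring='neg_mean_squared_error', random_state=42)
rnd_search.fit(X, y)
print(rnd_search.best_params_)
```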
# ## 7. Model analysis and final model evaluation (presenting the solution)
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
sorted(zip(feature_importances, feature_list), reverse=True)
# +
final_model = grid_search.best_estimator_
final_pred = final_model.predict(test_x)
final_rmse = np.sqrt(mean_squared_error(test_y, final_pred))
final_rmse
# -
# When evaluating on the test set, performance is often slightly lower than what was measured with cross validation, especially after heavy hyperparameter tuning. Because the model has been finely tuned to perform well on the validation data (it is likely overfitting the validation set), keep in mind that it may not work as well on new data.
| machine-learning-lecture-notes-master/Lecture01_Machine_Learning_Simple_Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pandas import DataFrame
# %matplotlib inline
# +
path = 'C:/Users/Pippo/Desktop/python_examples/Projecto_Final/life_expectancy/'
df = pd.read_csv(path + 'UNdata_Life_Expectancy_males.csv')
df1 = pd.read_csv(path + 'UNdata_Life_Expectancy_females.csv')
# -
df = DataFrame(df, columns=['Country or Area', 'Year(s)', 'Value', 'Gender', 'Variant'])
df['Gender'] = 'Male'
df = df.drop('Variant', axis=1)
df.rename(columns={'Country or Area': 'country', 'Year(s)': 'year', 'Value': 'age', 'Gender': 'sex'}, inplace = True)
df
df1 = DataFrame(df1, columns=['Country or Area', 'Year(s)', 'Value', 'Gender', 'Variant'])
df1['Gender'] = 'Female'
df1 = df1.drop('Variant', axis=1)
df1.rename(columns={'Country or Area': 'country', 'Year(s)': 'year', 'Value': 'age_w', 'Gender': 'sex_f'}, inplace = True)
df1
dfw = df1[['age_w', 'sex_f']]
dfw.head()
df_all = pd.concat([df, dfw], axis=1)
df_all.head()
df_bolivia = df_all.loc[df_all['country'] == 'Bolivia']
df_bolivia = df_bolivia.reset_index(drop=True)
df_bolivia
df_all.to_csv('UNdata_Life_Expectancy.csv')
# +
def average_year(y):
start, end = y.split('-')
avg = (int(start) + int(end)) / 2
return avg
df_all['avg_year'] = df_all['year'].apply(average_year)
df_all
# -
df_all.shape
df_all = df_all.dropna(how='any',axis=0)
df_all
df_all.shape
# # Example Germany Female and Male
df_Germany = df_all.loc[df_all['country'] == 'Germany']
df_Germany = df_Germany.reset_index(drop=True)
df_Germany
df_Germany.dtypes
type(df_all)
df_Germany.plot.scatter('avg_year', 'age')
df_Germany.plot.scatter('avg_year', 'age_w')
df_all['age'].hist(bins=10)
df_all.plot.scatter('age', 'avg_year', s=0.8)
# +
# plot life expectancy for boys vs girls over time
plt.plot(df_Germany['avg_year'], df_Germany['age'],label='boys')
plt.plot(df_Germany['avg_year'], df_Germany['age_w'],label='girls')
plt.legend()
plt.title('Life expectancy boys vs girls')
plt.ylabel('age average')
plt.xlabel('year')
plt.show()
# -
# # Linear Regression
X = df_Germany['avg_year'].values
y = df_Germany['age']
y_w = df_Germany['age_w']
from sklearn.linear_model import LinearRegression
X = df_Germany['avg_year'].values
y = df_Germany['age']
m = LinearRegression()
m
X.shape, y.shape
X = np.array(X).reshape(-1, 1)
y = np.array(y)
X.shape, y.shape
m.fit(X,y)
m.coef_
m.intercept_
ypred = m.predict(X)
ypred
plt.figure()
plt.plot(X, y, 'bo', label='years')
plt.plot(X, ypred, 'r-', label ='Age per year')
plt.xlabel('avg_year')
plt.ylabel('age')
plt.legend()
plt.title('Life Expectancy Germany')
plt.show()
# # Girls Linear Regression
X = df_Germany['avg_year'].values
y_w = df_Germany['age_w']
m_w = LinearRegression()
m_w
X.shape, y_w.shape
X = np.array(X).reshape(-1,1)
y_w = np.array(y_w)
X.shape, y_w.shape
m_w.fit(X,y_w)
m_w.coef_
m_w.intercept_
y_wpred = m_w.predict(X)
y_wpred
plt.figure()
plt.plot(X, y_w, 'bo', label='Years')
plt.plot(X, y_wpred, 'r-', label ='Age per year')
plt.xlabel('avg_year')
plt.ylabel('age_w')
plt.legend()
plt.title('Life Expectancy Germany')
plt.show()
current_year = time.localtime().tm_year
# # First calculation
def contact_1(name, gender, age, country, df):
df_Germany = df.loc[df['country'] == country]
X = df_Germany['avg_year'].values
X = np.array(X).reshape(-1,1)
if gender == 'male':
y = df_Germany['age'].values
else:
y = df_Germany['age_w'].values
m = LinearRegression()
m.fit(X,y)
p1 = ( age - current_year - m.intercept_) / (m.coef_[0] - 1)
return p1
# # PERSON 1
# +
name = input('What is your name? ')
gender = input('Gender M or W? ')
age = int(input('How old are you? '))
country = input('Where do you live? ')
person_1 = contact_1(name, gender, age, country, df_all)
# -
# # PERSON 2
# +
name_2 = input('What is her/his name? ')
gender = input('Gender M or W? ')
age = int(input('How old is she/he? '))
country = input('Where does he/she live? ')
person_2 = contact_1(name_2, gender, age, country, df_all)
# -
years_left = min([person_1, person_2]) - current_year
print(years_left)
# # General Questions
# +
input('What is your relationship with ' + name_2 + ': ')
# choose option : Friends - Family - Couple
# +
# I need button options for week - months or year, where the user can choose time in days per:
# a week : 1 to 7 days
# a month : 1 to 30 days
# a year: 1 to 60 days
times_seen_p1_p2 = int(input('How many times do ' + name_2 + ' and you see each other? '))
# -
hours_p1_p2 = int(input('Each time you see ' + name_2 + ', how many hours do you spend together? '))
# from 1 to 24 hours?
time_left = (hours_p1_p2 * 0.000114155) * times_seen_p1_p2 * years_left * 365  # 0.000114155 ≈ 1/8760; combined with * 365 this converts total hours into days
print(time_left)
# this is the time left in days.
days = int(time_left)
days
# This is the float translated in hours!!
hours = round((time_left - days) * 24, 1)
hours
print(name + ', if you carry on seeing ' + name_2 + ' with the same frequency as you have done so far, this is the time you have left to spend together.')
print('According to your data, the time left is', days, 'days and', hours, 'hours')
# +
"""
####################################################
# should be below the first general question
# This could be used to gather more information and data
input('Do you go on Holiday together?')
# Yes - NO
# IF YES :
input('How long do you go for?')
"""
# -
| Projecto_Final/TEST M & W all_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Assignment 1 - Images
# The assignments in this course will be done using python. If you are not familiar with python, I recommend that you take a look at a book like this one https://www.safaribooksonline.com/library/view/python-for-data/9781491957653/.
# In this first assignment, you will be guided through some useful commands that will be used throughout the assignment. There will also be demonstrations to help you along the way.
#
# Before we start, we need to load some libraries.
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
# this line is needed to display the plots and images inline on the notebook
# %matplotlib inline
# ## Arrays and images
# Arrays are the basic data structures we will use in all assignments. We will in particular use the ndarray which is provided by the numpy library. Arrays can be created in different ways.
# By initialization
# +
a=np.array([1,2,3,4])
b=np.array([[1,2,3,4],[5,6,7,8]])
print(a)
print("2D")
print(b)
# -
# Special initialization
z=np.zeros([2,3])
o=np.ones([3,2])
print(z)
print(o)
# Mesh generation
r=np.arange(0,6)
print(r)
x,y=np.meshgrid(np.arange(-3,5),np.arange(-5,5))
print(x)
print(y)
# Random initialization
g=np.random.normal(0,1,[2,3]) # Normal distribution m=0.0, s=1.0
u=np.random.uniform(0,1,[2,3]) # Uniform distribution [0,1]
p=np.random.poisson(3*np.ones([2,3])) # Poisson distribution
print(g)
print(u)
print(p)
# ### Elementwise arithmetics
b=np.array([[1,2,3,4],[5,6,7,8]])
c=np.array([[2,2,3,3],[8,7,7,8]])
print(c+b)
print(c*b)
# ### Exercise 1
# Create three matrices
# $\begin{array}{ccc}
# A=\left[\begin{array}{ccc}1 & 2 & 3\\2 & 1 & 2\\ 3 & 2 & 1\end{array}\right] &
# B=\left[\begin{array}{ccc}2 & 4 & 8\end{array}\right] &
# C=\left[\begin{array}{ccc}1 & 2 & 3\\1 & 4 & 9\\ 1 & 8 & 27\end{array}\right]
# \end{array}$
#
# 1. Compute elementwise $A+C$, $B*B$
#
# 2. Add a Gaussian random matrix ($\mu$=4, $\sigma$=2) to $A$
#
# +
# Your code here
# -
# ## Visualization
# Visualization of the results is a frequently recurring task when you work with images. Here, you will use Matplotlib for plots and image displays. There are different purposes for visualization, and Matplotlib offers many ways to present and decorate the plots. A good starting point if you want to create beautiful plots is the book https://www.packtpub.com/big-data-and-business-intelligence/matplotlib-plotting-cookbook.
x=np.arange(0,10,0.01)
y=np.sin(x)
plt.plot(x,y,x,-y)
plt.title('Sine plot')
plt.xlabel('Angle')
plt.ylabel('Amplitude')
plt.legend(['Positive','Negative'])  # a list keeps the label order deterministic
# You can also use subplots
# +
x=np.arange(0,10,0.01)
y=np.sin(x)
fig,ax = plt.subplots(2,2,figsize=(15,10)) # with subplots it makes sense to increase the plot area
ax=ax.ravel() # converting 2x2 array to a 1x4
ax[0].plot(x,y,x,-y)
ax[0].set_title('Sine plot')
ax[0].set_xlabel('Angle')
ax[0].set_ylabel('Amplitude')
ax[0].legend(['Positive','Negative'])
ax[1].plot(x,2*y,x,-y)
ax[1].set_title('Sine plot')
ax[1].set_xlabel('Angle')
ax[1].set_ylabel('Amplitude')
ax[1].legend(['Positive','Negative'])
ax[2].plot(x,y,x,-2*y)
ax[2].set_title('Sine plot')
ax[2].set_xlabel('Angle')
ax[2].set_ylabel('Amplitude')
ax[2].legend(['Positive','Negative'])
ax[3].plot(x,2*y,x,-2*y)
ax[3].set_title('Sine plot')
ax[3].set_xlabel('Angle')
ax[3].set_ylabel('Amplitude')
ax[3].legend(['Positive','Negative']);
# -
# ### Display images
img=np.random.normal(0,1,[100,100])
plt.imshow(img, cmap='gray')
# colormaps can be found on https://matplotlib.org/examples/color/colormaps_reference.html
# ### Save result
# You can save the resulting plot using `plt.savefig`. The file type is given by the file extension, e.g. png, svg, pdf.
plt.savefig('random.pdf')
# ### Exercise 2a
# * Create two matrices, one containing x values and one containing $y=\exp{\left(-\frac{x^2}{\sigma^2}\right)}$
# * Plot x and y in the first panel of 1 by 2 panel-figure
# * Plot x and y with a logarithmic y-axis in the second panel of the same figure
#
# Useful commands:
# * plt.semilogy(x,y), plots with logarithmic y-axis
# +
# your code here
# -
# ### Exercise 2b
# * Create x and y coordinate matrices using meshgrid (interval -10:0.1:10)
# * Compute $z=sinc\left(\sqrt{x^2+y^2}\right)$, where $sinc(x)=\frac{\sin(x)}{x}$; note that numpy's predefined `np.sinc(x)` computes the normalized version $\frac{\sin(\pi x)}{\pi x}$
# * Display z in a figure with correct axis-numbering
# * Add a colorbar
# * Change the colormap to pink
#
# Useful commands:
# * plt.imshow(img,cmap='mapname',extent=[]), colormaps can be found on https://matplotlib.org/examples/color/colormaps_reference.html
# * plt.colorbar()
#
# your code here
# ## Images
# ### Load and save images
# Mostly you want to load images to process. There are many options to load and save images. It depends on the loaded libraries and the file types which you chose. Here, we will use the functions provided by matplotlib.
img1=plt.imread('brick_lo.png')
img2=plt.imread('sand_bilevel.png')
plt.subplot(1,2,1)
plt.imshow(img1)
plt.subplot(1,2,2)
plt.imshow(img2)
# ## Programming
# Sooner or later you will need to create functions to avoid repeating the same sequences of commands. Functions have the following basic structure:
# ### Functions
def functionname(arg1, arg2) :
#
# Do some stuff here with the arguments
#
return result
# #### Example
def rms(x) :
    res = np.sqrt(np.mean(x**2))
return res
# ### Loops
# Iterations are often needed. They can be done using for-loops. There are however often optimized array operations that can be used instead.
# +
sum = 0.0
for x in np.arange(0,6) :
sum = sum + x
print(sum)
# -
# You can also loop over the contents of an array
# +
sum = 0.0
for x in [1,2,4,8,16] :
sum = sum + x
print(sum)
# -
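# The same accumulations can be written with numpy's optimized array operations instead of explicit loops:

```python
import numpy as np

print(np.arange(0, 6).sum())        # 15, same as the first for-loop
print(np.sum([1, 2, 4, 8, 16]))     # 31, same as the second for-loop
print(np.cumsum([1, 2, 4, 8, 16]))  # the running totals: [ 1  3  7 15 31]
```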
# ### Branches
# Sometimes you have to control the behavior depending on the results. This is done by branching
a=1
b=2
if (a<b) : # compare something
print('less') # do this if true
else :
print('greater') # otherwise this
# ### Exercise 3a
# Write a function 'expsq' that returns $y=\exp{\left(-\frac{x^2}{\sigma^2}\right)}$ when $x$ and $\sigma$ are provided as arguments.
# +
# you code here
# -
# ### Exercise 3b
# Write a loop over the values 1,3,5,7 and prints the results from function 'expsq' with $\sigma$=2
# +
# your code here
| Exercises/01-Images/Assignment_01_Images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:anaconda]
# language: python
# name: conda-env-anaconda-py
# ---
# # Genoa: A brief review of genetic operators
# ---
# Genetic Algorithms (GA) are based on the Darwinian concept of natural selection and can be used to solve search and optimisation problems. The algorithm can be described in the following steps:
#
# - Create a population of individuals whose genes encode a random solution to the problem.
# - Evaluate each individual's _fitness_ to solve the problem.
# - Replace a proportion of the worst-performing section of the population. The replacement children are created by selecting one or two parents, based on their fitness, and their genes are mixed or modified using genetic operators.
# - Genetic operators come in two main flavours - mutators and crossovers. Mutators take one parent and change a small section of its genes. Crossovers take two (or more) parents and splice their genes together.
# - The new children are evaluated and the process is repeated by evolving the population over several generations.
#
# ## genoa
#
# [genoa](https://github.com/cdragun/genoa) is a GA package written in Python which implements the following two encodings of the genotype:
#
# - FloatIndividual: the chromosome is a list of floats. This can be used in numerical optimisation and regression-type applications
# - OrderedIndividual: the genes encode an ordered set of items - used in schedule or sequence optimisation problems
#
# ---
#
# Author: <NAME>, 2018
#
# ---
#
# The below analysis is based on the OrderedIndividual encoding which has been applied to the Travelling Salesman Problem. To start, let us import some modules we need...
#
# +
# %matplotlib inline
import random
import re
from collections import Counter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
sns.set_style('white')
# -
# ## The Travelling Salesman Problem
# ***
# [Wikipedia](https://en.wikipedia.org/wiki/Travelling_salesman_problem) describes the travelling salesman problem (TSP) as:
# > Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?
#
# Given N cities, there are (N-1)! possible tours through them - assuming asymmetry in paths i.e. for a pair of cities A & B, the distance from A to B might not be the same as distance from B to A. If the paths are symmetric, there are only half the number of possible tours.
#
# So for a 10-city problem, any algorithm has to find the shortest among ~363,000 possible routes; which quickly grows to 9.3e+155 for a 100-city problem.
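# These route counts are just $(N-1)!$; a quick sanity check of the figures quoted above:

```python
import math

def possible_tours(n_cities):
    """Number of distinct tours when paths are asymmetric: (N-1)!"""
    return math.factorial(n_cities - 1)

print('%.1e' % possible_tours(10))   # ~3.6e+05, i.e. ~363,000 routes
print('%.1e' % possible_tours(29))   # 3.0e+29  (wi29)
print('%.1e' % possible_tours(100))  # 9.3e+155 (a 100-city problem)
```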
#
# A number of algorithms and heuristics to find the solution to the TSP are described in the Wikipedia article and some of these have been explored by [<NAME>](http://norvig.com) - from whom I have borrowed a few concepts.
#
# The [University of Waterloo](http://www.math.uwaterloo.ca/tsp/index.html) site has a large collection of problem sets from which I have taken the following for this analysis:
#
# | Dataset | Region | Num. Cities | Optimal tour | Possible routes |
# |:---------|---------------:|------------:|-------------:|----------------:|
# | wi29.tsp | Western Sahara | 29 | 27,603 | 3.0e+29 |
# | qa194.tsp| Qatar | 194 | 9,352 | 6.9e+358 |
# | uy734.tsp| Uruguay | 734 | 79,114 | 4.0e+1783 |
#
#
# Let us define each city as a Point and load in the Western Sahara dataset...
#
# +
class Point(complex):
"""2-D point"""
x = property(lambda self: self.real)
y = property(lambda self: self.imag)
def __repr__(self):
return 'Point({}, {})'.format(self.x, self.y)
def __iter__(self):
yield self.x
yield self.y
def distance(a, b):
"""Return the distance between two points (rounded as per TSP datafile)"""
return int(abs(a - b) + 0.5)
def load_tsp_data(filename):
"""Load tsp datafile and return a list of points"""
pts = []
tag_found = False
with open(filename, "r") as fh:
for line in fh:
if re.match('EOF', line):
break
if re.match('EDGE_WEIGHT_TYPE', line):
_, t = line.strip().split(': ')
if t != 'EUC_2D':
raise TypeError('TSP format not supported: {}'.format(t))
continue
if re.match('NODE_COORD_SECTION', line):
tag_found = True
continue
if not tag_found:
continue
# load the coordinates
_, x, y = line.strip().split()
pts.append(Point(float(x), float(y)))
return pts
def tour_length(cities, tour):
"""Return the total length of a given tour"""
return sum(distance(cities[tour[i - 1]], cities[tour[i]])
for i in range(len(tour)))
def plot_lines(points, style='bo-'):
"""Plot lines to connect a series of points."""
plt.plot([p.x for p in points], [p.y for p in points], style)
plt.xticks([]); plt.yticks([]);
def plot_tour(cities, tour):
"""Plot the cities as circles and the tour as lines between them. Start city is red square."""
points = [cities[c] for c in tour]
start = cities[tour[0]]
plot_lines(points + [start])
# Mark the start city with a red square
plot_lines([start], 'rs')
# -
# Load in the Western Sahara dataset and plot it
wi29 = load_tsp_data('wi29.tsp')
plt.figure(figsize=(6,6)); plt.title('Western Sahara'); plot_lines(wi29, 'bo');
# Create a random tour, calculate its length and then plot it
random_tour = list((range(len(wi29))))
random.shuffle(random_tour)
tour_length(wi29, random_tour)
plt.figure(figsize=(6,6)); plt.title('Random Tour'); plot_tour(wi29, random_tour)
# ## Genetic operators in the OrderedIndividual
# ***
# The OrderedIndividual's chromosome is constructed as an ordered set of numbers in the range 0..N-1, each of which represents the index of a city in the dataset and consequently the route taken by the salesman.
#
# ### Mutation operators
# * Random Mutation: Randomly shuffle all the items in the list. This is never used in a search but is useful as a null-hypothesis to prove that the other operators do improve the search.
# > e.g. 123456789 ==> **967248513**
# * Position Mutation: Pick two random items in the list and move them next to each other while maintaining the order of these two items
# > e.g. 123**x**56**y**89 ==> 123**xy**5689 or 12356**xy**89
# * Order Mutation: Switch the order of two random items while maintaining their position.
# > e.g. 123**x**56**y**89 ==> 123**y**56**x**89
# * Scramble Mutation: Take a random section of the chromosome and scramble its order.
# > e.g. 123**4567**89 ==> 123**6475**89
# * Reverse Mutation: Similar to scramble mutation except that it reverses a section.
# > e.g. 123**4567**89 ==> 123**7654**89
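# As an illustration, the two section-based mutators above can be sketched in a few lines (these helpers are illustrative, not genoa's actual implementation):

```python
import random

def reverse_mutation(chromosome, rng=random):
    """Reverse a random section, preserving most of the edge information."""
    c = list(chromosome)
    i, j = sorted(rng.sample(range(len(c)), 2))
    c[i:j + 1] = reversed(c[i:j + 1])
    return c

def scramble_mutation(chromosome, rng=random):
    """Shuffle a random section of the chromosome."""
    c = list(chromosome)
    i, j = sorted(rng.sample(range(len(c)), 2))
    section = c[i:j + 1]
    rng.shuffle(section)
    c[i:j + 1] = section
    return c

print(''.join(reverse_mutation(list('123456789'))))  # e.g. 123765489
```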
#
# ### Crossover operators
# * Position Crossover: Take two parents and crossover genes while maintaining the position in each parent's chromosome.
# > e.g. Given, p1 = beagfdc & p2 = agdbfec, let m = 1000110 be a mask to represent which items are crossed over. **'afe'** from p2 (corresponding to the mask) is inserted in p1 in the same position and all other items in p1 are shifted down in their original order. So child1 = abgdfec and similarly child2 = bagefdc.
#
# * Order Crossover: Similar to Position Crossover except that the order is maintained in each parent.
# > e.g. Given, p1 = beagfdc & p2 = agdbfec, let m = 1000110 be a mask to represent which items are crossed over. **'afe'** from p2 (corresponding to the mask) replace **'eaf'** p1 while all other items in p1 keep their original order ==> child1 = bafgedc and similarly child2 = agbfdec.
# * Edge Recombination Crossover: Assumes that the edges between two items (i.e. a pair of vertices) hold the key information and tries to maintain that relationship in the propagated genes. See [Wikipedia](https://en.wikipedia.org/wiki/Edge_recombination_operator) for details.
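# The mask-based Order Crossover described above can be sketched as follows (illustrative, not genoa's actual code); it reproduces the worked example with p1 = beagfdc, p2 = agdbfec and mask 1000110:

```python
def order_crossover(p1, p2, mask):
    """Order crossover: the items of p2 at mask positions are re-inserted
    into p1 in p2's order; all other items keep p1's original order."""
    chosen = [g for g, m in zip(p2, mask) if m]  # e.g. 'afe' for the example mask
    chosen_set = set(chosen)
    it = iter(chosen)
    return [next(it) if g in chosen_set else g for g in p1]

mask = [1, 0, 0, 0, 1, 1, 0]
print(''.join(order_crossover('beagfdc', 'agdbfec', mask)))  # bafgedc (child1)
print(''.join(order_crossover('agdbfec', 'beagfdc', mask)))  # agbfdec (child2)
```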
#
#
# In _genoa_, for each operator, it is possible to fine tune the probability of it being chosen for reproduction. This can be varied over the life of the simulation and it is also possible to turn off any given operator completely.
#
# Here are some graphs of how these operators perform in isolation and together...
#
#
progress = pd.read_csv('./results/wi29_progress.csv')
# +
plt.figure(figsize=(18, 6))
ax = progress.plot(ax=plt.subplot(131), x='Generation', y=[1,2,3,4,5], title='Mutation operators')
ax.set_ylabel('Tour Length')
ax = progress.plot(ax=plt.subplot(132), x='Generation', y=[6,7,8], title='Crossover operators' )
ax = progress.plot(ax=plt.subplot(133), x='Generation', y=[9, 10, 11], title='Combination of operators')
# -
#
# ## Analysis of operator performance
# ---
# All the above experiments were run for 500 generations with a population size of 300 and a 25% replacement rate per generation, i.e. a total of **37,800** solutions (including duplicates) were evaluated out of a possible **3.0e+29** different tours.
#
# Each operator was run in isolation (the first two graphs) followed by three combinations runs (all mutators, all crossovers and all operators). The same random seed was used, which guaranteed the same initial population as a starting point. Ideally, a Monte Carlo simulation should be run to get an expected value...
#
# The optimal solution for this dataset is 27,603 and was found by two of the runs - the All Mutators (by the 240th generation) and the All Operators (by the 360th generation).
#
# Some observations, questions and points for further analysis:
#
# * All operators beat the Random Mutator... by some distance. Which proves that natural selection works?
# * The Position and Order based mutators performed similarly, as did the Position and Order based Crossovers. Is this an artefact of the TSP or this dataset?
# * The crossover operators converge much faster than the mutators. Is there much **diversity** left in the population after ~100 generations in these runs?
# * The Reverse and Scramble Mutators both operate on a random section of the chromosome - but the Reverse Mutator maintains most of the edge information (between each vertex pair), and outperforms the Scramble Mutator!
# * The **edge information is important** - as also proved by the performance of the Edge Recombination Crossover
#
# In fact a run using just the two operators - Reverse Mutation & Edge Recombination Operator - performed as well as the All Operators run.
#
# **Which operators were most successful in improving the solution?** In a run where all operators had the same probability of being chosen, the number of times each operator produced an improved solution is shown below (the labels are abbreviated, but should be obvious)...
opdata = pd.read_csv('./results/wi29_opdist.csv')
# +
c = Counter(opdata.operator)
opdist = sorted(c.items(), key=lambda x: x[1])
labels = [x[0] for x in opdist]
vals = [x[1] for x in opdist]
plt.figure(figsize=(12, 6))
plt.barh(range(len(vals)), vals, tick_label=labels)
plt.title('Western Sahara - improved solutions produced by operator')
plt.show()
# -
# ## Solution to the Western Sahara dataset
# ---
#
# So what does the optimal tour look like?
optimal_tour = [28,22,21,20,16,17,18,14,11,10,9,5,1,0,4,7,3,2,6,8,12,13,15,23,26,24,19,25,27]
tour_length(wi29, optimal_tour)
plt.figure(figsize=(6,6)); plt.title('Optimal Tour'); plot_tour(wi29, optimal_tour)
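`tour_length` is one of the project's helpers defined earlier; a minimal sketch of what it computes (assuming Euclidean distances and a closed tour) would be:

```python
import math

def tour_length_sketch(points, tour):
    """Total Euclidean length of the closed tour, returning to the start."""
    total = 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):
        (x1, y1), (x2, y2) = points[a], points[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# unit square: the perimeter tour has length 4
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tour_length_sketch(square, [0, 1, 2, 3]))  # -> 4.0
```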
# ## The Qatar dataset
# ---
# The Qatari dataset has 194 data points and **genoa** didn't find the optimal tour for this dataset. Having run the algorithm for 9,000 generations with a population size of 500 and a replacement rate of 25%, a total of 1.13 million solutions would have been evaluated (out of a possible 6.9e+358). How well did it do?
#
# The best solution had a length of 9,962 which is within ~7% of the [optimal tour of length 9,352](http://www.math.uwaterloo.ca/tsp/world/qatour.html). Let's take a look at the results...
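The quoted gap to the optimum can be confirmed directly:

```python
best, optimal = 9962, 9352
gap = (best - optimal) / optimal
print(f"{gap:.1%}")  # -> 6.5%
```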
#
# +
qa194 = load_tsp_data('qa194.tsp')
qa_progress = pd.read_csv('./results/qa194_p1.csv')
# the best tour found
qa_best = """193 185 186 189 191 190 188 183 176 180 187 192 184 179 177 167 164 158 151 140 146
150 154 157 161 166 169 170 165 159 147 142 132 128 134 135 130 121 118 113 112
108 101 102 117 120 127 123 122 119 116 115 114 111 109 99 107 106 104 105 95
94 96 91 87 92 90 77 74 71 73 68 59 32 27 21 28 44 56 63 69
76 78 80 82 83 67 65 72 66 60 57 55 42 40 37 39 33 38 46 50
36 26 30 34 43 45 47 52 51 53 54 48 49 41 31 29 18 14 11 9
8 4 17 20 23 25 16 13 10 6 2 1 3 0 5 7 15 12 22 24
70 75 86 79 81 61 58 35 62 19 64 84 85 97 89 88 93 98 100 103
110 129 126 124 125 131 133 136 139 144 155 160 162 163 148 145 141 137 138 143
149 152 153 156 174 172 173 182 178 171 168 175 181"""
# Length of the best tour
qa_tour1 = list(int(i) for i in qa_best.split())
tour_length(qa194, qa_tour1)
# +
# Plot qa194 points
plt.figure(1, figsize=(12, 6))
plt.subplot(121); plt.title('Qatar - cities'); plot_lines(qa194, 'bo')
# genoa progress
ax = plt.subplot(122); ax.set_ylabel('Tour Length')
ax = qa_progress.plot(ax=ax, x='Generation', y=[1, 2], title='GA Progress')
# +
# the best tour
plt.figure(1, figsize=(12, 12))
plot_tour(qa194, qa_tour1)
plt.title('Qatar - best tour found')
plt.show()
# -
# ## The Uruguay dataset
# ---
# The Uruguay dataset has 734 data points; after six hours of number crunching, **genoa** had completed 7,857 generations when the experiment was aborted. (It could do with some optimisation, but that is left for later.)
#
# With a population size of 500 and a replacement rate of 25%, a total of ~980k solutions would have been evaluated (out of a possible 4.0e+1783). The best solution had a length of 162,349 which is more than double the [optimal tour length of 79,114](http://www.math.uwaterloo.ca/tsp/world/uytour.html) - clearly not acceptable.
#
# Looking at it from another angle, the shortest tour in the initial generation had a length of 1.56 million and the GA reduced that by 90% to 162k!
#
# Here are the results...
#
# +
uy734 = load_tsp_data('uy734.tsp')
uy_progress = pd.read_csv('./results/uy734_p1.csv')
# the best tour
uy_best = """733 723 682 693 675 683 674 568 419 428 427 444 424 408 375 307 319 281 302 278 248
203 220 277 284 301 296 292 317 308 360 387 384 392 395 396 402 414 467 460 426
381 366 390 382 372 379 376 404 528 529 624 637 650 684 706 680 698 689 685 668
656 672 676 643 645 623 598 591 649 663 670 729 727 726 732 725 648 688 679 655
652 596 582 661 673 681 697 708 704 720 712 709 719 694 696 695 703 707 714 724
715 710 705 701 700 718 730 642 630 604 627 626 653 665 644 677 671 699 702 721
713 716 731 728 717 687 659 537 551 533 434 377 399 368 369 370 330 291 264 233
271 279 329 335 345 328 385 413 436 429 416 405 432 456 504 619 618 606 576 585
587 641 636 561 577 599 556 574 611 620 536 507 492 479 461 489 499 522 535 525
538 512 508 517 518 521 526 543 558 557 516 511 503 493 534 571 612 616 610 600
559 578 583 588 579 629 631 632 621 617 622 613 575 584 601 566 589 423 445 446
415 409 407 417 438 447 435 457 474 609 603 570 580 664 658 662 686 690 654 657
628 509 524 520 505 483 449 431 433 422 412 359 338 276 283 243 242 238 250 141
150 131 133 219 230 223 185 290 309 320 315 275 272 259 231 218 201 182 198 240
303 362 358 350 341 340 343 346 388 332 306 406 451 463 357 318 209 249 257 289
286 285 254 216 144 128 140 137 173 152 145 148 153 166 177 175 263 188 202 225
193 222 224 237 217 210 172 151 197 200 176 147 143 184 205 258 310 298 287 282
274 253 227 221 213 196 192 187 171 157 135 121 98 95 129 154 165 174 136 112
103 118 111 107 87 60 32 31 34 47 56 48 91 235 251 300 339 389 410 400
386 383 374 371 394 452 475 442 425 373 352 351 311 293 244 207 260 119 120 99
42 38 39 13 12 11 14 94 65 71 52 55 123 122 161 194 190 195 239 321
337 288 280 245 255 246 228 181 208 256 247 241 270 313 325 324 347 316 312 236
252 214 212 262 269 261 180 191 186 183 132 84 83 76 15 17 10 5 2 0
1 4 3 7 6 9 8 16 18 20 43 49 24 21 26 46 85 90 114 78
92 117 109 108 57 164 179 168 170 126 169 211 189 104 73 74 53 62 66 67
79 80 35 45 44 75 72 40 27 25 23 22 59 54 30 28 19 33 50 37
41 70 97 156 167 162 160 142 124 106 58 51 36 86 93 105 77 96 116 113
110 149 138 134 159 146 125 82 68 139 158 199 206 229 232 234 178 226 266 267
204 155 130 89 88 64 69 61 29 63 81 102 100 101 115 127 163 215 349 398
393 397 365 361 348 326 314 353 355 403 455 443 465 481 530 466 494 439 515 539
552 545 544 607 592 548 593 527 595 667 660 692 678 647 635 634 638 669 646 625
597 586 562 573 560 549 608 540 572 605 614 615 639 569 554 532 496 502 458 450
437 441 421 506 485 477 510 487 488 491 478 497 547 563 555 541 513 498 484 464
469 418 453 401 411 459 542 550 590 519 514 500 480 472 482 471 454 448 462 420
378 356 336 295 294 268 265 305 342 364 380 391 367 363 331 304 344 334 273 299
327 323 297 322 333 354 430 440 473 468 470 495 501 486 476 490 581 640 602 567
553 546 531 523 564 594 565 633 666 651 691 711 722"""
# Length of the best tour found
uy_tour1 = list(int(i) for i in uy_best.split())
tour_length(uy734, uy_tour1)
# +
# Plot points
plt.figure(1, figsize=(14, 6))
plt.subplot(121); plt.title('Uruguay - cities'); plot_lines(uy734, 'bo')
# genoa progress
ax = plt.subplot(122); ax.set_ylabel('Tour Length')
ax = uy_progress.plot(ax=ax, x='Generation', y=[1, 2], title='GA Progress')
plt.show()
# +
# the best tour
plt.figure(1, figsize=(12, 12))
plot_tour(uy734, uy_tour1)
plt.title('Uruguay - best tour found')
plt.show()
# -
# ## Next steps...
# ---
# For the Qatari dataset the algorithm made decent progress, but it was way off the optimal solution for the Uruguayan one. Can we try and improve this?
#
# In **genoa** it is possible to start with a seeded initial population, instead of a totally random one. It should be relatively easy to create a **hybrid** methodology to seed the population with some sub-optimal tours - but that is for another time (and another notebook!)
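One common way to generate such sub-optimal seed tours is the nearest-neighbour heuristic - a sketch only, since genoa's actual seeding interface may differ:

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy seed tour: repeatedly visit the closest unvisited city."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (5, 5), (1, 0), (6, 5)]
print(nearest_neighbour_tour(cities))  # -> [0, 2, 1, 3]
```

Tours like these are typically much shorter than random permutations, so they give the GA a head start without fixing it in a local optimum.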
#
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="05rycybwtrgI" executionInfo={"status": "ok", "timestamp": 1638283291233, "user_tz": -330, "elapsed": 537, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}}
import numpy as np
import pandas as pd
from datetime import datetime, timezone, timedelta
import os
import pickle
import time
import math
# + colab={"base_uri": "https://localhost:8080/"} id="hHGT6El6SsEt" executionInfo={"status": "ok", "timestamp": 1638283275052, "user_tz": -330, "elapsed": 656, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="e6a17915-e3a2-4b5e-c54f-157795fb6d9a"
# !wget -q --show-progress https://github.com/RecoHut-Projects/US969796/raw/main/datasets/sample_train-item-views.csv
# + colab={"base_uri": "https://localhost:8080/"} id="GIFIDP9GcmJK" executionInfo={"status": "ok", "timestamp": 1638285868468, "user_tz": -330, "elapsed": 472, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="6dae6a5c-c4c6-44b9-c5df-94d86243889e"
# !head sample_train-item-views.csv
# + [markdown] id="B7ZtDP_ObNIM"
# ## Preprocessing
# + id="TfaZ1Uw7twYw" executionInfo={"status": "ok", "timestamp": 1638283293383, "user_tz": -330, "elapsed": 3, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}}
raw_path = 'sample_train-item-views'
save_path = 'processed'
# + [markdown] id="ZKNqm1XWbPZ1"
# ### Unaugmented
# + id="NSVO6oQHt_FD" executionInfo={"status": "ok", "timestamp": 1638285935298, "user_tz": -330, "elapsed": 446, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}}
def load_data(file):
print("Start load_data")
# load csv
data = pd.read_csv(file+'.csv', sep=';', header=0, usecols=[0, 2, 4], dtype={0: np.int32, 1: np.int64, 3: str})
# specify header names
data.columns = ['SessionId', 'ItemId', 'Eventdate']
# convert time string to timestamp and remove the original column
data['Time'] = data.Eventdate.apply(lambda x: datetime.strptime(x, '%Y-%m-%d').timestamp())
print(data['Time'].min())
print(data['Time'].max())
del(data['Eventdate'])
# output
data_start = datetime.fromtimestamp(data.Time.min(), timezone.utc)
data_end = datetime.fromtimestamp(data.Time.max(), timezone.utc)
print('Loaded data set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}\n\tSpan: {} / {}\n\n'.
format(len(data), data.SessionId.nunique(), data.ItemId.nunique(),
data_start.date().isoformat(), data_end.date().isoformat()))
return data
def filter_data(data, min_item_support=5, min_session_length=2):
print("Start filter_data")
# pre-filter: drop single-event sessions before computing item support
session_lengths = data.groupby('SessionId').size()
data = data[np.in1d(data.SessionId, session_lengths[session_lengths > 1].index)]
# filter item support
item_supports = data.groupby('ItemId').size()
data = data[np.in1d(data.ItemId, item_supports[item_supports >= min_item_support].index)]
# filter session length
session_lengths = data.groupby('SessionId').size()
data = data[np.in1d(data.SessionId, session_lengths[session_lengths >= min_session_length].index)]
print(data['Time'].min())
print(data['Time'].max())
# output
data_start = datetime.fromtimestamp(data.Time.astype(np.int64).min(), timezone.utc)
data_end = datetime.fromtimestamp(data.Time.astype(np.int64).max(), timezone.utc)
print('Filtered data set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}\n\tSpan: {} / {}\n\n'.
format(len(data), data.SessionId.nunique(), data.ItemId.nunique(),
data_start.date().isoformat(), data_end.date().isoformat()))
return data
def split_train_test(data):
print("Start split_train_test")
tmax = data.Time.max()
session_max_times = data.groupby('SessionId').Time.max()
session_train = session_max_times[session_max_times < tmax-7*86400].index
session_test = session_max_times[session_max_times >= tmax-7*86400].index
train = data[np.in1d(data.SessionId, session_train)]
test = data[np.in1d(data.SessionId, session_test)]
test = test[np.in1d(test.ItemId, train.ItemId)]
tslength = test.groupby('SessionId').size()
test = test[np.in1d(test.SessionId, tslength[tslength >= 2].index)]
print('Full train set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}'.format(len(train), train.SessionId.nunique(), train.ItemId.nunique()))
print('Test set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}'.format(len(test), test.SessionId.nunique(), test.ItemId.nunique()))
return train, test
def get_dict(data):
print("Start get_dict")
item2idx = {}
pop_scores = data.groupby('ItemId').size().sort_values(ascending=False)
pop_scores = pop_scores / pop_scores[:1].values[0]
items = pop_scores.index
for idx, item in enumerate(items):
item2idx[item] = idx+1
return item2idx
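`get_dict` simply ranks items by popularity and assigns 1-based indices; restated standalone (a sketch of the same logic, with an illustrative name):

```python
import pandas as pd

def popularity_index(data):
    """Map each ItemId to a 1-based rank by descending click frequency."""
    pop = data.groupby('ItemId').size().sort_values(ascending=False)
    return {item: idx + 1 for idx, item in enumerate(pop.index)}

demo = pd.DataFrame({'ItemId': ['a', 'a', 'a', 'b', 'b', 'c']})
print(popularity_index(demo))  # -> {'a': 1, 'b': 2, 'c': 3}
```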
def process_seqs(seqs, shift):
start = time.time()
labs = []
index = shift
for count, seq in enumerate(seqs):
index += (len(seq) - 1)
labs += [index]
end = time.time()
print("\rprocess_seqs: [%d/%d], %.2f, usetime: %fs, " % (count, len(seqs), count/len(seqs) * 100, end - start),
end='', flush=True)
print("\n")
return seqs, labs
def get_sequence(data, item2idx, shift=-1):
start = time.time()
sess_ids = data.drop_duplicates('SessionId', 'first')
print(sess_ids)
sess_ids.sort_values(['Time'], inplace=True)
sess_ids = sess_ids['SessionId'].unique()
seqs = []
for count, sess_id in enumerate(sess_ids):
seq = data[data['SessionId'].isin([sess_id])]
# seq = data[data['SessionId'].isin([sess_id])].sort_values(['Timeframe'])
seq = seq['ItemId'].values
outseq = []
for i in seq:
if i in item2idx:
outseq += [item2idx[i]]
seqs += [outseq]
end = time.time()
print("\rGet_sequence: [%d/%d], %.2f , usetime: %fs" % (count, len(sess_ids), count/len(sess_ids) * 100, end - start),
end='', flush=True)
print("\n")
# print(seqs)
out_seqs, labs = process_seqs(seqs, shift)
# print(out_seqs)
# print(labs)
print(len(out_seqs), len(labs))
return out_seqs, labs
def preprocess(train, test, path=save_path):
print("--------------")
print("Start preprocess cikm16")
# print("Start preprocess sample")
item2idx = get_dict(train)
train_seqs, train_labs = get_sequence(train, item2idx)
test_seqs, test_labs = get_sequence(test, item2idx, train_labs[-1])
train = (train_seqs, train_labs)
test = (test_seqs, test_labs)
if not os.path.exists(path):
os.makedirs(path)
pickle.dump(test, open(path+'/unaug_test.txt', 'wb'))
pickle.dump(train, open(path+'/unaug_train.txt', 'wb'))
print("finished")
# + id="tl9tQIl0uhF_" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1638285941796, "user_tz": -330, "elapsed": 3657, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="fdce7982-7d5a-4753-cbec-70d74eb21400"
data = load_data(raw_path)
data = filter_data(data)
train, test = split_train_test(data)
preprocess(train, test)
# + [markdown] id="UvhADxQ2bdUj"
# ### Augmented
# + id="GLhfWYTvb4yB" executionInfo={"status": "ok", "timestamp": 1638286015021, "user_tz": -330, "elapsed": 641, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}}
def load_data(file):
print("Start load_data")
# load csv
data = pd.read_csv(file+'.csv', sep=';', header=0, usecols=[0, 2, 3, 4], dtype={0: np.int32, 1: np.int64, 2: str, 3: str})
# specify header names
data.columns = ['SessionId', 'ItemId', 'Timeframe', 'Eventdate']
# convert time string to timestamp and remove the original column
data['Time'] = data.Eventdate.apply(lambda x: datetime.strptime(x, '%Y-%m-%d').timestamp())
print(data['Time'].max())
del(data['Eventdate'])
# output
data_start = datetime.fromtimestamp(data.Time.min(), timezone.utc)
data_end = datetime.fromtimestamp(data.Time.max(), timezone.utc)
print('Loaded data set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}\n\tSpan: {} / {}\n\n'.
format(len(data), data.SessionId.nunique(), data.ItemId.nunique(),
data_start.date().isoformat(), data_end.date().isoformat()))
return data
def filter_data(data, min_item_support=5, min_session_length=2):
print("Start filter_data")
# pre-filter: drop single-event sessions before computing item support
session_lengths = data.groupby('SessionId').size()
data = data[np.in1d(data.SessionId, session_lengths[session_lengths > 1].index)]
# filter item support
item_supports = data.groupby('ItemId').size()
data = data[np.in1d(data.ItemId, item_supports[item_supports >= min_item_support].index)]
# filter session length
session_lengths = data.groupby('SessionId').size()
data = data[np.in1d(data.SessionId, session_lengths[session_lengths >= min_session_length].index)]
print(data['Time'].min())
print(data['Time'].max())
# output
data_start = datetime.fromtimestamp(data.Time.astype(np.int64).min(), timezone.utc)
data_end = datetime.fromtimestamp(data.Time.astype(np.int64).max(), timezone.utc)
print('Filtered data set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}\n\tSpan: {} / {}\n\n'.
format(len(data), data.SessionId.nunique(), data.ItemId.nunique(),
data_start.date().isoformat(), data_end.date().isoformat()))
return data
def split_train_test(data):
print("Start split_train_test")
tmax = data.Time.max()
session_max_times = data.groupby('SessionId').Time.max()
session_train = session_max_times[session_max_times < tmax-7*86400].index
session_test = session_max_times[session_max_times >= tmax-7*86400].index
train = data[np.in1d(data.SessionId, session_train)]
test = data[np.in1d(data.SessionId, session_test)]
test = test[np.in1d(test.ItemId, train.ItemId)]
tslength = test.groupby('SessionId').size()
test = test[np.in1d(test.SessionId, tslength[tslength >= 2].index)]
print('Full train set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}'.format(len(train), train.SessionId.nunique(), train.ItemId.nunique()))
print('Test set\n\tEvents: {}\n\tSessions: {}\n\tItems: {}'.format(len(test), test.SessionId.nunique(), test.ItemId.nunique()))
return train, test
def get_dict(data):
print("Start get_dict")
item2idx = {}
pop_scores = data.groupby('ItemId').size().sort_values(ascending=False)
pop_scores = pop_scores / pop_scores[:1].values[0]
items = pop_scores.index
for idx, item in enumerate(items):
item2idx[item] = idx+1
return item2idx
def process_seqs(seqs):
start = time.time()
out_seqs = []
labs = []
for count, seq in enumerate(seqs):
for i in range(1, len(seq)):
tar = seq[i]
labs += [tar]
out_seqs += [seq[:i]]
end = time.time()
print("\rprocess_seqs: [%d/%d], %.2f, usetime: %fs, " % (count, len(seqs), count/len(seqs) * 100, end - start),
end='', flush=True)
print("\n")
return out_seqs, labs
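Unlike the unaugmented variant, this `process_seqs` expands every session into all of its prefixes, each labelled with the next item. Stripped of the progress printing, the transformation is:

```python
def expand_prefixes(seqs):
    """Each session [1, 2, 3] yields prefixes [1], [1, 2]
    labelled with the items that follow them: 2, 3."""
    out_seqs, labs = [], []
    for seq in seqs:
        for i in range(1, len(seq)):
            out_seqs.append(seq[:i])
            labs.append(seq[i])
    return out_seqs, labs

print(expand_prefixes([[1, 2, 3], [4, 5]]))
# -> ([[1], [1, 2], [4]], [2, 3, 5])
```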
def get_sequence(data, item2idx):
start = time.time()
sess_ids = data.drop_duplicates('SessionId', 'first')
print(sess_ids)
sess_ids.sort_values(['Time'], inplace=True)
sess_ids = sess_ids['SessionId'].unique()
seqs = []
for count, sess_id in enumerate(sess_ids):
seq = data[data['SessionId'].isin([sess_id])].sort_values(['Timeframe'])
seq = seq['ItemId'].values
outseq = []
for i in seq:
if i in item2idx:
outseq += [item2idx[i]]
seqs += [outseq]
end = time.time()
print("\rGet_sequence: [%d/%d], %.2f , usetime: %fs" % (count, len(sess_ids), count/len(sess_ids) * 100, end - start),
end='', flush=True)
print("\n")
out_seqs, labs = process_seqs(seqs)
print(len(out_seqs), len(labs))
return out_seqs, labs
def preprocess(train, test, path=save_path):
print("--------------")
print("Start preprocess cikm16")
item2idx = get_dict(train)
train_seqs, train_labs = get_sequence(train, item2idx)
test_seqs, test_labs = get_sequence(test, item2idx)
train = (train_seqs, train_labs)
test = (test_seqs, test_labs)
if not os.path.exists(path):
os.makedirs(path)
print("Start Save data")
pickle.dump(test, open(path+'/test.txt', 'wb'))
pickle.dump(train, open(path+'/train.txt', 'wb'))
print("finished")
# + id="9XXa7eK-br--" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1638286021175, "user_tz": -330, "elapsed": 4096, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="eff6254c-cbbd-487e-c44b-0eab4668c2bb"
data = load_data(raw_path)
data = filter_data(data)
train, test = split_train_test(data)
preprocess(train, test)
# + [markdown] id="zB8v252abXBo"
# ## Neighborhood Retrieval
# + id="CiE8LV3MaoHS" executionInfo={"status": "ok", "timestamp": 1638286104087, "user_tz": -330, "elapsed": 417, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}}
class KNN:
def __init__(self, k, all_sess, unaug_data, unaug_index, threshold=0.5, samples=10):
self.k = k
self.all_sess = all_sess
self.threshold = threshold
self.samples = samples
self.item_sess_map = self.get_item_sess_map(unaug_index, unaug_data)
self.no_pro_data = unaug_data
self.no_pro_index = unaug_index
def get_item_sess_map(self, unaug_index, unaug_data):
item_sess_map = {}
for index, sess in zip(unaug_index, unaug_data):
items = np.unique(sess[:-1])
for item in items:
if item not in item_sess_map.keys():
item_sess_map[item] = []
item_sess_map[item].append(index)
print("get_item_sess_map over")
return item_sess_map
def jaccard(self, first, second):
intersection = len(set(first).intersection(set(second)))
union = len(set(first).union(set(second)))
res = intersection / union
return res
def cosine(self, first, second):
li = len(set(first).intersection(set(second)))
la = len(first)
lb = len(second)
result = li / (math.sqrt(la) * math.sqrt(lb))
return result
def vec(self, first, second, pos_map):
a = set(first).intersection(set(second))
sum = 0
for i in a:
sum += pos_map[i]
result = sum / len(pos_map)
return result
def find_sess(self, sess, item_sess_map):
items = np.unique(sess)
sess_index = []
for item in items:
sess_index += item_sess_map[item]
return sess_index
def calc_similarity(self, target_session, all_data, sess_index):
neighbors = []
session_items = np.unique(target_session)
possible_sess_index = self.find_sess(session_items, self.item_sess_map)
possible_sess_index = [p_index for p_index in possible_sess_index if p_index < sess_index]
# np.unique returns sorted indices; keep the most recent `samples` candidates
        possible_sess_index = sorted(np.unique(possible_sess_index))[-self.samples:]
pos_map = {}
length = len(target_session)
count = 1
for item in target_session:
pos_map[item] = count / length
count += 1
for index in possible_sess_index:
session = all_data[index]
session_items_test = np.unique(session)
similarity = np.around(self.cosine(session_items_test, session_items), 4)
if similarity >= self.threshold:
neighbors.append([index, similarity])
return neighbors
def get_neigh_sess(self, index):
all_sess_neigh = []
start = time.time()
all_sess = self.all_sess[index:]
for sess in all_sess:
possible_neighbors = self.calc_similarity(sess, self.all_sess, index)
possible_neighbors = sorted(possible_neighbors, reverse=True, key=lambda x: x[1])
if len(possible_neighbors) > 0:
possible_neighbors = list(np.asarray(possible_neighbors)[:, 0])
if len(possible_neighbors) > self.k:
all_sess_neigh.append(possible_neighbors[:self.k])
elif len(possible_neighbors) > 0:
all_sess_neigh.append(possible_neighbors)
else:
all_sess_neigh.append(0)
index += 1
end = time.time()
if index % (len(self.all_sess) // 100) == 0:
print("\rProcess_seqs: [%d/%d], %.2f, usetime: %fs, " % (index, len(self.all_sess), index/len(self.all_sess) * 100, end - start),
end='', flush=True)
return all_sess_neigh
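The two set-based similarity measures used by `KNN` can be checked on a small example; note that `cosine` here is the set overlap |A∩B| / √(|A|·|B|) over unique items, not a vector cosine:

```python
import math

def jaccard(first, second):
    a, b = set(first), set(second)
    return len(a & b) / len(a | b)

def set_cosine(first, second):
    inter = len(set(first) & set(second))
    return inter / (math.sqrt(len(first)) * math.sqrt(len(second)))

a, b = [1, 2, 3], [2, 3, 4]
print(jaccard(a, b))     # -> 0.5   (2 shared items out of 4 total)
print(set_cosine(a, b))  # -> 0.666... (2 / sqrt(3 * 3))
```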
# + id="QpwTWoZsdR0z"
org_test_data = pickle.load(open(save_path + '/test.txt', 'rb'))
org_train_data = pickle.load(open(save_path + '/train.txt', 'rb'))
unaug_test_data = pickle.load(open(save_path + '/unaug_test.txt', 'rb'))
unaug_train_data = pickle.load(open(save_path + '/unaug_train.txt', 'rb'))
test_data = org_test_data[0]
train_data = org_train_data[0]
all_data = np.concatenate((train_data, test_data), axis=0)
unaug_data = np.concatenate((unaug_train_data[0], unaug_test_data[0]), axis=0)
unaug_index = np.concatenate((unaug_train_data[1], unaug_test_data[1]), axis=0)
del org_test_data, org_train_data
del test_data, train_data
del unaug_train_data, unaug_test_data
k_num = [20,40,60,100,140, 160, 180, 200]
for k in k_num:
knn = KNN(k, all_data, unaug_data, unaug_index)
all_sess_neigh = knn.get_neigh_sess(0)
pickle.dump(all_sess_neigh, open(save_path+"/neigh_data_"+str(k)+".txt", "wb"))
lens = 0
for i in all_sess_neigh:
if i != 0:
lens += len(i)
print(lens / len(all_sess_neigh))
# + id="p7pFAf1Qhh_Y"
def print_txt(base_path, args, results, epochs, top_k, note=None, save_config=True):
# use os.path.join instead of a hard-coded Windows backslash separator
    path = os.path.join(base_path, "Best_result_top-" + str(top_k) + ".txt")
outfile = open(path, 'w')
if note is not None:
outfile.write("Note:\n"+note+"\n")
if save_config:
outfile.write("Configs:\n")
for attr, value in sorted(args.__dict__.items()):
outfile.write("{} = {}\n".format(attr, value))
outfile.write('\nBest results:\n')
outfile.write("Mrr@{}:\t{}\tEpoch: {}\n".format(top_k, results[1], epochs[1]))
outfile.write("Recall@{}:\t{}\tEpoch: {}\n".format(top_k, results[0], epochs[0]))
outfile.close()
# + [markdown] id="G6TKOLdRhl7_"
# ## Model
# + id="ToOgW5Crhl1a"
import torch
from torch.nn import Module, Parameter
import torch.nn.functional as F
from torch_geometric.nn.conv import MessagePassing
from torch_geometric.utils import remove_self_loops, add_self_loops, softmax
from torch_geometric.data import InMemoryDataset, Data, Dataset
from torch import Tensor
from torch.nn import Parameter as Param
from torch_geometric.nn.inits import uniform
import torch.nn as nn
from torch_geometric.nn import GATConv, SGConv, GCNConv, GatedGraphConv
import math
import collections
# + id="TaKUuC68jRIj"
class MultiSessionsGraph(InMemoryDataset):
"""Every session is a graph."""
def __init__(self, root, phrase, knn_phrase, transform=None, pre_transform=None):
"""
Args:
root: 'sample', 'yoochoose1_4', 'yoochoose1_64' or 'diginetica'
phrase: 'train' or 'test'
"""
assert phrase in ['train', 'test']
self.phrase = phrase
self.knn_phrase = knn_phrase
super(MultiSessionsGraph, self).__init__(root, transform, pre_transform)
self.data, self.slices = torch.load(self.processed_paths[0])
@property
def raw_file_names(self):
return [self.phrase + '.txt']
@property
def processed_file_names(self):
return [self.phrase + '.pt']
def download(self):
pass
def find_neighs(self, index, knn_data):
sess_neighs = knn_data[index]
if sess_neighs == 0:
return []
else:
return list(np.asarray(sess_neighs).astype(np.int32))
def multi_process(self, train_data, knn_data, sess_index, y):
# find neigh
neigh_index = self.find_neighs(sess_index, knn_data)
# neigh_index = []
neigh_index.append(sess_index)
temp_neighs = train_data[neigh_index]
neighs = []
# append y
for neigh, idx in zip(temp_neighs, neigh_index):
if idx != sess_index:
neigh.append(y[idx])
neighs.append(neigh)
nodes = {} # dict{15: 0, 16: 1, 18: 2, ...}
all_senders = []
all_receivers = []
x = []
i = 0
for sess in neighs:
senders = []
for node in sess:
if node not in nodes:
nodes[node] = i
x.append([node])
i += 1
senders.append(nodes[node])
receivers = senders[:]
if len(senders) != 1:
del senders[-1] # the last item is a receiver
del receivers[0] # the first item is a sender
all_senders += senders
all_receivers += receivers
sess = train_data[sess_index]
sess_item_index = [nodes[item] for item in sess]
# num_count = [count[i[0]] for i in x]
sess_masks = np.zeros(len(nodes))
sess_masks[sess_item_index] = 1
pair = {}
sur_senders = all_senders[:]
sur_receivers = all_receivers[:]
i = 0
for sender, receiver in zip(sur_senders, sur_receivers):
if str(sender) + '-' + str(receiver) in pair:
pair[str(sender) + '-' + str(receiver)] += 1
del all_senders[i]
del all_receivers[i]
else:
pair[str(sender) + '-' + str(receiver)] = 1
i += 1
node_num = len(x)
# num_count = torch.tensor(num_count, dtype=torch.float)
edge_index = torch.tensor([all_senders, all_receivers], dtype=torch.long)
x = torch.tensor(x, dtype=torch.long)
node_num = torch.tensor([node_num], dtype=torch.long)
sess_item_idx = torch.tensor(sess_item_index, dtype=torch.long)
sess_masks = torch.tensor(sess_masks, dtype=torch.long)
return x, edge_index, node_num, sess_item_idx, sess_masks
def single_process(self, sequence, y):
# sequence = [1, 2, 3, 2, 4]
count = collections.Counter(sequence)
i = 0
nodes = {} # dict{15: 0, 16: 1, 18: 2, ...}
senders = []
x = []
for node in sequence:
if node not in nodes:
nodes[node] = i
x.append([node])
i += 1
senders.append(nodes[node])
receivers = senders[:]
num_count = [count[i[0]] for i in x]
sess_item_index = [nodes[item] for item in sequence]
if len(senders) != 1:
del senders[-1] # the last item is a receiver
del receivers[0] # the first item is a sender
pair = {}
sur_senders = senders[:]
sur_receivers = receivers[:]
i = 0
for sender, receiver in zip(sur_senders, sur_receivers):
if str(sender) + '-' + str(receiver) in pair:
pair[str(sender) + '-' + str(receiver)] += 1
del senders[i]
del receivers[i]
else:
pair[str(sender) + '-' + str(receiver)] = 1
i += 1
count = collections.Counter(senders)
out_degree_inv = [1 / count[i] for i in senders]
count = collections.Counter(receivers)
in_degree_inv = [1 / count[i] for i in receivers]
in_degree_inv = torch.tensor(in_degree_inv, dtype=torch.float)
out_degree_inv = torch.tensor(out_degree_inv, dtype=torch.float)
edge_count = [pair[str(senders[i]) + '-' + str(receivers[i])] for i in range(len(senders))]
edge_count = torch.tensor(edge_count, dtype=torch.float)
# senders, receivers = senders + receivers, receivers + senders
edge_index = torch.tensor([senders, receivers], dtype=torch.long)
x = torch.tensor(x, dtype=torch.long)
y = torch.tensor([y], dtype=torch.long)
num_count = torch.tensor(num_count, dtype=torch.float)
sequence = torch.tensor(sequence, dtype=torch.long)
sequence_len = torch.tensor([len(sequence)], dtype=torch.long)
sess_item_idx = torch.tensor(sess_item_index, dtype=torch.long)
return x, y, num_count, edge_index, edge_count, sess_item_idx, sequence_len, in_degree_inv, out_degree_inv
def process(self):
start = time.time()
train_data = pickle.load(open(self.raw_dir + '/' + 'train.txt', 'rb'))
test_data = pickle.load(open(self.raw_dir + '/' + 'test.txt', 'rb'))
# knn_data = np.load(self.raw_dir + '/' + self.knn_phrase + '.npy')
knn_data = pickle.load(open(self.raw_dir + '/' + self.knn_phrase + '.txt', "rb"))
data_list = []
if self.phrase == "train":
sess_index = 0
data = train_data
total_data = np.asarray(train_data[0])
total_label = np.asarray(train_data[1])
else:
sess_index = len(train_data[0])
data = test_data
total_data = np.concatenate((train_data[0], test_data[0]), axis=0)
total_label = np.concatenate((train_data[1], test_data[1]), axis=0)
for sequence, y in zip(data[0], data[1]):
mt_x, mt_edge_index, mt_node_num, mt_sess_item_idx, sess_masks = \
self.multi_process(total_data, knn_data, sess_index, total_label)
x, y, num_count, edge_index, edge_count, sess_item_idx, sequence_len, in_degree_inv, out_degree_inv = \
self.single_process(sequence, y)
session_graph = Data(x=x, y=y, num_count=num_count, sess_item_idx=sess_item_idx,
edge_index=edge_index, edge_count=edge_count, sequence_len=sequence_len,
in_degree_inv=in_degree_inv, out_degree_inv=out_degree_inv,
mt_x=mt_x, mt_edge_index=mt_edge_index, mt_node_num=mt_node_num,
mt_sess_item_idx=mt_sess_item_idx, sess_masks=sess_masks)
data_list.append(session_graph)
sess_index += 1
end = time.time()
if sess_index % max(1, len(data[0]) // 1000) == 0:
print("\rProcess_seqs: [%d/%d], %.2f, usetime: %fs, " % (sess_index, len(data[0]), sess_index/len(data[0]) * 100, end - start),
end='', flush=True)
print('\nStart collate')
data, slices = self.collate(data_list)
print('\nStart save')
torch.save((data, slices), self.processed_paths[0])
# + id="W4yM19tPh6o7"
def uniform(size, tensor):
bound = 1.0 / math.sqrt(size)
if tensor is not None:
tensor.data.uniform_(-bound, bound)
def kaiming_uniform(tensor, fan, a):
if tensor is not None:
bound = math.sqrt(6 / ((1 + a**2) * fan))
tensor.data.uniform_(-bound, bound)
def glorot(tensor):
if tensor is not None:
stdv = math.sqrt(6.0 / (tensor.size(-2) + tensor.size(-1)))
tensor.data.uniform_(-stdv, stdv)
def zeros(tensor):
if tensor is not None:
tensor.data.fill_(0)
def ones(tensor):
if tensor is not None:
tensor.data.fill_(1)
def normal(tensor, mean, std):
if tensor is not None:
tensor.data.normal_(mean, std)
def reset(nn):
def _reset(item):
if hasattr(item, 'reset_parameters'):
item.reset_parameters()
if nn is not None:
if hasattr(nn, 'children') and len(list(nn.children())) > 0:
for item in nn.children():
_reset(item)
else:
_reset(nn)
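# The initializers above all draw from simple closed-form bounds. As a quick
# sanity sketch (pure Python; `glorot_bound` and `sample_glorot` are illustrative
# names, not part of the code above), the Glorot/Xavier bound used by `glorot()`
# is sqrt(6 / (fan_in + fan_out)):

```python
import math
import random

def glorot_bound(fan_in, fan_out):
    # Same stdv as glorot() above: sqrt(6 / (fan_in + fan_out)).
    return math.sqrt(6.0 / (fan_in + fan_out))

def sample_glorot(fan_in, fan_out, n, seed=0):
    # Draw n values uniformly from [-bound, bound],
    # as tensor.data.uniform_(-stdv, stdv) would.
    rng = random.Random(seed)
    b = glorot_bound(fan_in, fan_out)
    return [rng.uniform(-b, b) for _ in range(n)]
```

# For fan_in=2, fan_out=4 the bound is exactly 1.0, so every sample lies in [-1, 1].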
class InOutGATConv(MessagePassing):
r"""The graph attentional operator from the `"Graph Attention Networks"
<https://arxiv.org/abs/1710.10903>`_ paper
.. math::
\mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} +
\sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j},
where the attention coefficients :math:`\alpha_{i,j}` are computed as
.. math::
\alpha_{i,j} =
\frac{
\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
[\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j]
\right)\right)}
{\sum_{k \in \mathcal{N}(i) \cup \{ i \}}
\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
[\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_k]
\right)\right)}.
Args:
in_channels (int): Size of each input sample.
out_channels (int): Size of each output sample.
heads (int, optional): Number of multi-head attentions.
(default: :obj:`8`)
concat (bool, optional): If set to :obj:`False`, the multi-head
attentions are averaged instead of concatenated.
(default: :obj:`False`)
negative_slope (float, optional): LeakyReLU angle of the negative
slope. (default: :obj:`0.2`)
dropout (float, optional): Dropout probability of the normalized
attention coefficients which exposes each node to a stochastically
sampled neighborhood during training. (default: :obj:`0`)
bias (bool, optional): If set to :obj:`False`, the layer will not learn
an additive bias. (default: :obj:`True`)
**kwargs (optional): Additional arguments of
:class:`torch_geometric.nn.conv.MessagePassing`.
"""
def __init__(self,
in_channels,
out_channels,
heads=8,
concat=False,
negative_slope=0.2,
dropout=0,
bias=True,
middle_layer=False,
**kwargs):
super(InOutGATConv, self).__init__(aggr='add', **kwargs)
self.in_channels = in_channels
self.out_channels = out_channels
self.heads = heads
self.concat = concat
self.middle_layer = middle_layer
self.negative_slope = negative_slope
self.dropout = dropout
self.weight1 = Parameter(
torch.Tensor(2, in_channels, heads * out_channels))
self.weight2 = Parameter(
torch.Tensor(2, in_channels, heads * out_channels))
self.att = Parameter(torch.Tensor(1, heads, 2 * out_channels))
if bias and concat:
self.bias = Parameter(torch.Tensor(heads * out_channels))
elif bias and not concat:
self.bias = Parameter(torch.Tensor(out_channels))
else:
self.register_parameter('bias', None)
if concat and not middle_layer:
self.rnn = torch.nn.GRUCell(2 * out_channels * heads, in_channels * heads, bias=bias)
elif middle_layer:
self.rnn = torch.nn.GRUCell(2 * out_channels * heads, in_channels, bias=bias)
else:
self.rnn = torch.nn.GRUCell(2 * out_channels, out_channels, bias=bias)
self.reset_parameters()
def reset_parameters(self):
glorot(self.weight1)
glorot(self.weight2)
glorot(self.att)
zeros(self.bias)
def forward(self, x, edge_index, sess_masks):
""""""
edge_index, _ = remove_self_loops(edge_index)
edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
sess_masks = sess_masks.view(sess_masks.shape[0], 1).float()
xs = x * sess_masks
xns = x * (1 - sess_masks)
# self.flow = 'source_to_target'
# x1 = torch.mm(x, self.weight[0]).view(-1, self.heads, self.out_channels)
# m1 = self.propagate(edge_index, x=x1, num_nodes=x.size(0))
# self.flow = 'target_to_source'
# x2 = torch.mm(x, self.weight[1]).view(-1, self.heads, self.out_channels)
# m2 = self.propagate(edge_index, x=x2, num_nodes=x.size(0))
self.flow = 'source_to_target'
x1s = torch.mm(xs, self.weight1[0]).view(-1, self.heads, self.out_channels)
x1ns = torch.mm(xns, self.weight2[0]).view(-1, self.heads, self.out_channels)
x1 = x1s + x1ns
m1 = self.propagate(edge_index, x=x1, num_nodes=x.size(0))
self.flow = 'target_to_source'
x2s = torch.mm(xs, self.weight1[1]).view(-1, self.heads, self.out_channels)
x2ns = torch.mm(xns, self.weight2[1]).view(-1, self.heads, self.out_channels)
x2 = x2s + x2ns
m2 = self.propagate(edge_index, x=x2, num_nodes=x.size(0))
if not self.middle_layer:
if self.concat:
x = x.repeat(1, self.heads)
else:
x = x.view(-1, self.heads, self.out_channels).mean(dim=1)
# x = self.rnn(torch.cat((m1, m2), dim=-1), x)
x = m1 + m2
# x = m1
return x
def message(self, edge_index_i, x_i, x_j, num_nodes):
# Compute attention coefficients.
alpha = (torch.cat([x_i, x_j], dim=-1) * self.att).sum(dim=-1)
alpha = F.leaky_relu(alpha, self.negative_slope)
alpha = softmax(alpha, edge_index_i, num_nodes)
# Sample attention coefficients stochastically.
alpha = F.dropout(alpha, p=self.dropout, training=self.training)
return x_j * alpha.view(-1, self.heads, 1)
def update(self, aggr_out):
if self.concat is True:
aggr_out = aggr_out.view(-1, self.heads * self.out_channels)
else:
aggr_out = aggr_out.mean(dim=1)
if self.bias is not None:
aggr_out = aggr_out + self.bias
return aggr_out
def __repr__(self):
return '{}({}, {}, heads={})'.format(self.__class__.__name__,
self.in_channels,
self.out_channels, self.heads)
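# The attention normalization inside `message()` is a per-target-node softmax of
# LeakyReLU-activated edge scores. A minimal pure-Python sketch of that step
# (hypothetical `attention_weights` helper; scalar scores stand in for
# a^T [Θx_i ‖ Θx_j]):

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x >= 0 else slope * x

def attention_weights(scores, targets):
    # scores[e]: raw attention logit for edge e.
    # targets[e]: the target node i that edge e points to.
    act = [leaky_relu(s) for s in scores]
    sums = {}
    for a, t in zip(act, targets):
        sums[t] = sums.get(t, 0.0) + math.exp(a)
    # Softmax is taken separately over the incoming edges of each target node.
    return [math.exp(a) / sums[t] for a, t in zip(act, targets)]
```

# Each target node's incoming-edge weights sum to one, which is what
# `softmax(alpha, edge_index_i, num_nodes)` guarantees in the real layer.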
class InOutGATConv_intra(MessagePassing):
r"""The graph attentional operator from the `"Graph Attention Networks"
<https://arxiv.org/abs/1710.10903>`_ paper
.. math::
\mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} +
\sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j},
where the attention coefficients :math:`\alpha_{i,j}` are computed as
.. math::
\alpha_{i,j} =
\frac{
\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
[\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j]
\right)\right)}
{\sum_{k \in \mathcal{N}(i) \cup \{ i \}}
\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
[\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_k]
\right)\right)}.
Args:
in_channels (int): Size of each input sample.
out_channels (int): Size of each output sample.
heads (int, optional): Number of multi-head attentions.
(default: :obj:`8`)
concat (bool, optional): If set to :obj:`False`, the multi-head
attentions are averaged instead of concatenated.
(default: :obj:`True`)
negative_slope (float, optional): LeakyReLU angle of the negative
slope. (default: :obj:`0.2`)
dropout (float, optional): Dropout probability of the normalized
attention coefficients which exposes each node to a stochastically
sampled neighborhood during training. (default: :obj:`0`)
bias (bool, optional): If set to :obj:`False`, the layer will not learn
an additive bias. (default: :obj:`True`)
**kwargs (optional): Additional arguments of
:class:`torch_geometric.nn.conv.MessagePassing`.
"""
def __init__(self,
in_channels,
out_channels,
heads=8,
concat=True,
negative_slope=0.2,
dropout=0,
bias=True,
middle_layer=False,
**kwargs):
super(InOutGATConv_intra, self).__init__(aggr='add', **kwargs)
self.in_channels = in_channels
self.out_channels = out_channels
self.heads = heads
self.concat = concat
self.middle_layer = middle_layer
self.negative_slope = negative_slope
self.dropout = dropout
self.weight = Parameter(
torch.Tensor(2, in_channels, heads * out_channels))
self.weight1 = Parameter(
torch.Tensor(2, in_channels, heads * out_channels))
self.weight2 = Parameter(
torch.Tensor(2, in_channels, heads * out_channels))
self.att = Parameter(torch.Tensor(1, heads, 2 * out_channels))
if bias and concat:
self.bias = Parameter(torch.Tensor(heads * out_channels))
elif bias and not concat:
self.bias = Parameter(torch.Tensor(out_channels))
else:
self.register_parameter('bias', None)
if concat and not middle_layer:
self.rnn = torch.nn.GRUCell(2 * out_channels * heads, in_channels * heads, bias=bias)
elif middle_layer:
self.rnn = torch.nn.GRUCell(2 * out_channels * heads, in_channels, bias=bias)
else:
self.rnn = torch.nn.GRUCell(2 * out_channels, out_channels, bias=bias)
self.reset_parameters()
def reset_parameters(self):
glorot(self.weight1)
glorot(self.weight2)
glorot(self.att)
zeros(self.bias)
def forward(self, x, edge_index, sess_masks):
""""""
edge_index, _ = remove_self_loops(edge_index)
edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
# sess_masks = sess_masks.view(sess_masks.shape[0], 1).float()
# xs = x * sess_masks
# xns = x * (1 - sess_masks)
self.flow = 'source_to_target'
x1 = torch.mm(x, self.weight[0]).view(-1, self.heads, self.out_channels)
m1 = self.propagate(edge_index, x=x1, num_nodes=x.size(0))
self.flow = 'target_to_source'
x2 = torch.mm(x, self.weight[1]).view(-1, self.heads, self.out_channels)
m2 = self.propagate(edge_index, x=x2, num_nodes=x.size(0))
# self.flow = 'source_to_target'
# x1s = torch.mm(xs, self.weight1[0]).view(-1, self.heads, self.out_channels)
# x1ns = torch.mm(xns, self.weight2[0]).view(-1, self.heads, self.out_channels)
# x1 = x1s + x1ns
# m1 = self.propagate(edge_index, x=x1, num_nodes=x.size(0))
# self.flow = 'target_to_source'
# x2s = torch.mm(xs, self.weight1[1]).view(-1, self.heads, self.out_channels)
# x2ns = torch.mm(xns, self.weight2[1]).view(-1, self.heads, self.out_channels)
# x2 = x2s + x2ns
# m2 = self.propagate(edge_index, x=x2, num_nodes=x.size(0))
if not self.middle_layer:
if self.concat:
x = x.repeat(1, self.heads)
else:
x = x.view(-1, self.heads, self.out_channels).mean(dim=1)
# x = self.rnn(torch.cat((m1, m2), dim=-1), x)
x = m1 + m2
return x
def message(self, edge_index_i, x_i, x_j, num_nodes):
# Compute attention coefficients.
alpha = (torch.cat([x_i, x_j], dim=-1) * self.att).sum(dim=-1)
alpha = F.leaky_relu(alpha, self.negative_slope)
alpha = softmax(alpha, edge_index_i, num_nodes)
# Sample attention coefficients stochastically.
alpha = F.dropout(alpha, p=self.dropout, training=self.training)
return x_j * alpha.view(-1, self.heads, 1)
def update(self, aggr_out):
if self.concat is True:
aggr_out = aggr_out.view(-1, self.heads * self.out_channels)
else:
aggr_out = aggr_out.mean(dim=1)
if self.bias is not None:
aggr_out = aggr_out + self.bias
return aggr_out
def __repr__(self):
return '{}({}, {}, heads={})'.format(self.__class__.__name__,
self.in_channels,
self.out_channels, self.heads)
# + id="hG8oOf2wh6mv"
class InOutGGNN(MessagePassing):
r"""The gated graph convolution operator from the `"Gated Graph Sequence
Neural Networks" <https://arxiv.org/abs/1511.05493>`_ paper
.. math::
\mathbf{h}_i^{(0)} &= \mathbf{x}_i \, \Vert \, \mathbf{0}
\mathbf{m}_i^{(l+1)} &= \sum_{j \in \mathcal{N}(i)} \mathbf{\Theta}
\cdot \mathbf{h}_j^{(l)}
\mathbf{h}_i^{(l+1)} &= \textrm{GRU} (\mathbf{m}_i^{(l+1)},
\mathbf{h}_i^{(l)})
up to representation :math:`\mathbf{h}_i^{(L)}`.
The number of input channels of :math:`\mathbf{x}_i` needs to be less
than or equal to :obj:`out_channels`.
Args:
out_channels (int): Size of each output sample.
num_layers (int): The sequence length :math:`L`.
aggr (string): The aggregation scheme to use
(:obj:`"add"`, :obj:`"mean"`, :obj:`"max"`).
(default: :obj:`"add"`)
bias (bool, optional): If set to :obj:`False`, the layer will not learn
an additive bias. (default: :obj:`True`)
"""
def __init__(self, out_channels, num_layers, aggr='add', bias=True):
super(InOutGGNN, self).__init__(aggr)
self.out_channels = out_channels
self.num_layers = num_layers
self.weight = Param(Tensor(num_layers, 2, out_channels, out_channels))
self.rnn = torch.nn.GRUCell(2 * out_channels, out_channels, bias=bias)
self.bias_in = Param(Tensor(self.out_channels))
self.bias_out = Param(Tensor(self.out_channels))
self.reset_parameters()
def reset_parameters(self):
size = self.out_channels
uniform(size, self.weight)
self.rnn.reset_parameters()
def forward(self, x, edge_index, edge_weight=[None, None]):
#print(edge_weight[0].size(), edge_weight[1].size)
""""""
h = x if x.dim() == 2 else x.unsqueeze(-1)
if h.size(1) > self.out_channels:
raise ValueError('The number of input channels is not allowed to '
'be larger than the number of output channels')
if h.size(1) < self.out_channels:
zero = h.new_zeros(h.size(0), self.out_channels - h.size(1))
h = torch.cat([h, zero], dim=1)
for i in range(self.num_layers):
self.flow = 'source_to_target'
h1 = torch.matmul(h, self.weight[i, 0])
m1 = self.propagate(edge_index, x=h1, edge_weight=edge_weight[0], bias=self.bias_in)
self.flow = 'target_to_source'
h2 = torch.matmul(h, self.weight[i, 1])
m2 = self.propagate(edge_index, x=h2, edge_weight=edge_weight[1], bias=self.bias_out)
h = self.rnn(torch.cat((m1, m2), dim=-1), h)
return h
def message(self, x_j, edge_weight):
if edge_weight is not None:
return edge_weight.view(-1, 1) * x_j
return x_j
def update(self, aggr_out, bias):
if bias is not None:
return aggr_out + bias
else:
return aggr_out
def __repr__(self):
return '{}({}, num_layers={})'.format(
self.__class__.__name__, self.out_channels, self.num_layers)
# + id="M4EFTsBkh6iI"
class SRGNN(nn.Module):
"""
Args:
hidden_size: the number of units in a hidden layer.
n_node: the number of items in the whole item set for embedding layer.
"""
def __init__(self, hidden_size, n_node, dropout=0.5, negative_slope=0.2, heads=8, item_fusing=False):
super(SRGNN, self).__init__()
self.hidden_size, self.n_node = hidden_size, n_node
self.item_fusing = item_fusing
self.embedding = nn.Embedding(self.n_node, self.hidden_size)
# self.gated = InOutGGNN(self.hidden_size, num_layers=1)
self.gcn = GCNConv(in_channels=hidden_size, out_channels=hidden_size)
self.gcn2 = GCNConv(in_channels=hidden_size, out_channels=hidden_size)
self.gated = SGConv(in_channels=hidden_size, out_channels=hidden_size, K=2)
# self.gated = InOutGATConv_intra(in_channels=hidden_size, out_channels=hidden_size, dropout=dropout,
# negative_slope=negative_slope, heads=heads, concat=True)
# self.gated2 = InOutGATConv(in_channels=hidden_size * heads, out_channels=hidden_size, dropout=dropout,
# negative_slope=negative_slope, heads=heads, concat=True, middle_layer=True)
# self.gated3 = InOutGATConv(in_channels=hidden_size * heads, out_channels=hidden_size, dropout=dropout,
# negative_slope=negative_slope, heads=heads, concat=False)
self.W_1 = nn.Linear(self.hidden_size * 8, self.hidden_size)
self.W_2 = nn.Linear(self.hidden_size * 8, self.hidden_size)
self.q = nn.Linear(self.hidden_size, 1)
self.W_3 = nn.Linear(16 * self.hidden_size, self.hidden_size)
self.loss_function = nn.CrossEntropyLoss()
self.reset_parameters()
def reset_parameters(self):
stdv = 1.0 / math.sqrt(self.hidden_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def rebuilt_sess(self, session_embedding, batchs, sess_item_index, seq_lens):
sections = torch.bincount(batchs)
split_embs = torch.split(session_embedding, tuple(sections.cpu().numpy()))
sess_item_index = torch.split(sess_item_index, tuple(seq_lens.cpu().numpy()))
rebuilt_sess = []
for embs, index in zip(split_embs, sess_item_index):
sess = tuple(embs[i].view(1, -1) for i in index)
sess = torch.cat(sess, dim=0)
rebuilt_sess.append(sess)
return tuple(rebuilt_sess)
def get_h_s(self, hidden, seq_len):
# split whole x back into graphs G_i
v_n = tuple(nodes[-1].view(1, -1) for nodes in hidden)
v_n_repeat = tuple(nodes[-1].view(1, -1).repeat(nodes.shape[0], 1) for nodes in hidden)
v_n_repeat = torch.cat(v_n_repeat, dim=0)
hidden = torch.cat(hidden, dim=0)
# Eq(6)
# print("v_n_repeat", v_n_repeat.size())
# print("hidden", hidden.size())
alpha = self.q(torch.sigmoid(self.W_1(v_n_repeat) + self.W_2(hidden))) # |V|_i * 1
s_g_whole = alpha * hidden # |V|_i * hidden_size
s_g_split = torch.split(s_g_whole, tuple(seq_len.cpu().numpy())) # split whole s_g into graphs G_i
s_g = tuple(torch.sum(embeddings, dim=0).view(1, -1) for embeddings in s_g_split)
# Eq(7)
# print("torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1)", torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1).size())
h_s = self.W_3(torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1))
# h_s = torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1)
return h_s
def forward(self, data, hidden):
edge_index, batch, edge_count, in_degree_inv, out_degree_inv, num_count, sess_item_index, seq_len = \
data.edge_index, data.batch, data.edge_count, data.in_degree_inv, data.out_degree_inv,\
data.num_count, data.sess_item_idx, data.sequence_len
hidden = self.gated.forward(hidden, edge_index)
# hidden = self.gcn.forward(hidden, edge_index)
# hidden = self.gcn2.forward(hidden, edge_index)
sess_embs = self.rebuilt_sess(hidden, batch, sess_item_index, seq_len)
if self.item_fusing:
return sess_embs
else:
return self.get_h_s(sess_embs, seq_len)
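# `rebuilt_sess` maps per-node embeddings back to per-position sequence
# embeddings, so an item that occurs twice in a session reuses the same node
# embedding. A pure-Python sketch of the same index gymnastics (hypothetical
# `rebuild_sessions` name; strings stand in for embedding vectors):

```python
def rebuild_sessions(node_embs, sections, item_index, seq_lens):
    # node_embs: one entry per unique node, concatenated over graphs.
    # sections[g]: node count of graph g; seq_lens[g]: its session length.
    # item_index: for every sequence position, the node it maps to.
    sess, offset, pos = [], 0, 0
    for n_nodes, le in zip(sections, seq_lens):
        graph = node_embs[offset:offset + n_nodes]
        idx = item_index[pos:pos + le]
        sess.append([graph[i] for i in idx])
        offset += n_nodes
        pos += le
    return sess
```

# A session [a, b, a] over two unique nodes yields three positions but only
# two distinct embeddings, with position 2 reusing node 0.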
# + id="jO_DQcFgizkN"
class GroupGraph(Module):
def __init__(self, hidden_size, dropout=0.5, negative_slope=0.2, heads=8, item_fusing=False):
super(GroupGraph, self).__init__()
self.hidden_size = hidden_size
self.item_fusing = item_fusing
self.W_1 = nn.Linear(8 * self.hidden_size, self.hidden_size)
self.W_2 = nn.Linear(8 * self.hidden_size, self.hidden_size)
self.q = nn.Linear(self.hidden_size, 1)
self.W_3 = nn.Linear(16 * self.hidden_size, self.hidden_size)
# self.gat = GATConv(in_channels=hidden_size, out_channels=hidden_size, dropout=dropout, negative_slope=negative_slope, heads=heads, concat=True)
# self.gat2 = GATConv(in_channels=hidden_size*heads, out_channels=hidden_size*heads, dropout=dropout, negative_slope=negative_slope, heads=heads, concat=False)
# self.gat3 = GATConv(in_channels=hidden_size*heads, out_channels=hidden_size, dropout=dropout, negative_slope=negative_slope, heads=heads, concat=True)
# self.gat_out = GATConv(in_channels=hidden_size*heads, out_channels=hidden_size, dropout=dropout, negative_slope=negative_slope, heads=heads, concat=False)
# self.gated = InOutGGNN(self.hidden_size, num_layers=2)
self.gcn = GCNConv(in_channels=hidden_size, out_channels=hidden_size)
self.gcn2 = GCNConv(in_channels=hidden_size, out_channels=hidden_size)
self.sgcn = SGConv(in_channels=hidden_size, out_channels=hidden_size, K=2)
# self.gat = InOutGATConv(in_channels=hidden_size, out_channels=hidden_size, dropout=dropout,
# negative_slope=negative_slope, heads=heads, concat=True)
# self.gat2 = InOutGATConv(in_channels=hidden_size * heads, out_channels=hidden_size, dropout=dropout,
# negative_slope=negative_slope, heads=heads, concat=False)
#
def group_att_old(self, session_embedding, node_num, batch_h_s): # hs: # batch_size x latent_size
v_i = torch.split(session_embedding, tuple(node_num)) # split whole x back into graphs G_i
h_s_repeat = tuple(h_s.view(1, -1).repeat(nodes.shape[0], 1) for h_s, nodes in zip(batch_h_s, v_i)) # repeat |V|_i times for the last node embedding
alpha = self.q(torch.sigmoid(self.W_1(torch.cat(h_s_repeat, dim=0)) + self.W_2(session_embedding))) # |V|_i * 1
s_g_whole = alpha * session_embedding # |V|_i * hidden_size
s_g_split = torch.split(s_g_whole, tuple(node_num.cpu().numpy())) # split whole s_g into graphs G_i
s_g = tuple(torch.sum(embeddings, dim=0).view(1, -1) for embeddings in s_g_split)
return torch.cat(s_g, dim=0)
def group_att(self, session_embedding, hidden, node_num, num_count): # hs: # batch_size x latent_size
v_i = torch.split(session_embedding, tuple(node_num)) # split whole x back into graphs G_i
v_n = tuple(nodes[-1].view(1, -1) for nodes in hidden)
v_n_repeat = tuple(sess_nodes[-1].view(1, -1).repeat(nodes.shape[0], 1) for sess_nodes, nodes in zip(hidden, v_i)) # repeat |V|_i times for the last node embedding
alpha = self.q(torch.sigmoid(self.W_1(torch.cat(v_n_repeat, dim=0)) + self.W_2(session_embedding))) # |V|_i * 1
s_g_whole = num_count.view(-1, 1) * alpha * session_embedding # |V|_i * hidden_size
s_g_split = torch.split(s_g_whole, tuple(node_num.cpu().numpy())) # split whole s_g into graphs G_i
s_g = tuple(torch.sum(embeddings, dim=0).view(1, -1) for embeddings in s_g_split)
h_s = self.W_3(torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1))
return h_s
def rebuilt_sess(self, session_embedding, node_num, sess_item_index, seq_lens):
split_embs = torch.split(session_embedding, tuple(node_num))
sess_item_index = torch.split(sess_item_index, tuple(seq_lens.cpu().numpy()))
rebuilt_sess = []
for embs, index in zip(split_embs, sess_item_index):
sess = tuple(embs[i].view(1, -1) for i in index)
sess = torch.cat(sess, dim=0)
rebuilt_sess.append(sess)
return tuple(rebuilt_sess)
def get_h_group(self, hidden, seq_len):
# split whole x back into graphs G_i
v_n = tuple(nodes[-1].view(1, -1) for nodes in hidden)
v_n_repeat = tuple(nodes[-1].view(1, -1).repeat(nodes.shape[0], 1) for nodes in hidden)
v_n_repeat = torch.cat(v_n_repeat, dim=0)
hidden = torch.cat(hidden, dim=0)
# Eq(5)
alpha = self.q(torch.sigmoid(self.W_1(v_n_repeat) + self.W_2(hidden))) # |V|_i * 1
s_g_whole = alpha * hidden # |V|_i * hidden_size
# s_g_whole = hidden
s_g_split = torch.split(s_g_whole, tuple(seq_len.cpu().numpy())) # split whole s_g into graphs G_i
s_g = tuple(torch.sum(embeddings, dim=0).view(1, -1) for embeddings in s_g_split)
# s_g = tuple(torch.mean(embeddings, dim=0).view(1, -1) for embeddings in s_g_split)
h_s = self.W_3(torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1))
# h_s = torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1)
return h_s
def h_mean(self, hidden, node_num):
split_embs = torch.split(hidden, tuple(node_num))
means = []
for embs in split_embs:
mean = torch.mean(embs, dim=0)
means.append(mean)
means = torch.cat(tuple(means), dim=0).view(len(split_embs), -1)
return means
def forward(self, hidden, data):
# edge_index, node_num, batch, sess_item_index, seq_lens, sess_masks = \
# data.mt_edge_index, data.mt_node_num, data.batch, data.mt_sess_item_idx, data.sequence_len, data.sess_masks
edge_index, node_num, batch, sess_item_index, seq_lens = \
data.mt_edge_index, data.mt_node_num, data.batch, data.mt_sess_item_idx, data.sequence_len
# edge_count, in_degree_inv, out_degree_inv = data.mt_edge_count, data.mt_in_degree_inv, data.mt_out_degree_inv
# hidden = self.gat.forward(hidden, edge_index, sess_masks)
# hidden = self.gat2.forward(hidden, edge_index)
# hidden = self.gat3.forward(hidden, edge_index)
# hidden = self.gat.forward(hidden, edge_index, sess_masks)
hidden = self.sgcn(hidden, edge_index)
# hidden = self.gcn.forward(hidden, edge_index)
# hidden = self.gcn2.forward(hidden, edge_index)
# hidden = self.gat.forward(hidden, edge_index)
# hidden = self.gated.forward(hidden, edge_index, [edge_count * in_degree_inv, edge_count * out_degree_inv])
# hidden = self.gated.forward(hidden, edge_index)
# hidden = self.gat1.forward(hidden, edge_index)
sess_hidden = self.rebuilt_sess(hidden, node_num, sess_item_index, seq_lens)
if self.item_fusing:
return sess_hidden
else:
return self.get_h_group(sess_hidden, seq_lens)
# + id="2POOFBLKi4r0"
class Embedding2Score(nn.Module):
def __init__(self, hidden_size, n_node, using_represent, item_fusing):
super(Embedding2Score, self).__init__()
self.hidden_size = hidden_size
self.n_node = n_node
self.using_represent = using_represent
self.item_fusing = item_fusing
self.W_1 = nn.Linear(self.hidden_size, self.hidden_size * 2)
self.W_2 = nn.Linear(self.hidden_size, self.hidden_size)
self.W_3 = nn.Linear(self.hidden_size, self.hidden_size)
def forward(self, h_s, h_group, final_s, item_embedding_table):
emb = item_embedding_table.weight.transpose(1, 0)
if self.item_fusing:
z_i_hat = torch.mm(final_s, emb)
else:
gate = torch.sigmoid(self.W_2(h_s) + self.W_3(h_group))
sess_rep = h_s * gate + h_group * (1 - gate)
if self.using_represent == 'comb':
z_i_hat = torch.mm(sess_rep, emb)
elif self.using_represent == 'h_s':
z_i_hat = torch.mm(h_s, emb)
elif self.using_represent == 'h_group':
z_i_hat = torch.mm(h_group, emb)
else:
raise NotImplementedError
return z_i_hat,
class ItemFusing(nn.Module):
def __init__(self, hidden_size):
super(ItemFusing, self).__init__()
self.hidden_size = hidden_size
self.use_rnn = True
self.Wf1 = nn.Linear(self.hidden_size, self.hidden_size)
self.Wf2 = nn.Linear(self.hidden_size, self.hidden_size)
self.W_1 = nn.Linear(self.hidden_size, self.hidden_size)
self.W_2 = nn.Linear(self.hidden_size, self.hidden_size)
self.q = nn.Linear(self.hidden_size, 1)
self.W_3 = nn.Linear(2 * self.hidden_size, self.hidden_size)
self.rnn = torch.nn.GRUCell(hidden_size, hidden_size, bias=True)
def forward(self, intra_item_emb, inter_item_emb, seq_len):
final_emb = self.item_fusing(intra_item_emb, inter_item_emb)
# final_emb = self.avg_fusing(intra_item_emb, inter_item_emb)
final_s = self.get_final_s(final_emb, seq_len)
return final_s
def item_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
if self.use_rnn:
final_emb = self.rnn(local_emb, global_emb)
else:
gate = torch.sigmoid(self.Wf1(local_emb) + self.Wf2(global_emb))
final_emb = local_emb * gate + global_emb * (1 - gate)
return final_emb
def cnn_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = torch.stack([local_emb, global_emb], dim=2)
embedding = embedding.permute(0, 2, 1)
embedding = self.conv(embedding).permute(0, 2, 1)
embedding = self.W_c(embedding).squeeze()
return embedding
def max_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = torch.stack([local_emb, global_emb], dim=2)
embedding = torch.max(embedding, dim=2)[0]
return embedding
def avg_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = (local_emb + global_emb) / 2
return embedding
def concat_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = torch.cat([local_emb, global_emb], dim=1)
embedding = self.W_4(embedding)
return embedding
def get_final_s(self, hidden, seq_len):
hidden = torch.split(hidden, tuple(seq_len.cpu().numpy()))
v_n = tuple(nodes[-1].view(1, -1) for nodes in hidden)
v_n_repeat = tuple(nodes[-1].view(1, -1).repeat(nodes.shape[0], 1) for nodes in hidden)
v_n_repeat = torch.cat(v_n_repeat, dim=0)
hidden = torch.cat(hidden, dim=0)
# Eq(6)
alpha = self.q(torch.sigmoid(self.W_1(v_n_repeat) + self.W_2(hidden))) # |V|_i * 1
s_g_whole = alpha * hidden # |V|_i * hidden_size
s_g_split = torch.split(s_g_whole, tuple(seq_len.cpu().numpy())) # split whole s_g into graphs G_i
s_g = tuple(torch.sum(embeddings, dim=0).view(1, -1) for embeddings in s_g_split)
# Eq(7)
h_s = self.W_3(torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1))
# h_s = torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1)
return h_s
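# The gated branch of `item_fusing` interpolates between the intra-session and
# inter-session embedding of each item. Reduced to scalars (hypothetical
# `gate_fuse` helper; unit weights stand in for Wf1/Wf2), the idea is:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate_fuse(local, glob, w1=1.0, w2=1.0):
    # gate = sigmoid(Wf1*local + Wf2*global);
    # the result is a convex combination of the two embeddings.
    g = sigmoid(w1 * local + w2 * glob)
    return local * g + glob * (1 - g)
```

# Because the gate forms a convex combination, two embeddings that already
# agree are returned unchanged regardless of the gate value.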
class NARM(nn.Module):
def __init__(self, opt):
super(NARM, self).__init__()
self.hidden_size = opt.hidden_size
self.gru = nn.GRU(self.hidden_size * 2, self.hidden_size, batch_first=True)
self.linear_one = nn.Linear(self.hidden_size, self.hidden_size, bias=True)
self.linear_two = nn.Linear(self.hidden_size, self.hidden_size, bias=True)
self.linear_three = nn.Linear(self.hidden_size, 1, bias=False)
def sess_att(self, hidden, ht, mask):
q1 = self.linear_one(ht).view(ht.shape[0], 1, ht.shape[1]) # batch_size x 1 x latent_size
q2 = self.linear_two(hidden) # batch_size x seq_length x latent_size
alpha = self.linear_three(torch.sigmoid(q1 + q2))
hs = torch.sum(alpha * hidden * mask.view(mask.shape[0], -1, 1).float(), 1)
# hs = torch.sum(alpha * hidden, 1)
return hs
def padding(self, intra_item_embs, inter_item_embs, seq_lens):
inter_padded, intra_padded = [], []
max_len = max(seq_lens).detach().cpu().numpy()
for intra_item_emb, inter_item_emb, seq_len in zip(intra_item_embs, inter_item_embs, seq_lens):
if intra_item_emb.size(0) < max_len:
pad_vec = torch.zeros(max_len - intra_item_emb.size(0), self.hidden_size)
pad_vec = pad_vec.to('cuda')
intra_item_emb = torch.cat((intra_item_emb, pad_vec), dim=0)
inter_item_emb = torch.cat((inter_item_emb, pad_vec), dim=0)
inter_padded.append(inter_item_emb.unsqueeze(dim=0))
intra_padded.append(intra_item_emb.unsqueeze(dim=0))
inter_padded = torch.cat(tuple(inter_padded), dim=0)
intra_padded = torch.cat(tuple(intra_padded), dim=0)
item_embs = torch.cat((inter_padded, intra_padded), dim=-1)
return item_embs
def get_h_s(self, padded, seq_lens, masks):
outputs, _ = self.gru(padded)
output_last = outputs[torch.arange(seq_lens.shape[0]).long(), seq_lens - 1]
hs = self.sess_att(outputs, output_last, masks)
return hs
def forward(self, intra_item_embs, inter_item_embs, seq_lens):
max_len = max(seq_lens).detach().cpu().numpy()
masks = [[1] * le + [0] * (max_len - le) for le in seq_lens.detach().cpu().numpy()]
masks = torch.tensor(masks).to('cuda')
item_embs = self.padding(intra_item_embs, inter_item_embs, seq_lens)
return self.get_h_s(item_embs, seq_lens, masks)
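# The padding masks built in `NARM.forward` are plain 0/1 rows padded to the
# batch maximum length. An equivalent pure-Python sketch (hypothetical
# `make_masks` name, plain ints instead of a cuda tensor):

```python
def make_masks(seq_lens):
    # 1 marks a real step, 0 a padded position; every row is padded
    # out to the longest sequence in the batch.
    max_len = max(seq_lens)
    return [[1] * le + [0] * (max_len - le) for le in seq_lens]
```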
class CNNFusing(nn.Module):
def __init__(self, hidden_size, num_filters):
super(CNNFusing, self).__init__()
self.hidden_size = hidden_size
self.num_filters = num_filters
self.Wf1 = nn.Linear(self.hidden_size, self.hidden_size)
self.Wf2 = nn.Linear(self.hidden_size, self.hidden_size)
self.W_1 = nn.Linear(self.hidden_size, self.hidden_size)
self.W_2 = nn.Linear(self.hidden_size, self.hidden_size)
self.q = nn.Linear(self.hidden_size, 1)
self.W_3 = nn.Linear(2 * self.hidden_size, self.hidden_size)
self.W_4 = nn.Linear(self.hidden_size * 2, self.hidden_size, bias=False)
# self.conv = torch.nn.Conv2d(in_channels=self.hidden_size, out_channels=self.hidden_size, kernel_size=(1, 2))
self.conv = torch.nn.Conv1d(in_channels=2, out_channels=self.num_filters, kernel_size=1)
self.W_c = nn.Linear(self.num_filters, 1)
# def forward(self, inter_item_emb, intra_item_emb, seq_len):
# final_emb = self.cnn_fusing(inter_item_emb, intra_item_emb)
# final_s = self.get_final_s(final_emb, seq_len)
# return final_s
def forward(self, intra_item_emb, inter_item_emb, seq_len):
# final_emb = self.cnn_fusing(intra_item_emb, inter_item_emb)
# final_emb = self.concat_fusing(intra_item_emb, inter_item_emb)
# final_emb = self.avg_fusing(intra_item_emb, inter_item_emb)
final_emb = self.max_fusing(intra_item_emb, inter_item_emb)
# final_emb = intra_item_emb
final_s = self.get_final_s(final_emb, seq_len)
return final_s
def cnn_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = torch.stack([local_emb, global_emb], dim=2)
embedding = embedding.permute(0, 2, 1)
embedding = self.conv(embedding).permute(0, 2, 1)
embedding = self.W_c(embedding).squeeze()
return embedding
def max_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = torch.stack([local_emb, global_emb], dim=2)
embedding = torch.max(embedding, dim=2)[0]
return embedding
def avg_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = (local_emb + global_emb) / 2
return embedding
def concat_fusing(self, local_emb, global_emb):
local_emb = torch.cat(local_emb, dim=0)
global_emb = torch.cat(global_emb, dim=0)
embedding = torch.cat([local_emb, global_emb], dim=1)
embedding = self.W_4(embedding)
return embedding
def get_final_s(self, hidden, seq_len):
hidden = torch.split(hidden, tuple(seq_len.cpu().numpy()))
v_n = tuple(nodes[-1].view(1, -1) for nodes in hidden)
v_n_repeat = tuple(nodes[-1].view(1, -1).repeat(nodes.shape[0], 1) for nodes in hidden)
v_n_repeat = torch.cat(v_n_repeat, dim=0)
hidden = torch.cat(hidden, dim=0)
# Eq(6)
alpha = self.q(torch.sigmoid(self.W_1(v_n_repeat) + self.W_2(hidden))) # |V|_i * 1
s_g_whole = alpha * hidden # |V|_i * hidden_size
s_g_split = torch.split(s_g_whole, tuple(seq_len.cpu().numpy())) # split whole s_g into graphs G_i
s_g = tuple(torch.sum(embeddings, dim=0).view(1, -1) for embeddings in s_g_split)
# Eq(7)
h_s = self.W_3(torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1))
# h_s = torch.cat((torch.cat(v_n, dim=0), torch.cat(s_g, dim=0)), dim=1)
return h_s
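# The attention readout in `get_final_s` (Eq. 6/7) can be sketched in plain NumPy.
# This is a toy illustration with random matrices standing in for the learned
# `W_1`, `W_2`, `q` layers — not the model itself:

```python
import numpy as np

def soft_attn_readout(hidden, W1, W2, q):
    # Weight each item embedding by its (learned) similarity to the last
    # item (Eq. 6), then sum the weighted embeddings into one session
    # vector (Eq. 7).
    v_n = hidden[-1]                                # last item = local interest
    logits = hidden @ W1 + v_n @ W2                 # |V| x hidden_size
    alpha = (1.0 / (1.0 + np.exp(-logits))) @ q     # |V| attention weights
    return (alpha[:, None] * hidden).sum(axis=0)    # hidden_size session vector

rng = np.random.default_rng(0)
hidden = rng.normal(size=(5, 4))                    # one session: 5 items, dim 4
W1, W2, q = rng.normal(size=(4, 4)), rng.normal(size=(4, 4)), rng.normal(size=4)
s_g = soft_attn_readout(hidden, W1, W2, q)
```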
class GraphModel(nn.Module):
def __init__(self, opt, n_node):
super(GraphModel, self).__init__()
self.hidden_size, self.n_node = opt.hidden_size, n_node
self.embedding = nn.Embedding(self.n_node, self.hidden_size)
self.dropout = opt.gat_dropout
self.negative_slope = opt.negative_slope
self.heads = opt.heads
self.item_fusing = opt.item_fusing
self.num_filters = opt.num_filters
self.srgnn = SRGNN(self.hidden_size, n_node=n_node, item_fusing=opt.item_fusing)
self.group_graph = GroupGraph(self.hidden_size, dropout=self.dropout, negative_slope=self.negative_slope,
heads=self.heads, item_fusing=opt.item_fusing)
self.fuse_model = ItemFusing(self.hidden_size)
self.narm = NARM(opt)
self.cnn_fusing = CNNFusing(self.hidden_size, self.num_filters)
self.e2s = Embedding2Score(self.hidden_size, n_node, opt.using_represent, opt.item_fusing)
self.loss_function = nn.CrossEntropyLoss()
self.reset_parameters()
def reset_parameters(self):
stdv = 1.0 / math.sqrt(self.hidden_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def forward(self, data):
if self.item_fusing:
x = data.x - 1
embedding = self.embedding(x)
embedding = embedding.squeeze()
intra_item_emb = self.srgnn(data, embedding)
num_filters = self.num_filters
mt_x = data.mt_x - 1
embedding = self.embedding(mt_x)
embedding = embedding.squeeze()
inter_item_emb = self.group_graph.forward(embedding, data)
# final_s = self.fuse_model.forward(intra_item_emb, inter_item_emb, data.sequence_len)
# final_s = self.narm.forward(intra_item_emb, inter_item_emb, data.sequence_len)
final_s = self.cnn_fusing.forward(intra_item_emb, inter_item_emb, data.sequence_len)
scores = self.e2s(h_s=None, h_group=None, final_s=final_s, item_embedding_table=self.embedding)
else:
x = data.x - 1
embedding = self.embedding(x)
embedding = embedding.squeeze()
h_s = self.srgnn(data, embedding)
mt_x = data.mt_x - 1
embedding = self.embedding(mt_x)
embedding = embedding.squeeze()
h_group = self.group_graph.forward(embedding, data)
scores = self.e2s(h_s=h_s, h_group=h_group, final_s=None, item_embedding_table=self.embedding)
return scores[0]
# + [markdown] id="WrO1uT0dhlyt"
# ## Trainer
# + id="UPqeYRD1h1ZM"
import numpy as np
import logging
import time
def forward(model, loader, device, writer, epoch, top_k=20, optimizer=None, train_flag=True):
start = time.time()
if train_flag:
model.train()
else:
model.eval()
hit10, mrr10 = [], []
hit5, mrr5 = [], []
hit20, mrr20 = [], []
mean_loss = 0.0
updates_per_epoch = len(loader)
test_dict = {}
for i, batch in enumerate(loader):
if train_flag:
optimizer.zero_grad()
scores = model(batch.to(device))
targets = batch.y - 1
loss = model.loss_function(scores, targets)
if train_flag:
loss.backward()
optimizer.step()
writer.add_scalar('loss/train_batch_loss', loss.item(), epoch * updates_per_epoch + i)
else:
sub_scores = scores.topk(20)[1] # batch * top_k
for score, target in zip(sub_scores.detach().cpu().numpy(), targets.detach().cpu().numpy()):
hit20.append(np.isin(target, score))
if len(np.where(score == target)[0]) == 0:
mrr20.append(0)
else:
mrr20.append(1 / (np.where(score == target)[0][0] + 1))
sub_scores = scores.topk(top_k)[1] # batch * top_k
for score, target in zip(sub_scores.detach().cpu().numpy(), targets.detach().cpu().numpy()):
hit10.append(np.isin(target, score))
if len(np.where(score == target)[0]) == 0:
mrr10.append(0)
else:
mrr10.append(1 / (np.where(score == target)[0][0] + 1))
sub_scores = scores.topk(5)[1] # batch * top_k
for score, target in zip(sub_scores.detach().cpu().numpy(), targets.detach().cpu().numpy()):
hit5.append(np.isin(target, score))
if len(np.where(score == target)[0]) == 0:
mrr5.append(0)
else:
mrr5.append(1 / (np.where(score == target)[0][0] + 1))
mean_loss += loss / batch.num_graphs
end = time.time()
print("\rProcess: [%d/%d] %.2f usetime: %fs" % (i, updates_per_epoch, i/updates_per_epoch * 100, end - start),
end='', flush=True)
print('\n')
if train_flag:
writer.add_scalar('loss/train_loss', mean_loss.item(), epoch)
print("Train_loss: ", mean_loss.item())
else:
writer.add_scalar('loss/test_loss', mean_loss.item(), epoch)
hit20 = np.mean(hit20) * 100
mrr20 = np.mean(mrr20) * 100
hit10 = np.mean(hit10) * 100
mrr10 = np.mean(mrr10) * 100
hit5 = np.mean(hit5) * 100
mrr5 = np.mean(mrr5) * 100
# writer.add_scalar('index/hit', hit, epoch)
# writer.add_scalar('index/mrr', mrr, epoch)
print("Result:")
print("\tMrr@", 20, ": ", mrr20)
print("\tRecall@", 20, ": ", hit20)
print("\tMrr@", top_k, ": ", mrr10)
print("\tRecall@", top_k, ": ", hit10)
print("\tMrr@", 5, ": ", mrr5)
print("\tRecall@", 5, ": ", hit5)
# for seq_len in range(1, 31):
# sub_hit = test_dict[seq_len][0]
# sub_mrr = test_dict[seq_len][1]
# print("Len ", seq_len, ": Recall@", top_k, ": ", np.mean(sub_hit) * 100, "Mrr@", top_k, ": ", np.mean(sub_mrr) * 100)
return mrr20, hit20, mrr10, hit10, mrr5, hit5
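# The Recall@K / MRR@K bookkeeping in the loop above can be checked in
# isolation. A minimal sketch on toy scores — the `np.isin` / `np.where`
# logic mirrors the trainer, but nothing here touches the model:

```python
import numpy as np

def recall_mrr_at_k(scores, targets, k):
    # rank items by descending score and keep the top k per row
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits, rranks = [], []
    for ranked, target in zip(topk, targets):
        hits.append(np.isin(target, ranked))            # hit if target in top k
        pos = np.where(ranked == target)[0]
        rranks.append(0.0 if len(pos) == 0 else 1.0 / (pos[0] + 1))
    return np.mean(hits) * 100, np.mean(rranks) * 100

scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2]])
targets = np.array([1, 2])   # item 1 is ranked 1st; item 2 is ranked 3rd
recall, mrr = recall_mrr_at_k(scores, targets, k=2)    # both 50.0
```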
def case_study(model, loader, device, n_node):
model.eval()
for i, batch in enumerate(loader):
sc, ss, sg, mg, alpha_s, alpha_g = model(batch.to(device))
targets = batch.y - 1
scs = sc.topk(n_node)[1].detach().cpu().numpy()
sss = ss.topk(n_node)[1].detach().cpu().numpy()
sgs = sg.topk(n_node)[1].detach().cpu().numpy()
mgs = mg.detach().cpu().numpy()
targets = targets.detach().cpu().numpy()
# batch * top_k
for sc, ss, sg, ms, a_s, a_g, target in zip(scs, sss, sgs, mgs, alpha_s, alpha_g, targets):
rc = np.where(sc == target)[0][0] + 1
rs = np.where(ss == target)[0][0] + 1
rg = np.where(sg == target)[0][0] + 1
print("rank c:", rc, "rank s:", rs, "rank g:", rg, "gate:", ms)
print("att s:", a_s, "att g:", a_g)
# + [markdown] id="OWHZhL0fh1W2"
# ## Main
# + id="b-j_jzmgh1Ut"
import os
import argparse
import logging
from tqdm.notebook import tqdm
from torch_geometric.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
# + id="SSGHP_PEjlKP"
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='yoochoose1_64', help='dataset name: diginetica/yoochoose1_64/sample')
parser.add_argument('--batch_size', type=int, default=128, help='input batch size')
parser.add_argument('--hidden_size', type=int, default=100, help='hidden state size')
parser.add_argument('--epoch', type=int, default=15, help='the number of epochs to train for')
parser.add_argument('--lr', type=float, default=0.001, help='learning rate') # [0.001, 0.0005, 0.0001]
parser.add_argument('--lr_dc', type=float, default=0.5, help='learning rate decay rate')
parser.add_argument('--lr_dc_step', type=int, default=4, help='the number of steps after which the learning rate decays')
parser.add_argument('--l2', type=float, default=1e-5, help='l2 penalty') # [0.001, 0.0005, 0.0001, 0.00005, 0.00001]
parser.add_argument('--top_k', type=int, default=20, help='top K indicator for evaluation')
parser.add_argument('--negative_slope', type=float, default=0.2, help='negative_slope')
parser.add_argument('--gat_dropout', type=float, default=0.6, help='dropout rate in gat')
parser.add_argument('--heads', type=int, default=8, help='gat heads number')
parser.add_argument('--num_filters', type=int, default=2, help='number of CNN fusing filters')
parser.add_argument('--using_represent', type=str, default='comb', help='comb, h_s, h_group')
parser.add_argument('--predict', type=bool, default=False, help='load a saved model and only run evaluation')
parser.add_argument('--item_fusing', type=bool, default=True, help='fuse intra- and inter-session item embeddings')
parser.add_argument('--random_seed', type=int, default=24, help='random seed')
parser.add_argument('--id', type=int, default=120, help='id')
opt = parser.parse_args(args={})
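# Note: `type=bool` arguments such as `--predict` and `--item_fusing` above only
# behave as intended when set programmatically; from a real command line,
# argparse feeds any non-empty string (even "False") through `bool()`, which
# yields True. A common workaround is an explicit parser (a sketch, not part of
# the original script):

```python
import argparse

def str2bool(v):
    # bool('False') == True, so parse the text explicitly instead
    if isinstance(v, bool):
        return v
    return v.lower() in ('yes', 'true', 't', '1')

demo = argparse.ArgumentParser()
demo.add_argument('--predict', type=str2bool, default=False)
args_false = demo.parse_args(['--predict', 'False'])
args_true = demo.parse_args(['--predict', 'true'])
```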
def main():
torch.manual_seed(opt.random_seed)
torch.cuda.manual_seed(opt.random_seed)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# device = torch.device('cpu')
cur_dir = os.getcwd()
train_dataset = MultiSessionsGraph(cur_dir + '/datasets/' + opt.dataset, phrase='train', knn_phrase='neigh_data_'+str(opt.id))
train_loader = DataLoader(train_dataset, batch_size=opt.batch_size, shuffle=True)
test_dataset = MultiSessionsGraph(cur_dir + '/datasets/' + opt.dataset, phrase='test', knn_phrase='neigh_data_'+str(opt.id))
test_loader = DataLoader(test_dataset, batch_size=opt.batch_size, shuffle=False)
log_dir = cur_dir + '/log/' + str(opt.dataset) + '/' + time.strftime(
"%Y-%m-%d %H:%M:%S", time.localtime())
if not os.path.exists(log_dir):
os.makedirs(log_dir)
writer = SummaryWriter(log_dir)
if opt.dataset == 'cikm16':
n_node = 43097
elif opt.dataset == 'yoochoose1_64':
n_node = 17400
else:
n_node = 309
model = GraphModel(opt, n_node=n_node).to(device)
multigraph_parameters = list(map(id, model.group_graph.parameters()))
srgnn_parameters = (p for p in model.parameters() if id(p) not in multigraph_parameters)
parameters = [{"params": model.group_graph.parameters(), "lr": 0.001}, {"params": srgnn_parameters}]
# best 0.1
lambda1 = lambda epoch: 0.1 ** (epoch // 3)
lambda2 = lambda epoch: 0.1 ** (epoch // 3)
optimizer = torch.optim.Adam(parameters, lr=opt.lr, weight_decay=opt.l2)
#scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=opt.lr_dc_step, gamma=opt.lr_dc)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
if not opt.predict:
best_result20 = [0, 0]
best_epoch20 = [0, 0]
best_result10 = [0, 0]
best_epoch10 = [0, 0]
best_result5 = [0, 0]
best_epoch5 = [0, 0]
for epoch in range(opt.epoch):
scheduler.step(epoch)
print("Epoch ", epoch)
forward(model, train_loader, device, writer, epoch, top_k=opt.top_k, optimizer=optimizer, train_flag=True)
with torch.no_grad():
mrr20, hit20, mrr10, hit10, mrr5, hit5 = forward(model, test_loader, device, writer, epoch, top_k=opt.top_k, train_flag=False)
if hit20 >= best_result20[0]:
best_result20[0] = hit20
best_epoch20[0] = epoch
# torch.save(model.state_dict(), log_dir+'/best_recall_params.pkl')
if mrr20 >= best_result20[1]:
best_result20[1] = mrr20
best_epoch20[1] = epoch
if hit10 >= best_result10[0]:
best_result10[0] = hit10
best_epoch10[0] = epoch
# torch.save(model.state_dict(), log_dir+'/best_recall_params.pkl')
if mrr10 >= best_result10[1]:
best_result10[1] = mrr10
best_epoch10[1] = epoch
# torch.save(model.state_dict(), log_dir+'/best_mrr_params.pkl')
if hit5 >= best_result5[0]:
best_result5[0] = hit5
best_epoch5[0] = epoch
# torch.save(model.state_dict(), log_dir+'/best_recall_params.pkl')
if mrr5 >= best_result5[1]:
best_result5[1] = mrr5
best_epoch5[1] = epoch
print('Best Result:')
print('\tMrr@%d:\t%.4f\tEpoch:\t%d' % (20, best_result20[1], best_epoch20[1]))
print('\tRecall@%d:\t%.4f\tEpoch:\t%d\n' % (20, best_result20[0], best_epoch20[0]))
print('\tMrr@%d:\t%.4f\tEpoch:\t%d' % (opt.top_k, best_result10[1], best_epoch10[1]))
print('\tRecall@%d:\t%.4f\tEpoch:\t%d\n' % (opt.top_k, best_result10[0], best_epoch10[0]))
print('\tMrr@%d:\t%.4f\tEpoch:\t%d' % (5, best_result5[1], best_epoch5[1]))
print('\tRecall@%d:\t%.4f\tEpoch:\t%d' % (5, best_result5[0], best_epoch5[0]))
print("-"*20)
# print_txt(log_dir, opt, best_result, best_epoch, opt.top_k, note, save_config=True)
else:
log_dir = 'log/cikm16/2019-08-19 14:27:33'
model.load_state_dict(torch.load(log_dir+'/best_mrr_params.pkl'))
        mrr20, hit20, mrr10, hit10, mrr5, hit5 = forward(model, test_loader, device, writer, 0, top_k=opt.top_k, train_flag=False)
        best_result = [hit20, mrr20]
        best_epoch = [0, 0]
# print_txt(log_dir, opt, best_result, best_epoch, opt.top_k, save_config=False)
if __name__ == '__main__':
main()
# Source notebook: _docs/nbs/T919629-DGTN-on-Sample-data-in-PyTorch.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from Model.Model import DNA_Channel_Model
from Model.config import DEFAULT_PASSER, TM_NGS, TM_NNP
from Encode.Helper_Functions import preprocess, rs_decode, dna_to_int_array, load_dna
from Encode.DNAFountain import DNAFountain, Glass
from Analysis.Analysis import inspect_distribution, save_simu_result, dna_chunk
from Analysis.Fountain_analyzer import error_profile, FT_Analyzer
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import gumbel_r, poisson
import numpy as np
import logging
logging.getLogger().setLevel(logging.CRITICAL)
# plt.rcParams['savefig.dpi'] = 300
plt.rcParams['figure.dpi'] = 300
# %load_ext autoreload
# %autoreload 2
def hist(data, label='', color='r'):
    sns.distplot(data, hist=False, bins=10, color=color, kde_kws={"shade": True}, label=label)
# -
# # Distributions of read lines & loss + error
# ## 200 Repeats to determine the distribution prior
# +
file_name = 'lena.jpg'
arg = DEFAULT_PASSER
arg.syn_depth = 15
arg.seq_depth = 5
arg.syn_sub_prob = 0.002
Model = DNA_Channel_Model(None,DEFAULT_PASSER)
FA = FT_Analyzer(file_name, Model, 0.3, 2)
for i in range(200):
print(f'[{i+1}/200] experiment')
FA.run()
# +
data = np.array(FA.decode_lines)
hist(data,'Real',color = 'blue')
loc, scale = gumbel_r.fit(data)
X = np.linspace(loc-4*scale, loc+6*scale, 100)
Y = gumbel_r.pdf(X, loc, scale)
plt.plot(X,Y,label = 'gumbel_r',color = 'black', linestyle = '-',linewidth = 0.7)
loc, scale = gumbel_r.fit(data[:10])
Y = gumbel_r.pdf(X, loc, scale)
plt.plot(X,Y,label = 'gumbel_r-10',color = 'black', linestyle = '-.',linewidth = 0.7)
# plt.xlabel('number of droplets for decoding')
# plt.ylabel('frequency')
# plt.tick_params(labelsize=9)
plt.legend()
# -
# ## Number Distribution of droplets for successful decoding
data = np.array(FA.decode_lines)
import scipy.stats
for dist in ['norm', 'expon', 'logistic', 'gumbel', 'gumbel_l', 'gumbel_r', 'extreme1']:
    print(scipy.stats.anderson(data, dist=dist))
# +
hist(data,'Real',color = 'blue')
loc, scale = gumbel_r.fit(data)
X = np.linspace(loc-4*scale, loc+6*scale, 100)
Y = gumbel_r.pdf(X, loc, scale)
plt.plot(X,Y,label = 'gumbel_r',color = 'black', linestyle = '-',linewidth = 0.7)
loc, scale = gumbel_r.fit(data[:10])
Y = gumbel_r.pdf(X, loc, scale)
plt.plot(X,Y,label = 'gumbel_r-10',color = 'black', linestyle = '-.',linewidth = 0.7)
# plt.xlabel('number of droplets for decoding')
# plt.ylabel('frequency')
# plt.tick_params(labelsize=9)
plt.legend()
# -
# ## Number Distribution of loss + fail
# +
data = np.array(FA.fail_nums)
hist(data,'Real')
u, sigma = np.mean(data), np.std(data)
X = np.arange(int(u-4*sigma),int(u+4*sigma))
print(u)
NY = poisson.pmf(X,u)
plt.plot(X,NY,label = 'poisson',color = 'black', linestyle = '-',linewidth = 0.7)
u, sigma = np.mean(data[0:10]), np.std(data[0:10])
print(u)
NY = poisson.pmf(X,u)
plt.plot(X,NY,label = 'poisson-10',color = 'black', linestyle = '-.',linewidth = 0.7)
plt.legend()
# -
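# The same fit-and-compare pattern can be exercised on synthetic data. A
# self-contained sketch — the loc/scale/rate values below are arbitrary, not
# taken from the experiments above:

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)

# Gumbel: draw a sample with known parameters, then recover them by MLE fit
sample = gumbel_r.rvs(loc=600, scale=20, size=200, random_state=rng)
loc, scale = gumbel_r.fit(sample)

# Poisson: the MLE of the rate is just the sample mean
counts = rng.poisson(lam=7, size=200)
lam = counts.mean()
```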
# # Choose Proper Alpha and RS length
# FA.compute_overlap(0.2,True,True)
FA.fail_prob(0.25,True,True)
FA.fail_prob(0.21,True,False)
# FA.compute_overlap(0.3,True,False)
en = inspect_distribution(FA.out_dnas)
[sum([n == th for n in en]) for th in [1,2,3,4,5]]
_ = FA.alpha_scan(points = 50, color = 'black')
# Source notebook: Notebook_Encoding Design.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 1 - Python Basics
#
# ***
#
# ## The Python Interface
#
# Experiment in the IPython Shell; type 5 / 8, for example.
# Add another line of code to the Python script on the top-right (not in the Shell): print(7 + 10).
# +
# Division
print(5 / 8)
# + active=""
# Adding comment
# +
# Addition
print(7 + 10)
# -
# ## Python as a calculator
# Addition, subtraction
print(5 + 5)
print(5 - 5)
# Multiplication, division, modulo, and exponentiation
print(3 * 5)
print(10 / 2)
print(18 % 7)
print(4 ** 2)
# How much is your $100 worth after 7 years? (10% return each year)
print(100*1.1**7)
# ## Variable Assignment
# +
# Create a variable savings
savings = 100
# Print out savings
print(savings)
# -
# ## Calculations with variables
# +
# Create a variable savings
savings = 100
# Create a variable growth_multiplier
growth_multiplier = 1.1
# Calculate result
result = savings*growth_multiplier**7
# Print out result
print(result)
# -
# ## Other variable types
# +
# Create a variable desc
desc = "compound interest"
# Create a variable profitable
profitable = True
# -
# ## Guess the type
type(desc)
type(profitable)
# ## Operations with other types
# +
savings = 100
growth_multiplier = 1.1
desc = "compound interest"
# Assign product of growth_multiplier and savings to year1
year1 = savings*growth_multiplier
# Print the type of year1
print(type(year1))
# Assign sum of desc and desc to doubledesc
doubledesc = desc + desc
# Print out doubledesc
print(doubledesc)
# -
# ## Type conversion
# +
# Definition of savings and result
savings = 100
result = 100 * 1.10 ** 7
# Fix the printout
print("I started with $" + str(savings) + " and now have $" + str(result) + ". Awesome!")
# Definition of pi_string
pi_string = "3.1415926"
# Convert pi_string into float: pi_float
pi_float = float(pi_string)
# -
# Source notebook: Introduction_to_Python/Ch1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/robin9804/Jupyter_project/blob/master/CCR_matrix.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="RgWkhYml36YM" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
import cmath
from math import *
# + id="5EYsFsCW4C8f" colab_type="code" colab={}
#parameter
n = 1.46  # refractive index
F = np.array([[-1,0],[0,1]])
ACB = [150, -60, 60, -90]
ABC = [150, 60, -60, 30]
BAC = [-90, -60, 60, 30]
BCA = [-90, 60, -60, 150]
CBA = [30, -60, 60, 150]
CAB = [30, 60, -60, -90]
# + id="CrOWhf2gr7mW" colab_type="code" colab={}
def input_pol(ang):
'''
define Ep and Es with input polarization angle
'''
Ex = cos(ang)
Ey = sin(ang)
E = np.array([[Ex],[Ey]])
return E
# the input rays are assumed to have identical phase
def rotate(ang):
'''
radian to rotation
'''
return np.array([[cos(ang),sin(ang)],[-sin(ang),cos(ang)]])
# the angle passed to the phase-shift functions is the angle of incidence
def PS_s(ang):
'''
    phase shift for the s polarization
'''
A = (n*sin(ang))**2
y = sqrt(A-1)
x = n*cos(ang)
delta = atan2(y,x)
return 2*delta
def PS_p(ang):
'''
    phase shift for the p polarization
'''
y = n*sqrt((n*sin(ang))**2 -1)
x = cos(ang)
delta = atan2(y,x)
return 2*delta
def MP(ang):
'''
matrix P determine by phase shift P, S
'''
m1 = exp(PS_s(ang)*1j)
m2 = exp(1j*PS_p(ang))
return np.array([[m1,0],[0,m2]])
def Mat_TR(Path,ang):
r0 = np.dot(MP(ang),rotate(Path[0]))
r1 = np.dot(MP(ang),rotate(Path[1]))
r2 = np.dot(MP(ang),rotate(Path[2]))
r3 = np.dot(F,rotate(Path[3]))
return np.dot(np.dot(r3,r2),np.dot(r1,r0))
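# As a quick self-contained check of the rotation convention used above:
# rotating the frame by 45° maps 45° linear polarization onto the horizontal
# axis. The sketch redefines `rotate` locally so it runs on its own:

```python
import numpy as np
from math import cos, sin, pi

def rotate(ang):
    # same Jones rotation convention as rotate() defined above
    return np.array([[cos(ang), sin(ang)], [-sin(ang), cos(ang)]])

E45 = np.array([[cos(pi / 4)], [sin(pi / 4)]])  # 45-degree linear polarization
E0 = rotate(pi / 4) @ E45                       # rotate the frame by 45 degrees
# E0 is (numerically) the horizontal state [[1], [0]]
```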
# + id="8_CgHWh8Ser_" colab_type="code" outputId="0d18c03f-bda7-4413-d291-436c8feaa880" colab={"base_uri": "https://localhost:8080/", "height": 35}
arr = np.array([[1],[2]])
print(arr[[0],[0]])
# + id="ajkHEUZzSlzX" colab_type="code" colab={}
def read_signal(E, A_pol):
if A_pol == 0:
JM = np.array([[1,0],[0,0]])
elif A_pol == 45:
JM = np.array([[0.5,0.5],[0.5,0.5]])
elif A_pol == 90:
JM = np.array([[0,0],[0,1]])
elif A_pol == 135:
JM = np.array([[0.5,-0.5],[-0.5,0.5]])
result = np.dot(JM,E)
Ex = abs(result[[0],[0]])
Ey = abs(result[[1],[0]])
return sqrt(Ex**2 + Ey**2)
# + id="PL7DyE2oaMLZ" colab_type="code" outputId="da574c5d-0f91-4db4-f340-03f5562269a9" colab={"base_uri": "https://localhost:8080/", "height": 54}
E = input_pol(int(input()))
read_signal(E,135)
# + id="OBUP8JYOajdk" colab_type="code" outputId="8acaba5e-6866-4841-f95a-21f1060420b2" colab={"base_uri": "https://localhost:8080/", "height": 287}
def polcam(ang):
pc = np.zeros((2,2))
init = input_pol(ang)
pc[[0],[0]] = read_signal(init,90)
pc[[1],[0]] = read_signal(init,45)
pc[[0],[1]] = read_signal(init,135)
pc[[1],[1]] = read_signal(init,0)
return pc
test = polcam(30)
plt.imshow(test)
plt.colorbar()
# + id="CQzW4HOZcl9L" colab_type="code" outputId="48843e2c-35d3-4094-f255-8f394aee60bc" colab={"base_uri": "https://localhost:8080/", "height": 287}
test = polcam(45)
plt.imshow(test,cmap='binary_r')
plt.colorbar()
# + id="GExwNgxgiti4" colab_type="code" colab={}
# Source notebook: CCR/CCR_matrix.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Kernel Support Vector Machines
# - Linear decision-hyperplane classifiers such as the perceptron and the linear SVM cannot solve the XOR problem.
#
# ### 1. Basis functions: nonlinear discriminant models
# - linear: $\hat{y} = w^Tx$
# - nonlinear: $\hat{y} = w^T\phi(x)$
#
#
# - original $D$-dimensional input vector $x$
# - transformed $M$-dimensional feature vector $\phi(x)$
#
# $$
# \phi(\cdot): {R}^D \rightarrow {R}^M \\
# \text{vector x} = (x_1, x_2, \cdots, x_D) \rightarrow \phi(x)=(\phi_1(x), \phi_2(x), \cdots, \phi_M(x))
# $$
# ### 2. The kernel trick
# - The kernel $k(x, y) = x^Ty$ measures the similarity between two data points:
#     - it is largest when x and y are the same vector
#     - it gets smaller as the distance between them grows
#
# #### 1. Objective function
#
# $$
# L = \sum_{n=1}^N a_n - \dfrac{1}{2}\sum_{n=1}^N \sum_{m=1}^N a_n a_m y_n y_m x_n^Tx_m \\
# L^{'} = \sum_{n=1}^N a_n - \dfrac{1}{2}\sum_{n=1}^N \sum_{m=1}^N a_n a_m y_n y_m \phi(x_n)^T \phi(x_m)
# $$
#
# #### 2. Prediction model
#
# $$
# y = w^Tx - w_0 = \sum_{n=1}^N a_n y_n x_n^Tx - w_0 \\
# y^{'} = w^T\phi(x) - w_0 = \sum_{n=1}^N a_n y_n \phi(x_n)^T \phi(x) - w_0
# $$
#
# #### 3. Summary
# $$
# k(x_i, x_j) = \phi(x_i)^T \phi(x_j)
# $$
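# This identity is easy to verify numerically. A quick check for the degree-2
# polynomial map $\phi(x) = (x_1^2, \sqrt{2}x_1x_2, x_2^2)$, for which
# $\phi(x)^T\phi(y) = (x^Ty)^2$:

```python
import numpy as np

def phi(x):
    # explicit degree-2 feature map
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
lhs = np.dot(x, y) ** 2        # kernel evaluated in the original space
rhs = np.dot(phi(x), phi(y))   # inner product in feature space
# lhs == rhs == 16.0 (up to floating-point error)
```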
# # 3. Kernel examples
# - Linear kernel (plain linear SVM)
#
# $$
# k(x_1, x_2) = x_1^Tx_2
# $$
#
# - Polynomial kernel
#
# $$
# k(x_1, x_2) = \{ \gamma(x_1^Tx_2) + \theta \} ^d
# $$
#
# - RBF (Radial Basis Function) = Gaussian kernel
#
# $$
# k(x_1, x_2) = \exp(-\gamma \| x_1 - x_2 \| ^2)
# $$
#
# - Sigmoid kernel
#
# $$
# k(x_1, x_2) = \tanh \{\gamma (x_1^T x_2) + \theta \}
# $$
# # 4. Code: `SVC`
# - `kernel = "linear"`: linear SVM
# - `kernel = "poly"`: polynomial kernel
#     - `gamma`: $\gamma$
#     - `coef0`: $\theta$
#     - `degree`: $d$
# - `kernel = "rbf"` (default): RBF kernel
#     - `gamma`: $\gamma$
# - `kernel = "sigmoid"`: sigmoid kernel
#     - `gamma`: $\gamma$
#     - `coef0`: $\theta$
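# The four kernels above are straightforward to write out in plain NumPy. The
# default gamma/theta/d values below are illustrative only, not scikit-learn's
# defaults:

```python
import numpy as np

def linear_kernel(x1, x2):
    return np.dot(x1, x2)

def poly_kernel(x1, x2, gamma=1.0, theta=1.0, d=2):
    return (gamma * np.dot(x1, x2) + theta) ** d

def rbf_kernel(x1, x2, gamma=1.0):
    # similarity decays with squared Euclidean distance
    return np.exp(-gamma * np.linalg.norm(x1 - x2) ** 2)

def sigmoid_kernel(x1, x2, gamma=1.0, theta=0.0):
    return np.tanh(gamma * np.dot(x1, x2) + theta)

x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthogonal unit vectors
```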
# #### XOR
# +
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

np.random.seed(99)
X_xor = np.random.randn(200, 2)
X1 = X_xor[:, 0]
X2 = X_xor[:, 1]
# label points in quadrants 1 & 3 vs. quadrants 2 & 4
Y_xor = np.logical_xor(X1 > 0, X2 > 0)
Y_xor = np.where(Y_xor, 1, 0)
# scatter plot
plt.scatter(X1[Y_xor == 1], X2[Y_xor == 1], c='b', marker='o', label='class1', s=50)
plt.scatter(X1[Y_xor == 0], X2[Y_xor == 0], c='r', marker='s', label='class0', s=50)
plt.legend()
plt.xlabel("x1")
plt.ylabel("x2")
plt.title("XOR classification")
plt.show()
# -
# # Polynomial Kernel
# ### Function Transform
# $$
# \text{vector x = } (x_1, x_2) \rightarrow \phi(x) = (x_1^2, \sqrt{2}x_1x_2, x_2^2)
# $$
X = np.arange(6).reshape(3, 2)
X
x1 = X[:, 0]
x2 = X[:, 1]
x1, x2
# +
from sklearn.preprocessing import FunctionTransformer
def basis(X):
x1 = X[:, 0]
x2 = X[:, 1]
return np.vstack([x1**2, np.sqrt(2)*x1*x2, x2**2 ]).T
FunctionTransformer(basis).fit_transform(X)
# -
# ### XOR -> transformation -> $\phi(x)$
# In the transformed $\phi$ space the two classes can be separated by a hyperplane.
# +
X_xor2 = FunctionTransformer(basis).fit_transform(X_xor)
x1 = X_xor2[:, 0]
x2 = X_xor2[:, 1]
x3 = X_xor2[:, 2]
plt.scatter(x1[Y_xor == 1], x2[Y_xor == 1], c='b', marker='o', s=50)
plt.scatter(x1[Y_xor == 0], x2[Y_xor == 0], c='r', marker='s', s=50)
plt.ylim(-6, 6)
plt.title("The distribution of data after the transformation of space")
plt.xlabel("$\phi_1$")
plt.ylabel("$\phi_2$")
plt.show()
# -
# ### Solution
def plot_xor(X, y, mod, title, xmin=-3, xmax=3, ymin=-3, ymax=3):
x1 = X[:, 0]
x2 = X[:, 1]
XX, YY = np.meshgrid(np.arange(xmin, xmax, (xmax-xmin)/1000),
np.arange(ymin, ymax, (ymax-ymin)/1000))
ZZ = np.reshape(mod.predict(\
np.array([XX.ravel(), YY.ravel()]).T), XX.shape)
plt.contourf(XX, YY, ZZ, cmap=mpl.cm.Paired_r, alpha=0.5)
plt.scatter(x1[y == 1], x2[ y == 1], c='b', marker='o', label='class1', s=50)
    plt.scatter(x1[y == 0], x2[ y == 0], c='r', marker='s', label='class0', s=50)
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.title(title)
plt.xlabel("x1")
plt.ylabel("x2")
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
basis_mod = Pipeline([("basis", FunctionTransformer(basis)),
("svc", SVC(kernel="linear"))]).fit(X_xor, Y_xor)
plot_xor(X_xor, Y_xor, basis_mod, "XOR applied by kernel SVM")
plt.show()
# # RBF
# +
x1 = 0
x2 = np.linspace(-7, 7, 100)
def rbf(x1, x2, gamma):
return np.exp(-gamma * np.abs(x2 -x1) ** 2)
plt.figure(figsize=(8, 4))
plt.plot(x2, rbf(x1, x2, gamma=0.05), label='gamma=0.05')
plt.plot(x2, rbf(x1, x2, gamma=1), label='gamma=1')
plt.plot(x2, rbf(x1, x2, gamma=15), label='gamma=15')
plt.xlim(-4, 4)
plt.grid(False)
plt.xlabel("x2-x1")
plt.legend()
plt.show()
# -
plt.figure(figsize=(8, 8))
plt.subplot(221)
plot_xor(X_xor, Y_xor, SVC(kernel="rbf", gamma=2).fit(X_xor, Y_xor), "RBF(gamma=2)")
plt.subplot(222)
plot_xor(X_xor, Y_xor, SVC(kernel="rbf", gamma=10).fit(X_xor, Y_xor), "RBF(gamma=10)")
plt.subplot(223)
plot_xor(X_xor, Y_xor, SVC(kernel="rbf", gamma=50).fit(X_xor, Y_xor), "RBF(gamma=50)")
plt.subplot(224)
plot_xor(X_xor, Y_xor, SVC(kernel="rbf", gamma=100).fit(X_xor, Y_xor), "RBF(gamma=100)")
plt.tight_layout()
plt.show()
# # Solutions
def plot_iris(X, y, mod, title, xmin=-2.5, xmax=2.5, ymin=-2.5, ymax=2.5):
x1 = X[:, 0]
x2 = X[:, 1]
XX, YY = np.meshgrid(np.arange(xmin, xmax, (xmax-xmin)/1000),
np.arange(ymin, ymax, (ymax-ymin)/1000))
ZZ = np.reshape(mod.predict(\
np.array([XX.ravel(), YY.ravel()]).T), XX.shape)
plt.contourf(XX, YY, ZZ, cmap=mpl.cm.Paired_r, alpha=0.5)
plt.scatter(x1[y == 0], x2[ y == 0], c='r', marker='o', label='class1', s=50)
plt.scatter(x1[y == 1], x2[ y == 1], c='g', marker='s', label='class2', s=50)
plt.scatter(x1[y == 2], x2[ y == 2], c='b', marker='^', label='class3', s=50)
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.xlabel("petal length")
plt.ylabel("petal width")
plt.title(title)
# +
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
iris = load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
mod1 = SVC(kernel='linear').fit(X_train_std, y_train)
mod2 = SVC(kernel='poly', gamma=10, C=1.0).fit(X_train_std, y_train)
mod3 = SVC(kernel='rbf', gamma=1, C=1.0).fit(X_train_std, y_train)
plt.figure(figsize=(8, 12))
plt.subplot(311)
plot_iris(X_test_std, y_test, mod1, "linear SVC")
plt.subplot(312)
plot_iris(X_test_std, y_test, mod2, "Poly SVC")
plt.subplot(313)
plot_iris(X_test_std, y_test, mod3, "RBF SVC")
plt.tight_layout()
plt.show()
# Source notebook: 06.Math/17.6.3 Kernel SVM.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# <h1 style="font-size:35px;
# color:black;
# ">Lab 1 Quantum Circuits</h1>
# -
# Prerequisite
# - [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)
# - [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)
#
# Other relevant materials
# - [Access IBM Quantum Systems](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems)
# - [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)
# - [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)
# - [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)
# - [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
# <h2 style="font-size:24px;">Part 1: Classical logic gates with quantum circuits</h2>
#
# <br>
# <div style="background: #E8E7EB; border-radius: 5px;
# -moz-border-radius: 5px;">
# <p style="background: #800080;
# border-radius: 5px 5px 0px 0px;
# padding: 10px 0px 10px 10px;
# font-size:18px;
# color:white;
# "><b>Goal</b></p>
# <p style=" padding: 0px 0px 10px 10px;
# font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .</p>
# </div>
#
# An implementation of the `NOT` gate is provided as an example.
def NOT(inp):
    """A NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
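# For reference, the classical truth tables that the circuit functions in this
# lab should reproduce — plain Python, no simulator needed. Inputs and outputs
# are the same '0'/'1' strings the quantum versions use:

```python
def xor(a, b):
    return str(int(a != b))

def and_(a, b):
    return str(int(a == '1' and b == '1'))

def nand(a, b):
    return str(int(not (a == '1' and b == '1')))

def or_(a, b):
    return str(int(a == '1' or b == '1'))

# map each input pair to (XOR, AND, NAND, OR)
table = {(a, b): (xor(a, b), and_(a, b), nand(a, b), or_(a, b))
         for a in '01' for b in '01'}
```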
# <h3 style="font-size: 20px">📓 XOR gate</h3>
#
# Takes two binary strings as input and gives one as output.
#
# The output is '0' when the inputs are equal and '1' otherwise.
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
        inp1 (str): Input 1, encoded in qubit 0.
        inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
# <h3 style="font-size: 20px">📓 AND gate</h3>
#
# Takes two binary strings as input and gives one as output.
#
# The output is `'1'` only when both the inputs are `'1'`.
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inp1 (str): Input 1, encoded in qubit 0.
inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
# <h3 style="font-size: 20px">📓 NAND gate</h3>
#
# Takes two binary strings as input and gives one as output.
#
# The output is `'0'` only when both the inputs are `'1'`.
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
# <h3 style="font-size: 20px">📓 OR gate</h3>
#
# Takes two binary strings as input and gives one as output.
#
# The output is '1' if either input is '1'.
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inp1 (str): Input 1, encoded in qubit 0.
inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
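# Since the XOR, AND, NAND, and OR programs above are left as exercises, a purely classical reference implementation is handy for checking the measured quantum outputs against the intended truth tables. The helper names below are our own, not part of Qiskit.

```python
# Classical reference gates: same '0'/'1' string convention as the
# quantum functions above, so outputs can be compared directly.
def classical_NOT(inp):
    return '1' if inp == '0' else '0'

def classical_XOR(inp1, inp2):
    return '1' if inp1 != inp2 else '0'

def classical_AND(inp1, inp2):
    return '1' if inp1 == inp2 == '1' else '0'

def classical_NAND(inp1, inp2):
    return '0' if inp1 == inp2 == '1' else '1'

def classical_OR(inp1, inp2):
    return '1' if '1' in (inp1, inp2) else '0'

# Print the full truth tables
for a in '01':
    for b in '01':
        print(a, b, '->', classical_XOR(a, b), classical_AND(a, b),
              classical_NAND(a, b), classical_OR(a, b))
```

Each quantum gate function above should reproduce the corresponding classical truth table on a noiseless simulator.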
# <h2 style="font-size:24px;">Part 2: AND gate on Quantum Computer</h2>
# <br>
# <div style="background: #E8E7EB; border-radius: 5px;
# -moz-border-radius: 5px;">
# <p style="background: #800080;
# border-radius: 5px 5px 0px 0px;
# padding: 10px 0px 10px 10px;
# font-size:18px;
# color:white;
# "><b>Goal</b></p>
# <p style=" padding: 0px 0px 10px 10px;
# font-size:16px;">Execute AND gate on a real quantum system and learn how the noise properties affect the result.</p>
# </div>
#
# In Part 1 you made an `AND` gate from quantum gates and executed it on the simulator. Here in Part 2 you will do it again, but this time run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy.
#
# The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with every process in a quantum circuit: preparing the initial state, applying gates, and measuring the qubits. Noise levels can also vary between different gates and between different qubits; `cx` gates are typically noisier than any single-qubit gate.
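# To build intuition for how a per-shot error rate degrades measured statistics, here is a minimal classical toy model (pure Python, not a Qiskit simulation): each shot returns the ideal answer unless an independent bit-flip error occurs with probability `p_flip`. The function name and parameters are our own illustration.

```python
import random

def noisy_not(inp, p_flip=0.05, shots=10000, seed=42):
    """Toy classical model of a noisy NOT gate: each shot produces the
    ideal answer, then a bit-flip error occurs with probability p_flip.
    Returns the fraction of shots that came out correct."""
    rng = random.Random(seed)
    ideal = '1' if inp == '0' else '0'
    correct = 0
    for _ in range(shots):
        out = ideal
        if rng.random() < p_flip:          # noise flips the output bit
            out = '0' if out == '1' else '1'
        if out == ideal:
            correct += 1
    return correct / shots

print(noisy_not('0', p_flip=0.05))   # roughly 0.95
```

With many shots, the fraction of correct outcomes concentrates near `1 - p_flip`, which is why real-device results below are reported as success probabilities rather than single deterministic bits.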
#
# Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access yet, you can sign up [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems).
#
# Now that you are ready to use the real quantum computer, let's begin.
# <h3 style="font-size: 20px">Step 1. Choosing a device</h3>
# First load the account from the credentials saved on disk by running the following cell:
# + tags=["uses-hardware"]
IBMQ.load_account()
# -
# After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
# + tags=["uses-hardware"]
IBMQ.providers()
# -
# Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
# + tags=["uses-hardware"]
provider = IBMQ.get_provider('ibm-q')
provider.backends()
# -
# Among these options, you may pick one of the systems to run your circuits on. All except `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates.
#
# Upon executing the following cell you will be presented with a widget that displays all of the information about the backend of your choice. You can obtain the information you need by clicking on the tabs. For example, backend status, number of qubits, and connectivity are under the `configuration` tab, whereas the `Error Map` tab reveals the latest noise information for the system.
# + tags=["uses-hardware"]
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_lima')
backend_ex
# -
# For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except `ibmq_armonk`. Below is an example of how to filter backends, where we filter by number of qubits and remove simulators:
# + tags=["uses-hardware"]
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 3 and not x.configuration().simulator
and x.status().operational==True)
backends
# -
# One convenient way to choose a system is to use the `least_busy` function, which returns the backend with the fewest jobs in the queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest-error-rate systems are the most popular.
# + tags=["uses-hardware"]
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
# -
# Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system produces results with the least error can vary from day to day.
#
# In this exercise, we select one of the IBM Quantum systems: `ibmq_quito`.
# + tags=["uses-hardware"]
# run this cell
backend = provider.get_backend('ibmq_quito')
# -
# <h3 style="font-size: 20px">Step 2. Define AND function for a real device</h3>
#
# We now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration).
# <h4 style="font-size: 16px">Qiskit Transpiler</h4>
# It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select, so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates, so that the two-qubit gates in the circuit are mapped onto pairs of qubits on the device that can actually perform them. The following cell shows the AND gate, represented as a Toffoli gate, decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware.
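# To see why device connectivity matters, here is a small sketch (pure Python, not the Qiskit transpiler): on a coupling map, a CNOT between non-adjacent qubits needs roughly one SWAP per extra edge on the shortest path between them. The T-shaped map below is modeled on `ibmq_quito`'s layout; treat the exact edges as an assumption.

```python
from collections import deque

# A 5-qubit T-shaped coupling map (edges are the qubit pairs that
# support a direct CNOT), similar in shape to ibmq_quito's.
coupling = [(0, 1), (1, 2), (1, 3), (3, 4)]

def distance(coupling, a, b):
    """Shortest path length between qubits a and b via BFS."""
    adj = {}
    for u, v in coupling:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # disconnected

def min_swaps(coupling, a, b):
    """Rough lower bound on SWAPs needed before a CNOT between a and b."""
    return distance(coupling, a, b) - 1

print(min_swaps(coupling, 0, 1))  # 0: directly connected
print(min_swaps(coupling, 0, 4))  # 2: path 0-1-3-4
```

This is why a good `initial_layout` (chosen below) can noticeably shrink the transpiled circuit: fewer long-distance CNOTs means fewer inserted SWAPs.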
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
# In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length of the input circuits. Note that the addition of swaps to match the device topology and the optimizations that reduce circuit length are at odds with each other. In what follows we will make use of `initial_layout`, which lets us pick the qubits on the device used for the computation, and `optimization_level`, an argument that selects among internal defaults for circuit swap mapping and optimization methods.
#
# You can learn more about transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html).
# Let's modify the AND function from Part 1 for the real system, with the transpile step included.
# + tags=["uses-hardware"]
from qiskit.tools.monitor import job_monitor
# + tags=["uses-hardware"]
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = backend.run(qc_trans, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
# -
# When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline.
# First, examine `ibmq_quito` through the widget by running the cell below.
backend
# <p>📓 Determine a three-qubit initial layout considering the error map and assign it to the list variable <code>layout</code>.</p>
layout =
# <p>📓 Describe the reason for your choice of initial layout.</p>
#
# **your answer:**
# Execute `AND` gate on `ibmq_quito` by running the cell below.
# +
output_all = []
qc_trans_all = []
prob_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans, output = AND(input1, input2, backend, layout)
output_all.append(output)
qc_trans_all.append(qc_trans)
prob = output[str(int( input1=='1' and input2=='1' ))]/8192
prob_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
# -
# <h3 style="font-size: 20px">Step 3. Interpret the result</h3>
# There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (see the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit.
#
# A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output.
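# As a sketch of what `depth()` and `num_nonlocal_gates()` measure, the toy functions below compute both for a circuit given as a list of gates (each a tuple of the qubit indices it acts on), assuming every gate takes one time step. This is a simplified model of the metrics, not Qiskit's implementation.

```python
def circuit_depth(gates, n_qubits):
    """Depth of a circuit: the longest chain of dependent gates,
    where two gates depend on each other if they share a qubit."""
    level = [0] * n_qubits          # deepest layer seen by each qubit
    for qubits in gates:
        d = max(level[q] for q in qubits) + 1
        for q in qubits:
            level[q] = d
    return max(level) if level else 0

def num_nonlocal_gates(gates):
    """Count of multi-qubit gates in the gate list."""
    return sum(1 for qubits in gates if len(qubits) > 1)

# A hypothetical gate list with single-qubit gates and three CNOTs:
gates = [(0,), (1,), (0, 2), (2,), (1, 2), (2,), (0, 2)]
print(circuit_depth(gates, 3), num_nonlocal_gates(gates))  # 6 3
```

Note how gates on disjoint qubits can share a layer, so depth can be much smaller than the raw gate count.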
# <h4 style="font-size: 16px"> Circuit depth and result accuracy</h4>
# Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_quito`, along with their circuit depths and the success probability of producing the correct answer.
print('Transpiled AND gate circuit for ibmq_quito with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[0]) )
qc_trans_all[0].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[1]) )
qc_trans_all[1].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[2]) )
qc_trans_all[2].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[3]) )
qc_trans_all[3].draw('mpl')
# <p>📓 Explain the reason for the dissimilarity of the circuits. Describe the relationship between the properties of a circuit and the accuracy of its outcomes.</p>
# **your answer:**
| content/ch-labs/Lab01_QuantumCircuits.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/duke-sunshine/Algorithmic-Trading/blob/main/Session_2_VWAP.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="FndclTZNEoyA"
# # 1. Import Data
# + [markdown] id="34pCwht6E9Ge"
# 1. Install the Alpha Vantage API
# 2. [Claim your own API Key](https://www.alphavantage.co/support/#api-key)
# 3. [Import data from Time Series Stock APIs by specifying API key and Parameters](https://www.alphavantage.co/documentation/)
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="hfRmiYwYEsBM" outputId="d8b31067-2c76-46c1-8d73-3ef7977d4a2e"
#Install Alpha Vantage Package
# !pip install alpha_vantage
# + id="SBojcuS3FT8L"
# Make useful imports
from alpha_vantage.timeseries import TimeSeries
from alpha_vantage.techindicators import TechIndicators
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# + id="Ihz8cBEPG3Bg"
#Data class
class Data:
def __init__(self,API_key, symbol, interval):
self.API_key = API_key
self.symbol = symbol
self.interval = interval
def import_data(self):
ts = TimeSeries(key=self.API_key, output_format='pandas')
data=ts.get_intraday(self.symbol, interval = self.interval, outputsize = 'full') # We use 5-min interval to determine the time when traders make investment decisions.
data[0].rename(columns={'1. open':'open', '2. high':'high', '3. low':'low', '4. close':'close', '5. volume':'volume'}, inplace = True)
all_df = data[0]
num = int(3/5 * all_df.shape[0])
all_df.sort_index(ascending=True, inplace=True)
ti = TechIndicators(key=self.API_key, output_format='pandas')
vwap_data=ti.get_vwap(self.symbol, interval=self.interval)
vwap_df = vwap_data[0]
all_df = all_df.merge(vwap_df, how='inner', left_on='date', right_index=True)
df = all_df.iloc[num:].copy()
return df
# + colab={"base_uri": "https://localhost:8080/", "height": 237} id="hQrAOYudIOAx" outputId="afcacaa1-35cf-451b-a65d-22e476cb9f2a"
#return Tesla data
TSLA=Data('RQT8G71H1MTA5YZJ','TSLA','5min')
df_TSLA=TSLA.import_data()
df_TSLA.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 237} id="qfBbt1TlIhjR" outputId="a4da40c5-1e3b-420f-88e4-96e978856700"
#return Apple data
AAPL=Data('RQT8G71H1MTA5YZJ','AAPL','5min')
df_AAPL=AAPL.import_data()
df_AAPL.head()
# + [markdown] id="ju5H5mKDEwTc"
# # 2. Generate buy and sell signals with Visualizations
# + [markdown] id="BMitpOU6KqWO"
# The Volume Weighted Average Price formula is given by: $VWAP = \frac{\sum Price * Volume}{\sum Volume}$
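# As a sketch of the formula in pure Python (note that Alpha Vantage's `VWAP` indicator may apply its own intraday session conventions), the cumulative VWAP after each bar is:

```python
def vwap(prices, volumes):
    """Cumulative VWAP after each bar: sum(price*volume) / sum(volume)."""
    out, pv, v = [], 0.0, 0.0
    for p, q in zip(prices, volumes):
        pv += p * q        # running price*volume
        v += q             # running volume
        out.append(pv / v)
    return out

prices = [100.0, 101.0, 99.0]
volumes = [200, 100, 300]
print(vwap(prices, volumes))  # [100.0, 100.333..., 99.666...]
```

Because it is volume-weighted, the heavy third bar pulls the VWAP down much harder than the light second bar pulled it up.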
# + id="h3dE5iNZO0oR"
class signal:
def __init__(self, data, above):
self.data = data
self.above = above
def signals(self):
signals = pd.DataFrame(index=self.data.index)
signals = signals.sort_values(by='date')
signals['signal'] = 0.0
if self.above:
signals['signal'] = np.where(self.data['close'] > self.data['VWAP'], 1.0, 0.0)
else:
signals['signal'] = np.where(self.data['close'] < self.data['VWAP'], 1.0, 0.0)
signals['positions'] = signals['signal'].diff()
signals =signals.dropna()
return signals
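# The core of the `signals` method, on hand-made numbers and without pandas: the signal is 1.0 while the close is above the VWAP line, and differencing it turns each crossing into a single +1.0 (buy) or -1.0 (sell) event.

```python
# Toy close prices crossing a flat VWAP line at 11
close = [10, 12, 13, 9, 8, 11]
vwap_line = [11, 11, 11, 11, 11, 11]

# signal: 1.0 while close > VWAP, else 0.0 (the "above" branch)
signal = [1.0 if c > v else 0.0 for c, v in zip(close, vwap_line)]

# positions: first difference of the signal; None where diff is undefined
positions = [None] + [signal[i] - signal[i - 1] for i in range(1, len(signal))]

print(signal)     # [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
print(positions)  # [None, 1.0, 0.0, -1.0, 0.0, 0.0]
```

Only the crossings generate trades; bars where the price stays on the same side of VWAP produce 0.0 and are ignored by the plotting and portfolio code.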
# + colab={"base_uri": "https://localhost:8080/", "height": 237} id="7Una2GaMZSvv" outputId="5bee0268-1cf9-4eb3-97eb-582f363c5e14"
#return signals for Tesla using VWAP above
VWAP_TSLA = signal(df_TSLA, True)
signals_TSLA = VWAP_TSLA.signals()
signals_TSLA.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 237} id="keMEo53jZzCC" outputId="f9c36aaf-89bb-471f-f212-b29f8b6181f6"
#return signals for Apple using VWAP above
VWAP_AAPL = signal(df_AAPL, True)
signals_AAPL = VWAP_AAPL.signals()
signals_AAPL.head()
# + colab={"base_uri": "https://localhost:8080/"} id="t4cnTzWfeGN5" outputId="354a621a-fe43-4cf3-cd09-4449e27b3077"
test = df_TSLA[['close','VWAP']]
test = test.merge(signals_TSLA,how='inner',left_index=True, right_index=True)
print(test.loc[test.positions == 1.0].index)
# + id="TYj31GfDaHh7"
class signal_figure:
def __init__(self, prices, signals, topic):
self.prices = prices
self.signals = signals
self.topic = topic
def signal_figure(self):
close = self.prices[['close', 'VWAP']]
close = close.merge(self.signals,how='inner',left_index=True, right_index=True)
close = close.sort_values(by='date')
close.index = pd.to_datetime(close.index)
fig = plt.figure(figsize = (18,8))
plt.plot(close.close, color='b', lw=1., label = 'Stock Price')
plt.plot(close.VWAP, color='y', lw=1., label = 'VWAP line')
plt.plot(close.loc[close.positions == 1.0].index, close.close[close.positions == 1.0], '^', markersize=5, color='green',label = 'buying signal')
plt.plot(close.loc[close.positions == -1.0].index, close.close[close.positions == -1.0], 'v', markersize=5, color='red',label = 'selling signal')
plt.xlabel('Date')
plt.ylabel('Dollars')
plt.title(self.topic + ' Price')
plt.legend()
plt.show()
return close
# + colab={"base_uri": "https://localhost:8080/", "height": 513} id="iyMf3mZzb9xX" outputId="28fae116-0ebf-4a7f-c351-f7f88643e784"
#Buy and sell signals plot for Tesla (5-min interval)
VWAP_TSLA_Figure = signal_figure(df_TSLA, signals_TSLA, 'TSLA')
VWAP_Signal_TSLA = VWAP_TSLA_Figure.signal_figure()
# + colab={"base_uri": "https://localhost:8080/", "height": 488} id="rXBAXiHBgXy5" outputId="06920691-1a69-4ab3-c526-3cf137343b94"
VWAP_Signal_TSLA.head(13)
# + colab={"base_uri": "https://localhost:8080/", "height": 513} id="JYTG88KCgEdO" outputId="d453c77c-329b-4fc8-9cc4-837d88428f4e"
#Buy and sell signals plot for Apple (5-min interval)
VWAP_AAPL_Figure = signal_figure(df_AAPL, signals_AAPL, 'AAPL')
VWAP_Signal_AAPL = VWAP_AAPL_Figure.signal_figure()
# + colab={"base_uri": "https://localhost:8080/", "height": 237} id="YP0qlxVMgUUT" outputId="2ccf5dc3-5d60-44bc-e542-155f543348a8"
VWAP_Signal_AAPL.head()
# + id="_UT5PusnMaWL"
# + [markdown] id="LZ78e-Q8E2EL"
# # 3. Generate Return on Investment and Portfolio Flows (cash, holding and total)
# + id="egtxByELE2cA"
class portfolio:
def __init__(self,data,topic,initial_capital=10000,max_buy=10000000,max_sell=10000000):
self.data = data
self.topic = topic
self.initial_capital = initial_capital
self.max_buy = max_buy
self.max_sell = max_sell
def portfolios(self):
management = self.data
prices = self.data['close']
states = self.data['positions']
states_buy = []
states_sell = []
cashes = []
stocks = []
holdings = []
cash = self.initial_capital
stock = 0
holding = 0
state = 0
def buy(i,cash,stock,price):
shares = cash // price #shares to buy in integer
if shares<1:
print('order %d: total cash %f, not enough to buy 1 share at price %f' % (i, cash, price))
else:
if shares>self.max_buy:
buy_units = self.max_buy
else:
buy_units = shares
cost = buy_units*price
cash -= cost
stock += buy_units
holding = stock*price
print('index %d: buy %d units at price %f, current cash %f, current stock %f,current holding %f' % (i, buy_units, price, cash, stock, holding))
return cash, stock, holding
def sell(i,cash, stock,price):
if stock == 0:
print('index %d: cannot sell anything, current stock is 0' % (i))
holding = 0
else:
if stock > self.max_sell:
sell_units = self.max_sell
else:
sell_units = stock
stock -=sell_units
revenue = sell_units*price
cash += revenue
holding = stock*price
print('index %d: sell %d units at price %f, current cash %f, current stock %f,current holding %f' % (i, sell_units, price, cash, stock, holding))
return cash, stock, holding
for i in range(0,management.shape[0]):
state = states[i]
price = prices[i]
if state == 1:
cash, stock, holding = buy(i, cash, stock, price)
states_buy.append(i)
elif state == -1:
cash, stock, holding = sell(i,cash, stock, price)
states_sell.append(i)
cashes.append(cash)
stocks.append(stock)
holdings.append(holding)
management['cash']=cashes
management['stock']=stocks
management['holding']=holdings
management['total']=management['cash']+management['holding']
management['roi']=(management['total']-self.initial_capital)/self.initial_capital
fig, (ax1, ax2) = plt.subplots(2,1, sharey=False, figsize=(14,10))
ax1.plot(management[['holding', 'cash', 'total']])
ax1.legend(management[['holding', 'cash', 'total']])
ax1.set_title("Visualization of " + self.topic + " Portfolio Flows")
ax2.plot(management[['roi']])
ax2.legend(management[['roi']])
ax2.set_title(self.topic + " Return of Investment")
plt.show()
return management
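# The buy arithmetic inside `portfolios` can be isolated as a small pure function (the name `buy_all` is ours) for the case where `max_buy` is not binding: spend as much cash as whole shares allow.

```python
def buy_all(cash, price):
    """Buy as many whole shares as cash allows (the all-in rule the
    portfolio class uses when max_buy is not binding).
    Returns (remaining cash, shares bought, holding value)."""
    shares = int(cash // price)    # whole shares only
    cost = shares * price
    return cash - cost, shares, shares * price

cash, shares, holding = buy_all(10000.0, 300.0)
print(cash, shares, holding)  # 100.0 33 9900.0
```

The leftover cash is always strictly less than one share's price, which matches the `shares = cash // price` floor division in the class above.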
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="OtPILXhBhIs7" outputId="dcc8a9f0-d623-42e1-d1b7-91081e9e01de"
VWAP_TSLA_Portfolio = portfolio(VWAP_Signal_TSLA,'TSLA')
TSLA_Portfolio = VWAP_TSLA_Portfolio.portfolios()
TSLA_Portfolio.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="G-pt075LjPUY" outputId="c4484dc8-9286-4b59-d2df-e28ba36c8467"
VWAP_AAPL_Portfolio = portfolio(VWAP_Signal_AAPL,'AAPL')
AAPL_Portfolio = VWAP_AAPL_Portfolio.portfolios()
AAPL_Portfolio.head()
# + id="imygbNw1jPgM"
# + id="RWXgdMgshRGw"
| Session_2_VWAP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Matrix generation
# ## Init symbols for *sympy*
# +
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
import matplotlib.pyplot as plt
import sys
sys.path.append("../")
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
# %aimport geom_util
# +
# Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here
# %config InlineBackend.figure_format='retina'
plt.rcParams['figure.figsize'] = (12, 12)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
# SMALL_SIZE = 42
# MEDIUM_SIZE = 42
# BIGGER_SIZE = 42
# plt.rc('font', size=SMALL_SIZE) # controls default text sizes
# plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
# plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
# plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
# plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
# plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
# plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
init_printing()
# -
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
# ## Cylindrical coordinates
R, L, ga, gv = symbols("R L g_a g_v", real = True, positive=True)
# +
a1 = pi / 2 + (L / 2 - alpha1)/R
a2 = 2 * pi * alpha1 / L
x1 = (R + ga * cos(gv * a1)) * cos(a1)
x2 = alpha2
x3 = (R + ga * cos(gv * a1)) * sin(a1)
r = x1*N.i + x2*N.j + x3*N.k
z = ga/R*gv*sin(gv*a1)
w = 1 + ga/R*cos(gv*a1)
dr1x=(z*cos(a1) + w*sin(a1))
dr1z=(z*sin(a1) - w*cos(a1))
r1 = dr1x*N.i + dr1z*N.k
r2 =N.j
mag=sqrt((w)**2+(z)**2)
nx = -dr1z/mag
nz = dr1x/mag
n = nx*N.i+nz*N.k
dnx=nx.diff(alpha1)
dnz=nz.diff(alpha1)
dn= dnx*N.i+dnz*N.k
# +
Ralpha = r+alpha3*n
R1=r1+alpha3*dn
R2=Ralpha.diff(alpha2)
R3=n
# -
r1
R1a3x=-1/(mag**3)*(w*cos(a1) - z*sin(a1))*(-1/R*w+ga*gv*gv/(R*R)*cos(gv*a1))*z+(1/mag)*(1/R*w*sin(a1)+ga*gv*gv/(R*R)*cos(gv*a1)*sin(a1)+2/R*z*cos(a1))
R1a3x
# +
ddr=r1.diff(alpha1)
cp=r1.cross(ddr)
k=cp.magnitude()/(mag**3)
k
# -
# k=trigsimp(k)
# k
k=simplify(k)
k
# +
q=(1/R*w+ga*gv*gv/(R*R)*cos(gv*a1))
f=q**2+4/(R*R)*z*z
f=trigsimp(f)
f
# -
f=expand(f)
f
trigsimp(f)
# +
q=(1/R*w+ga*gv*gv/(R*R)*cos(gv*a1))
f1=q*w+2/R*z*z
f1=trigsimp(f1)
f1
# -
f1=expand(f1)
f1
f1=trigsimp(f1)
f1
R1a3x = trigsimp(R1a3x)
R1a3x
R1
R2
R3
# ### Draw
# +
import plot
# %aimport plot
x1 = Ralpha.dot(N.i)
x3 = Ralpha.dot(N.k)
alpha1_x = lambdify([R, L, ga, gv, alpha1, alpha3], x1, "numpy")
alpha3_z = lambdify([R, L, ga, gv, alpha1, alpha3], x3, "numpy")
R_num = 1/0.8
L_num = 2
h_num = 0.1
ga_num = h_num/3
gv_num = 20
x1_start = 0
x1_end = L_num
x3_start = -h_num/2
x3_end = h_num/2
def alpha_to_x(a1, a2, a3):
x=alpha1_x(R_num, L_num, ga_num, gv_num, a1, a3)
z=alpha3_z(R_num, L_num, ga_num, gv_num, a1, a3)
return x, 0, z
plot.plot_init_geometry_2(x1_start, x1_end, x3_start, x3_end, alpha_to_x)
# +
# %aimport plot
R3_1=R3.dot(N.i)
R3_3=R3.dot(N.k)
R3_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R3_1, "numpy")
R3_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R3_3, "numpy")
def R3_to_x(a1, a2, a3):
x=R3_1_x(R_num, L_num, ga_num, gv_num, a1, a3)
z=R3_3_z(R_num, L_num, ga_num, gv_num, a1, a3)
return x, 0, z
plot.plot_vectors(x1_start, x1_end, 0, alpha_to_x, R3_to_x)
# +
# %aimport plot
R1_1=R1.dot(N.i)
R1_3=R1.dot(N.k)
R1_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R1_1, "numpy")
R1_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R1_3, "numpy")
def R1_to_x(a1, a2, a3):
x=R1_1_x(R_num, L_num, ga_num, gv_num, a1, a3)
z=R1_3_z(R_num, L_num, ga_num, gv_num, a1, a3)
return x, 0, z
plot.plot_vectors(x1_start, x1_end, h_num/2, alpha_to_x, R1_to_x)
# -
# ### Lame params
# +
H1 = sqrt((alpha3*((-(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R) - ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R)*(-ga*gv*(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + ga**2*gv**3*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**3)/((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)**(3/2) + ((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R)/R + ga*gv**2*cos((L/2 - alpha1)/R)*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 - 2*ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2)/sqrt((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)) + (1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R) - ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + (alpha3*(((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R) - ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R)*(-ga*gv*(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + ga**2*gv**3*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**3)/((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)**(3/2) + ((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R)/R + ga*gv**2*sin((L/2 - alpha1)/R)*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + 2*ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R**2)/sqrt((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)) + (1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R) + ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R)**2)
H2=S(1)
H3=S(1)
H=[H1, H2, H3]
DIM=3
dH = zeros(DIM,DIM)
for i in range(DIM):
dH[i,0]=H[i].diff(alpha1)
dH[i,1]=H[i].diff(alpha2)
dH[i,2]=H[i].diff(alpha3)
trigsimp(H1)
# -
# ### Metric tensor
# ${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
# %aimport geom_util
G_up = getMetricTensorUpLame(H1, H2, H3)
# ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
G_down = getMetricTensorDownLame(H1, H2, H3)
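# Assuming the `getMetricTensorUpLame` / `getMetricTensorDownLame` helpers build the standard diagonal metric of an orthogonal coordinate system from the Lamé coefficients (a reading of the helper names, not verified against `geom_util`), the components reduce to
#
# $$g_{ij} = H_i^{2}\,\delta_{ij}, \qquad g^{ij} = \frac{\delta_{ij}}{H_i^{2}} \qquad \text{(no sum over } i\text{)},$$
#
# so with $H_2 = H_3 = 1$ only the $\alpha_1$ direction is rescaled.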
# ### Christoffel symbols
# +
DIM=3
G_down_diff = MutableDenseNDimArray.zeros(DIM, DIM, DIM)
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
G_down_diff[i,i,k]=2*H[i]*dH[i,k]
GK = getChristoffelSymbols2(G_up, G_down_diff, (alpha1, alpha2, alpha3))
# -
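# For reference, the Christoffel symbols of the second kind computed above follow from the metric as
#
# $$\Gamma^{k}_{ij} = \frac{1}{2}\, g^{kl}\left(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\right),$$
#
# which for an orthogonal (diagonal) metric $g_{ii} = H_i^{2}$ reduces to the standard Lamé-coefficient form (no summation, $i \ne j$):
#
# $$\Gamma^{i}_{ii} = \frac{\partial_i H_i}{H_i}, \qquad \Gamma^{i}_{ij} = \frac{\partial_j H_i}{H_i}, \qquad \Gamma^{i}_{jj} = -\frac{H_j\,\partial_i H_j}{H_i^{2}}.$$
#
# This is consistent with the derivatives $\partial_k g_{ii} = 2 H_i\, \partial_k H_i$ assembled in `G_down_diff` above.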
# ### Gradient of vector
# $
# \left(
# \begin{array}{c}
# \nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
# \nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
# \nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3 \\
# \end{array}
# \right)
# =
# B \cdot
# \left(
# \begin{array}{c}
# u_1 \\
# \frac { \partial u_1 } { \partial \alpha_1} \\
# \frac { \partial u_1 } { \partial \alpha_2} \\
# \frac { \partial u_1 } { \partial \alpha_3} \\
# u_2 \\
# \frac { \partial u_2 } { \partial \alpha_1} \\
# \frac { \partial u_2 } { \partial \alpha_2} \\
# \frac { \partial u_2 } { \partial \alpha_3} \\
# u_3 \\
# \frac { \partial u_3 } { \partial \alpha_1} \\
# \frac { \partial u_3 } { \partial \alpha_2} \\
# \frac { \partial u_3 } { \partial \alpha_3} \\
# \end{array}
# \right)
# = B \cdot D \cdot
# \left(
# \begin{array}{c}
# u^1 \\
# \frac { \partial u^1 } { \partial \alpha_1} \\
# \frac { \partial u^1 } { \partial \alpha_2} \\
# \frac { \partial u^1 } { \partial \alpha_3} \\
# u^2 \\
# \frac { \partial u^2 } { \partial \alpha_1} \\
# \frac { \partial u^2 } { \partial \alpha_2} \\
# \frac { \partial u^2 } { \partial \alpha_3} \\
# u^3 \\
# \frac { \partial u^3 } { \partial \alpha_1} \\
# \frac { \partial u^3 } { \partial \alpha_2} \\
# \frac { \partial u^3 } { \partial \alpha_3} \\
# \end{array}
# \right)
# $
# +
def row_index_to_i_j_grad(i_row):
return i_row // 3, i_row % 3
B = zeros(9, 12)
B[0,1] = S(1)
B[1,2] = S(1)
B[2,3] = S(1)
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[7,10] = S(1)
B[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B[row_index, 0] = -GK[i,j,0]
B[row_index, 4] = -GK[i,j,1]
B[row_index, 8] = -GK[i,j,2]
# -
# ### Strain tensor
#
# $
# \left(
# \begin{array}{c}
# \varepsilon_{11} \\
# \varepsilon_{22} \\
# \varepsilon_{33} \\
# 2\varepsilon_{12} \\
# 2\varepsilon_{13} \\
# 2\varepsilon_{23} \\
# \end{array}
# \right)
# =
# \left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
# \left(
# \begin{array}{c}
# \nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
# \nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
# \nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3 \\
# \end{array}
# \right)$
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
# +
def E_NonLinear(grad_u):
N = 3
du = zeros(N, N)
# print("===Deformations===")
for i in range(N):
for j in range(N):
index = i*N+j
du[j,i] = grad_u[index]
# print("========")
I = eye(3)
a_values = S(1)/S(2) * du * G_up
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,0] = 2*a_values[2,0]
E_NL[4,3] = 2*a_values[2,1]
E_NL[4,6] = 2*a_values[2,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
return E_NL
# %aimport geom_util
u=getUHat3DPlane(alpha1, alpha2, alpha3)
# u=getUHatU3Main(alpha1, alpha2, alpha3)
gradu=B*u
E_NL = E_NonLinear(gradu)*B
# -
# ### Physical coordinates
# $u_i=u_{[i]} H_i$
# +
P=zeros(12,12)
P[0,0]=H[0]
P[1,0]=dH[0,0]
P[1,1]=H[0]
P[2,0]=dH[0,1]
P[2,2]=H[0]
P[3,0]=dH[0,2]
P[3,3]=H[0]
P[4,4]=H[1]
P[5,4]=dH[1,0]
P[5,5]=H[1]
P[6,4]=dH[1,1]
P[6,6]=H[1]
P[7,4]=dH[1,2]
P[7,7]=H[1]
P[8,8]=H[2]
P[9,8]=dH[2,0]
P[9,9]=H[2]
P[10,8]=dH[2,1]
P[10,10]=H[2]
P[11,8]=dH[2,2]
P[11,11]=H[2]
P=simplify(P)
P
# +
B_P = zeros(9,9)
for i in range(3):
for j in range(3):
row_index = i*3+j
B_P[row_index, row_index] = 1/(H[i]*H[j])
Grad_U_P = simplify(B_P*B*P)
Grad_U_P
# -
StrainL=simplify(E*Grad_U_P)
StrainL
# +
# %aimport geom_util
u=getUHatU3Main(alpha1, alpha2, alpha3)
gradup=Grad_U_P*u
E_NLp = E_NonLinear(gradup)*Grad_U_P
simplify(E_NLp)
# -
# ### Tymoshenko theory
#
# $u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
#
# $u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
#
# $u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
#
# $ \left(
# \begin{array}{c}
# u_1 \\
# \frac { \partial u_1 } { \partial \alpha_1} \\
# \frac { \partial u_1 } { \partial \alpha_2} \\
# \frac { \partial u_1 } { \partial \alpha_3} \\
# u_2 \\
# \frac { \partial u_2 } { \partial \alpha_1} \\
# \frac { \partial u_2 } { \partial \alpha_2} \\
# \frac { \partial u_2 } { \partial \alpha_3} \\
# u_3 \\
# \frac { \partial u_3 } { \partial \alpha_1} \\
# \frac { \partial u_3 } { \partial \alpha_2} \\
# \frac { \partial u_3 } { \partial \alpha_3} \\
# \end{array}
# \right) = T \cdot
# \left(
# \begin{array}{c}
# u \\
# \frac { \partial u } { \partial \alpha_1} \\
# \gamma \\
# \frac { \partial \gamma } { \partial \alpha_1} \\
# w \\
# \frac { \partial w } { \partial \alpha_1} \\
# \end{array}
# \right) $
# +
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
# -
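# A standalone check (an assumed demo, not part of the original notebook) that `T` reproduces the Tymoshenko kinematics above: $u_1 = u + \alpha_3\gamma$, $\partial u_1/\partial\alpha_3 = \gamma$, and $u_3 = w$.

```python
from sympy import Function, Matrix, Symbol, zeros

a1, a3 = Symbol('alpha1'), Symbol('alpha3')
u, gam, w = Function('u'), Function('gamma'), Function('w')
T_check = zeros(12, 6)
T_check[0, 0] = 1; T_check[0, 2] = a3
T_check[1, 1] = 1; T_check[1, 3] = a3
T_check[3, 2] = 1
T_check[8, 4] = 1
T_check[9, 5] = 1
v = Matrix([u(a1), u(a1).diff(a1),
            gam(a1), gam(a1).diff(a1),
            w(a1), w(a1).diff(a1)])
full = T_check * v
print(full[0])  # equals u(alpha1) + alpha3*gamma(alpha1)
print(full[3])  # equals gamma(alpha1), i.e. the alpha3-derivative of u1
```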
D_p_T = StrainL*T
simplify(D_p_T)
# +
u = Function("u")
t = Function("theta")
w = Function("w")
u1=u(alpha1)+alpha3*t(alpha1)
u3=w(alpha1)
gu = zeros(12,1)
gu[0] = u1
gu[1] = u1.diff(alpha1)
gu[3] = u1.diff(alpha3)
gu[8] = u3
gu[9] = u3.diff(alpha1)
gradup=Grad_U_P*gu
# o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2
# o21=K*t(alpha1)
# O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20
# O=expand(O)
# O=collect(O,alpha3)
# simplify(O)
StrainNL = E_NonLinear(gradup)*gradup
simplify(StrainNL)
# -
# ### Square theory
#
# $u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
#
# $u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
#
# $u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
#
# $ \left(
# \begin{array}{c}
# u^1 \\
# \frac { \partial u^1 } { \partial \alpha_1} \\
# \frac { \partial u^1 } { \partial \alpha_2} \\
# \frac { \partial u^1 } { \partial \alpha_3} \\
# u^2 \\
# \frac { \partial u^2 } { \partial \alpha_1} \\
# \frac { \partial u^2 } { \partial \alpha_2} \\
# \frac { \partial u^2 } { \partial \alpha_3} \\
# u^3 \\
# \frac { \partial u^3 } { \partial \alpha_1} \\
# \frac { \partial u^3 } { \partial \alpha_2} \\
# \frac { \partial u^3 } { \partial \alpha_3} \\
# \end{array}
# \right) = L \cdot
# \left(
# \begin{array}{c}
# u_{10} \\
# \frac { \partial u_{10} } { \partial \alpha_1} \\
# u_{11} \\
# \frac { \partial u_{11} } { \partial \alpha_1} \\
# u_{12} \\
# \frac { \partial u_{12} } { \partial \alpha_1} \\
# u_{30} \\
# \frac { \partial u_{30} } { \partial \alpha_1} \\
# u_{31} \\
# \frac { \partial u_{31} } { \partial \alpha_1} \\
# u_{32} \\
# \frac { \partial u_{32} } { \partial \alpha_1} \\
# \end{array}
# \right) $
# +
L=zeros(12,12)
h=Symbol('h')
p0=1/2-alpha3/h
p1=1/2+alpha3/h
p2=1-(2*alpha3/h)**2
L[0,0]=p0
L[0,2]=p1
L[0,4]=p2
L[1,1]=p0
L[1,3]=p1
L[1,5]=p2
L[3,0]=p0.diff(alpha3)
L[3,2]=p1.diff(alpha3)
L[3,4]=p2.diff(alpha3)
L[8,6]=p0
L[8,8]=p1
L[8,10]=p2
L[9,7]=p0
L[9,9]=p1
L[9,11]=p2
L[11,6]=p0.diff(alpha3)
L[11,8]=p1.diff(alpha3)
L[11,10]=p2.diff(alpha3)
L
# -
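# A standalone check (an assumed demo) of the through-thickness polynomials used in `L`: on $[-h/2, h/2]$, $p_0$ and $p_1$ are face functions that sum to one, and $p_2$ is a bubble that vanishes at both faces.

```python
from sympy import Rational, Symbol, simplify

a3 = Symbol('alpha3')
hh = Symbol('h', positive=True)
p0 = Rational(1, 2) - a3 / hh
p1 = Rational(1, 2) + a3 / hh
p2 = 1 - (2 * a3 / hh) ** 2
print(p0.subs(a3, -hh / 2), p0.subs(a3, hh / 2))  # 1 0
print(p2.subs(a3, -hh / 2), p2.subs(a3, 0))       # 0 1
print(simplify(p0 + p1))                          # 1
```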
D_p_L = StrainL*L
simplify(D_p_L)
h = 0.5
exp=(0.5-alpha3/h)*(1-(2*alpha3/h)**2)#/(1+alpha3*0.8)
p02=integrate(exp, (alpha3, -h/2, h/2))
integral = expand(simplify(p02))
integral
# ## Mass matrix
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[2,8]=1
M=simplify(rho*P.T*B_h.T*G_up*B_h*P)
M
M_p = L.T*M*L*(1+alpha3/R)
mass_matr = simplify(integrate(M_p, (alpha3, -h/2, h/2)))
mass_matr
| py/notebooks/MatricesForPlaneCorrugatedShells1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import re
import networkx as nx
from IPython.display import Image, display
from collections import defaultdict, Counter
from itertools import combinations
# +
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
mpl.style.use('seaborn-muted')
# -
class Token:
def __init__(self, token, ignore_case=True, scrub_re=r'\.'):
self.ignore_case = ignore_case
self.scrub_re = scrub_re
self.token = token
self.token_clean = self._clean(token)
def _clean(self, token):
if self.ignore_case:
token = token.lower()
if self.scrub_re:
token = re.sub(self.scrub_re, '', token)
return token
def __call__(self, input_token):
return self._clean(input_token) == self.token_clean
def __repr__(self):
return '%s<%s>' % (self.__class__.__name__, self.token_clean)
def __str__(self):
return '<%s>' % self.token_clean
def __hash__(self):
return hash((id(self.__class__), self.token_clean, self.ignore_case, self.scrub_re))
def __eq__(self, other):
return hash(self) == hash(other)
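# A hypothetical standalone usage example of the cleaning logic in `Token` (the class is re-stated here so the cell runs on its own): lower-casing plus the `\.` scrub means dotted abbreviations match their plain forms.

```python
import re

# Minimal re-statement of the Token class above for a self-contained demo.
class Token:
    def __init__(self, token, ignore_case=True, scrub_re=r'\.'):
        self.ignore_case = ignore_case
        self.scrub_re = scrub_re
        self.token_clean = self._clean(token)
    def _clean(self, token):
        if self.ignore_case:
            token = token.lower()
        if self.scrub_re:
            token = re.sub(self.scrub_re, '', token)
        return token
    def __call__(self, input_token):
        return self._clean(input_token) == self.token_clean

t = Token('CA')
print(t('ca'), t('C.A.'), t('CO'))  # True True False
```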
class GeoFSA(nx.DiGraph):
def __init__(self):
super().__init__()
self.start_node = self.next_node()
def next_node(self):
"""Get next integer node id, counting up.
"""
node = max(self.nodes) + 1 if self.nodes else 0
self.add_node(node)
return node
def add_token(self, accept_fn, parent=None, optional=False):
s1 = parent if parent else self.start_node
s2 = self.next_node()
self.add_edge(s1, s2, accept_fn=accept_fn, label=str(accept_fn))
last_node = s2
# Add skip transition if optional.
if optional:
s3 = self.next_node()
self.add_edge(s2, s3, label='ε')
self.add_edge(s1, s3, label='ε')
last_node = s3
return last_node
def plot(g):
dot = nx.drawing.nx_pydot.to_pydot(g)
dot.set_rankdir('LR')
display(Image(dot.create_png()))
# +
g = GeoFSA()
south = g.add_token(Token('South'))
lake = g.add_token(Token('Lake'), south)
tahoe = g.add_token(Token('Tahoe'), lake)
comma = g.add_token(Token(','), tahoe, optional=True)
ca = g.add_token(Token('CA'), comma)
california = g.add_token(Token('California'), comma)
los = g.add_token(Token('Los'))
angeles = g.add_token(Token('Angeles'), los)
comma = g.add_token(Token(','), angeles, optional=True)
ca = g.add_token(Token('CA'), comma)
california = g.add_token(Token('California'), comma)
south = g.add_token(Token('South'))
bend = g.add_token(Token('Bend'), south)
comma = g.add_token(Token(','), bend, optional=True)
il = g.add_token(Token('IL'), comma)
illinois = g.add_token(Token('Illinois'), comma)
los = g.add_token(Token('Los'))
angeles = g.add_token(Token('Gatos'), los)
comma = g.add_token(Token(','), angeles, optional=True)
ca = g.add_token(Token('CA'), comma)
california = g.add_token(Token('California'), comma)
san = g.add_token(Token('San'))
francisco = g.add_token(Token('Francisco'), san)
comma = g.add_token(Token(','), francisco, optional=True)
ca = g.add_token(Token('CA'), comma)
california = g.add_token(Token('California'), comma)
# -
plot(g)
leaves = [n for n in g.nodes() if g.out_degree(n)==0]
leaves
removed = set()
for n1, n2 in combinations(leaves, 2):
if n1 in removed or n2 in removed:
continue
n1_in = set([v[2].get('accept_fn') for v in g.in_edges(n1, data=True)])
n2_in = set([v[2].get('accept_fn') for v in g.in_edges(n2, data=True)])
if n1_in == n2_in:
g = nx.contracted_nodes(g, n1, n2)
removed.add(n2)
plot(g)
def merge_key(n):
out_edges = frozenset([v[2].get('accept_fn') for v in g.out_edges(n, data=True)])
desc = frozenset(nx.descendants(g, n))
return (out_edges, desc)
merge_key(21)
# +
inner_keys = defaultdict(list)
inner = [n for n in g.nodes() if g.out_degree(n) > 0]
for n in inner:
inner_keys[merge_key(n)].append(n)
print(inner_keys)
for nodes in inner_keys.values():
if len(nodes) > 1:
for n in nodes[1:]:
g = nx.contracted_nodes(g, nodes[0], n)
# -
plot(g)
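# The cells above build and minimize the graph but never run a match. A minimal sketch (an assumed helper, not part of the notebook) of how a token sequence could be checked against such an FSA, treating edges without an `accept_fn` as epsilon transitions:

```python
import networkx as nx

def eps_closure(g, nodes):
    # Expand the active set across edges carrying no accept_fn (epsilon edges).
    seen, stack = set(nodes), list(nodes)
    while stack:
        n = stack.pop()
        for _, m, data in g.out_edges(n, data=True):
            if data.get('accept_fn') is None and m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def matches(g, start, tokens):
    active = eps_closure(g, {start})
    for tok in tokens:
        nxt = set()
        for n in active:
            for _, m, data in g.out_edges(n, data=True):
                fn = data.get('accept_fn')
                if fn is not None and fn(tok):
                    nxt.add(m)
        active = eps_closure(g, nxt)
        if not active:
            return False
    # Accept when some active node has no outgoing edges (a leaf).
    return any(g.out_degree(n) == 0 for n in active)

# Tiny standalone demo with plain predicates instead of Token objects.
demo = nx.DiGraph()
demo.add_edge(0, 1, accept_fn=lambda t: t.lower() == 'san')
demo.add_edge(1, 2, accept_fn=lambda t: t.lower() == 'francisco')
print(matches(demo, 0, ['San', 'Francisco']))  # True
print(matches(demo, 0, ['San', 'Diego']))      # False
```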
| notebooks/19-fsa.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sns
from scripts import project_functions as pf
insurance_cleaned = pf.load_and_process("../../data/raw/insurance.csv")
insurance_cleaned.head()
# +
sns.pairplot(insurance_cleaned)
#There are a few things we can make of this pairplot. First off, let's ignore the Children column
#because it only has values of either 0 or 1. We can see that Age and Insurance cost
#seem to have a positive correlation, meaning as age increases, so does the insurance cost. Another
#positive correlation I can see is between Insurance cost and BMI. I want to look a little further
#into the correlation between BMI and Age.
# +
plot_age_bmi = insurance_cleaned.loc[:,['Age', 'BMI']]
plot_age_bmi.describe().T
#From this we know that the average age is around 39 and the average BMI is roughly 30
# -
# +
age = insurance_cleaned.loc[:,['Age']]
bmi = insurance_cleaned.loc[:,['BMI']]
age.plot(kind = 'hist', color = 'skyblue', ec = 'black', figsize = (6,6)).set(title="Age vs count")
bmi.plot(kind = 'hist', color = 'skyblue', ec = 'black', figsize = (6,6)).set(title="BMI vs count")
#From these histograms we can better visualize the data from above
# +
plot_age_bmi_region = insurance_cleaned.loc[:,['Age', 'BMI', 'Region']]
sns.pairplot(plot_age_bmi_region, hue = 'Region')
#Now, I am using a pairplot with just Age and BMI to see how they correlate and am including the Region just for fun.
#It seems that there is not really a correlation between Age and BMI which seems accurate as studies suggest that
#BMI does not take age into account. However, it is interesting to see that BMI seems to be the highest in the Northwest
#Region when age is roughly around 30.
# +
plot_age_bmi.plot(kind = 'scatter', x = 'Age', y = 'BMI', figsize = (8,8)).set(title="Age vs BMI")
#Lastly, here is a scatter plot with BMI as the y-axis and Age as the x-axis. This plot also seems to show
#that there is not much significance between Age and BMI. Thus we can conclude that age does not seem to have any
#correlation to BMI.
# -
| analysis/Quinn/milestone2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from tkinter import Tk
a = Tk().clipboard_get()
print(a)
listl="""
Ahmed
Benarab
is the great
"""
listl1=listl.split('\n')
# drop empty lines; popping inside an index loop skips items as the list shrinks
listl1 = [s for s in listl1 if s]
print(listl1)
print(len(listl1[0]))
import keyboard
input()
keyboard.add_hotkey('a, b', lambda: keyboard.write('foobar'))
rt=['azaz','azaza','edef']
rt.pop(0)
print(rt)
# +
import keyboard
if keyboard.is_pressed('ctrl+shift'):
keyboard.press_and_release('ctrl+v')
else :
pass
# -
# +
import time
import keyboard
from tkinter import Tk
#the paste function
def prp():
time.sleep(1)
# take the data from the klipboard
clp_data= Tk().clipboard_get()
#reorganize the data in a list
list1 = [s for s in clp_data.split('\n') if s]
list1.insert(1, '')
list1.insert(2, '2')
for i in list1 :
keyboard.write(i)
keyboard.press_and_release('tab')
while True :
keyboard.add_hotkey('alt+ctrl', lambda : prp())
# +
import time ,keyboard
from tkinter import Tk
def refrech() :
# take the data from the clipboard
clp_data= Tk().clipboard_get()
# reorganize the data in a list, dropping empty lines
list1 = [s for s in clp_data.split('\n') if s]
list1.insert(1, '')
list1.insert(2, '2')
return list1
def prp():
# refrech() previously left list1 as a local variable, so prp() raised a
# NameError; returning the list fixes that.
for i in refrech() :
keyboard.write(i)
keyboard.press_and_release('tab')
while keyboard.is_pressed('ctrl+Shift') :
keyboard.add_hotkey('ctrl+Shift', lambda : prp())
time.sleep(0.1)
print("ezez")
# -
er = [1,2,5]
er.insert(0,9856)
print(er)
# +
import time , keyboard
while keyboard.is_pressed('space'):
print('1')
# -
import keyboard
while keyboard.is_pressed("space") == True:
print ("f*ck")
| tests/ams.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Chapter 6: Drawing Graphs with Bokeh
#
# ### 6-4: Line Charts
# +
# Listing 6.4.1: Chart from list-like object data
from bokeh.charts import output_notebook, Line, show
output_notebook()
p = Line([1, 2, 3], plot_width=200, plot_height=200)
show(p)
# -
# Listing 6.4.2: Chart from nested lists
p = Line([[1, 2, 3], [2, 4, 9]], plot_width=200, plot_height=200)
show(p)
# Listing 6.4.3: Chart from dictionary data
data = {"x": [1, 2, 3], "y1": [1, 2, 3], "y2": [2, 4, 9]}
p = Line(data, x="x", y="y1", plot_width=200, plot_height=200)
show(p)
# +
# Listing 6.4.5: Chart from a DataFrame
import os
import pandas as pd
base_url = (
"https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/"
)
anime_stock_returns_csv = os.path.join(base_url, "anime_stock_returns.csv")
df = pd.read_csv(anime_stock_returns_csv, index_col=0, parse_dates=["Date"])
p = Line(df, plot_width=800, plot_height=200)
show(p)
# -
# Listing 6.4.6: Chart with specified X and Y values
p = Line(df, x="index", y="IG Port", plot_width=800, plot_height=200)
show(p)
# +
# Listing 6.4.8: Chart using the line() method
from bokeh.plotting import figure
from bokeh.layouts import column
t4816_csv = os.path.join(base_url, "4816.csv")
df = pd.read_csv(t4816_csv, index_col=0, parse_dates=["Date"])
p1 = figure(width=800, height=250, x_axis_type="datetime")
p1.line(df.index, df["Close"])
# Setting x_range to another figure's x_range links panning between the plots
p2 = figure(width=800, height=150, x_axis_type="datetime", x_range=p1.x_range)
p2.vbar(df.index, width=1, top=df["Volume"]) # vbar() is covered in the next chapter.
show(column(p1, p2)) # column is covered in the next chapter.
# +
# Listing 6.4.9: Chart with two axes
from bokeh.models import LinearAxis, Range1d
p = figure(width=800, height=250, x_axis_type="datetime")
p.extra_y_ranges = {"price": Range1d(start=0, end=df["Close"].max())}
p.line(df.index, df["Close"], y_range_name="price")
p.vbar(df.index, width=1, top=df["Volume"], color="green")
p.add_layout(LinearAxis(y_range_name="price"), "left")
show(p)
# -
| sample-code/notebooks/6-04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DAT210x - Programming with Python for DS
# ## Module6- Lab1
# +
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import time
# -
# Feel free to adjust and experiment with these parameters after you have completed the lab:
C = 1
kernel = 'linear'
# +
# TODO: Change to 200000 once you get to Question#2
iterations = 5000
# You can set this to false if you want to draw the full square matrix:
FAST_DRAW = True
# -
# ### Convenience Functions
def drawPlots(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
# You can use this to break any higher-dimensional space down,
# And view cross sections of it.
# If this line throws an error, use plt.style.use('ggplot') instead
mpl.style.use('ggplot') # Look Pretty
padding = 3
resolution = 0.5
max_2d_score = 0
y_colors = ['#ff0000', '#00ff00', '#0000ff']
my_cmap = mpl.colors.ListedColormap(['#ffaaaa', '#aaffaa', '#aaaaff'])
colors = [y_colors[i] for i in y_train]
num_columns = len(X_train.columns)
fig = plt.figure()
fig.canvas.manager.set_window_title(wintitle)  # canvas.set_window_title was removed in newer matplotlib
fig.set_tight_layout(True)
cnt = 0
for col in range(num_columns):
for row in range(num_columns):
# Easy out
if FAST_DRAW and col > row:
cnt += 1
continue
ax = plt.subplot(num_columns, num_columns, cnt + 1)
plt.xticks(())
plt.yticks(())
# Intersection:
if col == row:
plt.text(0.5, 0.5, X_train.columns[row], verticalalignment='center', horizontalalignment='center', fontsize=12)
cnt += 1
continue
# Only select two features to display, then train the model
# .ix was removed from pandas; use positional .iloc instead
X_train_bag = X_train.iloc[:, [row, col]]
X_test_bag = X_test.iloc[:, [row, col]]
model.fit(X_train_bag, y_train)
# Create a mesh to plot in
x_min, x_max = X_train_bag.iloc[:, 0].min() - padding, X_train_bag.iloc[:, 0].max() + padding
y_min, y_max = X_train_bag.iloc[:, 1].min() - padding, X_train_bag.iloc[:, 1].max() + padding
xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
np.arange(y_min, y_max, resolution))
# Plot Boundaries
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Prepare the contour
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=my_cmap, alpha=0.8)
plt.scatter(X_train_bag.iloc[:, 0], X_train_bag.iloc[:, 1], c=colors, alpha=0.5)
score = round(model.score(X_test_bag, y_test) * 100, 3)
plt.text(0.5, 0, "Score: {0}".format(score), transform = ax.transAxes, horizontalalignment='center', fontsize=8)
max_2d_score = score if score > max_2d_score else max_2d_score
cnt += 1
print("Max 2D Score: ", max_2d_score)
def benchmark(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
print(wintitle + ' Results')
s = time.time()
for i in range(iterations):
# TODO: train the classifier on the training data / labels:
# .. your code here ..
print("{0} Iterations Training Time: ".format(iterations), time.time() - s)
s = time.time()
for i in range(iterations):
# TODO: score the classifier on the testing data / labels:
# .. your code here ..
print("{0} Iterations Scoring Time: ".format(iterations), time.time() - s)
print("High-Dimensionality Score: ", round((score*100), 3))
# ### The Assignment
# Load up the wheat dataset into dataframe `X` and verify you did it properly. Indices shouldn't be doubled, nor should you have any headers with weird characters...
# +
# .. your code here ..
# -
# An easy way to show which rows have nans in them:
X[pd.isnull(X).any(axis=1)]
# Go ahead and drop any row with a nan:
# +
# .. your code here ..
# -
# In the future, you might try setting the nan values to the mean value of that column; once you have the labels, the mean should be calculated for the specific class rather than across all classes.
# Copy the labels out of the dataframe into variable `y`, then remove them from `X`.
#
# Encode the labels, using the `.map()` trick we showed you in Module 5, such that `canadian:0`, `kama:1`, and `rosa:2`.
# +
# .. your code here ..
# -
# Split your data into a `test` and `train` set. Your `test` size should be 30% with `random_state` 7. Please use variable names: `X_train`, `X_test`, `y_train`, and `y_test`:
# +
# .. your code here ..
# -
# Create an SVC classifier named `svc` and use a linear kernel. You already have `C` defined at the top of the lab, so just set `C=C`.
# +
# .. your code here ..
# -
# Create a KNeighbors classifier named `knn` and set the neighbor count to `5`:
# +
# .. your code here ..
# -
# ### Fire it Up:
benchmark(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
drawPlots(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
benchmark(svc, X_train, X_test, y_train, y_test, 'SVC')
drawPlots(svc, X_train, X_test, y_train, y_test, 'SVC')
plt.show()
# ### Bonus:
# After submitting your answers, mess around with the gamma, kernel, and C values.
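# One possible setup (a sketch, not the official lab solution) for the `svc` and `knn` classifiers the benchmark expects, shown on a tiny synthetic dataset rather than the wheat data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Tiny synthetic, linearly separable dataset (assumed for illustration only).
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

svc = SVC(kernel='linear', C=1).fit(X, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(svc.score(X, y), knn.score(X, y))  # both 1.0 on this toy set
```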
| Module6/.ipynb_checkpoints/Module6 - Lab1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### The purpose of this notebook is to answer question three of my analysis questions:
#
# #### How many marijuana-involved incidents did SFPD report in different police districts across the city?
#import modules
import pandas as pd
import altair as alt
# Import our cleaned dataset that contains all of our marijuana incidents. We made this .csv file in the data_cleaning notebook.
mari_incidents = pd.read_csv('all_data_marijuana.csv', dtype=str)
# Convert our incident dates to a datetime data format.
mari_incidents['incident_date'] = pd.to_datetime(mari_incidents['incident_date'])
# Check our date ranges
mari_incidents['incident_date'].min()
mari_incidents['incident_date'].max()
# Looks like we've got a full year of data for 2003, our earliest year. But since our 2021 data ends in October, we can't do a full annual analysis on that year. So let's make a dataframe with our full years of data.
full_years = mari_incidents[
(mari_incidents['incident_date'] >= '2003-01-01') &
(mari_incidents['incident_date'] < '2021-01-01')
].reset_index(drop=True)
# We know from our data dictionary that there are multiple row entries for some individual incidents. But we also know that the incident_number will remain the same across all entries related to the same incident. So since we're just looking at how many incidents there were in each year in each district, we can go ahead and drop all the duplicates in the incident_number column:
full_years_incidents = full_years.drop_duplicates(subset=['incident_number'])
full_years_incidents.head()
incidents_by_district = full_years_incidents.groupby(['police_district']).count()
clean_incidents_by_district = incidents_by_district[['row_id']].copy()
clean_incidents_by_district = clean_incidents_by_district.reset_index()
#rename columns
clean_incidents_by_district.columns = ['police_district', 'number_of_incidents']
#sort by number of incidents
clean_incidents_by_district = clean_incidents_by_district.sort_values(by=['number_of_incidents'], ascending=False).reset_index(drop=True)
clean_incidents_by_district
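# The groupby/count above can also be written with `value_counts()`; a sketch on a tiny hypothetical sample (not the real incident data):

```python
import pandas as pd

# Hypothetical sample rows; incident_number repeats for multi-row incidents.
sample = pd.DataFrame({
    'incident_number': ['1', '1', '2', '3', '3', '4'],
    'police_district': ['Southern', 'Southern', 'Tenderloin',
                        'Southern', 'Southern', 'Park'],
})
counts = (sample.drop_duplicates(subset=['incident_number'])
                ['police_district'].value_counts())
print(counts)  # Southern 2, Tenderloin 1, Park 1
```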
# So there we have it! That's all the marijuana related incidents the SF Police Department responded to from 2003-2020 by police district. It's clear that incidents that the police responded to are heavily weighted towards certain districts, including Southern, Tenderloin, and Park. An interesting follow up question would be to investigate why. Do more people live in those neighborhoods? Are more marijuana crimes committed in those neighborhoods? Do the police enforce marijuana laws differently in these neighborhoods than other parts of the city?
# Let's visualize our data:
alt.Chart(clean_incidents_by_district).mark_bar().encode(
x='police_district',
y='number_of_incidents'
).properties(
title='San Francisco Police: Marijuana Incidents by Police District 2003-2020'
)
# That's the end of this analysis!
| 04_incident_districts.ipynb |
(* -*- coding: utf-8 -*- *)
(* --- *)
(* jupyter: *)
(* jupytext: *)
(* text_representation: *)
(* extension: .ml *)
(* format_name: light *)
(* format_version: '1.5' *)
(* jupytext_version: 1.14.4 *)
(* kernelspec: *)
(* display_name: OCaml 4.07.1 *)
(* language: OCaml *)
(* name: ocaml-jupyter *)
(* --- *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* <center> *)
(* *)
(* <h1 style="text-align:center"> Polymorphic Lambda Calculus: System F </h1> *)
(* <h2 style="text-align:center"> CS3100 Fall 2019 </h2> *)
(* </center> *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Review *)
(* *)
(* ### Previously *)
(* *)
(* * Simply Typed Lambda Calulus. *)
(* + Products, Sums, Type Soundness *)
(* + Curry Howard Correspondence *)
(* + Type Erasure *)
(* *)
(* ### Today *)
(* *)
(* * System F: Polymorphic Lambda Calculus *)
(* *)
(* $ *)
(* \newcommand{\stlc}{\lambda^{\rightarrow}} *)
(* \require{color} *)
(* \newcommand{\c}[2]{{\color{#1}{\text{#2}}}} *)
(* $ *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Doubling functions *)
(* *)
(* In simply typed lambda calculus, the `twice f x = f (f x)` function must be specified at every type: *)
(* *)
(* $ *)
(* twice\_unit = \lambda f:1 \rightarrow 1.\lambda x:1. f~(f ~x) \\ *)
(* twice\_sum = \lambda f: A+B \rightarrow A+B.\lambda x:A+B. f~(f ~x) \\ *)
(* twice\_a2a = \lambda f: (A \rightarrow A) \rightarrow (A \rightarrow A). \lambda x:A \rightarrow A. f ~(f ~x) *)
(* $ *)
(* *)
(* Clearly this copy-pasting of code and changing the type violates the basic dictum of software engineering: *)
(* *)
(* **Abstraction Principle:** Each significant piece of functionality in a program should be implemented in just one place in the source code. *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Universal Type *)
(* *)
(* * System F (polymorphic lambda calculus) is obtained by extending Curry Howard isomorphism to $\forall$. *)
(* * System F has dedicated syntax for type and term families such as `twice`: *)
(* + The System F term $\Lambda \alpha.\lambda f: \alpha \rightarrow \alpha.\lambda x:\alpha.f ~(f ~x)$ has type $\forall \alpha.(\alpha \rightarrow \alpha) \rightarrow \alpha \rightarrow \alpha$. *)
(* * $\Lambda \alpha.M$ is called a **type abstraction**. *)
(* + Correspondingly, we also have a **type application** of the form $M A$, where $M$ is a term and $A$ is a type. *)
(* * System F offers polymorphism *)
(* + $\c{red}{But which kind?}$ *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Polymorphism *)
(* *)
(* Polymorphism comes in various kinds *)
(* *)
(* 1. Ad-hoc polymorphism: *)
(* * function overloading (C++,Java) *)
(* * operator overloading (C++,Java,Standard ML, C) *)
(* * typeclasses (Haskell) *)
(* 2. Subtyping: *)
(* * subclasses (C++, Java, OCaml) *)
(* * Row polymorphism (OCaml) *)
(* 3. Parametric Polymorphism: *)
(* * polymorphic data and functions in OCaml, Haskell, Standard ML. *)
(* * Generics in Java, C# *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Polymorphism *)
(* *)
(* * Unqualified term "polymorphism" means different things depending on who you talk to *)
(* + OO person: subtype polymorphism; parametric polymorphism is generics *)
(* + FP person: parametric polymorphism. *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Types *)
(* *)
(* Types in System F are as follows: *)
(* *)
(* \\[ *)
(* \begin{array}{rcll} *)
(* \text{Types: } A,B & ::= & \alpha & \text{(type variable)} \\ *)
(* & \mid & A \rightarrow B & \text{(function type)} \\ *)
(* & \mid & \forall \alpha.A & \text{(universal type)} *)
(* \end{array} *)
(* \\] *)
(* *)
(* * We have dropped pairs, sums, 1 (unit), 0 types from $\lambda^{\rightarrow}$ *)
(* + Can be encoded! *)
(* + Recall that $\lambda^{\rightarrow}$ without base types was degenerate. *)
(* * Notice the symmetry to untyped lambda calculus terms. *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Free Type Variables *)
(* *)
(* We define free type variables on System F types similar to free variables on lambda terms. *)
(* *)
(* \\[ *)
(* \begin{array}{rcl} *)
(* FTV(\alpha) & = & \alpha \\ *)
(* FTV(A \rightarrow B) & = & FTV(A) \cup FTV(B) \\ *)
(* FTV(\forall \alpha.A) & = & FTV(A) \setminus \{\alpha\} *)
(* \end{array} *)
(* \\] *)
(* *)
(* $\newcommand{\inferrulel}[3]{\displaystyle{\frac{#1}{#2}~~{\small #3}}}$ *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Typing Rules *)
(* *)
(* \\[ *)
(* \begin{array}{cc} *)
(* \inferrulel{}{\Gamma,x:A \vdash x:A}{(var)} \\ \\ *)
(* \inferrulel{\Gamma \vdash M : A \rightarrow B \quad \Gamma \vdash N : A}{\Gamma \vdash M~N : B}{(\rightarrow elim)} & *)
(* \inferrulel{\Gamma,x:A \vdash M : B}{\Gamma \vdash \lambda x:A.M : A \rightarrow B}{(\rightarrow intro)} \\ \\ *)
(* \inferrulel{\Gamma \vdash M : \forall \alpha.A} *)
(* {\Gamma \vdash M ~B : A[B/\alpha]}{(\forall elim)} & *)
(* \inferrulel{\Gamma \vdash M : A \quad \alpha \notin FTV(\Gamma)} *)
(* {\Gamma \vdash \Lambda \alpha.M : \forall \alpha.A} *)
(* {(\forall intro)} *)
(* \end{array} *)
(* \\] *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Terms *)
(* *)
(* \\[ *)
(* \begin{array}{rcll} *)
(* \text{Terms: } M,N & ::= & x & \text{(variable)} \\ *)
(* & \mid & M~N & \text{(application)} \\ *)
(* & \mid & M~[A] & \text{(type application)} \\ *)
(* & \mid & \lambda x:A.M & \text{(abstraction)} \\ *)
(* & \mid & \Lambda \alpha.M & \text{(type abstraction)} *)
(* \end{array} *)
(* \\] *)
(* *)
(* $\newcommand{\inferrule}[2]{\displaystyle{\frac{#1}{#2}}}$ *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Substitution *)
(* *)
(* ### For terms *)
(* *)
(* If $M$ is a term, $N$ a term, and $x$ a variable, we write $M[N/x]$ for capture-free substitution of $N$ for $x$ in $M$. *)
(* *)
(* ### For types *)
(* *)
(* If $M$ is a term, $B$ a type, and $\alpha$ a type variable, we write $M[B/\alpha]$ for capture-free substitution of $B$ for $\alpha$ in $M$. *)
(* *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Reduction: Boring Rules *)
(* *)
(* \\[ *)
(* \begin{array}{ccc} *)
(* \inferrule{M \rightarrow M'}{M~N \rightarrow M'~N} & *)
(* \inferrule{N \rightarrow N'}{M~N \rightarrow M~N'} & *)
(* \inferrule{M \rightarrow M'}{\lambda x.M \rightarrow \lambda x.M'} *)
(* \end{array} *)
(* \\] *)
(* *)
(* \\[ *)
(* \begin{array}{cc} *)
(* \inferrule{M \rightarrow M'}{M~A \rightarrow M'~A} & *)
(* \inferrule{M \rightarrow M'}{\Lambda \alpha.M \rightarrow \Lambda \alpha.M'} *)
(* \end{array} *)
(* \\] *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Reductions: Interesting Rules *)
(* *)
(* \\[ *)
(* \begin{array}{rcl} *)
(* (\lambda x:A.M) ~N & \rightarrow & M[N/x] \\ *)
(* (\Lambda \alpha.M) ~A & \rightarrow & M[A/\alpha] \\ *)
(* \lambda x:A.M ~x & \rightarrow & M \quad \text{, if } x \notin FV(M) \\ *)
(* \Lambda \alpha.M ~\alpha & \rightarrow & M \quad \text{, if } \alpha \notin FTV(M) *)
(* \end{array} *)
(* \\] *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Encodings : Booleans *)
(* *)
(* In untyped lambda calculus, `tru` and `fls` values were: *)
(* *)
(* ```ocaml *)
(* tru = 𝜆t.𝜆f.t *)
(* fls = 𝜆t.𝜆f.f *)
(* ``` *)
(* *)
(* In simply typed lambda calculus, there was a `tru` and `fls` for each type: *)
(* *)
(* ```ocaml *)
(* tru_int = 𝜆t:int.𝜆f:int.t *)
(* tru_float = 𝜆t:float.𝜆f:float.t *)
(* ``` *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Encoding Booleans *)
(* *)
(* In System F, there is a single polymorphic `tru` and `fls` values of type `bool`. We define the type `bool` and the booleans as *)
(* *)
(* \\[ *)
(* \begin{array}{rcl} *)
(* bool & = & \forall \alpha. \alpha \rightarrow \alpha \rightarrow \alpha \\ *)
(* tru : bool & = & \Lambda \alpha. \lambda t:\alpha. \lambda f:\alpha. t \\ *)
(* fls : bool & = & \Lambda \alpha. \lambda t:\alpha. \lambda f:\alpha. f \\ *)
(* \end{array} *)
(* \\] *)
(* *)
(* Easy to see that the judgements $\vdash tru : bool$ and $\vdash fls : bool$ hold. *)
(* -
(* ## Encoding boolean operations *)
(* *)
(* `test` function is defined as: *)
(* *)
(* \\[ *)
(* \begin{array}{rcl} *)
(* test : \forall \alpha. bool \rightarrow \alpha \rightarrow \alpha \rightarrow \alpha \\ *)
(* test = \Lambda \alpha. \lambda b : bool. b~\alpha *)
(* \end{array} *)
(* \\] *)
(* *)
(* Notice the **type application** of $b$ to $\alpha$ above. $test$ at $bool$ type is: *)
(* *)
(* \\[ *)
(* test\_bool : bool \rightarrow bool \rightarrow bool \rightarrow bool = test ~bool *)
(* \\] *)
(* *)
(* We can define logical operators as follows: *)
(* *)
(* \\[ *)
(* \begin{array}{rcl} *)
(* and : bool \rightarrow bool \rightarrow bool & = & \lambda x:bool.\lambda y:bool.x ~bool ~y ~fls \\ *)
(* or : bool \rightarrow bool \rightarrow bool& = & \lambda x:bool.\lambda y:bool.x ~bool ~tru ~y \\ *)
(* not : bool \rightarrow bool & = & \lambda x:bool.x ~bool ~fls ~tru *)
(* \end{array} *)
(* \\] *)
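(* *)
(* As a sanity check (my own sketch, not from the lecture), these encodings can be mimicked in OCaml by using a record with a polymorphic field to play the role of the $\forall$: *)
(* *)
(* ```ocaml *)
(* type boolF = { if_ : 'a. 'a -> 'a -> 'a }  (* bool = ∀α. α → α → α *) *)
(* let tru = { if_ = fun t _f -> t } *)
(* let fls = { if_ = fun _t f -> f } *)
(* let and_ x y = x.if_ y fls                 (* and = λx.λy. x bool y fls *) *)
(* ``` *)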
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Encoding natural numbers *)
(* *)
(* The type for `nat` is defined as *)
(* *)
(* \\[ *)
(* nat = \forall \alpha. (\alpha \rightarrow \alpha) \rightarrow \alpha \rightarrow \alpha *)
(* \\] *)
(* *)
(* Now we can define church numerals as *)
(* *)
(* \\[ *)
(* \begin{array}{rcl} *)
(* zero : nat & = & \Lambda \alpha. \lambda s : \alpha \rightarrow \alpha. \lambda z : \alpha. z \\ *)
(* one : nat & = & \Lambda \alpha. \lambda s : \alpha \rightarrow \alpha. \lambda z : \alpha. s~z \\ *)
(* two : nat & = & \Lambda \alpha. \lambda s : \alpha \rightarrow \alpha. \lambda z : \alpha. s~(s~z) *)
(* \end{array} *)
(* \\] *)
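(* *)
(* An OCaml sketch of these numerals (my own names, not from the lecture), again using a polymorphic record field for the $\forall$: *)
(* *)
(* ```ocaml *)
(* type natF = { fold : 'a. ('a -> 'a) -> 'a -> 'a } *)
(* let zero = { fold = fun _s z -> z } *)
(* let two  = { fold = fun s z -> s (s z) } *)
(* let to_int n = n.fold (fun x -> x + 1) 0   (* to_int two = 2 *) *)
(* ``` *)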
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Pairs *)
(* *)
(* * Unlike $\lambda^{\rightarrow}$, we did not include a pair type in our types. *)
(* *)
(* \\[ *)
(* \begin{array}{rcll} *)
(* \text{Types: } A,B & ::= & \alpha & \text{(type variable)} \\ *)
(* & \mid & A \rightarrow B & \text{(function type)} \\ *)
(* & \mid & \forall \alpha.A & \text{(universal type)} *)
(* \end{array} *)
(* \\] *)
(* *)
(* * We can encode the pair **type**! *)
(* + We encoded pair **terms** in untyped lambda calculus. *)
(* + System F allows encoding of **types** as well as **terms**. *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Pairs *)
(* *)
(* Pair type is *)
(* *)
(* \\[ *)
(* A \times B = \forall \alpha.(A \rightarrow B \rightarrow \alpha) \rightarrow \alpha *)
(* \\] *)
(* *)
(* (Why is this the right type for pairs?) *)
(* *)
(* Given $M : A$ and $N : B$, the pair is *)
(* *)
(* \\[ *)
(* \langle M,N \rangle : A \times B = \Lambda \alpha.\lambda f:(A \rightarrow B \rightarrow \alpha). f ~M ~N *)
(* \\] *)
(* *)
(* The projection functions are *)
(* *)
(* \\[ *)
(* \begin{array}{c} *)
(* fst = \Lambda \alpha. \Lambda \beta. \lambda p : \alpha \times \beta. p~\alpha~(\lambda x:\alpha.\lambda y:\beta.x) \\ *)
(* snd = \Lambda \alpha. \Lambda \beta. \lambda p : \alpha \times \beta. p~\beta~(\lambda x:\alpha.\lambda y:\beta.y) *)
(* \end{array} *)
(* \\] *)
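(* *)
(* The same polymorphic-record trick sketches pairs in OCaml (my own names, not from the lecture): *)
(* *)
(* ```ocaml *)
(* type ('a, 'b) pairF = { elim : 'c. ('a -> 'b -> 'c) -> 'c } *)
(* let pair m n = { elim = fun f -> f m n } *)
(* let fst_ p = p.elim (fun x _y -> x) *)
(* let snd_ p = p.elim (fun _x y -> y) *)
(* ``` *)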
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Curry-Howard Correspondence *)
(* *)
(* \\[ *)
(* \begin{array}{cc} *)
(* \inferrulel{}{\Gamma,x:A \vdash A}{(var)} \\ \\ *)
(* \inferrulel{\Gamma \vdash A \implies B \quad \Gamma \vdash A} *)
(* {\Gamma \vdash B}{(\implies elim)} & *)
(* \inferrulel{\Gamma,x:A \vdash B} *)
(* {\Gamma \vdash A \implies B} *)
(* {(\implies intro)} \\ \\ *)
(* \inferrulel{\Gamma \vdash \forall \alpha.A} *)
(* {\Gamma \vdash A[B/\alpha]}{(\forall ~elim)} & *)
(* \inferrulel{\Gamma \vdash A \quad \alpha \notin FTV(\Gamma)} *)
(* {\Gamma \vdash \forall \alpha.A} *)
(* {(\forall ~intro)} *)
(* \end{array} *)
(* \\] *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Curry-Howard Correspondence *)
(* *)
(* \\[ *)
(* \begin{array}{cc} *)
(* \inferrulel{\Gamma \vdash \forall \alpha.A} *)
(* {\Gamma \vdash A[B/\alpha]}{(\forall ~elim)} & *)
(* \inferrulel{\Gamma \vdash A \quad \alpha \notin FTV(\Gamma)} *)
(* {\Gamma \vdash \forall \alpha.A} *)
(* {(\forall ~intro)} *)
(* \end{array} *)
(* \\] *)
(* *)
(* * $\forall~intro$ is universal generalization *)
(* + If a statement has been proved for arbitrary $\alpha$ then it holds for every $\alpha$. *)
(* * $a \wedge b \implies a$ *)
(* * $\neg (a \wedge b) \vee a$ *)
(* * $\neg a \vee \neg b \vee a$ *)
(* * $\top \vee \neg b$ *)
(* * $\top$ *)
(* * Hence, $\forall a,b.a \wedge b \implies a$ *)
(* * We capture "arbitrary $\alpha$" by requiring that $\alpha$ does not occur in the assumptions $\Gamma$, i.e., $\alpha$ is not a free type variable of $\Gamma$. *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Curry-Howard Correspondence *)
(* *)
(* \\[ *)
(* \begin{array}{cc} *)
(* \inferrulel{\Gamma \vdash \forall \alpha.A} *)
(* {\Gamma \vdash A[B/\alpha]}{(\forall ~elim)} & *)
(* \inferrulel{\Gamma \vdash A \quad \alpha \notin FTV(\Gamma)} *)
(* {\Gamma \vdash \forall \alpha.A} *)
(* {(\forall ~intro)} *)
(* \end{array} *)
(* \\] *)
(* *)
(* * $\forall~elim$ is universal specialisation *)
(* + If a statement holds for all propositions $\alpha$, then it also holds for any particular proposition $B$. *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Missing Logical Connectives *)
(* *)
(* It turns out that $\forall$ and $\implies$ are sufficient to encode the other logical connectives. *)
(* *)
(* \\[ *)
(* \begin{array}{rcl} *)
(* A \wedge B & \Leftrightarrow & \forall \alpha. (A \Rightarrow B \Rightarrow \alpha) \Rightarrow \alpha \\ *)
(* A \vee B & \Leftrightarrow & \forall \alpha. (A \Rightarrow \alpha) \Rightarrow (B \Rightarrow \alpha) \Rightarrow \alpha \\ *)
(* \neg A & \Leftrightarrow & \forall \alpha.A \Rightarrow \alpha \\ *)
(* \top & \Leftrightarrow & \forall \alpha.\alpha \Rightarrow \alpha \\ *)
(* \bot & \Leftrightarrow & \forall \alpha.\alpha \\ *)
(* \exists \beta.A & \Leftrightarrow & \forall \alpha.(\forall \beta.A \Rightarrow \alpha) \Rightarrow \alpha *)
(* \end{array} *)
(* \\] *)
(* *)
(* **Exercise:** Using informal intuitionistic reasoning, prove the equivalences above. *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* ## Pair type again *)
(* *)
(* Pair type is *)
(* *)
(* \\[ *)
(* A \times B = \forall \alpha.(A \rightarrow B \rightarrow \alpha) \rightarrow \alpha *)
(* \\] *)
(* *)
(* The conjunction operator is defined as *)
(* *)
(* \\[ *)
(* A \wedge B \Leftrightarrow \forall \alpha. (A \Rightarrow B \Rightarrow \alpha) \Rightarrow \alpha *)
(* \\] *)
(* + [markdown] slideshow={"slide_type": "slide"}
(* <center> *)
(* *)
(* <h1 style="text-align:center"> Fin. </h1> *)
(* </center> *)
| lectures/SystemF/systemf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # sentences - find similar sentences
#
# This is a homework project for Craig Douglas' [*Big Data and Mining*](http://mgnet.org/~douglas/Classes/bigdata/index.html) class, written in Python. This is my second attempt; my first attempt was written in C, used a different definition of distance, and was never pushed to GitHub.
#
# ## Problem Description
#
# ### Distance Function
#
# A *change* to a sentence $\alpha$ can be:
#
# * deleting a word from $\alpha$;
# * or adding a word to $\alpha$.
#
# A distance function $d(\cdot,\cdot)$ of sentences is then defined as:
#
# $$ d(\alpha,\beta):=\text{minimum number of changes applied to $\alpha$ to get $\beta$}
# . $$
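#
# For instance (an example of my own), take $\alpha=$ "the cat sat" and $\beta=$ "the dog sat". Then
#
# $$ d(\alpha,\beta)=2: $$
#
# deleting "cat" and then adding "dog" transforms $\alpha$ into $\beta$, and no single
# change can do it, since each change alters the word count by one while
# $l(\alpha)=l(\beta)$.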
#
# The goal is to filter out a set of sentences in a text file, given $k$, such that:
#
# * for any two distinct sentences $\alpha, \beta$ in output file, $d(\alpha,\beta)>k$,
# * and for every sentence $\alpha$ in the input file, there is at least a sentence $\beta$ in
# the output file such that $d(\alpha,\beta)\le k$.
#
# ---
#
# ## Usage
#
# This code is written in Python 2 but should be compatible with Python 3. Four built-in
# modules are employed: `timeit`, `collections`, `itertools`, and `argparse`.
#
# Run `python[3] sentences.py -h` to show usage information:
# !python sentences.py -h
# ### Example
#
# Solve distance 2 problem on *1M.txt*, and write the result to file *out.txt*:
#
# ```sh
# python sentences.py 1M.txt -d2 -o out.txt
# ```
#
# ---
#
# ## Algorithm
#
# For distance 0, the `set` container is used to remove identical sentences. Without I/O, the
# code can be implemented in one line:
#
# ```python
# distinct_sentences = set(input_sentences)
# ```
#
# Since `set` is implemented using a hash table, this algorithm has complexity linear in the
# number of sentences.
#
# The rest of this section talks about solving $k>0$.
#
# ### Basic Idea
#
# We define some notations as follows:
#
# * $l(\alpha)$: the number of words in sentence $\alpha$;
# * $\alpha - n$: the set of all strings obtained from sentence $\alpha$ by deleting $n$ words;
# * $\alpha -m = \beta - n$: shorthand for $(\alpha-m) \cap (\beta-n) \ne \emptyset$; that is, there
#   exists a way to remove $m$ words from sentence $\alpha$ and $n$ words from sentence $\beta$
#   so that the results are identical.
#
# Given two sentences $\alpha$ and $\beta$, if
#
# $$ \alpha-m = \beta-n, $$
#
# then
#
# $$ d(\alpha, \beta) = m+n-2p, \text{ for some } p \in \mathbb{N}.$$
#
# If $l(\alpha)-l(\beta)=h\ge0$, since $l(\alpha)-m = l(\beta)-n$, we must have $m-n=h$. Thus
#
# $$ d(\alpha,\beta) = h+2n-2p = h+2t, \text{ for some } t\in\mathbb{N}. $$
#
# Then $d(\alpha, \beta) \le k$ if and only if
#
# $$ \alpha - (t+h) = \beta - t, \text{ and } 2t+h\le k, $$
# which is equivalent to
# $$ \alpha - (t+h) = \beta - t,\; t=\left\lfloor\frac{k-h}2\right\rfloor. $$
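#
# The derivation above can be sketched in Python (a naive illustration, not the
# package's actual implementation; `deletions` and `within_distance` are my names):

```python
from itertools import combinations

def deletions(sentence, n):
    """All results of deleting exactly n words from sentence, as word tuples."""
    words = tuple(sentence.split())
    keep = max(len(words) - n, 0)
    return set(combinations(words, keep))

def within_distance(a, b, k):
    """Decide d(a, b) <= k via the derivation above: with h = l(a) - l(b) >= 0
    and t = (k - h) // 2, test whether (a - (t + h)) and (b - t) intersect."""
    if len(a.split()) < len(b.split()):
        a, b = b, a
    h = len(a.split()) - len(b.split())
    if h > k:
        return False
    t = (k - h) // 2
    return bool(deletions(a, t + h) & deletions(b, t))
```

# Each call materialises every word-deletion subset, so this sketch is only
# practical for short sentences and small $k$.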
#
# ### Functions
#
# For a set $A$ of $p$-word sentences and a set $B$ of $q$-word sentences, say we want to
# remove all $\beta\in B$ such that $d(\alpha, \beta)\le k$ for some $\alpha\in A$. Without loss of
# generality, we assume $p\ge q$. Two functions are written to handle the two cases $p = q$ and
# $0<p-q\le k$: `amam()` and `ambn()` respectively.
#
# ### Traps and Tricks
#
# One can easily end up deleting more sentences than necessary when $k>0$. For example, if we
# remove $\alpha$ because $d(\alpha, \beta)\le k$ for some $\beta$, and then remove
# $\beta$ because $d(\beta, \gamma)\le k$, there is a chance that no sentence in
# our result is within distance $k$ of $\alpha$. To avoid this situation, we go through all
# sentences from the longest to the shortest, and always remove the shorter sentence when a
# pair of neighbors is found.
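#
# The longest-to-shortest strategy can be sketched as follows (a simplified
# quadratic illustration, not the actual implementation; `close` stands for any
# predicate deciding $d(\cdot,\cdot)\le k$):

```python
def filter_sentences(sentences, close, k):
    """Keep a sentence only if no already-kept sentence is within distance k."""
    kept = []
    # longest first, so the shorter member of a close pair is the one dropped
    for s in sorted(sentences, key=lambda s: len(s.split()), reverse=True):
        if not any(close(s, t, k) for t in kept):
            kept.append(s)
    return kept
```

# Every kept pair is farther than $k$ apart, and every dropped sentence was close
# to something kept, which is exactly the pair of requirements stated earlier.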
#
# ---
#
# ## Result & Performance
#
# File metadata:
#
# | Input file | # of lines | file size |
# | :--------: | ---------- | --------- |
# | 100.txt | 100 | 12K |
# | 1K.txt | 1,000 | 96K |
# | 10K.txt | 10,000 | 884K |
# | 100K.txt | 100,000 | 8.4M |
# | 1M.txt | 1,000,000 | 85M |
# | 5M.txt | 5,000,000 | 428M |
# | 25M.txt | 25,000,000 | 2.1G |
#
# Performance (in seconds):
#
# | Input file | Distance 0 | Distance 1 | Distance 2 |
# | :--------: | ---------- | ---------- | ----------- |
# | 100.txt | 0.000120 | 0.002948 | 0.002877 |
# | 1K.txt | 0.000390 | 0.016582 | 0.131046 |
# | 10K.txt | 0.003342 | 0.148118 | 1.858770 |
# | 100K.txt | 0.040530 | 1.492312 | 21.979624 |
# | 1M.txt | 0.545091 | 16.050876 | 287.006949 |
# | 5M.txt | 2.829443 | 72.055627 | 1526.300508 |
# | 25M.txt | 18.832576 | 250.135241 | 5350.679652 |
#
# Result (# of output sentences)
#
# | Input file | Distance 0 | Distance 1 | Distance 2 |
# | :--------: | ---------- | ---------- | ---------- |
# | 100.txt | 98 | 98 | 98 |
# | 1K.txt | 921 | 921 | 917 |
# | 10K.txt | 9179 | 9160 | 9075 |
# | 100K.txt | 84111 | 83646 | 80873 |
# | 1M.txt | 769170 | 760391 | 714946 |
# | 5M.txt | 3049422 | 2996383 | 2763966 |
# | 25M.txt | 8703720 | 8506155 | 7712287 |
#
# ---
#
# ## Reference
#
# The **Big Sentences** problem is described in [Craig's webpage](http://mgnet.org/~douglas/Classes/common-problems/index.html#BigSentences), which also contains all sentence files used in the above section.
#
# ---
#
# ## Roadmap
#
# - [ ] Add unit test
| README.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mcvine
# language: python
# name: mcvine
# ---
import mcvine
from instrument.geometry.pml import weave
from instrument.geometry import operations,shapes
import math
import os, sys
parent_dir = os.path.abspath(os.pardir)
libpath = os.path.join(parent_dir, 'c3dp_source')
figures_path = os.path.join (parent_dir, 'figures')
sample_path = os.path.join (parent_dir, 'sample')
if not libpath in sys.path:
sys.path.insert(0, libpath)
# sys.path.insert(0, '/home/fi0/python3/lib/python3.5/site-packages')
import SCADGen.Parser
from collimator_zigzagBlade import Collimator_geom
# import viewscad
# import solid
# +
scad_flag = True ########CHANGE CAD FLAG HERE
if scad_flag is True:
savepath = figures_path
else:
savepath = sample_path
# -
colimator_front_end_from_center= 39. # although the cell diameter is 3 mm, the collimator cannot start at 3 mm:
                                     # at that radius there would be no blades (it would be all channels), since
                                     # the minimum channel size is 3 mm
length_of_each_part=60.
# +
########################### LAST PART COMPONENTS ##############################
coll1_length=length_of_each_part
channel1_length=length_of_each_part
min_channel_wall_thickness=1.
minimum_channel_size = 3.
coll1_height_detector=150.
coll1_width_detector=60*2.
coll1_height_detector_right=coll1_height_detector+20.
coll1_front_end_from_center=colimator_front_end_from_center+(2.*length_of_each_part)
print ('last collimator front end from center', coll1_front_end_from_center)
coll1_length_fr_center=coll1_front_end_from_center+coll1_length
print ('last collimator back end from center', coll1_length_fr_center)
# +
import numpy as np
wall_angular_thickness=2*(np.rad2deg(np.arctan((min_channel_wall_thickness/2.)/coll1_length_fr_center)))
print ('wall angular thickness', wall_angular_thickness)
channel_angular_thickness=2*(np.rad2deg(np.arctan((minimum_channel_size/2.)/coll1_length_fr_center)))
print ('channel angular thickness', channel_angular_thickness)
# +
########################### FIRST PART ##############################
coll3_length=length_of_each_part
channel3_length=length_of_each_part
coll3_inner_radius=colimator_front_end_from_center+(0.*length_of_each_part)
print ('inner radius',coll3_inner_radius)
coll3_outer_radius=coll3_length+coll3_inner_radius
print ('outer radius', coll3_outer_radius)
coll3_channel_gap_at_detector = (minimum_channel_size/coll3_inner_radius)*coll3_outer_radius
print ('minimum channel gap at big end', coll3_channel_gap_at_detector)
coll3_height_detector=(coll1_height_detector/coll1_length_fr_center)*coll3_outer_radius
coll3_height_detector_right=(coll1_height_detector_right/coll1_length_fr_center)*coll3_outer_radius
print ('height detector', coll3_height_detector)
coll3_width_detector=(coll1_width_detector/coll1_length_fr_center)*coll3_outer_radius #half part
# coll3_width_detector=(coll1_width_detector/coll1_length_fr_center)*coll3_outer_radius*2 #full part
print ('width detector', coll3_width_detector)
vertical_odd_blades= True
horizontal_odd_blades =True
coll3 = Collimator_geom()
coll3.set_constraints(max_coll_height_detector=coll3_height_detector,
max_coll_width_detector=coll3_width_detector,
min_channel_wall_thickness=min_channel_wall_thickness,
max_coll_length=coll3_length,
min_channel_size=3,
collimator_front_end_from_center=coll3_inner_radius,
# remove_vertical_blades_manually =True, #only full part
# vertical_blade_index_list_toRemove = [7],#only full part
# remove_horizontal_blades_manually = True, #only full part
# horizontal_blade_index_list_toRemove = [9], #only full part
collimator_parts=False,
no_right_border= True,
no_top_border = True,
horizontal_odd_blades = False,
vertical_odd_blades = False,
)
horizontal_acceptance_angle = coll3.horizontal_acceptance_angle
print ('horizontal acceptance angle', coll3.horizontal_acceptance_angle)
print ('vertical acceptance angle' , coll3.vertical_acceptance_angle)
rotation_angle_for_right_parts = horizontal_acceptance_angle/2.
fist_vertical_number_blades = math.floor (coll3.Vertical_number_channels(channel3_length))
fist_horizontal_number_blades = math.floor(coll3.Horizontal_number_channels(channel3_length))
print ('vertical #channels' , fist_vertical_number_blades)
print ('horizontal # channels' , fist_horizontal_number_blades)
if fist_vertical_number_blades %2 ==0:
fist_vertical_number_blades-=1
if fist_horizontal_number_blades %2 ==0:
fist_horizontal_number_blades-=1
print ('modified vertical #channels' , fist_vertical_number_blades)
print ('modified horizontal # channels' , fist_horizontal_number_blades)
# if vertical_odd_blades:
# if fist_vertical_number_blades %2 != 0:
# fist_vertical_number_blades-= 1
# else:
# if fist_vertical_number_blades %2 ==0:
# fist_vertical_number_blades-=1
# if horizontal_odd_blades:
# if fist_horizontal_number_blades %2 != 0:
# fist_horizontal_number_blades-= 1
# else:
# if fist_horizontal_number_blades %2 ==0:
# fist_horizontal_number_blades-=1
# coll3.set_parameters(vertical_number_channels=28,horizontal_number_channels=11*2,
# channel_length =channel3_length) # the full first part
# coll3_R.set_parameters(vertical_number_channels=28,horizontal_number_channels=11*2
# ,channel_length =channel3_length)
coll3.set_parameters(vertical_number_channels=fist_vertical_number_blades,horizontal_number_channels=fist_horizontal_number_blades,
channel_length =channel3_length)
print ('vertical channel angle :' ,coll3.vertical_channel_angle)
print ('horizontal channel angle :' ,coll3.horizontal_channel_angle)
col_first = coll3.gen_one_col(collimator_Nosupport=True)
# coli_first_right = coll3_R.gen_collimators(detector_angles=[180.+ 12],multiple_collimator=False, collimator_Nosupport=True)
# +
########################## MIDDLE PART #########################################
testing_distance = 0
coll2_length=length_of_each_part
channel2_length=length_of_each_part
coll2_inner_radius=colimator_front_end_from_center+(1.*length_of_each_part) + testing_distance
print ('inner radius', coll2_inner_radius)
coll2_outer_radius=coll2_length+coll2_inner_radius
print ('outer radius', coll2_outer_radius)
coll2_channel_gap_at_detector = (minimum_channel_size/coll2_inner_radius)*coll2_outer_radius
print ('minimum channel gap at big end', coll2_channel_gap_at_detector)
coll2_height_detector=(coll1_height_detector/coll1_length_fr_center)*coll2_outer_radius
coll2_height_detector_right=(coll1_height_detector_right/coll1_length_fr_center)*coll2_outer_radius
print ('collimator height at detector', coll2_height_detector)
coll2_width_detector=(coll1_width_detector/coll1_length_fr_center)*coll2_outer_radius
print ('coll2_width_detector', coll2_width_detector)
coll2_channel_index_to_remove = int (coll3_channel_gap_at_detector/minimum_channel_size)
print ('channel index to remove' ,coll2_channel_index_to_remove)
coll2 = Collimator_geom()
coll2.set_constraints(max_coll_height_detector=coll2_height_detector,
max_coll_width_detector=coll2_width_detector,
min_channel_wall_thickness=min_channel_wall_thickness,
max_coll_length=coll2_length,
min_channel_size=3,
collimator_front_end_from_center=coll2_inner_radius,
collimator_parts=True,
initial_collimator_horizontal_channel_angle=0.0,
initial_collimator_vertical_channel_angle= 0.0,
# remove_vertical_blades_manually =True,
# vertical_blade_index_list_toRemove = [6,9],
# remove_horizontal_blades_manually =True,
# horizontal_blade_index_list_toRemove = [2, 5, 8, 11, 14, 17],
no_right_border= True,
no_top_border = True,
vertical_even_blades= False,
horizontal_even_blades= False)
middle_vertical_number_blades = math.floor (coll2.Vertical_number_channels(channel2_length))
middle_horizontal_number_blades = math.floor(coll2.Horizontal_number_channels(channel2_length))
print ('vertical # chanels', coll2.Vertical_number_channels(channel2_length))
print ('horizontal # channels', coll2.Horizontal_number_channels(channel2_length))
if middle_vertical_number_blades %2 ==0:
middle_vertical_number_blades-=1
if middle_horizontal_number_blades %2 ==0:
middle_horizontal_number_blades-=1
print ('modified vertical #channels' , middle_vertical_number_blades)
print ('modified horizontal # channels' , middle_horizontal_number_blades)
coll2.set_parameters(vertical_number_channels=middle_vertical_number_blades,horizontal_number_channels=middle_horizontal_number_blades,
channel_length =channel2_length)
print ('vertical channel angle :' ,coll2.vertical_channel_angle)
print ('horizontal channel angle :' ,coll2.horizontal_channel_angle)
coli_middle = coll2.gen_one_col(collimator_Nosupport=True)
# print (coll1_height_detector/coll2_height_detector)
# +
#################### LAST PARTS ################################
coll1_channel_index_to_remove = int (coll2_channel_gap_at_detector/minimum_channel_size)
print ('channel index to remove' ,coll1_channel_index_to_remove)
col_last_left = Collimator_geom()
col_last_left.set_constraints(max_coll_height_detector=coll1_height_detector,
max_coll_width_detector=coll1_width_detector,
min_channel_wall_thickness=min_channel_wall_thickness,
max_coll_length=coll1_length,
min_channel_size=3.,
collimator_front_end_from_center=coll1_front_end_from_center,
# remove_horizontal_blades_manually =True,
# horizontal_blade_index_list_toRemove = [2, 5, 7, 10, 12, 15, 21, 24,26,29],
# remove_vertical_blades_manually =True,
# vertical_blade_index_list_toRemove = [2,4,7,18,23, 24 ],
collimator_parts=True,
no_right_border= True,
no_top_border = True,
vertical_odd_blades=False,
horizontal_odd_blades=False )
last_vertical_number_blades = math.floor (col_last_left.Vertical_number_channels(channel1_length))
last_horizontal_number_blades = math.floor(col_last_left.Horizontal_number_channels(channel1_length))
print ('vertical # channels', col_last_left.Vertical_number_channels(channel1_length))
print ('horizontal # channels' , col_last_left.Horizontal_number_channels(channel1_length))
if last_vertical_number_blades %2 ==0:
last_vertical_number_blades-=1
if last_horizontal_number_blades %2 ==0:
last_horizontal_number_blades-=1
print ('modified vertical #channels' , last_vertical_number_blades)
print ('modified horizontal # channels' , last_horizontal_number_blades)
col_last_left.set_parameters(vertical_number_channels=last_vertical_number_blades,horizontal_number_channels=last_horizontal_number_blades,
channel_length =channel1_length)
print ('vertical channel angle :' ,col_last_left.vertical_channel_angle)
print ('horizontal channel angle :' ,col_last_left.horizontal_channel_angle)
colilast = col_last_left.gen_one_col(collimator_Nosupport=True)
# +
pyr_lateral_middle = shapes.pyramid(
thickness='%s *mm' % coll1_height_detector_right,
# height='%s *mm' % (height),
height='%s *mm' % (coll1_length_fr_center),
width='%s *mm' % coll1_width_detector)
pyr_lateral_middle = operations.rotate(pyr_lateral_middle, transversal=1, angle='%s *degree' % (90))
pyr_lateral_left_middle = operations.rotate(pyr_lateral_middle, vertical="1",
angle='%s*deg' % (180 + 180-rotation_angle_for_right_parts-wall_angular_thickness-channel_angular_thickness-0.03))
pyr_lateral_right_middle = operations.rotate(pyr_lateral_middle, vertical="1",
angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts+wall_angular_thickness+channel_angular_thickness+0.15)))
# pyr_lateral_right_last = operations.rotate(pyr_lateral, vertical="1",
# angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts/2.)))
# +
factor = 10
pyr_lateral_last = shapes.pyramid(
thickness='%s *mm' % coll1_height_detector_right,
# height='%s *mm' % (height),
height='%s *mm' % (coll1_length_fr_center+factor),
width='%s *mm' % coll1_width_detector)
wall_angular_thickness_last=2*(np.rad2deg(np.arctan((min_channel_wall_thickness/2.)/(coll1_length_fr_center+10))))
print ('wall angular thickness (last part)', wall_angular_thickness_last)
channel_angular_thickness_last=2*(np.rad2deg(np.arctan((minimum_channel_size/2.)/(coll1_length_fr_center+10))))
print ('channel angular thickness (last part)', channel_angular_thickness_last)
pyr_lateral_last = operations.rotate(pyr_lateral_last, transversal=1, angle='%s *degree' % (90))
pyr_lateral_left_last = operations.rotate(pyr_lateral_last, vertical="1",
angle='%s*deg' % (180 + 180-rotation_angle_for_right_parts+wall_angular_thickness-channel_angular_thickness+0.5))
# pyr_lateral_right_last = operations.rotate(pyr_lateral_last, vertical="1",
# angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts+(wall_angular_thickness*wall_angular_thickness_last*factor)+channel_angular_thickness-0.12)))
pyr_lateral_right_last = operations.rotate(pyr_lateral_last, vertical="1",
angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts+0.75)))
# pyr_lateral_right_last = operations.rotate(pyr_lateral, vertical="1",
# angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts/2.)))
# +
# both=operations.unite(coli_middle_left, colilast_left)
# both= operations.unite(operations.unite
# (operations.unite(operations.unite(operations.unite(colilast_left, col_first), colilast_right),
# coli_first_right), coli_middle_left), coli_middle_right)
whole= operations.unite(operations.unite(colilast, col_first),
coli_middle)
whole_first_part = col_first
whole_middle_part = coli_middle
whole_last_part = colilast
first_middle = operations.unite (col_first, coli_middle)
middle_last = operations.unite (coli_middle, colilast)
# first_left = operations.subtract(whole_first_part, pyr_lateral_right)
# first_right = operations.subtract(whole_first_part, pyr_lateral_left)
middle_left = operations.subtract(whole_middle_part, pyr_lateral_right_middle)
middle_right = operations.subtract(whole_middle_part, pyr_lateral_left_middle)
last_left = operations.subtract(whole_last_part, pyr_lateral_right_last)
last_right = operations.subtract(whole_last_part, pyr_lateral_left_last)
middle_left_last_left = operations.unite( middle_left, last_left)
middle_right_last_right = operations.unite(middle_right, last_right)
whole_joint = operations.unite(operations.unite(middle_left_last_left, middle_right_last_right), whole_first_part)
whole_last_joint = operations.unite(last_left, last_right)
whole_middle_joint = operations.unite(middle_left, middle_right)
# both=operations.unite(coli_middle, coli2R)
# both=operations.unite(operations.unite(operations.unite(coli2, coli3), coli2R), coli3R)
file='whole_joint_part_New'
filename='%s.xml'%(file)
outputfile=os.path.join(savepath, filename)
with open (outputfile,'wt') as file_h:
weave(whole_joint,file_h, print_docs = False)
# file='last_right_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(last_right,file_h, print_docs = False)
# file='whole_last_joint_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(whole_last_joint,file_h, print_docs = False)
# file='middle_left_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(middle_left,file_h, print_docs = False)
# file='middle_right_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(middle_right,file_h, print_docs = False)
# file='whole_middle_joint_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(whole_middle_joint,file_h, print_docs = False)
# -
p = SCADGen.Parser.Parser(outputfile)
p.createSCAD()
test = p.rootelems[0]
cadFile_name='%s.scad'%(file)
cad_file_path=os.path.abspath(os.path.join(savepath, cadFile_name))
cad_file_path
# +
# # !vglrun openscad {cad_file_path}
# -
| notebooks/collimator_differentBlade_threeSections-splitting_correctly.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbpresent={"id": "dea75246-a31b-40b4-a49d-b9afe6eee2ee"}
# # Table of contents
# * [1. Agrégation externe de mathématiques](#1.-Agrégation-externe-de-mathématiques)
# * [1.1 Oral lesson, computer science option](#1.1-Oral-lesson,-computer-science-option)
# * [2. The Cocke-Kasami-Younger algorithm](#2.-The-Cocke-Kasami-Younger-algorithm)
# * [2.0.1 Implementation of a développement for lessons 906, 907, 910, 923.](#2.0.1-Implementation-of-a-développement-for-lessons-906,-907,-910,-923.)
# * [2.0.2 References](#2.0.2-References)
# * [2.1 Classes to represent a grammar](#2.1-Classes-to-represent-a-grammar)
# * [2.1.1 Typing in Python?!](#2.1.1-Typing-in-Python?!)
# * [2.1.2 The ``Grammaire`` class](#2.1.2-The-Grammaire-class)
# * [2.1.3 First example of a grammar (non-Chomsky)](#2.1.3-First-example-of-a-grammar-%28non-Chomsky%29)
# * [2.1.4 Second example of a grammar (non-Chomsky)](#2.1.4-Second-example-of-a-grammar-%28non-Chomsky%29)
# * [2.1.5 Last example of a grammar](#2.1.5-Last-example-of-a-grammar)
# * [2.2 Checking that a grammar is well-formed](#2.2-Checking-that-a-grammar-is-well-formed)
# * [2.3 Checking that a grammar is in Chomsky normal form](#2.3-Checking-that-a-grammar-is-in-Chomsky-normal-form)
# * [2.4 (finally) The Cocke-Kasami-Younger algorithm](#2.4-%28finally%29-The-Cocke-Kasami-Younger-algorithm)
# * [2.5 Examples](#2.5-Examples)
# * [2.5.1 With $G_3$](#2.5.1-With-$G_3$)
# * [2.5.2 With $G_6$](#2.5.2-With-$G_6$)
# * [2.6 Conversion to Chomsky normal form *(bonus)*](#2.6-Conversion-to-Chomsky-normal-form-*%28bonus%29*)
# * [2.6.1 Example with $G_1$](#2.6.1-Example-with-$G_1$)
# * [2.6.2 Example with $G_5$](#2.6.2-Example-with-$G_5$)
#
# + [markdown] nbpresent={"id": "69c52735-d8a8-451b-93bb-eac98efd50ca"}
# # 1. Agrégation externe de mathématiques
# + [markdown] nbpresent={"id": "49566064-31f5-48d8-9843-4cdc6e3ef152"}
# ## 1.1 Oral lesson, computer science option
# + [markdown] nbpresent={"id": "9b3d1cdf-ca96-42cc-988b-be3a8a4715d9"}
# > - This [Jupyter notebook](http://jupyter.org/) is an implementation of an algorithm constituting a *développement* (worked example) for the computer science option of the French agrégation externe de mathématiques.
# > - The algorithm is the [Cocke-Kasami-Younger algorithm](https://fr.wikipedia.org/wiki/Algorithme_de_Cocke-Younger-Kasami).
# > - This (partial) implementation was written by [<NAME>](http://perso.crans.org/besson/) ([on GitHub?](https://github.com/Naereen/), [on Bitbucket?](https://bitbucket.org/lbesson)), and [is open-source](https://github.com/Naereen/notebooks/blob/master/agreg/Algorithme%20de%20Cocke-Kasami-Younger%20%28python3%29.ipynb).
#
# > #### Feedback?
# > - Found a bug? → [Please report it!](https://github.com/Naereen/notebooks/issues/new), thanks in advance.
# > - Have a question? → [Please ask it!](https://github.com/Naereen/ama.fr)
#
# ----
# + [markdown] nbpresent={"id": "29640a95-6ba1-4db6-90c6-873eebb49d06"}
# # 2. The Cocke-Kasami-Younger algorithm
# + [markdown] nbpresent={"id": "977443a2-4e87-43f0-8198-65a8a02d55a4"}
# ### 2.0.1 Implementation of a développement for lessons 906, 907, 910, 923.
# + [markdown] nbpresent={"id": "7999839d-b7e5-43d6-8ede-6765f12a8b3c"}
# The Cocke-Kasami-Younger (CYK) algorithm solves the word problem in time $\mathcal{O}(|w|^3)$, by dynamic programming.
# The grammar $G$ must already have been put into [Chomsky normal form](https://fr.wikipedia.org/wiki/Forme_normale_de_Chomsky), which takes time $\mathcal{O}(|G|^2)$ and, starting from $G$ (which must be well-formed), produces an equivalent grammar $G'$ of size $\mathcal{O}(|G|^2)$.
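#
# As a reminder, the dynamic program rests on the following recurrence (this restates, with indices running from $1$ to $n = |w|$, what the implementation in section 2.4 computes; $E_{i,j}$ is the set of variables generating the sub-word $w_i \dots w_j$):
# $$ E_{i,i} = \{ A \in V : (A \rightarrow w_i) \in R \}, \qquad E_{i,j} = \{ A \in V : \exists k \in [i, j-1], \; (A \rightarrow B C) \in R, \; B \in E_{i,k}, \; C \in E_{k+1,j} \}. $$
# Then $w \in L(G)$ if and only if $S \in E_{1,n}$.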
# + [markdown] nbpresent={"id": "981227ac-e731-447f-8fe9-b25e4e546547"}
# ### 2.0.2 References
# + [markdown] nbpresent={"id": "67357bc2-f510-485b-a17d-fd89b46bbaf7"}
# - [Cocke-Kasami-Younger on Wikipedia](https://fr.wikipedia.org/wiki/Algorithme_de_Cocke-Younger-Kasami),
# - Covered thoroughly in ["Hopcroft, Ullman", Ch. 7.4.4, p. 298](https://catalogue.ens-cachan.fr/cgi-bin/koha/opac-detail.pl?biblionumber=23694),
# - Sketched in ["Carton", Ex. 4.7, Fig. 4.2, p. 170](https://catalogue.ens-cachan.fr/cgi-bin/koha/opac-detail.pl?biblionumber=41719),
# - [A développement written up as a PDF by <NAME> (2014)](http://perso.eleves.ens-rennes.fr/~tpier758/agreg/dvpt/info/CYK.pdf),
# - [These slides from a course on languages and grammars](http://pageperso.lif.univ-mrs.fr/~alexis.nasr/Ens/M2/pcfg.pdf).
#
# ----
# + [markdown] nbpresent={"id": "1da0f4ab-ccfc-412f-9efa-e7520d73a821"}
# ## 2.1 Classes to represent a grammar
# + [markdown] nbpresent={"id": "221b1aa3-999b-46bd-9ef6-ac240ccded3f"}
# Instead of the formal types one would define in OCaml, we use Python classes to represent a grammar (not only in Chomsky normal form, but in a slightly more general form).
# + [markdown] nbpresent={"id": "588b818a-344d-44f3-910f-8b094ab72da0"}
# ### 2.1.1 Typing in Python?!
# + [markdown] nbpresent={"id": "eadf8257-9a0e-4406-9455-5b82921f0457"}
# But since I want to show off with formal types, we will use [Python type annotations](https://www.python.org/dev/peps/pep-0484/).
# These are fairly recent, available **from Python 3.5 onwards**. If you want to learn more, a good first read is [this page](https://mypy.readthedocs.io/en/latest/builtin_types.html).
#
# *Note:* these type annotations are NOT required.
# + nbpresent={"id": "aa8251b1-61e4-4914-9ad1-e66efd88b21f"}
# We need lists and tuples
from typing import List, Tuple  # Module available in Python >= 3.5
# + [markdown] nbpresent={"id": "7acecf25-b05e-42f7-9269-1ea5ba9e5265"}
# We define the types we are interested in:
# + nbpresent={"id": "9471cb9a-937d-4f95-9f54-260b3b2e4da3"}
# Type for a variable: just a string, e.g., 'X' or 'S'
Var = str
# Type for an alphabet
Alphabet = List[Var]
# Type for a rule: a symbol rewritten into a list of symbols
Regle = Tuple[Var, List[Var]]
# + [markdown] nbpresent={"id": "609def09-9448-4ad3-ab50-ab5b76700114"}
# *Note:* these type annotations are only there to illustrate and to help the programmer; Python remains a dynamically typed language (i.e., anything goes...).
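# To make this note concrete, here is a tiny (purely illustrative) check that annotations are not enforced at runtime:

```python
from typing import List

Var = str            # same alias as above
Alphabet = List[Var]

# The annotation claims Var (i.e. str), but Python happily stores an int:
x: Var = 42          # a static checker such as mypy would flag this line
print(type(x).__name__)
```

Running it prints ``int``: the annotation is only metadata for tools and readers, not a runtime check.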
# + [markdown] nbpresent={"id": "a1f2c855-d249-45fd-a621-65f5d25aedd9"}
# ### 2.1.2 The ``Grammaire`` class
# + [markdown] nbpresent={"id": "aa98b297-0aa0-4ca7-b9ec-64c271212a60"}
# A grammar $G$ is defined by:
#
# - $\Sigma$, its production alphabet (terminals): the letters appearing in the words produced at the end, e.g., $\Sigma = \{ a, b\}$,
# - $V$, its working alphabet (non-terminals): the letters used during the generation of words, but absent from the final words, e.g., $V = \{S, A\}$,
# - $S$, the initial working symbol,
# - $R$, a set of rules of the form $U \rightarrow x_1 \dots x_n$, where $U \in V$ is a working variable (**not** a production letter) and $x_1, \dots, x_n$ are production or working symbols (in $\Sigma \cup V$), e.g., $R = \{ S \rightarrow \varepsilon, S \rightarrow A S b, A \rightarrow a, A \rightarrow a a \}$.
#
#
# We can thus define a class ``Grammaire``, which is nothing more than a way to bundle these values $\Sigma$, $V$, $S$, and $R$ (in OCaml, it would be a record type, defined for instance by ``type grammar = { sigma : string list; v : string list; s : string; r : (string * string list) list; };;``).
#
# We also add a ``__str__`` method to the ``Grammaire`` class, to pretty-print a grammar.
# + nbpresent={"id": "90271656-b57e-408d-b001-5f689f7ec7ae"}
class Grammaire(object):
    """ Type for context-free grammars (in Chomsky normal form, or a slightly more general form). """
    def __init__(self, sigma: Alphabet, v: Alphabet, s: Var, r: List[Regle], nom="G"):
        """ Grammar in Chomsky form:
        - sigma : production alphabet, of type Alphabet,
        - v : working alphabet, of type Alphabet,
        - s : initial symbol, of type Var,
        - r : list of rules, of type List[Regle].
        """
        # We simply store the fields:
        self.sigma = sigma
        self.v = v
        self.s = s
        self.r = r
        self.nom = nom
    def __str__(self) -> str:
        """ Pretty-print the grammar. """
        str_regles = ', '.join(
            "{} -> {}".format(regle[0], ''.join(regle[1]) if regle[1] else 'ε')
            for regle in self.r
        )
        return r"""Grammar {} :
    - Alphabet Σ = {},
    - Non-terminals V = {},
    - Initial symbol: '{}',
    - Rules: {}.""".format(self.nom, set(self.sigma), set(self.v), self.s, str_regles)
# + [markdown] nbpresent={"id": "78754350-e18a-4e1f-ad61-62ca80b13577"}
# ### 2.1.3 First example of a grammar (non-Chomsky)
# + [markdown] nbpresent={"id": "f81db6b0-5909-4012-9172-5f8d69719308"}
# We start with a basic first example: the grammar $G_1$ with the single rule $S \rightarrow aSb \;|\; \varepsilon$.
# It is the natural, well-formed grammar for the words of the form $a^n b^n$, for all $n \geq 0$.
# See [this example on Wikipedia](https://fr.wikipedia.org/wiki/Grammaire_non_contextuelle#Exemple_1).
# However, it is not in Chomsky normal form.
# + nbpresent={"id": "76f6562f-42fe-46ad-9428-e04c9715e2b9"}
g1 = Grammaire(
    ['a', 'b'],  # Production alphabet
    ['S'],       # Working alphabet
    'S',         # Initial symbol (only one)
    [  # Rules
        ('S', []),               # S -> ε
        ('S', ['a', 'S', 'b']),  # S -> a S b
    ],
    nom="G1"
)
print(g1)
# + [markdown] nbpresent={"id": "c1a3f18e-6007-4803-8f1d-919099eb9d4d"}
# ### 2.1.4 Second example of a grammar (non-Chomsky)
# + [markdown] nbpresent={"id": "7902e165-ed68-4295-81c6-d9cb677071f6"}
# Here is another basic example: the grammar $G_2$, generating the correctly parenthesized arithmetic expressions in three variables $x$, $y$ and $z$.
# A single production rule (a union of productions) suffices:
# $$ S \rightarrow x \;|\; y \;|\; z \;|\; S+S \;|\; S-S \;|\; S*S \;|\; S/S \;|\; (S). $$
#
# See [this other example on Wikipedia](https://fr.wikipedia.org/wiki/Grammaire_non_contextuelle#Exemple_2).
# + nbpresent={"id": "23c6f46c-01de-4de7-a493-c2c12b91705a"}
g2 = Grammaire(
    ['x', 'y', 'z', '+', '-', '*', '/', '(', ')'],  # Production alphabet
    ['S'],  # Working alphabet
    'S',    # Initial symbol (only one)
    [  # Rules
        ('S', ['x']),            # S -> x
        ('S', ['y']),            # S -> y
        ('S', ['z']),            # S -> z
        ('S', ['S', '+', 'S']),  # S -> S + S
        ('S', ['S', '-', 'S']),  # S -> S - S
        ('S', ['S', '*', 'S']),  # S -> S * S
        ('S', ['S', '/', 'S']),  # S -> S / S
        ('S', ['(', 'S', ')']),  # S -> (S)
    ],
    nom="G2"
)
print(g2)
# + [markdown] nbpresent={"id": "4d5849a2-1473-43cc-b262-f73fcc3c7fe9"}
# ### 2.1.5 Last example of a grammar
# + [markdown] nbpresent={"id": "dfe894be-2610-444e-9c7c-3592c0fd387e"}
# Here is a last, less basic example: the grammar $G_3$, generating "simple" (and very limited) English sentences.
# [Inspired by this example on Wikipedia](https://en.wikipedia.org/wiki/CYK_algorithm#Example).
# This grammar $G_3$ is in Chomsky normal form.
# + nbpresent={"id": "ab608e34-9e94-46d8-97a3-a16e11748e39"}
g3 = Grammaire(
    # Production alphabet: actual English words (with a trailing space so that the sentence stays readable)
    ['she ', 'eats ', 'with ', 'fish ', 'fork ', 'a ', 'an ', 'ork ', 'sword '],
    # Working alphabet: word categories, V for verbs, P for prepositions, etc.
    ['S', 'NP ', 'VP ', 'PP ', 'V ', 'Det ', 'DetVo ', 'N ', 'NVo ', 'P '],
    # Det = a : determiner
    # DetVo = an : determiner before a noun starting with a vowel
    # N = (fish, fork, sword) : a noun
    # NVo = ork : a noun starting with a vowel
    # NP = she | a (fish, fork, sword) | an ork : a noun phrase
    # V = eats : conjugated verb
    # P = with : preposition
    # VP = eats | V NP : verb phrase (a verb, possibly followed by an object)
    # PP = with NP : prepositional phrase
    'S',  # Initial symbol (only one)
    [  # Rules
        # Sentence-building rules
        ('S', ['NP ', 'VP ']),        # S -> NP VP
        ('VP ', ['VP ', 'PP ']),      # VP -> VP PP
        ('VP ', ['V ', 'NP ']),       # VP -> V NP
        ('PP ', ['P ', 'NP ']),       # PP -> P NP
        ('NP ', ['Det ', 'N ']),      # NP -> Det N
        ('NP ', ['DetVo ', 'NVo ']),  # NP -> DetVo NVo
        # Word-producing rules
        ('VP ', ['eats ']),   # VP -> eats
        ('NP ', ['she ']),    # NP -> she
        ('V ', ['eats ']),    # V -> eats
        ('P ', ['with ']),    # P -> with
        ('N ', ['fish ']),    # N -> fish
        ('N ', ['fork ']),    # N -> fork
        ('N ', ['sword ']),   # N -> sword
        ('NVo ', ['ork ']),   # NVo -> ork
        ('Det ', ['a ']),     # Det -> a
        ('DetVo ', ['an ']),  # DetVo -> an
    ],
    nom="G3"
)
print(g3)
# + [markdown] nbpresent={"id": "93c8ccae-724e-4a0b-ae3b-d77a7686d50f"}
# We will use these example grammars later, to check that our functions are correctly written.
#
# ----
# + [markdown] nbpresent={"id": "8ba9e526-7b9d-46b8-aff5-c24770802e0b"}
# ## 2.2 Checking that a grammar is well-formed
# + [markdown] nbpresent={"id": "a89bc86c-9498-4223-aca1-664c432e2fed"}
# We want to be able to check that a grammar $G$ (i.e., an instance of ``Grammaire``) is well-formed (see your formal-language course for a proper definition):
#
# - $S$ must be a working variable, i.e., $S \in V$,
# - The production letters and the working variables must be distinct, i.e., $\Sigma \cap V = \emptyset$,
# - For each rule $r = A \rightarrow w$, the left-hand side is a single working variable, and the right-hand side is a word, empty or made of production letters and working variables, i.e., $A \in V$ and $w \in (\Sigma \cup V)^{\star}$.
#
# This is easily checked with the following function:
# + nbpresent={"id": "d11ac540-3961-4fe1-9e42-dcaed4f80ee4"}
def estBienFormee(self: Grammaire) -> bool:
    """ Check that G is well-formed. """
    sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r
    tests = [
        s in v,               # s is indeed a working variable
        sigma.isdisjoint(v),  # Letters and working variables are disjoint
        all(
            regle[0] in v  # Left-hand sides of rules are working variables
            and            # Right-hand sides contain only variables or letters
            all(r in sigma | v for r in regle[1])
            for regle in regles
        )
    ]
    return all(tests)
# We also attach the function as a method (just in case...)
Grammaire.estBienFormee = estBienFormee
# + nbpresent={"id": "33675f93-4578-44b5-b7c3-cd58e0b6205f"}
for g in [g1, g2, g3]:
    print(g)
    print("Is the grammar", g.nom, "well-formed?", estBienFormee(g))
    print()
# + [markdown] nbpresent={"id": "b06222f2-f0a6-483f-8b88-e7084f954d42"}
# To see the difference, we can define another grammar that is not well-formed.
# This grammar $G_4$ generates the words of the form $a^{n+k} b^n$ for $n, k \in \mathbb{N}$, but it is given a doubling rule for $a$: $a \rightarrow a a$ (note that $a$, a production letter, appears on the left-hand side of a rule).
# + nbpresent={"id": "afebb980-fd35-4b8c-9d9e-e33644b43915"}
g4 = Grammaire(
    ['a', 'b'],  # Production alphabet
    ['S'],       # Working alphabet
    'S',         # Initial symbol (only one)
    [  # Rules
        ('S', []),               # S -> ε
        ('S', ['a', 'S', 'b']),  # S -> a S b
        ('a', ['a', 'a']),       # a -> a a: this rule makes the grammar ill-formed
    ],
    nom="G4"
)
print(g4)
print("Is the grammar", g4.nom, "well-formed?", estBienFormee(g4))
# + [markdown] nbpresent={"id": "92a7f470-cf9e-443a-9e4d-2dfdc60dab0a"}
# Just out of curiosity, here it is transformed into a well-formed grammar; here we only needed to add a working variable $A$ that can produce $a$ or $A A$:
# + nbpresent={"id": "5e3a665e-ff7b-4ec1-88b8-77f93147f76d"}
g5 = Grammaire(
    ['a', 'b'],  # Production alphabet
    ['S', 'A'],  # Working alphabet
    'S',         # Initial symbol (only one)
    [  # Rules
        ('S', []),               # S -> ε
        ('S', ['A', 'S', 'b']),  # S -> A S b
        ('A', ['A', 'A']),       # A -> A A: this is how a -> a a is handled
        ('A', ['a']),            # A -> a
    ],
    nom="G5"
)
print(g5)
print("Is the grammar", g5.nom, "well-formed?", estBienFormee(g5))
# + [markdown] nbpresent={"id": "532339d5-5c9d-431d-812b-3d594e71870c"}
# ## 2.3 Checking that a grammar is in Chomsky normal form
# + [markdown] nbpresent={"id": "737fc81b-be7b-484f-8323-da0f543bc541"}
# We now want to check that a grammar $G$ (i.e., an instance of ``Grammaire``) is in Chomsky normal form.
# Indeed, the CYK algorithm has no chance of working if the grammar is not in the right form.
#
# For $G$ to be in Chomsky normal form:
# - it must first be well-formed (see above),
# - and every rule must be
#     - either of the form $S \rightarrow \varepsilon$,
#     - or of the form $A \rightarrow a$ with $(A, a)$ in $V \times \Sigma$,
#     - or of the form $A \rightarrow B C$ with $(A, B, C)$ in $V^3$ (some textbooks additionally require that the initial symbol $S$ never occurs on a right-hand side, i.e., $B, C \neq S$, but this changes nothing for the algorithm implemented below).
#
# We check this easily, point by point, in the following function:
# + nbpresent={"id": "24c410d2-56f5-4c30-abd3-506afd8e9a06"}
def estChomsky(self: Grammaire) -> bool:
    """ Check that G is in Chomsky normal form. """
    sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r
    estBienChomsky = all(
        (  # S -> epsilon
            regle[0] == s and not regle[1]
        ) or (  # A -> a
            len(regle[1]) == 1
            and regle[1][0] in sigma  # a in Sigma
        ) or (  # A -> B C
            len(regle[1]) == 2
            and regle[1][0] in v  # B in V, not in Sigma
            and regle[1][1] in v  # C in V, not in Sigma
        )
        for regle in regles
    )
    return estBienChomsky and estBienFormee(self)
# We also attach the function as a method (just in case...)
Grammaire.estChomsky = estChomsky
# + [markdown] nbpresent={"id": "3323c87c-3015-46e9-a08f-9b197123ea54"}
# We can test it on the five grammars defined above ($G_1$, $G_2$, $G_3$, $G_4$, $G_5$).
# Only the grammar $G_3$ is in Chomsky normal form.
# + nbpresent={"id": "a8f2efbb-f560-4e6c-8c9c-228fd8c3d0ca"}
for g in [g1, g2, g3, g4, g5]:
    print(g)
    print("Is the grammar", g.nom, "well-formed?", estBienFormee(g))
    print("Is the grammar", g.nom, "in Chomsky normal form?", estChomsky(g))
    print()
# + [markdown] nbpresent={"id": "e4ba99dc-0283-4e95-a79a-fb015dac3272"}
# By hand, we can transform $G_5$ into Chomsky normal form (and afterwards, move on to CYK).
# Note that this transformation is automatic: it is implemented in the general case (of a well-formed grammar $G$) below, in section 2.6.
# + nbpresent={"id": "8d550da7-ce2e-441a-b5ab-f10874825883"}
g6 = Grammaire(
    ['a', 'b'],            # Production alphabet
    ['S', 'T', 'A', 'B'],  # Working alphabet
    'S',                   # Initial symbol (only one)
    [  # Rules
        ('S', []),  # S -> ε: S can be erased if we want to produce the empty word
        # The rule S -> A S B is split in two:
        ('S', ['A', 'T']),  # S -> A T
        ('T', ['S', 'B']),  # T -> S B
        ('A', ['A', 'A']),  # A -> A A: this is how a -> a a is handled
        # Letter production
        ('A', ['a']),  # A -> a
        ('B', ['b']),  # B -> b
    ],
    nom="G6"
)
print(g6)
print("Is the grammar", g6.nom, "well-formed?", estBienFormee(g6))
print("Is the grammar", g6.nom, "in Chomsky normal form?", estChomsky(g6))
# + [markdown] nbpresent={"id": "1251164f-12db-4f25-8be3-f9673dff5224"}
# ## 2.4 (finally) The Cocke-Kasami-Younger algorithm
# + [markdown] nbpresent={"id": "2fc4b95c-7173-48bc-a321-132965860cf9"}
# We *finally* move on to the Cocke-Kasami-Younger algorithm.
#
# The algorithm takes a well-formed grammar $G$ of size $|G|$ (defined as the sum of the lengths of $\Sigma$ and $V$, plus the sum of the sizes of the rules), together with a word $w$ of length $n = |w|$ (**careful**: it is not a ``str`` but a list of variables, ``List[Var]``, i.e., a list of ``str``).
#
# The goal is to decide whether the word $w$ can be generated by the grammar $G$, i.e., whether $w \in L(G)$.
# For the details of how it works, see the Python code below, or [the Wikipedia page](https://fr.wikipedia.org/wiki/Algorithme_de_Cocke-Younger-Kasami).
#
# The algorithm has:
#
# - memory complexity $\mathcal{O}(|G| + |w|^2)$,
# - time complexity $\mathcal{O}(|G| \times |w|^3)$, which shows that the word problem for grammars in Chomsky normal form is in $\mathcal{P}$ (polynomial time, already nice), and indeed in reasonable time (cubic in $n = |w|$, even better!).
#
# We use a hash table ``E`` which, at the end of the computation, contains the sets $E_{i, j}$ defined by:
# $$ E_{i, j} := \{ A \in V : w[i, j] \in L_G(A) \},$$
# where $w[i, j] = w_i \dots w_j$ denotes the sub-word with indices $i, \dots, j$, and $L_G(A)$ denotes the language generated by $G$ starting from the symbol $A$ (instead of the initial symbol $S$).
#
# *Note:* the hash table is not really required; a list of lists would work too, but the notation would then be further from the mathematical one.
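# As a small illustration of the loop structure used below (same 0-based index conventions), here is the order in which the main loop visits the cells $(i, j)$ of ``E`` for a word of length $n = 3$:

```python
# Order in which CYK fills the table E for n = 3: spans by increasing length d
n = 3
cells = []
for d in range(1, n):        # length of the span
    for i in range(n - d):   # start of the span
        j = i + d            # end of the span
        cells.append((i, j))
print(cells)
```

This prints ``[(0, 1), (1, 2), (0, 2)]``: all the shorter spans come first, then the full word, so every sub-result is already available when it is needed.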
# + nbpresent={"id": "f5923095-60a8-4bee-99f9-4882995c47b4"}
def cocke_kasami_younger(self, w):
    """ Check whether the word w is in L(G). """
    assert estChomsky(self), "Error: {} is not in Chomsky normal form, the Cocke-Kasami-Younger algorithm will not work.".format(self.nom)
    sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r
    n = len(w)
    E = dict()  # Of size n^2
    # Special case: test whether the empty word is in L(G)
    if n == 0:
        return (s, []) in regles, E
    # Loop in O(n^2)
    for i in range(n):
        for j in range(n):
            E[(i, j)] = set()
    # Loop in O(n x |G|)
    for i in range(n):
        for regle in regles:
            # If the rule has the form A -> a
            if len(regle[1]) == 1:
                A = regle[0]
                a = regle[1][0]
                if w[i] == a:  # Note: this is the only place where the word w is used!
                    E[(i, i)] = E[(i, i)] | {A}
    # Loop in O(n^3 x |G|)
    for d in range(1, n):          # Length of the span
        for i in range(n - d):     # Start of the span
            j = i + d              # End of the span: we look at w[i]..w[j]
            for k in range(i, j):  # Split point ..w[k].., excluding the end
                for regle in regles:
                    # If the rule has the form A -> B C
                    if len(regle[1]) == 2:
                        A = regle[0]
                        B, C = regle[1]
                        if B in E[(i, k)] and C in E[(k + 1, j)]:
                            E[(i, j)] = E[(i, j)] | {A}
    # Done: we just read off the table built by dynamic programming
    return s in E[(0, n - 1)], E
# We also attach the function as a method (just in case...)
Grammaire.genere = cocke_kasami_younger
# + [markdown] nbpresent={"id": "d1b05240-02ab-4500-b066-0c6402b6f904"}
# ----
# + [markdown] nbpresent={"id": "5773e8e4-8eea-46b9-8a93-24d0a49ef67b"}
# ## 2.5 Examples
# + [markdown] nbpresent={"id": "2d9326be-33da-4d10-b886-13e23c95ea30"}
# Here are some examples of use of the function ``cocke_kasami_younger``, with the grammars $G_i$ defined above and a few example words $w$.
# + nbpresent={"id": "9dfc4bf1-c35a-43ba-a0b1-f5fa6c3bae7c"}
def testeMot(g, w):
    """ Pretty-print one membership test. """
    print("# Testing whether w is in L(G):")
    print("  For", g.nom, "and w =", w)
    estDansLG, E = cocke_kasami_younger(g, w)
    if estDansLG:
        print("  ==> This word is generated by G!")
    else:
        print("  ==> This word is not generated by G!")
    return estDansLG, E
# + [markdown] nbpresent={"id": "c4c76082-22f2-41da-9be6-d692bf75fb7f"}
# ### 2.5.1 With $G_3$
# + nbpresent={"id": "f11cc557-9a8c-43d9-8259-02de9950d273"}
print(g3)
print(estChomsky(g3))
# + nbpresent={"id": "ef2f9cfc-1edf-4d45-925f-1a0709592e2e"}
w1 = [ "she ", "eats ", "a ", "fish ", "with ", "a ", "fork " ] # True
estDansLG1, E1 = testeMot(g3, w1)
# + [markdown] nbpresent={"id": "8119a069-284d-44b4-8734-49e9305d3356"}
# For this example, we can display the table ``E`` (showing only the cells whose $E_{i, j}$ is non-empty):
# + nbpresent={"id": "ff6fe115-4ffe-47b6-b50d-4c816e75c844"}
for k in E1.copy():
    if k in E1 and not E1[k]:  # Remove the keys whose E[(i, j)] is empty
        del E1[k]
print(E1)
# + [markdown] nbpresent={"id": "65b58209-c5a4-4cea-95ec-727afa29f96e"}
# ----
# + nbpresent={"id": "9b75140e-97f5-420c-9416-2a26fc073568"}
w2 = [ "she ", "attacks ", "a ", "fish ", "with ", "a ", "fork " ] # False
estDansLG2, E2 = testeMot(g3, w2)
# + nbpresent={"id": "4213cd21-6ae3-4806-a50c-e6e8897449b3"}
w3 = [ "she ", "eats ", "an ", "ork ", "with ", "a ", "sword " ] # True
estDansLG3, E3 = testeMot(g3, w3)
# + [markdown] nbpresent={"id": "6c8e5dc3-8e39-4463-bb32-708931a9ac19"}
# More examples:
# + nbpresent={"id": "ef05902d-aa96-4f44-9138-0c82da8c9b27"}
w4 = [ "she ", "eats ", "an ", "fish ", "with ", "a ", "fork " ] # False
estDansLG4, E4 = testeMot(g3, w4)
w5 = [ "she ", "eat ", "a ", "fish ", "with ", "a ", "fork " ] # False
estDansLG5, E5 = testeMot(g3, w5)
w6 = [ "she ", "eats ", "a ", "fish ", "with ", "a ", "fish " , "with ", "a ", "fish " , "with ", "a ", "fish " , "with ", "a ", "fish " ] # True
estDansLG6, E6 = testeMot(g3, w6)
# + [markdown] nbpresent={"id": "2609b145-67f9-4b81-a868-b1485626b70a"}
# ### 2.5.2 With $G_6$
# + nbpresent={"id": "9625bf15-25ab-4a1f-8399-921b81aa315a"}
print(g6)
for w in [ [], ['a', 'b'], ['a', 'a', 'a', 'b', 'b', 'b'], # True, True, True
['a', 'a', 'a', 'a', 'b', 'b', 'b'], # True
['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b'], # True
['a', 'b', 'a'], ['a', 'a', 'a', 'b', 'b', 'b', 'b'], # False, False
['c'], ['a', 'a', 'a', 'c'], # False, False
]:
testeMot(g6, w)
# + [markdown] nbpresent={"id": "eeb5fbb8-ad49-4129-b380-e1778d23f0b4"}
# ----
# + [markdown] nbpresent={"id": "f117119d-b1bd-4ba7-84b7-788e6a1a181b"}
# ## 2.6 Conversion to Chomsky normal form *(bonus)*
# + [markdown] nbpresent={"id": "4290aeb6-33da-4e25-934c-67389557731a"}
# We can also implement the conversion to Chomsky normal form, as presented and proved in the développement.
#
# The proof done in the développement guarantees that the function below transforms a grammar $G$ into an equivalent grammar $G'$, up to the possible loss of the empty word $\varepsilon$:
# $$ L(G') = L(G) \setminus \{ \varepsilon \}. $$
#
# The algorithm has:
#
# - memory complexity $\mathcal{O}(|G|)$,
# - time complexity $\mathcal{O}(|G| \, |\Sigma_G|)$.
#
# It is a two-step algorithm:
#
# 1. First, $G$ is transformed into $G'$: we add a working variable $V_a \in V$ for each production letter $a \in \Sigma$, replace each $a$ occurring in the right-hand sides of rules by the new $V_a$, and then add the letter-producing rules $V_a \rightarrow a$ to $R$,
# 2. Then $G''$ is obtained by splitting the rules of $G'$ whose right-hand sides have size $> 2$: a rule $S \rightarrow S_1 \dots S_n$ becomes $n-1$ rules: $S \rightarrow S_1 S_2'$, $S_i' \rightarrow S_i S_{i+1}'$ (for $i = 2, \dots, n-2$), and $S_{n-1}' \rightarrow S_{n-1} S_n$. All these new variables $S_i'$ must also be added (making sure they are unique across rules); for this we append the rule number: $S_i' =$ ``A'_k`` for the ``k``-th rule and the symbol $S_i =$ ``A``.
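# Step 2 can be sketched in isolation. The helper below (``split_rule``, a hypothetical name not used in this notebook) splits a single long rule into binary rules exactly as described above:

```python
def split_rule(lhs, rhs, k):
    """Split lhs -> rhs (with len(rhs) >= 3) into binary rules; k tags the new variables."""
    prime = lambda Si: "%s'_%d" % (Si, k)  # unique new variable names, as in the text
    rules = [(lhs, [rhs[0], prime(rhs[1])])]             # S -> S_1 S'_2
    for i in range(1, len(rhs) - 2):                     # S'_i -> S_i S'_{i+1}
        rules.append((prime(rhs[i]), [rhs[i], prime(rhs[i + 1])]))
    rules.append((prime(rhs[-2]), [rhs[-2], rhs[-1]]))   # S'_{n-1} -> S_{n-1} S_n
    return rules

print(split_rule('S', ['A', 'S', 'B'], 0))
```

For the rule $S \rightarrow A S B$ (rule number $k = 0$), this yields the two binary rules ``('S', ['A', "S'_0"])`` and ``("S'_0", ['S', 'B'])``, matching the hand-made split used for $G_6$ above.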
# + nbpresent={"id": "dd178a86-cdc5-40f7-99b8-59d33415cb8d"}
def miseChomsky(self):
    """ Convert the grammar self, which must be well-formed, to Chomsky normal form.
    - The alphabet sigma is assumed to be included in {a,..,z},
    - The alphabet v is assumed to be included in {A,..,Z}.
    """
    assert estBienFormee(self), "Error: {} is not well-formed, the conversion to Chomsky normal form will not work.".format(self.nom)
    sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r
    if estChomsky(self):
        print("Info: the grammar {} is already in Chomsky normal form, there is nothing to do.".format(self.nom))
        return Grammaire(list(sigma), list(v), s, regles, nom=self.nom)
    assert sigma < set(chr(i) for i in range(ord('a'), ord('z') + 1)), "Error: the production letters Sigma of {} are not all in 'a'..'z' ...".format(self.nom)
    assert v < set(chr(i) for i in range(ord('A'), ord('Z') + 1)), "Error: the working letters V of {} are not all in 'A'..'Z' ...".format(self.nom)
    # Two-step algorithm: G --> G', then G' --> G''
    # 1. G --> G': add letter variables, and substitute a -> V_a in the other rules
    # Attributes of G', to be filled in
    sigma2 = list(sigma)
    v2 = set(v)
    s2 = s
    regles2 = []
    V_ = lambda a: 'V_{}'.format(a)
    for a in sigma:
        v2.add(V_(a))
        # Rules are stored as tuples, like in the input grammars, so that the
        # empty-word test `(s, []) in regles` in cocke_kasami_younger keeps working
        regles2.append((V_(a), [a]))  # Add the rule V_a -> a (production of the corresponding letter)
    substitutionLettre = lambda b: V_(b) if (b in sigma) else b
    substitutionMot = lambda lb: [substitutionLettre(b) for b in lb]
    for regle in regles:
        S = regle[0]
        w = regle[1]
        if len(w) >= 2:  # Substitute only in right-hand sides with at least two symbols
            regles2.append((S, substitutionMot(w)))
        else:  # Keep ε-rules (so the empty word stays producible) and A -> a rules unchanged
            # (note: unit rules A -> B are not eliminated by this partial implementation)
            regles2.append((S, w))
    nom2 = self.nom + "'"
    print(Grammaire(list(sigma2), list(v2), s2, regles2, nom=nom2))
    # 2. G' --> G'': split the rules A -> A1..An with n > 2
    # Attributes of G'', to be filled in
    sigma3 = list(sigma2)
    v3 = set(v2)
    s3 = s2
    regles3 = []
    for k, regle in enumerate(regles2):
        S = regle[0]
        w = regle[1]  # w = S1 .. Sn
        n = len(w)
        if n > 2:
            prime = lambda Si: "%s'_%d" % (Si, k)  # Appending the rule number k makes the new working variables unique
            # First rule: S -> S_1 S'_2
            regles3.append((S, [w[0], prime(w[1])]))
            v3.add(prime(w[1]))
            for i in range(1, len(w) - 2):
                # Each intermediate rule: S'_i -> S_i S'_{i+1}
                regles3.append((prime(w[i]), [w[i], prime(w[i + 1])]))
                v3.add(prime(w[i]))
                v3.add(prime(w[i + 1]))
            # Last rule: S'_{n-1} -> S_{n-1} S_n
            regles3.append((prime(w[n - 2]), [w[n - 2], w[n - 1]]))
            v3.add(prime(w[n - 2]))
        else:
            regles3.append((S, w))
    # Done
    nom3 = self.nom + "''"
    return Grammaire(list(sigma3), list(v3), s3, regles3, nom=nom3)
# We also attach the function as a method (just in case...)
Grammaire.miseChomsky = miseChomsky
# + [markdown] nbpresent={"id": "ac1e7550-4c59-4052-9815-e57cbdd37dee"}
# ### 2.6.1 Example with $G_1$
# + nbpresent={"id": "9a23d60c-82eb-497e-a2f6-5a7fae98839a"}
print(g1)
print("\n(No) Is the grammar", g1.nom, "in Chomsky normal form?", estChomsky(g1))
print("\nLet us try to convert it to Chomsky normal form...\n")
g1_Chom = miseChomsky(g1)
print(g1_Chom)
print("\n ==> Is the grammar", g1_Chom.nom, "in Chomsky normal form?", estChomsky(g1_Chom))
# + [markdown] nbpresent={"id": "ade465ca-80a7-45a8-a507-b12316f31994"}
# ### 2.6.2 Example with $G_5$
# + nbpresent={"id": "16c2a9b2-acdc-458d-872f-474686ffd1c7"}
print(g5)
print("\n(No) Is the grammar", g5.nom, "in Chomsky normal form?", estChomsky(g5))
print("\nLet us try to convert it to Chomsky normal form...\n")
g5_Chom = miseChomsky(g5)
print(g5_Chom)
print("\n ==> Is the grammar", g5_Chom.nom, "in Chomsky normal form?", estChomsky(g5_Chom))
# + [markdown] nbpresent={"id": "55b060c3-5d0a-419a-b2d1-39ec92eb2fee"}
# ----
#
# > *That's all for today, folks!*
# > [Go browse the other notebooks](https://github.com/Naereen/notebooks/tree/master/agreg) if you like.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from pyvi import ViTokenizer
import re
import string
import codecs
import time
# +
# Lexicons of positive words, negative words, and negation words
path_nag = 'data3/nag.txt'
path_pos = 'data3/pos.txt'
path_not = 'data3/not.txt'
# +
with codecs.open(path_nag, 'r', encoding='UTF-8') as f:
nag = f.readlines()
nag_list = [n.replace('\n', '') for n in nag]
with codecs.open(path_pos, 'r', encoding='UTF-8') as f:
pos = f.readlines()
pos_list = [n.replace('\n', '') for n in pos]
with codecs.open(path_not, 'r', encoding='UTF-8') as f:
not_ = f.readlines()
not_list = [n.replace('\n', '') for n in not_]
# +
VN_CHARS_LOWER = u'ạảãàáâậầấẩẫăắằặẳẵóòọõỏôộổỗồốơờớợởỡéèẻẹẽêếềệểễúùụủũưựữửừứíìịỉĩýỳỷỵỹđð'
VN_CHARS_UPPER = u'ẠẢÃÀÁÂẬẦẤẨẪĂẮẰẶẲẴÓÒỌÕỎÔỘỔỖỒỐƠỜỚỢỞỠÉÈẺẸẼÊẾỀỆỂỄÚÙỤỦŨƯỰỮỬỪỨÍÌỊỈĨÝỲỶỴỸÐĐ'
VN_CHARS = VN_CHARS_LOWER + VN_CHARS_UPPER
# This function removes Vietnamese diacritics (tone/vowel marks) from a string
def no_marks(s):
__INTAB = [ch for ch in VN_CHARS]
__OUTTAB = "a"*17 + "o"*17 + "e"*11 + "u"*11 + "i"*5 + "y"*5 + "d"*2
__OUTTAB += "A"*17 + "O"*17 + "E"*11 + "U"*11 + "I"*5 + "Y"*5 + "D"*2
__r = re.compile("|".join(__INTAB))
__replaces_dict = dict(zip(__INTAB, __OUTTAB))
result = __r.sub(lambda m: __replaces_dict[m.group(0)], s)
return result
# -
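The `no_marks` lookup table above can also be sketched with the standard library's Unicode decomposition, which avoids hand-maintaining the `VN_CHARS`/`__OUTTAB` pairs. `strip_diacritics` below is a hypothetical helper for illustration, not part of this notebook.

```python
import unicodedata

def strip_diacritics(s):
    # 'đ'/'Đ' do not decompose into base letter + combining mark, so map them explicitly
    s = s.replace('đ', 'd').replace('Đ', 'D')
    # NFD-decompose, then drop the combining marks
    decomposed = unicodedata.normalize('NFD', s)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics('đẹp quá'))  # -> dep qua
```

The table-based version keeps explicit control over which characters are mapped; the Unicode version trades that for brevity.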
def normalize_text(text):
#Collapse elongated characters, e.g. "đẹppppppp" -> "đẹp"
text = re.sub(r'([A-Z])\1+', lambda m: m.group(1).upper(), text, flags=re.IGNORECASE)
# Convert to lowercase
text = text.lower()
#Normalize Vietnamese spelling, map emoji, and normalize English words and slang
replace_list = {
'òa': 'oà', 'óa': 'oá', 'ỏa': 'oả', 'õa': 'oã', 'ọa': 'oạ', 'òe': 'oè', 'óe': 'oé','ỏe': 'oẻ',
'õe': 'oẽ', 'ọe': 'oẹ', 'ùy': 'uỳ', 'úy': 'uý', 'ủy': 'uỷ', 'ũy': 'uỹ','ụy': 'uỵ', 'uả': 'ủa',
'ả': 'ả', 'ố': 'ố', 'u´': 'ố','ỗ': 'ỗ', 'ồ': 'ồ', 'ổ': 'ổ', 'ấ': 'ấ', 'ẫ': 'ẫ', 'ẩ': 'ẩ',
'ầ': 'ầ', 'ỏ': 'ỏ', 'ề': 'ề','ễ': 'ễ', 'ắ': 'ắ', 'ủ': 'ủ', 'ế': 'ế', 'ở': 'ở', 'ỉ': 'ỉ',
'ẻ': 'ẻ', 'àk': u' à ','aˋ': 'à', 'iˋ': 'ì', 'ă´': 'ắ','ử': 'ử', 'e˜': 'ẽ', 'y˜': 'ỹ', 'a´': 'á',
#Map emoji icons to one of two sentiment tokens: positive or nagative
"👹": "nagative", "👻": "positive", "💃": "positive",'🤙': ' positive ', '👍': ' positive ',
"💄": "positive", "💎": "positive", "💩": "positive","😕": "nagative", "😱": "nagative", "😸": "positive",
"😾": "nagative", "🚫": "nagative", "🤬": "nagative","🧚": "positive", "🧡": "positive",'🐶':' positive ',
'👎': ' nagative ', '😣': ' nagative ','✨': ' positive ', '❣': ' positive ','☀': ' positive ',
'♥': ' positive ', '🤩': ' positive ', 'like': ' positive ', '💌': ' positive ',
'🤣': ' positive ', '🖤': ' positive ', '🤤': ' positive ', ':(': ' nagative ', '😢': ' nagative ',
'❤': ' positive ', '😍': ' positive ', '😘': ' positive ', '😪': ' nagative ', '😊': ' positive ',
'?': ' ? ', '😁': ' positive ', '💖': ' positive ', '😟': ' nagative ', '😭': ' nagative ',
'💯': ' positive ', '💗': ' positive ', '♡': ' positive ', '💜': ' positive ', '🤗': ' positive ',
'^^': ' positive ', '😨': ' nagative ', '☺': ' positive ', '💋': ' positive ', '👌': ' positive ',
'😖': ' nagative ', '😀': ' positive ', ':((': ' nagative ', '😡': ' nagative ', '😠': ' nagative ',
'😒': ' nagative ', '🙂': ' positive ', '😏': ' nagative ', '😝': ' positive ', '😄': ' positive ',
'😙': ' positive ', '😤': ' nagative ', '😎': ' positive ', '😆': ' positive ', '💚': ' positive ',
'✌': ' positive ', '💕': ' positive ', '😞': ' nagative ', '😓': ' nagative ', '️🆗️': ' positive ',
'😉': ' positive ', '😂': ' positive ', ':v': ' positive ', '=))': ' positive ', '😋': ' positive ',
'💓': ' positive ', '😐': ' nagative ', ':3': ' positive ', '😫': ' nagative ', '😥': ' nagative ',
'😃': ' positive ', '😬': ' 😬 ', '😌': ' 😌 ', '💛': ' positive ', '🤝': ' positive ', '🎈': ' positive ',
'😗': ' positive ', '🤔': ' nagative ', '😑': ' nagative ', '🔥': ' nagative ', '🙏': ' nagative ',
'🆗': ' positive ', '😻': ' positive ', '💙': ' positive ', '💟': ' positive ',
'😚': ' positive ', '❌': ' nagative ', '👏': ' positive ', ';)': ' positive ', '<3': ' positive ',
'🌝': ' positive ', '🌷': ' positive ', '🌸': ' positive ', '🌺': ' positive ',
'🌼': ' positive ', '🍓': ' positive ', '🐅': ' positive ', '🐾': ' positive ', '👉': ' positive ',
'💐': ' positive ', '💞': ' positive ', '💥': ' positive ', '💪': ' positive ',
'💰': ' positive ', '😇': ' positive ', '😛': ' positive ', '😜': ' positive ',
'🙃': ' positive ', '🤑': ' positive ', '🤪': ' positive ','☹': ' nagative ', '💀': ' nagative ',
'😔': ' nagative ', '😧': ' nagative ', '😩': ' nagative ', '😰': ' nagative ', '😳': ' nagative ',
'😵': ' nagative ', '😶': ' nagative ', '🙁': ' nagative ',
#Normalize some sentiment words and English words
':))': ' positive ', ':)': ' positive ', 'ô kêi': ' ok ', 'okie': ' ok ', ' o kê ': ' ok ',
'okey': ' ok ', 'ôkê': ' ok ', 'oki': ' ok ', ' oke ': ' ok ',' okay':' ok ','okê':' ok ',
' tks ': u' cám ơn ', 'thks': u' cám ơn ', 'thanks': u' cám ơn ', 'ths': u' cám ơn ', 'thank': u' cám ơn ',
'⭐': 'star ', '*': 'star ', '🌟': 'star ', '🎉': u' positive ',
'kg ': u' không ','not': u' không ', u' kg ': u' không ', '"k ': u' không ',' kh ':u' không ','kô':u' không ','hok':u' không ',' kp ': u' không phải ',u' kô ': u' không ', '"ko ': u' không ', u' ko ': u' không ', u' k ': u' không ', 'khong': u' không ', u' hok ': u' không ',
'he he': ' positive ','hehe': ' positive ','hihi': ' positive ', 'haha': ' positive ', 'hjhj': ' positive ',
' lol ': ' nagative ',' cc ': ' nagative ','cute': u' dễ thương ','huhu': ' nagative ', ' vs ': u' với ', 'wa': ' quá ', 'wá': u' quá', 'j': u' gì ', '“': ' ',
' sz ': u' cỡ ', 'size': u' cỡ ', u' đx ': u' được ', 'dk': u' được ', 'dc': u' được ', 'đk': u' được ',
'đc': u' được ','authentic': u' chuẩn chính hãng ',u' aut ': u' chuẩn chính hãng ', u' auth ': u' chuẩn chính hãng ', 'thick': u' positive ', 'store': u' cửa hàng ',
'shop': u' cửa hàng ', 'sp': u' sản phẩm ', 'gud': u' tốt ','god': u' tốt ','wel done':' tốt ', 'good': u' tốt ', 'gút': u' tốt ',
'sấu': u' xấu ','gut': u' tốt ', u' tot ': u' tốt ', u' nice ': u' tốt ', 'perfect': 'rất tốt', 'bt': u' bình thường ',
'time': u' thời gian ', 'qá': u' quá ', u' ship ': u' giao hàng ', u' m ': u' mình ', u' mik ': u' mình ',
'ể': 'ể', 'product': 'sản phẩm', 'quality': 'chất lượng','chat':' chất ', 'excelent': 'hoàn hảo', 'bad': 'tệ','fresh': ' tươi ','sad': ' tệ ',
'date': u' hạn sử dụng ', 'hsd': u' hạn sử dụng ','quickly': u' nhanh ', 'quick': u' nhanh ','fast': u' nhanh ','delivery': u' giao hàng ',u' síp ': u' giao hàng ',
'beautiful': u' đẹp tuyệt vời ', u' tl ': u' trả lời ', u' r ': u' rồi ', u' shopE ': u' cửa hàng ',u' order ': u' đặt hàng ',
'chất lg': u' chất lượng ',u' sd ': u' sử dụng ',u' dt ': u' điện thoại ',u' nt ': u' nhắn tin ',u' tl ': u' trả lời ',u' sài ': u' xài ',u'bjo':u' bao giờ ',
'thik': u' thích ',u' sop ': u' cửa hàng ', ' fb ': ' facebook ', ' face ': ' facebook ', ' very ': u' rất ',u'quả ng ':u' quảng ',
'dep': u' đẹp ',u' xau ': u' xấu ','delicious': u' ngon ', u'hàg': u' hàng ', u'qủa': u' quả ',
'iu': u' yêu ','fake': u' giả mạo ', 'trl': 'trả lời', '><': u' positive ',
' por ': u' tệ ',' poor ': u' tệ ', 'ib':u' nhắn tin ', 'rep':u' trả lời ',u'fback':' feedback ','fedback':' feedback ',
#Map ratings below 3 stars to 1star and above 3 stars to 5star
'6 sao': ' 5star ','6 star': ' 5star ', '5star': ' 5star ','5 sao': ' 5star ','5sao': ' 5star ',
'starstarstarstarstar': ' 5star ', '1 sao': ' 1star ', '1sao': ' 1star ','2 sao':' 1star ','2sao':' 1star ',
'2 starstar':' 1star ','1star': ' 1star ', '0 sao': ' 1star ', '0star': ' 1star ',}
for k, v in replace_list.items():
text = text.replace(k, v)
# Convert punctuation to spaces
translator = str.maketrans(string.punctuation, ' ' * len(string.punctuation))
text = text.translate(translator)
text = ViTokenizer.tokenize(text)
texts = text.split()
len_text = len(texts)
texts = [t.replace('_', ' ') for t in texts]
for i in range(len_text):
cp_text = texts[i]
if cp_text in not_list: # Handle negation (e.g. "áo này chẳng đẹp" -> "áo này notpos")
numb_word = 2 if len_text - i - 1 >= 4 else len_text - i - 1
for j in range(numb_word):
if texts[i + j + 1] in pos_list:
texts[i] = 'notpos'
texts[i + j + 1] = ''
if texts[i + j + 1] in nag_list:
texts[i] = 'notnag'
texts[i + j + 1] = ''
else: #Add a feature token for sentiment words ("áo này đẹp" -> "áo này đẹp positive")
if cp_text in pos_list:
texts.append('positive')
elif cp_text in nag_list:
texts.append('nagative')
text = u' '.join(texts)
#Remove the remaining stray characters
text = text.replace(u'"', u' ')
text = text.replace(u'️', u'')
text = text.replace('🏻','')
return text
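The negation-window rule inside `normalize_text` can be isolated into a small sketch; the toy lexicons `pos_words`/`neg_words`/`not_words` below are invented stand-ins for `pos_list`/`nag_list`/`not_list`.

```python
# Minimal sketch of the negation-window rule above, with toy lexicons.
pos_words = {'đẹp'}
neg_words = {'xấu'}
not_words = {'không', 'chẳng'}

def tag_negation(tokens, window=2):
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        if tok in not_words:
            # scan the next `window` tokens for a sentiment word
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                if tokens[j] in pos_words:
                    tokens[i], tokens[j] = 'notpos', ''
                elif tokens[j] in neg_words:
                    tokens[i], tokens[j] = 'notnag', ''
    return [t for t in tokens if t]

print(tag_negation(['áo', 'này', 'chẳng', 'đẹp']))  # -> ['áo', 'này', 'notpos']
```

The key design point is that the negation word and the sentiment word are merged into a single `notpos`/`notnag` token, so the classifier sees negated sentiment as its own feature.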
class DataSource(object):
def _load_raw_data(self, filename, is_train=True):
a = []
b = []
regex = 'train_'
if not is_train:
regex = 'test_'
with open(filename, 'r', encoding="utf8") as file:
for line in file:
if regex in line:
b.append(a)
a = [line]
elif line != '\n':
a.append(line)
b.append(a)
return b[1:]
def _create_row(self, sample, is_train=True):
d = {}
d['id'] = sample[0].replace('\n', '')
review = ""
if is_train:
for clause in sample[1:-1]:
review += clause.replace('\n', ' ')
review = review.replace('.', ' ')
d['label'] = int(sample[-1].replace('\n', ' '))
else:
for clause in sample[1:]:
review += clause.replace('\n', ' ')
review = review.replace('.', ' ')
d['review'] = review
return d
def load_data(self, filename, is_train=True):
raw_data = self._load_raw_data(filename, is_train)
lst = []
for row in raw_data:
lst.append(self._create_row(row, is_train))
return lst
def transform_to_dataset(self, x_set,y_set):
X, y = [], []
for document, topic in zip(list(x_set), list(y_set)):
document = normalize_text(document)
X.append(document.strip())
y.append(topic)
#Augmentation by removing Vietnamese diacritics
#X.append(no_marks(document))
# y.append(topic)
return X, y
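`_load_raw_data` assumes a `.crash` layout in which each sample starts with a line containing `train_`, followed by review lines and (for training data) a label line. The block-splitting can be shown standalone; the sample lines below are invented for illustration.

```python
# Invented sample of the *.crash layout parsed by _load_raw_data.
sample_file = [
    'train_000001\n', '"Sản phẩm tốt."\n', '0\n', '\n',
    'train_000002\n', '"Giao hàng chậm."\n', '1\n',
]

blocks, current = [], []
for line in sample_file:
    if 'train_' in line:
        blocks.append(current)   # close the previous block
        current = [line]
    elif line != '\n':
        current.append(line)
blocks.append(current)           # flush the last block
blocks = blocks[1:]              # drop the empty leading block

print(len(blocks))  # -> 2
```

This mirrors why `_load_raw_data` returns `b[1:]`: the first `append` always pushes an empty list before the first sample.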
# +
# Load the training file
ds = DataSource()
train_data = pd.DataFrame(ds.load_data('data3/train.crash'))
# +
#Add extra samples taken from the sentiment lexicons (pos/nag)
new_data = []
for index, row in enumerate(pos_list):
    new_data.append(['pos' + str(index), '0', row])
for index, row in enumerate(nag_list):
    new_data.append(['nag' + str(index), '1', row])
new_data = pd.DataFrame(new_data, columns=['id', 'label', 'review'])
train_data = pd.concat([train_data, new_data], ignore_index=True)
# +
# Load the testing file
#test_data = pd.DataFrame(ds.load_data('data3/test.crash', is_train=False))
# -
X_train = train_data.review
y_train = train_data.label
# +
#print(X_train[1],y_train[1])
# -
# This section implements the augmentation modules
# +
# Load the stop-word list
stoplist = []
with open("./data/vnstopword.txt", encoding="utf-8") as f:
    text = f.read()
for word in text.split('\n'):  # Each line holds one stop word
    stoplist.append(word)
# +
from gensim.models import KeyedVectors, Word2Vec
# Load the pretrained word2vec model used for embedding
word2vec_model_path = ("./data/w2v.bin")
#w2v = Word2Vec.load(word2vec_model_path)
w2v = KeyedVectors.load_word2vec_format(word2vec_model_path,binary = True)
# +
from nltk import word_tokenize
from pyvi import ViTokenizer, ViPosTagger
import random
#====================================================================================================
# Load the synonym entries from the file vietnamsyn.txt
#synonyms_lexicon = get_synonyms('./vietnamsyn.txt')
def get_synonyms(path):
synonyms_lexicon = {}
text_entries = [l.strip() for l in open(path, encoding="utf8").readlines()]
for e in text_entries:
e = e.split('\t')
k = e[0]
v = e[1:len(e)]
synonyms_lexicon[k] = v
return synonyms_lexicon
#====================================================================================================
# Randomly shuffle the word positions in a sentence; n new sentences are produced
def Random_Swap(sentence,n):
new_sentences = []
words = sentence.split()
for i in range(n):
random.shuffle(words)
new_sentences.append(' '.join(words))
return new_sentences
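A seeded variant of the swap above makes the augmentation reproducible; the `seed` parameter and the per-variant copy are additions for illustration, not part of the original function.

```python
import random

def random_swap(sentence, n, seed=None):
    # Seeded sketch of Random_Swap: produce n reorderings of the sentence.
    rng = random.Random(seed)
    words = sentence.split()
    out = []
    for _ in range(n):
        shuffled = words[:]      # shuffle a copy so each variant starts from the original order
        rng.shuffle(shuffled)
        out.append(' '.join(shuffled))
    return out

variants = random_swap('áo này rất đẹp', 2, seed=0)
print(len(variants))  # -> 2
```

Note one behavioral difference: `Random_Swap` shuffles `words` in place, so each variant is a shuffle of the previous one, while the sketch shuffles a fresh copy each time.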
#====================================================================================================
# Delete a word if it is tagged as a pronoun (P), verb (V), preposition (C), number (M), or punctuation (F)
def Random_Deletion(sentence):
new_sentence = []
tagged_word = ViPosTagger.postagging(sentence)
# Split the POS output into two lists for convenience
word = tagged_word[0]
tag = tagged_word[1]
edited_sentence = [i for i,j in zip(word,tag) if j != 'P' and j != 'V' and j != 'C' and j != 'F' and j != 'M']
edited_sentence = ' '.join(edited_sentence)
new_sentence.append(edited_sentence)
return new_sentence
#====================================================================================================
# Replace words with their synonyms
def syn_rep(sentence, synonyms_lexicon):
keys = synonyms_lexicon.keys()
words = sentence.split()
n_sentence = sentence
for w in words:
if w not in stoplist:
if w in keys:
n_sentence = n_sentence.replace(w, synonyms_lexicon[w][0]) # Replace with the first synonym in the entry
return n_sentence
#===================================================================================
def Synonym_Replacement(sentence):
#Get synonym entries from this file
synonyms_lexicon = get_synonyms('./data/vietnamsyn.txt')
new_sentence = []
sen_replaced = syn_rep(sentence, synonyms_lexicon)
new_sentence.append(sen_replaced)
return new_sentence
#====================================================================================================
# Insert words into the sentence
def Insert(sentence, synonyms_lexicon):
keys = synonyms_lexicon.keys()
words = sentence.split()
n_sentence = sentence
for w in words:
if w not in stoplist:
if w in keys:
n_sentence = n_sentence + ' ' + synonyms_lexicon[w][0] # Append the synonym to the end of the sentence.
return n_sentence
#===================================================================================
def Random_Insert(sentence):
#Get synonym entries from this file
synonyms_lexicon = get_synonyms('./data/vietnamsyn.txt')
new_sentence = []
sen_inserted = Insert(sentence, synonyms_lexicon)
new_sentence.append(sen_inserted)
return new_sentence
#====================================================================================================
# Find a similar word in the word-embedding space
def Similarity(word):
# Get the single most similar word, i.e. the one with the highest score
word_similarity = w2v.most_similar(word,topn=1)
# This loop returns the first word, which has the highest similarity score
for x in word_similarity:
word = ''.join(x[0]) # Take the word and drop the score
return word
#===================================================================================
# Replace words with similar words from the word-embedding space
def Word_Replacement(sentence):
words = sentence.split()
replaced_sentence = []
new_sentence = ''
for word in words:
if word not in w2v:
new_sentence = new_sentence + ' ' + word
else:
if word in stoplist:
new_sentence = new_sentence + ' ' + word
else:
new_word = Similarity(word)
new_sentence = new_sentence + ' ' + new_word
replaced_sentence.append(new_sentence.strip())
return replaced_sentence
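The `w2v.most_similar` lookup that `Similarity` delegates to amounts to a highest-cosine search over the vocabulary. A dependency-free sketch with a tiny invented embedding table:

```python
import math

# Toy embedding table, invented for illustration (the real model is the loaded word2vec).
emb = {
    'đẹp':  (1.0, 0.1),
    'xinh': (0.9, 0.2),
    'xấu':  (-1.0, 0.0),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def most_similar(word):
    # highest-cosine neighbour, excluding the word itself
    return max((w for w in emb if w != word), key=lambda w: cosine(emb[word], emb[w]))

print(most_similar('đẹp'))  # -> xinh
```

With real embeddings, near-synonyms ('đẹp'/'xinh') score high while antonyms score low, which is what makes this usable as augmentation.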
# -
# This block reads all the comments and labels, performs augmentation,
# and then saves everything to a .csv file.
import csv
header=['comment','label']
with open(r'data3.csv', 'a', encoding='utf8', newline='') as f:
writer = csv.writer(f)
writer.writerow(header)
for i in range(0,len(X_train)):
comment = str(X_train[i])
label = y_train[i]
data = [comment,label]
writer.writerow(data)
# Swapping
data= [Random_Swap(comment,1)[0],y_train[i]]
writer.writerow(data)
# Synonym replacement
#data= [Synonym_Replacement(comment)[0],y_train[i]]
#writer.writerow(data)
# Random deletion
data= [Random_Deletion(comment)[0],y_train[i]]
writer.writerow(data)
# Append a synonym at the end of the sentence
data= [Random_Insert(comment)[0],y_train[i]]
writer.writerow(data)
# Replace words with near-synonyms from the embedding space
#data= [Word_Replacement(comment)[0],y_train[i]]
#writer.writerow(data)
import pandas as pd
a = pd.read_csv('data3.csv', encoding='utf8')
a.shape
| text_augmentation/Augmentation for data3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
library(tidyverse)
library(data.table)
library(RColorBrewer)
library(lemon)
library(ggsci)
library(egg)
# +
metric_list = c(
'auc'='AUC',
'auc_min'='AUC',
'loss_bce'='Loss',
'loss_bce_max'='Loss',
'ace_abs_logistic_logit'='ACE',
'ace_abs_logistic_logit_max'='ACE',
'net_benefit_rr_0.075'='NB (7.5%)',
'net_benefit_rr_0.075_min'='NB (7.5%)',
'net_benefit_rr_recalib_0.075'='cNB (7.5%)',
'net_benefit_rr_recalib_0.075_min'='cNB (7.5%)',
'net_benefit_rr_0.2'='NB (20%)',
'net_benefit_rr_0.2_min'='NB (20%)',
'net_benefit_rr_recalib_0.2'='cNB (20%)',
'net_benefit_rr_recalib_0.2_min'='cNB (20%)'
)
tag_list = c(
'erm_baseline'='ERM (Pooled)',
'erm_subset'='ERM (Stratified)',
'regularized_loss_max'='Regularized (Loss)',
'regularized_auc_min'='Regularized (AUC)',
'dro_loss_max'='DRO (Loss)',
'dro_auc_min'='DRO (AUC)'
)
attribute_list=c(
'FALSE'='Overall',
'TRUE'='Worst-case',
'age_group'='Age',
'gender_concept_name'='Sex',
'race_eth'='Race/Eth',
'race_eth_gender'='Race/Eth/Sex',
'has_ckd_history'='CKD',
'has_ra_history'='RA',
'has_diabetes_type1_history'='Diabetes T1',
'has_diabetes_type2_history'='Diabetes T2'
)
is_min_max_metric_list = c(
'FALSE'='Overall',
'TRUE'='Worst-case'
)
eval_group_list=c(
'Age',
'40-50',
'50-60',
'60-75',
'Sex',
'Female',
'Male',
'Race/Eth',
'Race/Eth/Sex',
'Asian',
'Black',
'Hispanic',
'Other',
'White',
'A-F',
'A-M',
'B-F',
'B-M',
'H-F',
'H-M',
'O-F',
'O-M',
'W-F',
'W-M',
'ckd_present',
'ckd_absent',
'ra_present',
'ra_absent',
'diabetes_type1_present',
'diabetes_type1_absent',
'diabetes_type2_present',
'diabetes_type2_absent',
'CKD',
'RA',
'Diabetes T1',
'Diabetes T2',
'Present',
'Absent'
)
eval_group_clean_map <- c(
'FEMALE'='Female',
'MALE'='Male',
'Black or African American'='Black',
'Hispanic or Latino'='Hispanic',
'other'= 'Other',
'white'= 'White',
'Asian | FEMALE'= 'A-F',
'Asian | MALE'= 'A-M',
'Black or African American | FEMALE'='B-F',
'Black or African American | MALE'='B-M',
'Hispanic or Latino | FEMALE'='H-F',
'Hispanic or Latino | MALE'='H-M',
'Other | FEMALE'='O-F',
'Other | MALE'='O-M',
'White | FEMALE'='W-F',
'White | MALE'='W-M',
'ckd_present'='Present',
'ckd_absent'='Absent',
'ra_present'='Present',
'ra_absent'='Absent',
'diabetes_type1_present'='Present',
'diabetes_type1_absent'='Absent',
'diabetes_type2_present'='Present',
'diabetes_type2_absent'='Absent'
)
# +
attribute_sets = list(
'race_eth'=c('race_eth')
# 'race_eth_sex'=c('race_eth', 'gender_concept_name', 'race_eth_gender'),
# 'comorbidities'=c('has_ckd_history', 'has_ra_history', 'has_diabetes_type1_history', 'has_diabetes_type2_history')
)
metric_sets=list(
'performance'=c('auc', 'loss_bce', 'ace_abs_logistic_logit'),
'net_benefit'=c('net_benefit_rr_0.075', 'net_benefit_rr_recalib_0.075')
)
metric_sets_min_max=list(
'performance'=c('auc', 'auc_min', 'loss_bce', 'loss_bce_max', 'ace_abs_logistic_logit', 'ace_abs_logistic_logit_max'),
'net_benefit'=c(
'net_benefit_rr_0.075',
'net_benefit_rr_0.075_min',
'net_benefit_rr_recalib_0.075',
'net_benefit_rr_recalib_0.075_min'
)
)
# +
clean_eval_group <- function(df) {
for (i in names(eval_group_clean_map)) {
df <- df %>% mutate(
eval_group = replace(eval_group, eval_group == i, eval_group_clean_map[[i]])
)
}
return(df)
}
transform_df <- function(
df,
var_to_spread,
metrics_to_plot,
fold_id_to_plot
) {
temp <- df %>%
filter(
metric %in% metrics_to_plot,
tag %in% tags_to_plot,
eval_group != 'overall'
) %>%
select(metric, eval_attribute, eval_group,
tag, CI_quantile_95, .data[[var_to_spread]]) %>%
distinct() %>%
spread(CI_quantile_95, .data[[var_to_spread]]) %>%
clean_eval_group() %>%
mutate(
eval_group = gsub('[[)]', "", eval_group)
) %>%
mutate(
metric=factor(metric, levels=names(metric_list[metrics_to_plot]), labels=metric_list[metrics_to_plot]),
tag=factor(tag, levels=names(tag_list[tags_to_plot]), labels=tag_list[tags_to_plot]),
eval_attribute=factor(eval_attribute, levels=names(attribute_list), labels=attribute_list)
)
return(temp)
}
transform_df_combined_marginal <- function(
df,
var_to_spread,
metrics_to_plot,
fold_id_to_plot
) {
temp <- df %>%
mutate(
bare_metric=as.character(strsplit(metric, '_min|_max')),
is_min_max_metric=grepl('_min|_max', metric)
) %>%
filter(
metric %in% metrics_to_plot,
tag %in% tags_to_plot,
eval_group == 'overall'
) %>%
mutate(metric=bare_metric) %>%
select(
metric,
is_min_max_metric,
eval_attribute,
eval_group,
tag,
CI_quantile_95,
.data[[var_to_spread]]
) %>%
distinct() %>%
spread(CI_quantile_95, .data[[var_to_spread]]) %>%
mutate(
metric=factor(metric, levels=names(metric_list[metrics_to_plot]), labels=metric_list[metrics_to_plot]),
tag=factor(tag, levels=names(tag_list[tags_to_plot]), labels=tag_list[tags_to_plot]),
eval_attribute=factor(eval_attribute, levels=names(attribute_list), labels=attribute_list),
is_min_max_metric=factor(is_min_max_metric, levels=names(is_min_max_metric_list), labels=is_min_max_metric_list)
) %>%
mutate(
eval_group=eval_attribute,
eval_attribute=is_min_max_metric
)
return(temp)
}
make_plot_combined_marginal <- function(
df,
tags_to_plot=c('erm_baseline', 'erm_subset', 'aware_loss_max', 'dro_loss_max'),
fold_id_to_plot='test',
metric_set_key='performance',
combined=FALSE,
y_label='',
mode=NULL
) {
results_absolute <- transform_df(
df,
var_to_spread='comparator',
metrics_to_plot=metric_sets[[metric_set_key]],
fold_id_to_plot='test'
)
results_relative <- transform_df(
df,
var_to_spread='delta',
metrics_to_plot=metric_sets[[metric_set_key]],
fold_id_to_plot='test'
) %>% mutate(metric = paste0(metric, ' (rel)')) %>% mutate(erm_value=0)
results_absolute_marginal <- transform_df_combined_marginal(
df,
var_to_spread='comparator',
metrics_to_plot=metric_sets_min_max[[metric_set_key]],
fold_id_to_plot='test'
)
if (metric_set_key == 'performance') {
results_relative_marginal <- transform_df_combined_marginal(
df,
var_to_spread='delta',
metrics_to_plot=metric_sets_min_max[[metric_set_key]],
fold_id_to_plot='test'
) %>% mutate(metric = paste0(metric, ' (rel)')) %>% mutate(erm_value=0)
combined_results <- full_join(results_absolute, results_relative) %>%
full_join(results_absolute_marginal) %>%
full_join(results_relative_marginal) %>%
mutate(
eval_attribute=factor(eval_attribute, levels=attribute_list, labels=attribute_list),
eval_group=factor(eval_group, levels=eval_group_list, labels=eval_group_list),
metric=factor(metric, levels=c('AUC', 'AUC (rel)', 'ACE', 'ACE (rel)', 'Loss', 'Loss (rel)'))
)
if (!is.null(mode)) {
if (mode == 'relative') {
combined_results <- combined_results %>%
filter(metric %in% c('AUC (rel)', 'ACE (rel)', 'Loss (rel)')) %>%
mutate(metric=str_replace(metric, stringr::fixed(' (rel)'), stringr::fixed(''))) %>%
mutate(metric=factor(metric, levels=c('AUC', 'ACE', 'Loss')))
} else if (mode == 'absolute') {
combined_results <- combined_results %>%
filter(
!str_detect(metric, stringr::fixed('(rel'))
)
}
}
} else if (metric_set_key == 'net_benefit') {
results_relative_marginal <- transform_df_combined_marginal(
df,
var_to_spread='delta',
metrics_to_plot=metric_sets_min_max[[metric_set_key]],
fold_id_to_plot='test'
) %>% mutate(metric = paste0(metric, ' (rel)')) %>% mutate(erm_value=0)
combined_results <- full_join(results_absolute, results_relative) %>%
full_join(results_absolute_marginal) %>%
full_join(results_relative_marginal) %>%
mutate(metric=str_replace(metric, stringr::fixed(') (rel)'), stringr::fixed('; rel)'))) %>%
mutate(
eval_attribute=factor(eval_attribute, levels=attribute_list, labels=attribute_list),
eval_group=factor(eval_group, levels=eval_group_list, labels=eval_group_list),
metric=factor(metric, levels=c('NB (7.5%)', 'NB (7.5%; rel)',
'cNB (7.5%)', 'cNB (7.5%; rel)',
'NB (20%)', 'NB (20%; rel)',
'cNB (20%)', 'cNB (20%; rel)'
)
)
)
if (!is.null(mode)) {
if (mode == 'relative') {
combined_results <- combined_results %>%
filter(str_detect(metric, stringr::fixed('; rel'))) %>%
mutate(metric=str_replace(metric, stringr::fixed('; rel)'), stringr::fixed(')'))) %>%
# mutate(metric=str_replace(metric, stringr::fixed(' (rel)'), stringr::fixed(' (relative)'))) %>%
mutate(metric=factor(metric, c('NB (7.5%)', 'cNB (7.5%)', 'NB (20%)', 'cNB (20%)')))
} else if (mode == 'absolute') {
combined_results <- combined_results %>%
filter(
!str_detect(metric, stringr::fixed('; rel'))
)
}
}
}
g <- combined_results %>%
ggplot(aes(eval_group, mid, color=tag)) +
coord_cartesian(clip=FALSE) +
geom_point(position=position_dodge(width=0.75), size=1) +
geom_linerange(
aes(ymin=lower, ymax=upper),
size=1,
position=position_dodge(width=0.75)
) +
lemon::facet_rep_grid(
rows = vars(metric),
cols=vars(eval_attribute),
scales='free',
switch='y'
) +
theme_bw() +
ggsci::scale_color_d3() +
theme(
axis.title = element_text(size = rel(1.75)),
axis.title.y = element_blank(),
axis.title.x = element_blank(),
strip.text.x = element_text(size = rel(1.35), vjust=1),
strip.text.y = element_text(size = rel(1.1)),
strip.background = element_blank(),
strip.placement = "outside",
axis.text.x = element_text(angle = 45, vjust=0.95, hjust=1),
axis.text = element_text(size=rel(1), color='black'),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.border = element_blank(),
axis.line = element_line(color='black'),
legend.text=element_text(size=rel(0.85)),
legend.position='bottom'
) +
labs(
y=y_label,
color = "Method"
)
g <- g + geom_hline(aes(yintercept=erm_value), color='black', linetype='dashed', size=0.5, alpha=0.5)
g <- tag_facet(g, open="", close="", tag_pool=c(toupper(letters), as.character(tolower(as.roman(1:10)))),
hjust = -0.5, vjust = 0.5
)
return(g)
}
# +
# task_path_prefixes = c(
# 'optum'
# )
for (attribute_set in names(attribute_sets)) {
for (metric_set_key in names(metric_sets)) {
# for (metric_set_key in c('net_benefit')) {
eval_attributes <- attribute_sets[[attribute_set]]
data_path = '../zipcode_cvd/experiments/figures_data'
results_path = file.path(data_path, 'performance', 'result_df_ci.csv')
aggregated_results = fread(results_path)
figure_path = file.path('../zipcode_cvd/experiments/figures/defense', metric_set_key, attribute_set)
dir.create(figure_path, recursive=TRUE)
## Plot absolute performance metrics
tags_to_plot <- c(
'erm_baseline',
'erm_subset',
'regularized_loss_max',
'regularized_auc_min',
'dro_loss_max',
'dro_auc_min'
)
# metrics_to_plot <- c('auc', 'loss_bce', 'ace_abs_logistic_logit')
fold_id_to_plot <- 'test'
if ('exp' %in% names(aggregated_results)) {
aggregated_results <- aggregated_results %>% rename(experiment_name=exp)
}
aggregated_results <- aggregated_results %>% filter(eval_attribute %in% eval_attributes)
g <- make_plot_combined_marginal(
df=aggregated_results,
tags_to_plot=tags_to_plot,
fold_id_to_plot=fold_id_to_plot,
metric_set_key=metric_set_key,
mode='absolute'
)
ggsave(filename=file.path(figure_path, 'method_comparison_absolute.png'), plot=g, device='png', width=10, height=6, units='in')
ggsave(filename=file.path(figure_path, 'method_comparison_absolute.pdf'), plot=g, device='pdf', width=10, height=6, units='in')
g <- make_plot_combined_marginal(
df=aggregated_results,
tags_to_plot=tags_to_plot,
fold_id_to_plot=fold_id_to_plot,
metric_set_key=metric_set_key,
mode='relative'
)
ggsave(filename=file.path(figure_path, 'method_comparison_relative.png'), plot=g, device='png', width=10, height=6, units='in')
ggsave(filename=file.path(figure_path, 'method_comparison_relative.pdf'), plot=g, device='pdf', width=10, height=6, units='in')
# Combined marginal
g <- make_plot_combined_marginal(
df=aggregated_results,
tags_to_plot=tags_to_plot,
fold_id_to_plot=fold_id_to_plot,
metric_set_key=metric_set_key
)
ggsave(filename=file.path(figure_path, 'method_comparison_combined_marginal.png'), plot=g, device='png', width=10, height=8, units='in')
ggsave(filename=file.path(figure_path, 'method_comparison_combined_marginal.pdf'), plot=g, device='pdf', width=10, height=8, units='in')
}
}
| notebooks/make_plots_performance_ggplot_simplified.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="_P4h2OCk5-o6"
# ### **This notebook generates segmentation results for validation data with input size 240×240**
# + colab={"base_uri": "https://localhost:8080/"} id="JM-4Xm4lriNs" outputId="95bdb3da-0aab-4846-ef5b-120ad34580d1"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="a3mIl0z2syvo" outputId="3b2a0228-1cf6-465e-8a1d-a3486d35fde2"
pip install nilearn
# + colab={"base_uri": "https://localhost:8080/"} id="KGphatcts10H" outputId="97545d09-128a-4210-df28-8e20e11c07de"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
import keras
from keras.models import Model, load_model
from keras.layers import Input ,BatchNormalization , Activation ,Dropout
from keras.layers.convolutional import Conv2D, UpSampling2D,Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import optimizers
from sklearn.model_selection import train_test_split
import os
import nibabel as nib
import cv2 as cv
import matplotlib.pyplot as plt
from keras import backend as K
import glob
import skimage.io as io
import skimage.color as color
import random as r
import math
from nilearn import plotting
import pickle
import skimage.transform as skTrans
from nilearn import image
from nilearn.image import resample_img
import nibabel.processing
import warnings
import shutil
# + id="Hzvgh8bkxP5d"
for dirname, _, filenames in os.walk('/content/drive/MyDrive/MRI Data/BraTS2020_ValidationData'):
for filename in filenames:
print(os.path.join(dirname, filename))
# + [markdown] id="txsr6d5J7yVa"
# ### **Data Preprocessing**
# + id="CgM-R7s7xOAr"
def Data_Preprocessing(modalities_dir):
all_modalities = []
for modality in modalities_dir:
nifti_file = nib.load(modality)
brain_numpy = np.asarray(nifti_file.dataobj)
all_modalities.append(brain_numpy)
all_modalities = np.array(all_modalities)
all_modalities = np.rint(all_modalities).astype(np.int16)
all_modalities = all_modalities[:, :, :, :]
all_modalities = np.transpose(all_modalities)
avg_modality=[]
for i in range(len(all_modalities)):
x=(all_modalities[i,:,:,0]+all_modalities[i,:,:,1]+all_modalities[i,:,:,2]+all_modalities[i,:,:,3])/4
avg_modality.append(x)
gt=all_modalities[:,:,:,4]
P_Data=np.stack(np.stack((avg_modality, gt), axis = -1))
#P_Data=np.stack(np.stack((avg_modality), axis = -1))#for validation dataset
return P_Data
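The averaging loop inside `Data_Preprocessing` is a per-slice mean over the four MRI modalities. A shape-level numpy sketch, using random volumes as stand-ins for the real FLAIR/T1/T1ce/T2 scans:

```python
import numpy as np

# Four invented modality volumes with shape (modalities, slices, H, W).
rng = np.random.default_rng(0)
modalities = rng.integers(0, 255, size=(4, 8, 240, 240)).astype(np.int16)

# Average across the four modalities to form the single input channel.
avg = modalities.mean(axis=0)
print(avg.shape)  # -> (8, 240, 240)
```

The explicit per-slice loop in the notebook computes the same thing; `mean(axis=0)` does it in one vectorized call.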
# + id="TjNDVuyp6MRI"
batch=120
# + colab={"base_uri": "https://localhost:8080/"} id="rrI0CEnMxJZc" outputId="08089757-0668-4432-e5fd-0a5f8cd0e39b"
Path='/content/drive/MyDrive/MRI Data/BraTS2020_ValidationData/MICCAI_BraTS2020_ValidationData'# for validation dataset
p=os.listdir(Path)
Input_Data= []
# generate data in batches or else computational resources get exhausted
for i in p[batch-40:batch+5]:
brain_dir = os.path.normpath(Path+'/'+i)
flair = glob.glob(os.path.join(brain_dir, '*_flair*.nii'))
t1 = glob.glob(os.path.join(brain_dir, '*_t1*.nii'))
t1ce = glob.glob(os.path.join(brain_dir, '*_t1ce*.nii'))
t2 = glob.glob(os.path.join(brain_dir, '*_t2*.nii'))
gt = glob.glob( os.path.join(brain_dir, '*_seg*.nii'))
modalities_dir = [flair[0], t1[0], t1ce[0], t2[0], gt[0]]
P_Data = Data_Preprocessing(modalities_dir)
Input_Data.append(P_Data)
print('This is done ', i)
# + [markdown] id="CSImg4H_78XY"
# ### **Generating Segmentation Results**
# + id="hsZ9ii_3terg"
def Convolution(input_tensor,filters):
x = Conv2D(filters=filters,kernel_size=(3, 3),padding = 'same',strides=(1, 1))(input_tensor)
x = BatchNormalization()(x)
x = Activation('relu')(x)
return x
def model(input_shape):
    inputs = Input((input_shape))
    conv_1 = Convolution(inputs, 32)
    maxp_1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv_1)
    conv_2 = Convolution(maxp_1, 64)
    maxp_2 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv_2)
    conv_3 = Convolution(maxp_2, 128)
    maxp_3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv_3)
    conv_4 = Convolution(maxp_3, 256)
    maxp_4 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv_4)
    conv_5 = Convolution(maxp_4, 512)
    upsample_6 = UpSampling2D((2, 2))(conv_5)
    conv_6 = Convolution(upsample_6, 256)
    upsample_7 = UpSampling2D((2, 2))(conv_6)
    upsample_7 = concatenate([upsample_7, conv_3])
    conv_7 = Convolution(upsample_7, 128)
    upsample_8 = UpSampling2D((2, 2))(conv_7)
    conv_8 = Convolution(upsample_8, 64)
    upsample_9 = UpSampling2D((2, 2))(conv_8)
    upsample_9 = concatenate([upsample_9, conv_1])
    conv_9 = Convolution(upsample_9, 32)
    outputs = Conv2D(1, (1, 1), activation='sigmoid')(conv_9)
    model = Model(inputs=[inputs], outputs=[outputs])
    return model
# + colab={"base_uri": "https://localhost:8080/"} id="NZGIdiiYtjAq" outputId="7fd51a8c-9334-46c5-951d-7faa9c7a4451"
model = model(input_shape = (240,240,1))
model.summary()
# + id="t1J8aY2cs864"
model.load_weights('/content/drive/MyDrive/MRI Data/Brats2020_20Images/BraTs2020_20.h5')
# + id="p42CqaHGvvS6"
def Data_Concatenate(Input_Data):
    counter = 0
    Output = []
    for i in range(2):
        print('$')
        c = 0
        counter = 0
        for ii in range(len(Input_Data)):
            if (counter < len(Input_Data)-1):
                a = Input_Data[counter][:, :, :, i]
                #print('a={}'.format(a.shape))
                b = Input_Data[counter+1][:, :, :, i]
                #print('b={}'.format(b.shape))
                if (counter == 0):
                    c = np.concatenate((a, b), axis=0)
                    #print('c1={}'.format(c.shape))
                    counter = counter+2
                else:
                    c1 = np.concatenate((a, b), axis=0)
                    c = np.concatenate((c, c1), axis=0)
                    print('c2={}'.format(c.shape))
                    counter = counter+2
            if (counter == len(Input_Data)-1):
                a = Input_Data[counter][:, :, :, i]
                c = np.concatenate((c, a), axis=0)
                print('c2={}'.format(c.shape))
                counter = counter+2
        print('c2={}'.format(c.shape))
        c = c[:, :, :, np.newaxis]
        Output.append(c)
    return Output
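The pairwise concatenation in `Data_Concatenate` above is equivalent to a single `np.concatenate` over a list comprehension (a sketch; it assumes every entry of `Input_Data` has the same spatial shape):

```python
import numpy as np

def concatenate_channel(input_data, channel):
    """Stack one channel from every preprocessed volume along the slice axis."""
    stacked = np.concatenate([d[:, :, :, channel] for d in input_data], axis=0)
    return stacked[:, :, :, np.newaxis]

# Two toy (slices, H, W, channels) volumes stand in for Input_Data.
a = np.zeros((3, 4, 4, 2))
b = np.ones((5, 4, 4, 2))
out = concatenate_channel([a, b], channel=0)
```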
# + colab={"base_uri": "https://localhost:8080/"} id="2TFzOvFmvxh5" outputId="bdba3bc4-df12-43c0-ff07-db0f647c28d8"
InData= Data_Concatenate(Input_Data)
# + id="sHcMb4mhv1VS"
AIO= concatenate(InData, axis=3)
AIO=np.array(AIO,dtype='float32')
TR=np.array(AIO[:,:,:,0],dtype='float32')
TRL=np.array(AIO[:,:,:,1],dtype='float32')  # segmentation channel
AIO = TRL = 0  # free memory; only TR (the image channel) is used for prediction
# + id="A-MsYLhqv4OI"
#predict segmentation
Segmentation = model.predict(TR)
# + id="WU3Fw5QFv6Qs"
# save segmentation results for further use
with open('/content/drive/MyDrive/MRI Data/Brats2020_20Images/Segmentation_Output_80_125'+'.pkl', 'wb') as f:
    pickle.dump(Segmentation, f)
# + [markdown] id="EGZC9LB18DaZ"
# ### **Storing Segmentation Results in Google Drive**
# + id="NxmibmWV7GEN"
# split the stacked segmentation output back into per-patient volumes of 155 slices
Section = []
previous = 0
for i in range(len(Segmentation)):
    if (i % 155 == 0):
        a = Segmentation[i:i+155, :, :, 0]
        Section.append(a)
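The `i % 155` loop above can also be written as an explicit split (a sketch; BraTS volumes have 155 axial slices, and the toy array below stands in for the real predictions):

```python
import numpy as np

def split_volumes(seg, slices_per_volume=155):
    """Split stacked predictions (N*155, H, W, 1) back into per-patient volumes."""
    n = seg.shape[0] // slices_per_volume
    return [seg[v * slices_per_volume:(v + 1) * slices_per_volume, :, :, 0]
            for v in range(n)]

seg = np.zeros((310, 8, 8, 1))  # two toy 155-slice volumes
vols = split_volumes(seg)
```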
# + id="UMa1c5dq7H9-"
Section = np.transpose(Section, (0,3,2,1))
# + id="Bi2orNFS7L_Z"
x=nib.load('/content/drive/MyDrive/MRI Data/BraTS2020_ValidationData/MICCAI_BraTS2020_ValidationData/BraTS20_Validation_001/BraTS20_Validation_001_flair.nii')
target_affine=x.affine
# + id="NUe3r7NX7PAl"
# generate .nii.gz files
counter = 81
for i in range(len(Section)):
    data = Section[i]
    img = nib.Nifti1Image(data, target_affine)
    nib.save(img, '/content/drive/MyDrive/MRI Data/Brats2020_20Images/BraTS20_Validation_'+str(counter).zfill(3)+'.nii.gz')
    print('/content/drive/MyDrive/MRI Data/Brats2020_20Images/BraTS20_Validation_'+str(counter).zfill(3)+'.nii.gz')
    counter = counter+1
| Python Files/UNeT_Validation_Data_240_240.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Pragnya77/Machine_Learning_python/blob/master/Understanding_datasets.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="K_3deV6G8Bwp" colab_type="code" colab={}
# + [markdown] id="CdFp_pzS8ZTb" colab_type="text"
# **Calling Functions**
# + id="OTVq733Z8ifT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="dec4eab2-c176-4448-9174-4f6947117358"
import numpy as np
import pandas as pd
# simple graphs: bar chart, line, scatter plot, pie, histogram
import matplotlib.pyplot as plt
# complex graphs and data distributions: violin plot, pairplot, heatmap, etc.
import seaborn as sns
# + id="NI6EQaBZ9uou" colab_type="code" colab={}
webinar_data = pd.read_csv('https://raw.githubusercontent.com/kusumikakd/Datasets/master/Datasets/Responses%20of%20participant%20-%20Form%20Responses%201.csv')
# + id="uJToT0Ne_2vh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="4c9d129e-f60c-449f-b247-8a6e8ae594f5"
webinar_data.head()
# + id="FDQz97vHAILN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="a310f0b0-fa18-4b59-ef79-1177687e4b71"
webinar_data.tail()
# + id="1BARXaMTA8Hh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 634} outputId="0c3db708-e5c6-424e-be68-a017ed71ee30"
webinar_data.head(20)
# + id="Pd5dpeMYBMd6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="fcd491ed-298f-4c65-f7fd-b327150a7e3d"
webinar_data.isnull()
# + id="OnF_LMUmCePI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 141} outputId="f74cae72-bac0-492b-d4c4-b5f025c05198"
webinar_data.isnull().sum()
# + id="M-FxSWd9EINZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="99a81899-fa2e-44ef-c97e-0d81a02a6258"
webinar_data
# + id="1gCJe5BIEOuL" colab_type="code" colab={}
mall_cus_data= pd.read_csv('https://raw.githubusercontent.com/kusumikakd/Datasets/master/Datasets/Mall_Customers.csv')
# + id="9k0P6mdmIvh6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="84adfdd4-4688-4d3a-81ff-2a9741a89d07"
mall_cus_data.isnull().sum()
# + id="FGMNyHdkI_vY" colab_type="code" colab={}
diabetes_data = pd.read_csv('https://raw.githubusercontent.com/kusumikakd/Datasets/master/Datasets/Diabetes_Preprocessing.csv')
# + id="DGXyO8D-JL0V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="7393ae73-2337-49eb-ff5c-baf702b42497"
diabetes_data.isnull().sum()
# + id="bB_sYPUfJShK" colab_type="code" colab={}
titanic_data = pd.read_csv('https://raw.githubusercontent.com/kusumikakd/Datasets/master/Datasets/Titanic_Data.csv')
# + id="9d4S8pjDJf5Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="13d81afb-f6d7-4f97-817b-e6093b05c286"
titanic_data.isnull().sum()
# + id="FHSMadOgJnvx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="189e502b-a760-4565-e6cb-0b8e6e29aed1"
titanic_data.isnull()
# + id="yj_IdKVIJrp8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 266} outputId="1ff3de29-a537-4212-ffd6-d010e3e62380"
titanic_data.info
# + id="gPT6mQlYKt-7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 355} outputId="ef464624-b133-48d7-91e1-d742e778c6ec"
titanic_data.info()
# + id="y57DnF3SK3yD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="d8d52073-7b4c-4c2c-a98f-5807a3db5f7e"
titanic_data.drop(['Cabin'], axis=1)
# + id="UdtSI6NHLDl8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="2b9ba2f6-0862-4d3c-dcd9-2f575eea59d8"
titanic_data.info
# + id="_6_1zFLtLWKB" colab_type="code" colab={}
titanic_data2 = titanic_data.drop(['Cabin'], axis=1)
# + id="lTfegV6ML-7H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="a6ba3e88-6393-49c8-e2f7-39abe6ec99e3"
titanic_data2
# + id="xEAuX09dMGmp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="68d62687-d57b-4381-b0a2-efd4c198a179"
titanic_data2.isnull().sum()
# + id="517AeLedMXqg" colab_type="code" colab={}
titanic_data.drop(['Cabin'], axis =1, inplace = True)
# + id="ZrrO_zypMuWT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="7ad5b1a6-cf76-4cfc-8737-84c636a9c609"
titanic_data.isnull().sum()
# + id="dx6NfYZoMzFw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="0ef4dc87-0304-484f-fec7-8ac1bfa5c96c"
titanic_data.isnull().sum().sort_values(ascending = False)
# + id="ock7q347Nmci" colab_type="code" colab={}
age_med = titanic_data['Age'].median()
# + id="RE7V6O9xOVYG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="9b2a46ae-8bb1-4921-fe3f-610e8555c053"
age_med
# + id="MYfbJvTQOh5E" colab_type="code" colab={}
titanic_data['Age'].replace(np.nan, age_med, inplace = True)
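The `replace(np.nan, age_med)` call above is equivalent to the more idiomatic `fillna` (a self-contained toy example, not the Titanic data):

```python
import numpy as np
import pandas as pd

ages = pd.Series([22.0, np.nan, 38.0, np.nan, 26.0])
age_med = ages.median()        # median of the non-missing values
filled = ages.fillna(age_med)  # same effect as replace(np.nan, age_med)
```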
# + id="GZxGE90mO_z5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="149e6308-1924-4571-ec43-e43e00d995b3"
titanic_data.isnull().sum()
# + id="3lArUlaYPGK_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="2d752c54-19e6-4701-f3a0-641b5e8adf84"
titanic_data.replace('?', np.nan)
# + id="8ompy80hQdeh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="88bcf2b3-8a01-4ba1-f726-016765cfab4a"
titanic_data['Age'].plot()
# + id="Vj5QhcH-RuiV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 610} outputId="df9fec63-d901-4a57-b5ed-cd5810e8d348"
titanic_data['Age'].plot(figsize=[20,10])
# + id="YPtnoSOsUYox" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="e7e22be0-fa3d-46fd-cacd-2383f5c273e6"
titanic_data['Age'].sort_values().unique()
# + id="0FSX2qEcU6D_" colab_type="code" colab={}
# + [markdown] id="LGl9jOpTVVbM" colab_type="text"
# **line graph**
# + id="RTxINYm5VYzV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 299} outputId="4c79bfa7-bc09-4226-ca3b-f44b9358b10a"
pd.DataFrame(titanic_data['Age'].sort_values().unique()).plot(color= 'blue', title= 'Unique age values')
# + id="5w9a2uGNWI4Q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 500} outputId="036e9138-0c81-4203-e63f-4f952f8f344d"
titanic_data.boxplot(figsize=[20,8])
# + id="UibLi0HaWmnB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="e3bc0fdd-829f-4f5e-dda3-ccded24c1720"
titanic_data.columns
# + id="V78nBHFcXIa4" colab_type="code" colab={}
# + [markdown] id="hFIgUIRDZDQr" colab_type="text"
# **Loc and Iloc**
# + id="tRTItntXZHNB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="1f7f9b97-5338-4ed8-8990-2fcc62ebd704"
titanic_data.loc[:,:]
# + id="zoaRavYnZNwU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="0e8bea93-c1f6-4b75-d041-f9171cc66f56"
titanic_data.shape
# + id="rJWaYE1uZaa9" colab_type="code" colab={}
y=titanic_data.loc[:,'Name']
# + id="nmt_Ghd_Z-WY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="64ee0f36-0803-4c99-c0fd-dad141cfab92"
y
# + id="Yi5g2EszZ_vT" colab_type="code" colab={}
x=titanic_data.loc[:,'Sex':'Age']
# + id="QOk_nDU4aJ4C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="db1adb3c-81aa-4011-9631-1eb476241a0a"
x
# + id="XCvkJ5AyaKnd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 764} outputId="8e7bb9ab-2e74-4410-bfc3-0922ae8b1866"
titanic_data.loc[10:50, 'Age']
# + id="v5wgP6fFaSRn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="a959c996-9a68-4744-f0b4-ca65e6558537"
titanic_data.iloc[:,3]
# + id="QXE4LRoma1_B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="e72b71f2-fefb-42b7-a6f2-5327c58539bc"
titanic_data.columns
# + id="BlcV-Qx4bTfL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="17191155-ba57-4232-8c8b-4ea130e7b671"
titanic_data.iloc[10:50, 4:6]
# + id="r7fmX-vWby0U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="eac6e208-b269-4c66-d273-1eee6e718567"
diabetes_data
# + id="Go57mUIyn8BL" colab_type="code" colab={}
x=diabetes_data.drop(['Pregnancies', 'Outcome'], axis =1)
# + id="eAzDlVCeqFfI" colab_type="code" colab={}
x.replace(0,np.nan, inplace=True)
# + id="1esidBITqnMs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 159} outputId="2e15fe9d-04e1-411d-90d5-d0fc0b7b754c"
x.isnull().sum().sort_values(ascending= False)
# + id="ILif3QQLqwbN" colab_type="code" colab={}
# + [markdown] id="7NVxKVd_rHU1" colab_type="text"
# **Simple Imputer**
# + id="a9mVQ2gkrNcn" colab_type="code" colab={}
from sklearn.impute import SimpleImputer
impute = SimpleImputer(strategy = 'median')
# + id="QgYyWsZdrtWW" colab_type="code" colab={}
diabetes_data_array= impute.fit_transform(x)
# + id="qhcg1oMhr5FR" colab_type="code" colab={}
diabetes_df= pd.DataFrame(diabetes_data_array, columns = x.columns)
# + id="58QSFKFSsPyW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 159} outputId="6db8844b-a4ed-461f-96d1-5dab55a2dd43"
diabetes_df.isnull().sum()
# + id="GREdJG-rscqw" colab_type="code" colab={}
diabetes_data = pd.read_csv('https://raw.githubusercontent.com/kusumikakd/Datasets/master/Datasets/Diabetes_Preprocessing.csv')
# + id="V93vklaYuCTw" colab_type="code" colab={}
diabetes_df['Pregnancies']=diabetes_data.Pregnancies
# + id="B3b7DByouU3t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="872b6e79-7388-4b1a-ffc9-eae541f1f5b7"
diabetes_df.columns
# + id="CO5zk-kAuY1m" colab_type="code" colab={}
y=diabetes_data['Outcome']
# + id="0MWfRo_lu1gh" colab_type="code" colab={}
| Understanding_datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from bs4 import BeautifulSoup
import requests
# +
# path = 'https://movie.naver.com/movie/point/af/list.naver'
# req = requests.get(path)
# +
# soup = BeautifulSoup(req.content, 'html.parser')
# +
# movies = soup.select('tbody > tr')
# movie = movies[0]
# movie
# +
# title= movie.select('a.movie')
# title
# +
# href = title[0]['href']
# href
# +
# type(href)
# +
# title = movie.select('a.movie')
# +
# title[0].text.strip()
# +
# score = movie.select('td > div > em')
# score
# +
# score[0].text.strip()
# +
# author = movie.select('a.author')
# author
# +
# author[0].text.strip()
# -
# https://movie.naver.com/movie/point/af/list.naver?&page=1
# +
uri = 'https://movie.naver.com/movie/point/af/list.naver?&page='
data = [] # or list()
for page in range(1, 1001):
    target = uri + str(page)
    # print(target)
    req = requests.get(target)
    soup = BeautifulSoup(req.content, 'html.parser')
    movies = soup.select('tbody > tr')
    for movie in movies:
        title = movie.select('a.movie')
        score = movie.select('td > div > em')
        author = movie.select('a.author')
        data.append([title[0]['href'], title[0].text.strip(), score[0].text.strip(), author[0].text.strip()])
len(data)
# -
data
import pandas as pd
pd_data=pd.DataFrame(data, columns=['Code','Title','Score','Author'])
pd_data
pd_data['Code'] = pd_data['Code'].str[16:22]
pd_data
pd_data['Code'] = pd_data['Code'].str.replace(pat=r'[^\w]', repl=r'', regex=True)
pd_data
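The two-step slice-then-strip used on the `Code` column above can be done in one pass with `str.extract` (a sketch on a toy frame; the exact query-string layout of the review hrefs is an assumption):

```python
import pandas as pd

# Toy hrefs shaped like the review links.
df = pd.DataFrame({'Code': ['?st=mcode&sword=189109&target=after',
                            '?st=mcode&sword=37886&target=after']})
# Pull the numeric movie code directly with a regular expression.
df['Code'] = df['Code'].str.extract(r'sword=(\d+)', expand=False)
```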
pd_data.to_excel('./saves/naver_movie_reviewdata.xlsx', index=False)  # .xlsx: modern pandas no longer ships the legacy .xls (xlwt) writer
| scraping/naver_movie_score.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/alanexplorer/Robotic-Algorithm-Tutorial/blob/master/kalmanFIlter.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="WE-9GMjH7tQp"
# # Kalman Filter
#
# ## Introduction
#
# Kalman filtering is an algorithm that provides estimates of some unknown variables given the measurements observed over time. Kalman filters have demonstrated their usefulness in various applications; they have a relatively simple form and require little computational power.
#
# ## Problem definition
#
# Kalman filters are used to estimate states based on linear dynamical systems in
# state space format. The Kalman filter represents beliefs by the moments parameterization: at time $t$, the belief is represented by the mean $\mu_t$ and the covariance $\Sigma_t$. The process model defines the evolution of the state from time $t-1$ to time $t$. The state transition probability $p(x_t \mid u_t, x_{t-1})$ must be a linear function in its arguments with added Gaussian noise. This is expressed by the following equation:
#
# $x_t = A_tx_{t−1} + B_tu_t + \varepsilon_t$
#
# Here $x_t$ and $x_{t-1}$ are state vectors, and $u_t$ is the control vector at time $t$.
# In our notation, both of these are vertical (column) vectors, of the
# form
#
# $x_{t} = \begin{pmatrix} x_{t}^{1}\\ x_{t}^{2}\\ \vdots \\ x_{t}^{n} \end{pmatrix}$ and $u_{t} = \begin{pmatrix} u_{t}^{1}\\ u_{t}^{2}\\ \vdots \\ u_{t}^{m} \end{pmatrix}$
#
# where $A_t$ is the state transition matrix applied to the previous state vector $x_{t-1}$; $A_t$ is a square matrix of size $n \times n$, where $n$ is
# the dimension of the state vector $x_t$. $B_t$ is the control-input matrix applied to the control vector $u_t$; it has size $n \times m$, with $m$ being the dimension of the control vector $u_t$. $\varepsilon_t$ is the process noise vector, assumed to be zero-mean Gaussian with covariance $R_t$: $\varepsilon_t \sim \mathcal{N}(0, R)$.
#
# The measurement probability $p(z_t | x_t)$ must also be linear in its arguments, with added Gaussian noise. The process model is paired with the measurement model that describes the relationship between the state and the measurement at the current time step t as:
#
# $z_t = C_tx_t + \delta_t$
#
# where $z_t$ is the measurement vector and $C_t$ is the measurement matrix, of size $k \times n$, where $k$ is the dimension of the measurement vector $z_t$. $\delta_t$ is the measurement noise vector, assumed to be zero-mean Gaussian with covariance $Q_t$: $\delta_t \sim \mathcal{N}(0, Q)$.
#
# The role of the Kalman filter is to provide an estimate of $x_t$ at time $t$, given the initial estimate $x_0$, the series of measurements $z_1, z_2, \ldots, z_t$, and the description of the system given by $A_t$, $B_t$, $C_t$, $Q$, and $R$. Subscripts on these matrices are omitted below by assuming that they are invariant over time, as in most applications. Although the covariance matrices are supposed to reflect the statistics of the noises, the true statistics are often unknown or non-Gaussian in practice. Therefore, $Q$ and $R$ are usually used as tuning parameters that the user can adjust to get the desired performance.
#
# ## Pseudocode
#
#
# $1: \textbf{Algorithm Kalman\_filter}(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t):$
#
# $2: \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
#
# $3: \bar{\Sigma}_t = A_t \Sigma_{t-1} A^T_t + R_t$
#
# $4: K_t = \bar{\Sigma}_t C^T_t (C_t \bar{\Sigma}_t C^T_t + Q_t)^{-1}$
#
# $5: \mu_t = \bar{\mu}_t + K_t(z_t - C_t \bar{\mu}_t)$
#
# $6: \Sigma_t = (I - K_t C_t)\bar{\Sigma}_t$
#
# $7: \textbf{return}\ (\mu_t, \Sigma_t)$
#
#
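The pseudocode maps line by line onto NumPy (a minimal sketch with the notation of the text; the 1-D example numbers are arbitrary):

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction (lines 2-3)
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Kalman gain (line 4)
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    # Correction (lines 5-6)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# 1-D example: move one unit, then observe position 1.2.
A = B = C = np.eye(1)
R = np.array([[0.1]]); Q = np.array([[1.0]])
mu, Sigma = kalman_step(np.array([0.0]), np.array([[10.0]]),
                        np.array([1.0]), np.array([1.2]), A, B, C, R, Q)
```

The corrected mean lands between the prediction (1.0) and the measurement (1.2), and the covariance shrinks, as the summary tables below describe.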
# ## Summary
#
# ### Prediction:
#
# | Description                | Representation in the pseudocode                |
# |----------------------------|-------------------------------------------------|
# | Predicted state estimate   | $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$         |
# | Predicted error covariance | $\bar{\Sigma}_t = A_t \Sigma_{t-1} A^T_t + R_t$ |
#
# ### Update:
#
# | Description              | Representation in the pseudocode                                   |
# |--------------------------|--------------------------------------------------------------------|
# | Measurement residual     | $(z_t - C_t \bar{\mu}_t)$                                          |
# | Kalman gain              | $K_t = \bar{\Sigma}_t C^T_t (C_t \bar{\Sigma}_t C^T_t + Q_t)^{-1}$ |
# | Updated state estimate   | $\mu_t = \bar{\mu}_t + K_t(z_t - C_t \bar{\mu}_t)$                 |
# | Updated error covariance | $\Sigma_t = (I - K_t C_t)\bar{\Sigma}_t$                           |
# + [markdown] colab_type="text" id="060k6LUCc_eF"
# ## Kalman Filter for Sensor Fusion
# + [markdown] colab_type="text" id="nPG3zWlic_eG"
# ## The Kalman Filter 1-D
#
# Kalman filters are discrete systems that let us estimate a hidden state from noisy measurements: given the measurements (the dependent variable), we infer an estimate of the state (the independent variable). Noise exists both in the input measurements and in how we have modeled the world with our equations, because of inevitably unaccounted-for factors in the non-sterile world. Input variables become more valuable when modeled as a system of equations, or a matrix, so that the relationships between those values can be exploited. Every variable in every dimension carries noise, so combining related inputs allows a weighted averaging based on the predicted differential at the next step, the noise unaccounted for in the system, and the noise introduced by the sensor inputs.
# + colab={} colab_type="code" id="v9deFMZZc_eG"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import seaborn as sb
from scipy import stats
import time
from numpy.linalg import inv
import scipy.stats as scs
# + colab={} colab_type="code" id="m8_tIgN9fD7b"
# %matplotlib inline
fw = 10 # figure width
# + [markdown] colab_type="text" id="VmH-ULpAc_eP"
# #### Plot the Distributions in this range:
# + colab={} colab_type="code" id="UedLopL1c_eP"
x = np.linspace(-100,100,1000)
# + colab={} colab_type="code" id="Ad31AnRuc_eS"
mean0 = 0.0 # e.g. meters or miles
var0 = 20.0
# + colab={"base_uri": "https://localhost:8080/", "height": 338} colab_type="code" id="dj2pPK33c_eV" outputId="b32f9ca7-a11e-40ab-d7fa-273d9c482d39"
plt.figure(figsize=(fw,5))
plt.plot(x, scs.norm.pdf(x, mean0, var0), 'b', label='Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.xlabel('Position');
# + [markdown] colab_type="text" id="TfF4Rka2c_eZ"
# ## Now we have something, which estimates the moved distance
# + [markdown] colab_type="text" id="PlXaHM8nc_ea"
# #### The mean is the distance moved in meters, calculated from velocity*dt, a step counter, a wheel encoder, ...
#
# #### VarMove is the movement uncertainty, estimated or determined with static measurements
# + colab={} colab_type="code" id="hZ0iplHec_eb"
meanMove = 25.0
varMove = 10.0
# + colab={"base_uri": "https://localhost:8080/", "height": 338} colab_type="code" id="UoPSCK6tc_ee" outputId="95f1dff3-ec0a-4375-dc52-d0a097144e1d"
plt.figure(figsize=(fw,5))
plt.plot(x,scs.norm.pdf(x, meanMove, varMove), 'r', label='Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.xlabel('Distance moved');
# + [markdown] colab_type="text" id="WX3uBW6Ec_eh"
# Both Distributions have to be merged together
# $\mu_\text{new}=\mu_\text{0}+\mu_\text{move}$ is the new mean and $\sigma^2_\text{new}=\sigma^2_\text{0}+\sigma^2_\text{move}$ is the new variance.
#
#
# + colab={} colab_type="code" id="7WYW-Lbvc_ei"
def predict(var, mean, varMove, meanMove):
    new_var = var + varMove
    new_mean = mean + meanMove
    return new_var, new_mean
# + colab={} colab_type="code" id="bQM4JSMNc_em"
new_var, new_mean = predict(var0, mean0, varMove, meanMove)
# + colab={"base_uri": "https://localhost:8080/", "height": 336} colab_type="code" id="hn8KWrfSc_ep" outputId="b717980a-b6a0-4e15-8b87-8ed1e32a09f9"
plt.figure(figsize=(fw,5))
plt.plot(x,scs.norm.pdf(x, mean0, var0), 'b', label='Beginning Normal Distribution')
plt.plot(x,scs.norm.pdf(x, meanMove, varMove), 'r', label='Movement Normal Distribution')
plt.plot(x,scs.norm.pdf(x, new_mean, new_var), 'g', label='Resulting Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.title('Normal Distributions of 1st Kalman Filter Prediction Step');
plt.savefig('Kalman-Filter-1D-Step.png', dpi=150)
# + [markdown] colab_type="text" id="PZcR5_pxc_eu"
# ### What you see: the resulting distribution is flatter, i.e. more uncertain.
#
# The more often you run the predict step, the flatter the distribution gets.
#
# First Sensor Measurement (Position) is coming in...
# #### Sensor Defaults for Position Measurements
# (Estimated or determined with static measurements)
# + colab={} colab_type="code" id="w7id7_JYc_eu"
meanSensor = 25.0
varSensor = 12.0
# + colab={"base_uri": "https://localhost:8080/", "height": 324} colab_type="code" id="NkR19YSZc_ex" outputId="81d7effb-703b-4a2a-9ee6-33777dd38b52"
plt.figure(figsize=(fw,5))
plt.plot(x,scs.norm.pdf(x, meanSensor, varSensor), 'c')
plt.ylim(0, 0.1);
# + [markdown] colab_type="text" id="r2Z4woKzc_ez"
# Now both Distributions have to be merged together
# $\sigma^2_\text{new}=\cfrac{1}{\cfrac{1}{\sigma^2_\text{old}}+\cfrac{1}{\sigma^2_\text{Sensor}}}$ is the new variance and the new mean value is $\mu_\text{new}=\cfrac{\sigma^2_\text{Sensor} \cdot \mu_\text{old} + \sigma^2_\text{old} \cdot \mu_\text{Sensor}}{\sigma^2_\text{old}+\sigma^2_\text{Sensor}}$
# + colab={} colab_type="code" id="wtbxW8iuc_ez"
def correct(var, mean, varSensor, meanSensor):
    new_mean = (varSensor*mean + var*meanSensor) / (var + varSensor)
    new_var = 1 / (1/var + 1/varSensor)
    return new_var, new_mean
# + colab={} colab_type="code" id="HT6JhDpXc_e2"
var, mean = correct(new_var, new_mean, varSensor, meanSensor)
# + colab={"base_uri": "https://localhost:8080/", "height": 336} colab_type="code" id="p2g-t5DHc_e5" outputId="6b941ae2-14da-4612-99fb-dcf0e0b540fd"
plt.figure(figsize=(fw,5))
plt.plot(x,scs.norm.pdf(x, new_mean, new_var), 'g', label='Beginning (after Predict)')
plt.plot(x,scs.norm.pdf(x, meanSensor, varSensor), 'c', label='Position Sensor Normal Distribution')
plt.plot(x,scs.norm.pdf(x, mean, var), 'm', label='New Position Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.title('Normal Distributions of 1st Kalman Filter Update Step');
# + [markdown] colab_type="text" id="KHAVvUnfc_e8"
# ###### This is called the Measurement or Correction step! The filter gets more certain about the actual state.
# + [markdown] colab_type="text" id="Dgb_d4hYc_e8"
# #### Let's put everything together: The 1D Kalman Filter
# "Kalman-Filter: Predicting the Future since 1960"
#
# Let's say, we have some measurements for position and for distance traveled. Both have to be fused with the 1D-Kalman Filter.
# + colab={} colab_type="code" id="RitNVg4Gc_e9"
positions = (10, 20, 30, 40, 50)+np.random.randn(5)
distances = (10, 10, 10, 10, 10)+np.random.randn(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="28QSD9AFc_fA" outputId="a50c4728-7e29-473f-f701-347a043b6abd"
positions
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="G5Ha1hn5c_fC" outputId="6b9c25ff-a8a7-435c-daf1-9d0086a494a9"
distances
# + colab={"base_uri": "https://localhost:8080/", "height": 359} colab_type="code" id="o3Esp3N3c_fE" outputId="fe76e82e-3a23-4002-9036-e5cc7b085e63"
for m in range(len(positions)):
    # Predict
    var, mean = predict(var, mean, varMove, distances[m])
    #print('mean: %.2f\tvar:%.2f' % (mean, var))
    plt.plot(x, scs.norm.pdf(x, mean, var), label='%i. step (Prediction)' % (m+1))
    # Correct
    var, mean = correct(var, mean, varSensor, positions[m])
    print('After correction: mean= %.2f\tvar= %.2f' % (mean, var))
    plt.plot(x, scs.norm.pdf(x, mean, var), label='%i. step (Correction)' % (m+1))
plt.ylim(0, 0.1);
plt.xlim(-20, 120)
plt.legend();
# + [markdown] colab_type="text" id="eAun_5FGc_fG"
#
# The sensors are represented as normal distributions with their parameters ($\mu$ and $\sigma^2$) and are combined by convolution (the prediction, which adds the random variables) or by multiplication of densities (the correction). The prediction decreases the certainty about the state, the correction increases the certainty.
#
# Prediction: Certainty $\downarrow$
# Correction: Certainty $\uparrow$
# + [markdown] colab_type="text" id="0jkqbt8rc_fH"
# ## Kalman Filter - Multi-Dimensional Measurement
# -
# ### Kalman Filter Implementation for Constant Velocity Model (CV) in Python
#
# 
#
# Situation covered: You drive with your car in a tunnel and the GPS signal is lost. Now the car has to determine, where it is in the tunnel. The only information it has, is the velocity in driving direction. The x and y component of the velocity ($\dot x$ and $\dot y$) can be calculated from the absolute velocity (revolutions of the wheels) and the heading of the vehicle (yaw rate sensor).
# 
#
# First, we have to initialize the matrices and vectors. Setting up the math.
# ## State Vector
#
# Constant Velocity Model for Ego Motion
#
# $$x_t = \begin{bmatrix} x \\ y \\ \dot x \\ \dot y \end{bmatrix} = \begin{matrix} \text{Position x} \\ \text{Position y} \\ \text{Velocity in x} \\ \text{Velocity in y} \end{matrix}$$
# Formal Definition (Motion of Law):
#
# $$x_{t} = \textbf{$A_t$} \cdot x_{t-1}$$
#
# which is
#
# $$x_{t} = \begin{bmatrix}1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ \dot x \\ \dot y \end{bmatrix}_{t-1}$$
# Observation Model:
#
# $$z_t = \textbf{$C_t$}\cdot x_t$$
#
# which is
#
# $$z_t = \begin{bmatrix}0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \cdot x_t$$ means: You observe the velocity directly in the correct unit
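As a quick numeric check of the motion law, multiplying $A$ by a state vector advances each position by its velocity times $\Delta t$ (a minimal sketch; the numbers are arbitrary):

```python
import numpy as np

dt = 0.1
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

x_prev = np.array([2.0, 3.0, 10.0, -5.0])  # x, y, vx, vy
x_next = A @ x_prev                        # positions advance by velocity * dt
```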
# ### Initial State $x_0$
#
# $$x_{0} = \begin{bmatrix}0 \\ 0 \\ 0 \\ 0\end{bmatrix}$$
x = np.matrix([[0.0, 0.0, 0.0, 0.0]]).T
print(x, x.shape)
plt.scatter(float(x[0]),float(x[1]), s=100)
plt.title('Initial Location')
# ### Covariance Matrix $P_0$ ($\Sigma_0$)
#
# An uncertainty must be given for the initial state $x_0$. In the 1D case this was $\sigma^2_0$; now a matrix, $P_0$, defines an initial uncertainty for all states.
#
# This matrix is most likely to be changed during the filter passes. It is changed in both the Predict and Correct steps. If one is quite sure about the states at the beginning, one can use low values here, if one does not know exactly how the values of the state vector are, the covariance matrix should be initialized with very large values (1 million or so) to allow the filter to converge relatively quickly (find the right values based on the measurements).
#
#
# $$P_{0} = \begin{bmatrix}\sigma^2_x & 0 & 0 & 0 \\ 0 & \sigma^2_y & 0 & 0 \\ 0 & 0 & \sigma^2_{\dot x} & 0 \\ 0 & 0 & 0 & \sigma^2_{\dot y} \end{bmatrix}$$
#
# with $\sigma$ as the standard deviation
P = np.diag([1000.0, 1000.0, 1000.0, 1000.0])
print(P, P.shape)
# +
fig = plt.figure(figsize=(6, 6))
im = plt.imshow(P, interpolation="none", cmap=plt.get_cmap('binary'))
plt.title('Initial Covariance Matrix $P$')
# label the four state dimensions on both axes (tick count must match the labels)
plt.yticks(np.arange(4), ('$x$', '$y$', r'$\dot x$', r'$\dot y$'), fontsize=22)
plt.xticks(np.arange(4), ('$x$', '$y$', r'$\dot x$', r'$\dot y$'), fontsize=22)
plt.xlim([-0.5,3.5])
plt.ylim([3.5, -0.5])
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", "5%", pad="3%")
plt.colorbar(im, cax=cax);
# -
# ### Dynamic Matrix $A$
#
# It is calculated from the dynamics of the Egomotion.
#
# $$x_{t} = x_{t-1} + \dot x_{t-1} \cdot \Delta t$$
# $$y_{t} = y_{t-1} + \dot y_{t-1} \cdot \Delta t$$
# $$\dot x_{t} = \dot x_{t-1}$$
# $$\dot y_{t} = \dot y_{t-1}$$
# +
dt = 0.1 # Time Step between Filter Steps
A = np.matrix([[1.0, 0.0, dt, 0.0],
[0.0, 1.0, 0.0, dt],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]])
print(A, A.shape)
# -
# ### Measurement Matrix $C_t$
#
# We directly measure the Velocity $\dot x$ and $\dot y$
#
# $$C = \begin{bmatrix}0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}$$
C = np.matrix([[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]])
print(C, C.shape)
# ### Measurement Noise Covariance $Q_t$
#
# Tells the Kalman Filter how 'bad' the sensor readings are.
#
# $$Q_t = \begin{bmatrix}\sigma^2_{\dot x} & 0 \\ 0 & \sigma^2_{\dot y} \end{bmatrix}$$
# +
ra = 10.0**2
Q = np.matrix([[ra, 0.0],
[0.0, ra]])
print(Q, Q.shape)
# +
# Plot between -10 and 10 with .001 steps.
xpdf = np.arange(-10, 10, 0.001)
plt.subplot(121)
plt.plot(xpdf, norm.pdf(xpdf, 0, np.sqrt(Q[0,0])))  # norm.pdf expects the standard deviation, not the variance
plt.title(r'$\dot x$')
plt.subplot(122)
plt.plot(xpdf, norm.pdf(xpdf, 0, np.sqrt(Q[1,1])))
plt.title(r'$\dot y$')
plt.tight_layout()
# -
# ### Process Noise Covariance $R$
#
# The Position of the car can be influenced by a force (e.g. wind), which leads to an acceleration disturbance (noise). This process noise has to be modeled with the process noise covariance matrix R.
#
# $$R = \begin{bmatrix}\sigma_{x}^2 & \sigma_{xy} & \sigma_{x \dot x} & \sigma_{x \dot y} \\ \sigma_{yx} & \sigma_{y}^2 & \sigma_{y \dot x} & \sigma_{y \dot y} \\ \sigma_{\dot x x} & \sigma_{\dot x y} & \sigma_{\dot x}^2 & \sigma_{\dot x \dot y} \\ \sigma_{\dot y x} & \sigma_{\dot y y} & \sigma_{\dot y \dot x} & \sigma_{\dot y}^2 \end{bmatrix}$$
#
# One can calculate R as
#
# $$R = G\cdot G^T \cdot \sigma_v^2$$
#
# with $G = \begin{bmatrix}0.5dt^2 & 0.5dt^2 & dt & dt\end{bmatrix}^T$ and $\sigma_v$ as the acceleration process noise, which can be assumed for a vehicle to be $8.8m/s^2$, according to: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2011). [Empirical evaluation of vehicular models for ego motion estimation](http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5940526). 2011 IEEE Intelligent Vehicles Symposium (IV), 534–539. doi:10.1109/IVS.2011.5940526
# +
sv = 8.8
G = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt],
[dt]])
R = G*G.T*sv**2
# -
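# A quick sanity check (my addition, not part of the original notebook): because $R$ is built as the outer product $G \cdot G^T \cdot \sigma_v^2$, it must come out symmetric and have rank 1:

```python
import numpy as np

dt = 0.1
sv = 8.8  # assumed acceleration process noise, as above
G = np.matrix([[0.5*dt**2],
               [0.5*dt**2],
               [dt],
               [dt]])
R = G * G.T * sv**2

# An outer product is symmetric by construction and has rank 1.
assert np.allclose(R, R.T)
assert np.linalg.matrix_rank(np.asarray(R)) == 1
print(R.shape)
```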
from sympy import Symbol, Matrix
from sympy.interactive import printing
printing.init_printing()
dts = Symbol('dt')
Rs = Matrix([[0.5*dts**2],[0.5*dts**2],[dts],[dts]])
Rs*Rs.T
# +
fig = plt.figure(figsize=(6, 6))
im = plt.imshow(R, interpolation="none", cmap=plt.get_cmap('binary'))
plt.title('Process Noise Covariance Matrix $R$')
ylocs, ylabels = plt.yticks()
# set the locations and labels of the yticks (one position per state variable)
plt.yticks(np.arange(4), (r'$x$', r'$y$', r'$\dot x$', r'$\dot y$'), fontsize=22)
xlocs, xlabels = plt.xticks()
# set the locations and labels of the xticks (one position per state variable)
plt.xticks(np.arange(4), (r'$x$', r'$y$', r'$\dot x$', r'$\dot y$'), fontsize=22)
plt.xlim([-0.5,3.5])
plt.ylim([3.5, -0.5])
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(plt.gca())
cax = divider.append_axes("right", "5%", pad="3%")
plt.colorbar(im, cax=cax);
# -
# ### Identity Matrix $I$
I = np.eye(4)
print(I, I.shape)
# ## Measurements
#
# As an example, we use some randomly generated measurement values
# +
m = 200 # Measurements
vx= 20 # in X
vy= 10 # in Y
mx = np.array(vx+np.random.randn(m))
my = np.array(vy+np.random.randn(m))
measurements = np.vstack((mx,my))
print(measurements.shape)
print('Standard Deviation of Velocity Measurements = %.2f' % np.std(mx))
print('You assumed a standard deviation of %.2f in Q.' % np.sqrt(Q[0,0]))  # Q stores variances
# +
fig = plt.figure(figsize=(16,5))
plt.step(range(m),mx, label='$\dot x$')
plt.step(range(m),my, label='$\dot y$')
plt.ylabel(r'Velocity $m/s$')
plt.title('Measurements')
plt.legend(loc='best',prop={'size':18})
# +
# Preallocation for Plotting
xt = []
yt = []
dxt= []
dyt= []
Zx = []
Zy = []
Px = []
Py = []
Pdx= []
Pdy= []
Rdx= []
Rdy= []
Kx = []
Ky = []
Kdx= []
Kdy= []
def savestates(x, Z, P, Q, K):
xt.append(float(x[0]))
yt.append(float(x[1]))
dxt.append(float(x[2]))
dyt.append(float(x[3]))
Zx.append(float(Z[0]))
Zy.append(float(Z[1]))
Px.append(float(P[0,0]))
Py.append(float(P[1,1]))
Pdx.append(float(P[2,2]))
Pdy.append(float(P[3,3]))
Rdx.append(float(Q[0,0]))
Rdy.append(float(Q[1,1]))
Kx.append(float(K[0,0]))
Ky.append(float(K[1,0]))
Kdx.append(float(K[2,0]))
Kdy.append(float(K[3,0]))
# -
# # Kalman Filter
#
# 
for n in range(len(measurements[0])):
# Time Update (Prediction)
# ========================
# Project the state ahead
x = A*x
# Project the error covariance ahead
P = A*P*A.T + R
# Measurement Update (Correction)
# ===============================
# Compute the Kalman Gain
S = C*P*C.T + Q
K = (P*C.T) * np.linalg.pinv(S)
# Update the estimate via z
Z = measurements[:,n].reshape(2,1)
y = Z - (C*x) # Innovation or Residual
x = x + (K*y)
# Update the error covariance
P = (I - (K*C))*P
# Save states (for Plotting)
savestates(x, Z, P, Q, K)
# # Let's take a look at the filter performance
# ### Kalman Gains $K$
def plot_K():
fig = plt.figure(figsize=(16,9))
plt.plot(range(len(measurements[0])),Kx, label='Kalman Gain for $x$')
plt.plot(range(len(measurements[0])),Ky, label='Kalman Gain for $y$')
plt.plot(range(len(measurements[0])),Kdx, label='Kalman Gain for $\dot x$')
plt.plot(range(len(measurements[0])),Kdy, label='Kalman Gain for $\dot y$')
plt.xlabel('Filter Step')
plt.ylabel('')
plt.title('Kalman Gain (the lower, the more the measurements fulfill the prediction)')
plt.legend(loc='best',prop={'size':22})
plot_K()
# ### Uncertainty Matrix $P$
def plot_P():
fig = plt.figure(figsize=(16,9))
plt.plot(range(len(measurements[0])),Px, label='$x$')
plt.plot(range(len(measurements[0])),Py, label='$y$')
plt.plot(range(len(measurements[0])),Pdx, label='$\dot x$')
plt.plot(range(len(measurements[0])),Pdy, label='$\dot y$')
plt.xlabel('Filter Step')
plt.ylabel('')
plt.title('Uncertainty (Elements from Matrix $P$)')
plt.legend(loc='best',prop={'size':22})
plot_P()
# ### State Estimate $x$
def plot_x():
fig = plt.figure(figsize=(16,9))
plt.step(range(len(measurements[0])),dxt, label='$\dot x$')
plt.step(range(len(measurements[0])),dyt, label='$\dot y$')
plt.axhline(vx, color='#999999', label='$\dot x_{real}$')
plt.axhline(vy, color='#999999', label='$\dot y_{real}$')
plt.xlabel('Filter Step')
plt.title('Estimate (Elements from State Vector $x$)')
plt.legend(loc='best',prop={'size':22})
plt.ylim([0, 30])
plt.ylabel('Velocity')
plot_x()
# ## Position x/y
def plot_xy():
fig = plt.figure(figsize=(16,16))
plt.scatter(xt,yt, s=20, label='State', c='k')
plt.scatter(xt[0],yt[0], s=100, label='Start', c='g')
plt.scatter(xt[-1],yt[-1], s=100, label='Goal', c='r')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Position')
plt.legend(loc='best')
plt.axis('equal')
plot_xy()
| kalmanFIlter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Compare detected face locations in different pliers face detection methods
#
#
# ### Tools that detect faces:
#
# * Google Cloud Vision API
# * Clarifai
# * pliers itself
#
#
# ### Common measures
# * boundaries of faces
#
#
# ### Ways to assess similarity
# * Euclidean distance between coordinates?
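# Besides the Euclidean distance between coordinates, a common way to compare bounding boxes is intersection over union (IoU). A minimal sketch (my addition), using the (top, right, bottom, left) pixel ordering adopted below:

```python
def bbox_iou(box_a, box_b):
    """IoU of two boxes given as (top, right, bottom, left) in pixel
    coordinates, with y growing downwards as in image arrays."""
    top = max(box_a[0], box_b[0])
    right = min(box_a[1], box_b[1])
    bottom = min(box_a[2], box_b[2])
    left = max(box_a[3], box_b[3])
    inter = max(0, right - left) * max(0, bottom - top)
    area_a = (box_a[1] - box_a[3]) * (box_a[2] - box_a[0])
    area_b = (box_b[1] - box_b[3]) * (box_b[2] - box_b[0])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Identical boxes overlap fully; disjoint boxes not at all.
print(bbox_iou((0, 10, 10, 0), (0, 10, 10, 0)))    # 1.0
print(bbox_iou((0, 10, 10, 0), (20, 30, 30, 20)))  # 0.0
```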
# +
import numpy as np
from os.path import join as opj
from pliers.extractors import (ClarifaiAPIImageExtractor,
FaceRecognitionFaceLocationsExtractor,
GoogleVisionAPIFaceExtractor,
merge_results)
from pliers.stimuli import ImageStim
from pliers.filters import FrameSamplingFilter
from matplotlib import pyplot as plt
from matplotlib import image as mpimg
from matplotlib import patches as patches
# -
def plot_boundingBox(img, rect_coords, savename='', title=''):
fig,ax = plt.subplots(1)
if isinstance(img, str):
img = mpimg.imread(img)
imgplot = ax.imshow(img)
# add bounding boxes
for c in rect_coords:
rect = patches.Rectangle((c[0], c[1]), c[2], c[3],
linewidth=2,
edgecolor='r',
facecolor='none',
)
ax.add_patch(rect)
# turn off axis
plt.axis('off')
plt.title(title)
# save
if not savename:
plt.show()
else:
plt.savefig(savename)
def extract_bounding(results,
api='builtin',
x=None,
y=None):
"""
Extract bounding box coordinates from a face extraction with a pliers built-in tool
params
------
results: pandas dataframe, result of a .to_df() operation on an extraction result
api: one of 'builtin', 'clarifai', 'google'
x, y: stimulus dimensions in pixel
returns
-------
coords: dictionary, with one key per face and coordinates in pixel. Order of coords:
top, right, bottom, left
>>> extract_bounding(result_clarifai, api='clarifai', x=444, y=600)
"""
allowed_api = ['builtin', 'clarifai', 'google']
if api not in allowed_api:
raise ValueError(f'expected api specification from one of {allowed_api}, but got "{api}".')
# initialize an empty dict
coords = {}
if api == 'builtin':
for idx, i in results.iterrows():
coords[idx] = [c for c in i['face_locations']]
# simple assertion to check whether results are within image dimensions
assert all([x_dim < x for x_dim in [coords[idx][1], coords[idx][3]]])
assert all([y_dim < y for y_dim in [coords[idx][0], coords[idx][2]]])
elif api == 'clarifai':
assert x is not None and y is not None
for idx, i in results.iterrows():
# extract coordinates and scale them to pixels
coords[idx] = [i['top_row'] * y,
i['right_col'] * x,
i['bottom_row'] * y,
i['left_col'] * x
]
return coords
# define static test images (single and many faces)
img_pth = opj('../', 'data', 'obama.jpg')
img_pth_many = opj('../', 'data', 'thai_people.jpg')
stim = ImageStim(img_pth)
stim_many = ImageStim(img_pth_many)
# +
# the results of the face detection are given relative to stimulus size. Let's get the image dimensions in pixel
y, x = stim.data.shape[:2]
print(f'the one-face picture is {x} pixel x {y} pixel in size')
y2, x2 = stim_many.data.shape[:2]
print(f'the many-face picture is {x2} pixel x {y2} pixel in size')
# +
# quick overview of the pictures
plt_img = mpimg.imread(img_pth)
plt_img2 = mpimg.imread(img_pth_many)
plt.figure(1)
plt.subplot(211)
plt.imshow(plt_img)
plt.subplot(212)
plt.imshow(plt_img2)
plt.axis('off')
plt.show()
# -
# ### pliers face detection
ext_pliers = FaceRecognitionFaceLocationsExtractor()
# for single face
result_pliers = ext_pliers.transform(stim).to_df()
# for many faces stimulus
result_pliers_many = ext_pliers.transform(stim_many).to_df()
# extract faces for single and multi-face images from pliers-builtin, and plot them
for res, im, x_dim, y_dim in [(result_pliers, img_pth, x, y), (result_pliers_many, img_pth_many, x2, y2)]:
d = extract_bounding(res, x=x_dim, y=y_dim)
for k, i in d.items():
top, right, bottom, left = i
box_width = right-left
box_height = top-bottom
coords = [left, bottom, box_width, box_height]
plot_boundingBox(im, [coords], title='Pliers builtin')  # wrap coords in a list; the third positional argument is savename, not title
# ### clarifai face detection
# +
# the clarifai extraction needs a model and an api key
model='face'
ext_clarifai = ClarifaiAPIImageExtractor(api_key='<KEY>',
model=model)
result_clarifai = ext_clarifai.transform(stim).to_df()
# for many faces
result_clarifai_many = ext_clarifai.transform(stim_many).to_df()
# +
# transform relative coordinates into pixel
#top_row = y * result_clarifai['top_row'][0]
#bottom_row = y * result_clarifai['bottom_row'][0]
#left_col = x * result_clarifai['left_col'][0]
#right_col = x * result_clarifai['right_col'][0]
#print(top_row, right_col, bottom_row, left_col)
# Plot bounding on image
#box_width = right_col - left_col
#box_height = top_row - bottom_row
#coords = [left_col, bottom_row, box_width, box_height]
#plot_boundingBox(img_pth, coords, 'Clarifai: wide face bounding box')
# fig,ax = plt.subplots(1)
# plt_img = mpimg.imread(img_pth)
# imgplot = ax.imshow(plt_img)
# #bottom left xy, width, height
# rect = patches.Rectangle((left_col,bottom_row),right_col-left_col,top_row-bottom_row,
# linewidth=2,
# edgecolor='r',
# facecolor='none',
# )
# #plt.scatter(0, top_row)
# # Add the patch to the Axes
# ax.add_patch(rect)
# plt.title('Pliers builtin: face bounding box')
# plt.show()
# -
# extract faces for single and multi-face images from pliers-builtin, and plot them
for res, im, x_dim, y_dim in [(result_clarifai, img_pth, x, y), (result_clarifai_many, img_pth_many, x2, y2)]:
d = extract_bounding(res, api='clarifai', x=x_dim, y=y_dim)
for k, i in d.items():
top, right, bottom, left = i
box_width = right-left
box_height = top-bottom
coords = [left, bottom, box_width, box_height]
plot_boundingBox(im, [coords], title='Clarifai')  # wrap coords in a list; the third positional argument is savename, not title
# ### Google Cloud vision API face detection
ext_google = GoogleVisionAPIFaceExtractor(discovery_file='/home/adina/NeuroHackademy-02c15db15c2a.json')
#ext_google = GoogleVisionAPIFaceExtractor(discovery_file='/Users/Mai/NeuroHackademy-02c15db15c2a.json')
result_google = ext_google.transform(stim_many).to_df()
result_google
# +
# Google has "wide" and "narrow" bounding boxes. Here we get the wide bounding box
result_google.to_dict(orient='records')
# vertex coordinates are in the same scale as the original image.
# vertices are in order top-left, top-right, bottom-right, bottom-left.
top_left_x = result_google['boundingPoly_vertex1_x'][0]
top_right_x = result_google['boundingPoly_vertex2_x'][0]
bottom_right_x = result_google['boundingPoly_vertex3_x'][0]
bottom_left_x = result_google['boundingPoly_vertex4_x'][0]
top_left_y = result_google['boundingPoly_vertex1_y'][0]
top_right_y = result_google['boundingPoly_vertex2_y'][0]
bottom_right_y = result_google['boundingPoly_vertex3_y'][0]
bottom_left_y = result_google['boundingPoly_vertex4_y'][0]
print(top_left_x, top_right_x, bottom_right_x, bottom_left_x)
print(top_left_y, top_right_y, bottom_right_y, bottom_left_y)
# +
# # Plot bounding on image
box_width = bottom_right_x - bottom_left_x
box_height = bottom_right_y - top_right_y
coords_google_wide = [[bottom_left_x, top_left_y, box_width, box_height]]
plot_boundingBox(img_pth, coords_google_wide, title='Google: wide face bounding box')
# -
coords_google_wide
# +
# Google has "wide" and "narrow" bounding boxes. Here we get the narrow bounding box
result_google.to_dict(orient='records')
# vertex coordinates are in the same scale as the original image.
# vertices are in order top-left, top-right, bottom-right, bottom-left.
top_left_x = result_google['fdBoundingPoly_vertex1_x'][0]
top_right_x = result_google['fdBoundingPoly_vertex2_x'][0]
bottom_right_x = result_google['fdBoundingPoly_vertex3_x'][0]
bottom_left_x = result_google['fdBoundingPoly_vertex4_x'][0]
top_left_y = result_google['fdBoundingPoly_vertex1_y'][0]
top_right_y = result_google['fdBoundingPoly_vertex2_y'][0]
bottom_right_y = result_google['fdBoundingPoly_vertex3_y'][0]
bottom_left_y = result_google['fdBoundingPoly_vertex4_y'][0]
print(top_left_x, top_right_x, bottom_right_x, bottom_left_x)
print(top_left_y, top_right_y, bottom_right_y, bottom_left_y)
# +
# Plot bounding on image
box_width = bottom_right_x - bottom_left_x
box_height = bottom_right_y - top_right_y
coords_google_narrow = [[bottom_left_x, top_left_y, box_width, box_height]]
plot_boundingBox(img_pth, coords_google_narrow, title = 'Google: narrow face bounding box')
# -
# ### Compare different face bounding boxes
# let's start with looking at the coords
print('pliers: ' + str(coords_pliers))
print('clarifai: ' + str(coords_clarifai))
print('google (wide): ' + str(coords_google_wide))
print('google (narrow): ' + str(coords_google_narrow))
# +
# Plot on the same figure
# Make a dictionary with coords
face_apis = ['pliers', 'clarifai', 'google_wide', 'google_narrow']
coord_dict = dict(zip(face_apis, [coords_pliers, coords_clarifai, coords_google_wide, coords_google_narrow]))
# -
# ### Detect faces in a video
# +
# Path to video
video_pth = opj('../', 'data', 'obama_speech.mp4')
# Sample 2 frames per second
sampler = FrameSamplingFilter(hertz=2)
frames = sampler.transform(video_pth)
# -
# Extract using google API
ext_google = GoogleVisionAPIFaceExtractor(discovery_file='/Users/Mai/NeuroHackademy-02c15db15c2a.json')
results_google = ext_google.transform(frames)
results_google = merge_results(results_google)
# +
# get coordinates of the bounding box
# top_left_x = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex1_x']
# top_right_x = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex2_x']
# bottom_left_x = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex3_x']
# bottom_right_x = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex4_x']
# top_left_y = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex1_y']
# top_right_y = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex2_y']
# bottom_left_y = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex3_y']
# bottom_right_y = results_google['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex4_y']
# +
#for i in frames.n_frames:
out_dir = opj('../', 'output')
for i in range(frames.n_frames):
# get this frame
f = frames.get_frame(i)
f_data = f.data
f_name = f.name
# get api output for this frame
f_results_google = results_google.loc[results_google['stim_name'] == f_name]
# get bounding box
coords = []
for index, row in f_results_google.iterrows():
coords.append(get_faceBounds_google(row))
# plot img with box and save
savename = opj(out_dir, 'img_' + str(i) + '.jpg')
plot_boundingBox(f_data, coords, savename)
# -
s = 'b'
if not s:
print('b')
else:
print('c')
def get_faceBounds_google(df, boxtype='tight'):
if boxtype == 'tight':  # compare strings with ==, not identity
top_left_x = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex1_x']
top_right_x = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex2_x']
bottom_left_x = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex3_x']
bottom_right_x = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex4_x']
top_left_y = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex1_y']
top_right_y = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex2_y']
bottom_left_y = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex3_y']
bottom_right_y = df['GoogleVisionAPIFaceExtractor#fdBoundingPoly_vertex4_y']
elif boxtype == 'wide':
top_left_x = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex1_x']
top_right_x = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex2_x']
bottom_left_x = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex3_x']
bottom_right_x = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex4_x']
top_left_y = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex1_y']
top_right_y = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex2_y']
bottom_left_y = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex3_y']
bottom_right_y = df['GoogleVisionAPIFaceExtractor#boundingPoly_vertex4_y']
coords = [bottom_left_x, bottom_left_y, bottom_right_x - bottom_left_x, top_right_y - bottom_right_y]
return coords
f_results_google.keys()
len(top_left_x)
frames.get_frame(10).data
# +
num_unique_faces = len(np.unique(results_google['object_id']))
# -
np.unique(results_google['object_id'])
df_face1 = results_google.loc[results_google['object_id'] == 0]
results_google
| face_annot/code/Facedetection_comparison.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: MindSpore
# language: python
# name: mindspore
# ---
# # Loading Text Dataset
#
# `Ascend` `GPU` `CPU` `Data Preparation`
#
# [](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvcHJvZ3JhbW1pbmdfZ3VpZGUvZW4vbWluZHNwb3JlX2xvYWRfZGF0YXNldF90ZXh0LmlweW5i&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/programming_guide/en/mindspore_load_dataset_text.ipynb) [](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/programming_guide/source_en/load_dataset_text.ipynb)
# ## Overview
#
# The `mindspore.dataset` module provided by MindSpore enables users to customize their data fetching strategy from disk. At the same time, data processing and tokenization operators are applied to the data. Pipelined data processing produces a continuous flow of data to the training network, improving overall performance.
#
# In addition, MindSpore supports data loading in distributed scenarios. Users can define the number of shards while loading. For more details, see [Loading the Dataset in Data Parallel Mode](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html#loading-the-dataset-in-data-parallel-mode).
#
# This tutorial briefly demonstrates how to load and process text data using MindSpore.
#
# ## Preparations
#
# 1. Prepare the following text data.
# Welcome to Beijing!
#
# 北京欢迎您!
#
# 我喜欢English!
# 2. Create the `tokenizer.txt` file, copy the text data to the file, and save the file under the `./datasets` directory. The directory structure is as follows.
# +
import os
if not os.path.exists('./datasets'):
os.mkdir('./datasets')
file_handle = open('./datasets/tokenizer.txt', mode='w')
file_handle.write('Welcome to Beijing! \n北京欢迎您! \n我喜欢English! \n')
file_handle.close()
# -
# ! tree ./datasets
# 3. Import the `mindspore.dataset` and `mindspore.dataset.text` modules.
import mindspore.dataset as ds
import mindspore.dataset.text as text
# ## Loading Dataset
#
# MindSpore supports loading common datasets in the field of text processing that come in a variety of on-disk formats. Users can also implement custom dataset class to load customized data. For detailed loading methods of various datasets, please refer to the [Loading Dataset](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html) chapter in the Programming Guide.
#
# The following tutorial demonstrates loading datasets using the `TextFileDataset` in the `mindspore.dataset` module.
#
# 1. Configure the dataset directory as follows and create a dataset object.
DATA_FILE = "./datasets/tokenizer.txt"
dataset = ds.TextFileDataset(DATA_FILE, shuffle=False)
# 2. Create an iterator then obtain data through the iterator.
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
# ## Processing Data
#
# For the data processing operators currently supported by MindSpore and their detailed usage methods, please refer to the [Processing Data](https://www.mindspore.cn/docs/programming_guide/en/master/pipeline.html) chapter in the Programming Guide
#
# The following tutorial demonstrates how to construct a pipeline and perform operations such as `shuffle` and `RegexReplace` on the text dataset.
#
# 1. Shuffle the dataset.
ds.config.set_seed(58)
dataset = dataset.shuffle(buffer_size=3)
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
# 2. Perform `RegexReplace` on the dataset.
replace_op1 = text.RegexReplace("Beijing", "Shanghai")
replace_op2 = text.RegexReplace("北京", "上海")
dataset = dataset.map(operations=replace_op1)
dataset = dataset.map(operations=replace_op2)
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
# ## Tokenization
#
# For the data tokenization operators currently supported by MindSpore and their detailed usage methods, please refer to the [Tokenizer](https://www.mindspore.cn/docs/programming_guide/en/master/tokenizer.html) chapter in the Programming Guide.
#
# The following tutorial demonstrates how to use the `WhitespaceTokenizer` to tokenize words with space.
#
# 1. Create a `tokenizer`.
tokenizer = text.WhitespaceTokenizer()
# 2. Apply the `tokenizer`.
dataset = dataset.map(operations=tokenizer)
# 3. Create an iterator and obtain data through the iterator.
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']).tolist())
| docs/mindspore/programming_guide/source_en/load_dataset_text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Purpose: Pig Analysis Segmentation and Specific Features
# ### Purpose: To move from thresholding to segmentation and shape feature quantification
# Created by: <NAME>
# Creation Date: 05/21/2021
# Last Update: 06/4/2021 (updated to only include the Li threshold information)
# *Step 1: Import Necessary Packages*
# +
import numpy as np
import pandas as pd
from scipy import ndimage
import skimage.filters
from skimage import morphology
from skimage.measure import label, regionprops, regionprops_table
from skimage.color import label2rgb
from skimage import io
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
# -
# *Step 2: User Inputs*
# +
#replace the example path from my computer with the path to the image on your computer
cell_im_location = '/Users/hhelmbre/Desktop/Sample_piglet_dataset/FGR_P4_2414_frontal_cortex_2.tif'
# -
# *Step 3: Reading in the Image*
cell_im = io.imread(cell_im_location)
cell_im.shape
# *Step 4: Viewing the Image*
# *Step 5: Splitting Channels for Thresholding*
nucleus_im = cell_im[0,:,:]
cell_im = cell_im[1,:,:]
plt.imshow(cell_im)
# *Step 6: Applying the Li Threshold*
thresh_li = skimage.filters.threshold_li(cell_im)
binary_li = cell_im > thresh_li
# *Step 7: Checking our Threshold*
# +
fig, ax = plt.subplots(2, 2, figsize=(10, 10))
ax[0, 0].imshow(cell_im, cmap=plt.cm.gray)
ax[0, 0].set_title('Original')
ax[0, 1].hist(cell_im.ravel(), bins=256)
ax[0, 1].set_title('Histogram')
ax[0, 1].set_xlim((0, 256))
ax[1, 0].imshow(binary_li, cmap=plt.cm.gray)
ax[1, 0].set_title('Thresholded (Li)')
ax[1, 1].hist(cell_im.ravel(), bins=256)
ax[1, 1].axvline(thresh_li, color='r')
ax[1, 1].set_xlim((0, 256))
for a in ax[:, 0]:
a.axis('off')
plt.show()
# I do not know why the ravel has stopped working here
# -
# *Step 8: Removing Small Objects from the Threshold (Li) Image*
thresh_li = skimage.filters.threshold_li(cell_im)
binary_li = cell_im > thresh_li
new_binary_li = morphology.remove_small_objects(binary_li, min_size=64)
# +
fig, ax = plt.subplots(2, 2, figsize=(10, 10))
ax[0, 0].imshow(cell_im, cmap=plt.cm.gray)
ax[0, 0].set_title('Original')
ax[0, 1].hist(cell_im.ravel(), bins=256)
ax[0, 1].set_title('Histogram')
ax[0, 1].set_xlim((0, 256))
ax[1, 0].imshow(new_binary_li, cmap=plt.cm.gray)
ax[1, 0].set_title('Thresholded (Li)')
ax[1, 1].hist(cell_im.ravel(), bins=256)
ax[1, 1].axvline(thresh_li, color='r')
ax[1, 1].set_xlim((0, 256))
for a in ax[:, 0]:
a.axis('off')
plt.show()
#Still not sure why the ravel is not working
# -
# *Step 9: Labeling the Image*
label_image = label(new_binary_li)
image_label_overlay = label2rgb(label_image, image=new_binary_li, bg_label=0)
# *Step 10: Viewing the labeled image with area boxes*
# +
fig, ax = plt.subplots(figsize=(10, 6))
ax.imshow(image_label_overlay)
for region in regionprops(label_image):
# take regions with large enough areas
if region.area >= 100:
# draw rectangle around segmented coins
minr, minc, maxr, maxc = region.bbox
rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
fill=False, edgecolor='red', linewidth=2)
ax.add_patch(rect)
ax.set_axis_off()
plt.tight_layout()
plt.show()
# -
# *Step 11: Filling in shape holes to see if it improves our labeling*
# +
new_binary_otsu = ndimage.binary_fill_holes(new_binary_li)
new_binary_otsu = morphology.remove_small_objects(new_binary_otsu, 500)  # operate on the hole-filled image
# -
label_image = label(new_binary_otsu)
image_label_overlay = label2rgb(label_image, image=new_binary_otsu, bg_label=0)
# +
fig, ax = plt.subplots(figsize=(10, 6))
ax.imshow(image_label_overlay)
for region in regionprops(label_image):
# take regions with large enough areas
if region.area >= 500:
# draw rectangle around segmented coins
minr, minc, maxr, maxc = region.bbox
rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
fill=False, edgecolor='red', linewidth=2)
ax.add_patch(rect)
ax.set_axis_off()
plt.tight_layout()
plt.show()
# -
# *Step 12: Getting a .csv file of multiple regionprops*
# +
from skimage import measure
props = measure.regionprops_table(label_image, properties=('perimeter',
'area',
'major_axis_length',
'minor_axis_length',))
# -
green_shape_features = pd.DataFrame(props)
# *Step 13: Viewing the Table*
green_shape_features
# *Step 14: Calculating the Circularity*
green_shape_features['circularity'] = 4*np.pi*green_shape_features.area/green_shape_features.perimeter**2
green_shape_features
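# As a sanity check on the circularity formula above (my addition): an ideal circle with area $\pi r^2$ and perimeter $2\pi r$ gives $4\pi A/P^2 = 1$, and any other shape scores below 1:

```python
import numpy as np

# Perfect circle: circularity should come out as exactly 1.
r = 5.0
area = np.pi * r**2
perimeter = 2 * np.pi * r
circularity = 4 * np.pi * area / perimeter**2
print(circularity)

# A 10x10 square (area 100, perimeter 40) scores pi/4, about 0.785.
square = 4 * np.pi * 100 / 40**2
print(square)
```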
# *Step 15: Calculating the Aspect Ratio*
green_shape_features['aspect_ratio'] = green_shape_features.major_axis_length/green_shape_features.minor_axis_length
green_shape_features
# *Step 16: Plotting some values*
green_shape_features['stain'] = 'iba1'
green_shape_features.plot(x ='perimeter', y='area', kind = 'scatter')
# *Step 17: Saving as a CSV file*
green_shape_features.to_csv('/Users/hhelmbre/Desktop/Sample_piglet_dataset/FGR_P4_2414_frontal_cortex_shape_features.csv')
# *Step 18: Individual Exploration*
# Apply these steps to a different stain, try to add new features from region props, try different plotting methods in the notebook, take the CSV and do some plotting of your own!
# Next Week: We will get into processing multiple images and into experimental treatment groups.
| pig_analysis_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Limits of Diversification
# +
import pandas as pd
import numpy as np
import matplotlib as plt
import ashmodule as ash
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# -
ash.get_hfi_returns(other_file = True)
ash.get_hfi_returns()
ash.get_hfi_returns(other_file=True, file_path = "data/ind30_m_size.csv")
idx_return = ash.get_idx_returns()
idx_nfirms = ash.get_hfi_returns(other_file = True, file_path="data/ind30_m_nfirms.csv",denominator = 1)
idx_size = ash.get_hfi_returns(other_file = True, file_path="data/ind30_m_size.csv",denominator = 1)
idx_return.shape
idx_nfirms.shape
idx_size.shape
idx_nfirms.head()
ind_mktcap = idx_nfirms * idx_size
ind_mktcap.shape
total_mktcap = ind_mktcap.sum(axis ="columns")
total_mktcap.plot(figsize = (12,6));
ind_capweight = ind_mktcap.divide(total_mktcap, axis = "rows")
ind_capweight.head()
ind_capweight["1926"].sum(axis = "columns")
ind_capweight.columns = ind_capweight.columns.str.strip()
l = ["Fin", "Books", "Beer"]
ind_capweight[l].plot(figsize = (12,6));
ind_capweight["2000":][l].plot(figsize = (12,7));
total_market_return = (ind_capweight * idx_return).sum(axis = "columns")
total_market_return.plot(figsize= (12,6))
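# On a toy example (my addition), the element-wise multiply followed by a row sum is exactly a cap-weighted average of industry returns:

```python
import pandas as pd

# Two industries over two months; each row of weights sums to 1.
weights = pd.DataFrame({"A": [0.6, 0.5], "B": [0.4, 0.5]})
returns = pd.DataFrame({"A": [0.10, 0.02], "B": [0.05, -0.02]})

# Month 1: 0.6*0.10 + 0.4*0.05 = 0.08; month 2: 0.5*0.02 + 0.5*(-0.02) = 0.0
portfolio = (weights * returns).sum(axis="columns")
print(portfolio)
```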
figsize = (12,6)
total_market_return.plot(figsize=figsize);
total_market_index = ash.drawdown(total_market_return).Wealth
total_market_index.plot(figsize=figsize);
total_market_return.head()
total_market_index.head()
total_market_index.tail()
ax = total_market_index["2000":].plot(figsize = figsize);
ax.set_ylim(bottom = 0);
total_market_index["1990":].plot(figsize = figsize);
total_market_index["1990":].rolling(window = 36).mean().plot(figsize = figsize);
total_market_index["1990":].rolling(window = 12).mean().plot(figsize = figsize);
tmi_tr36rets = total_market_return.rolling(window = 36).aggregate(ash.annualize_rets,periods_per_year = 12)
tmi_tr36rets.plot(figsize=figsize, label = "Tr 36 Mo return", legend = True);
total_market_return.plot(label ="Return", legend = True);
tmi_tr36rets["2000":].plot(figsize=figsize, label = "Tr 36 Mo return", legend = True);
total_market_return["2000":].plot(label ="Return", legend = True);
# # Rolling Correlation - MultiIndixes and `.groupby`
ts_corr = idx_return.rolling(window = 36).corr()
ts_corr.tail()
ts_corr.index.names = ["date", "industry"]
ts_corr.tail()
ind_tr36corr= ts_corr.groupby(level = 'date').apply(lambda cormat: cormat.values.mean())
ind_tr36corr.tail()
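# One caveat (my addition): `cormat.values.mean()` averages over the entire correlation matrix, including the diagonal of ones, which biases the average correlation upward. A sketch that averages only the off-diagonal entries:

```python
import numpy as np

def mean_offdiag(cormat_values):
    """Mean of the off-diagonal entries of a square correlation matrix."""
    n = cormat_values.shape[0]
    mask = ~np.eye(n, dtype=bool)
    return cormat_values[mask].mean()

# Example: 3 series with pairwise correlation 0.5 everywhere off the diagonal.
c = np.full((3, 3), 0.5)
np.fill_diagonal(c, 1.0)
print(c.mean())         # 0.666..., pulled up by the diagonal of ones
print(mean_offdiag(c))  # 0.5
```

# With the real data, ts_corr.groupby(level='date').apply(lambda cm: mean_offdiag(cm.values)) would remove that bias.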
ind_tr36corr.plot(figsize= figsize);
tmi_tr36rets.plot(figsize= figsize, label = "Tr 36 Mo Rets", legend = True);
ind_tr36corr.plot(figsize= figsize, label = "Tr 36 Mo Corr",legend = True, secondary_y = True);
tmi_tr36rets['2007':].plot(figsize= figsize, label = "Tr 36 Mo Rets", legend = True);
ind_tr36corr['2007':].plot(figsize= figsize, label = "Tr 36 Mo Corr",legend = True, secondary_y = True);
tmi_tr36rets.corr(ind_tr36corr)
| Introduction to Portfolio Construction and Analysis with Python/W3/.ipynb_checkpoints/Limits of Diversification-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Object Detection Using Convolutional Neural Networks
#
# So far, when we've talked about making predictions based on images,
# we were concerned only with classification.
# We asked questions like is this digit a "0", "1", ..., or "9?"
# or, does this picture depict a "cat" or a "dog"?
# Object detection is a more challenging task.
# Here our goal is not only to say *what* is in the image
# but also to recognize *where* it is in the image.
# As an example, consider the following image, which depicts two dogs and a cat together with their locations.
#
# 
#
# So object detection differs from image classification in a few ways.
# First, while a classifier outputs a single category per image,
# an object detector must be able to recognize multiple objects in a single image.
# Technically, this task is called *multiple object detection*,
# but most research in the area addresses the multiple object setting,
# so we'll abuse terminology just a little.
# Second, while classifiers need only to output probabilities over classes,
# object detectors must output both probabilities of class membership
# and also the coordinates that identify the location of the objects.
#
#
# In this chapter we'll demonstrate the Single Shot MultiBox Detector (SSD),
# a popular model for object detection that was first described in [this paper](https://arxiv.org/abs/1512.02325),
# and is straightforward to implement in MXNet Gluon.
#
#
# ## SSD: Single Shot MultiBox Detector
#
# The SSD model predicts anchor boxes at multiple scales. The model architecture is illustrated in the following figure.
#
# (the architecture figure was not preserved in this export). Starting from the image features extracted by the body network (scale 0), the class labels and the corresponding anchor boxes
# are predicted by `class_predictor` and `box_predictor`, respectively.
# We then downsample the representations to the next scale (scale 1).
# Again, at this new resolution, we predict both classes and anchor boxes.
# This downsampling and predicting routine
# can be repeated multiple times to obtain results at multiple resolution scales.
# Let's walk through the components one by one in a bit more detail.
#
# ### Default anchor boxes
#
# Since an anchor box can have arbitrary shape,
# we sample a set of anchor boxes as candidates.
# In particular, for each pixel, we sample multiple boxes
# centered at that pixel but with various sizes and aspect ratios.
# Assume the input size is $w \times h$,
# - for size $s\in (0,1]$, the generated box shape will be $ws \times hs$
# - for ratio $r > 0$, the generated box shape will be $w\sqrt{r} \times \frac{h}{\sqrt{r}}$
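# As a quick illustration of the two formulas above, here is a plain-NumPy
# sketch (our own helper, not part of MXNet's `MultiBoxPrior`, which also
# normalizes coordinates) that enumerates the *n+m-1* box shapes for one pixel:

```python
import numpy as np

def anchor_shapes(w, h, sizes, ratios):
    """Combine a size s with a ratio r to get a box of shape
    (w * s * sqrt(r)) x (h * s / sqrt(r)), following the rules above."""
    shapes = []
    for s in sizes:            # n boxes: each size paired with ratios[0]
        r = ratios[0]
        shapes.append((w * s * np.sqrt(r), h * s / np.sqrt(r)))
    for r in ratios[1:]:       # m - 1 boxes: sizes[0] paired with each remaining ratio
        s = sizes[0]
        shapes.append((w * s * np.sqrt(r), h * s / np.sqrt(r)))
    return shapes

shapes = anchor_shapes(40, 40, sizes=[.5, .25, .1], ratios=[1, 2, .5])
print(len(shapes))  # n + m - 1 = 3 + 3 - 1 = 5
```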
#
# We can sample the boxes using the operator `MultiBoxPrior`.
# It accepts *n* sizes and *m* ratios and generates *n+m-1* boxes for each pixel.
# The first *n* boxes (index $i < n$) are generated from `sizes[i], ratios[0]`;
# the remaining *m-1* boxes use `sizes[0], ratios[i-n+1]`.
# + attributes={"classes": [], "id": "", "n": "1"}
import mxnet as mx
from mxnet import nd
from mxnet.contrib.ndarray import MultiBoxPrior
n = 40
# shape: batch x channel x height x width
x = nd.random_uniform(shape=(1, 3, n, n))
y = MultiBoxPrior(x, sizes=[.5, .25, .1], ratios=[1, 2, .5])
# the first anchor box generated for pixel at (20,20)
# its format is (x_min, y_min, x_max, y_max)
boxes = y.reshape((n, n, -1, 4))
print('The first anchor box at row 21, column 21:', boxes[20, 20, 0, :])
# -
# We can visualize all the anchor boxes generated for one pixel of a feature map of a given size.
# + attributes={"classes": [], "id": "", "n": "2"}
import matplotlib.pyplot as plt
def box_to_rect(box, color, linewidth=3):
"""convert an anchor box to a matplotlib rectangle"""
box = box.asnumpy()
return plt.Rectangle(
(box[0], box[1]), (box[2]-box[0]), (box[3]-box[1]),
fill=False, edgecolor=color, linewidth=linewidth)
colors = ['blue', 'green', 'red', 'black', 'magenta']
plt.imshow(nd.ones((n, n, 3)).asnumpy())
anchors = boxes[20, 20, :, :]
for i in range(anchors.shape[0]):
plt.gca().add_patch(box_to_rect(anchors[i,:]*n, colors[i]))
plt.show()
# -
# ### Predict classes
#
# For each anchor box, we want to predict the associated class label.
# We make this prediction by using a convolution layer.
# We choose a kernel of size $3\times 3$ with padding size $(1, 1)$
# so that the output will have the same width and height as the input.
# The confidence scores for the anchor box class labels are stored in channels.
# In particular, for the *i*-th anchor box:
#
# - channel `i*(num_class+1)` stores the score that this box contains only background
# - channel `i*(num_class+1)+1+j` stores the score that this box contains an object of the *j*-th class
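# The channel layout above can be captured in a small helper (illustrative
# only; this function is our own, not something the predictors expose):

```python
def cls_channel(i, j, num_class):
    """Channel holding anchor i's score for class j,
    with j = -1 meaning the background score, per the layout above."""
    if j < 0:
        return i * (num_class + 1)        # background channel for anchor i
    return i * (num_class + 1) + 1 + j    # channel for the j-th object class

# with 10 classes, anchor 2's background score lives in channel 22
# and its score for class 3 lives in channel 26
print(cls_channel(2, -1, 10), cls_channel(2, 3, 10))
```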
# + attributes={"classes": [], "id": "", "n": "3"}
from mxnet.gluon import nn
def class_predictor(num_anchors, num_classes):
"""return a layer to predict classes"""
return nn.Conv2D(num_anchors * (num_classes + 1), 3, padding=1)
cls_pred = class_predictor(5, 10)
cls_pred.initialize()
x = nd.zeros((2, 3, 20, 20))
print('Class prediction', cls_pred(x).shape)
# -
# ### Predict anchor boxes
#
# The goal is to predict how to transform the current anchor box into the correct box. That is, assume $b$ is one of the sampled default boxes and $Y$ is the ground truth; we want to predict the delta positions $\Delta(Y, b)$, a vector of length 4.
#
# More specifically, we define the delta vector as:
# [$t_x$, $t_y$, $t_{width}$, $t_{height}$], where
#
# - $t_x = (Y_x - b_x) / b_{width}$
# - $t_y = (Y_y - b_y) / b_{height}$
# - $t_{width} = (Y_{width} - b_{width}) / b_{width}$
# - $t_{height} = (Y_{height} - b_{height}) / b_{height}$
#
# Normalizing the deltas with box width/height tends to result in better convergence behavior.
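# To make the encoding concrete, here is a small sketch (our own helper
# names; the actual training targets come from `MultiBoxTarget` later) that
# computes the delta vector from the formulas above and inverts it:

```python
def encode_box(Y, b):
    # boxes are (center_x, center_y, width, height); formulas as above
    return [(Y[0] - b[0]) / b[2],
            (Y[1] - b[1]) / b[3],
            (Y[2] - b[2]) / b[2],
            (Y[3] - b[3]) / b[3]]

def decode_box(t, b):
    # invert encode_box: apply predicted deltas t to default box b
    return [b[0] + t[0] * b[2], b[1] + t[1] * b[3],
            b[2] * (1 + t[2]), b[3] * (1 + t[3])]

b = (0.5, 0.5, 0.2, 0.4)       # a default box
Y = (0.55, 0.45, 0.25, 0.35)   # ground truth
t = encode_box(Y, b)
decoded = decode_box(t, b)     # recovers Y up to floating point
```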
#
# Similar to classes, we use a convolution layer here. The only difference is that the output channel size is now `num_anchors * 4`, with the predicted delta positions for the *i*-th box stored from channel `i*4` to `i*4+3`.
# + attributes={"classes": [], "id": "", "n": "4"}
def box_predictor(num_anchors):
"""return a layer to predict delta locations"""
return nn.Conv2D(num_anchors * 4, 3, padding=1)
box_pred = box_predictor(10)
box_pred.initialize()
x = nd.zeros((2, 3, 20, 20))
print('Box prediction', box_pred(x).shape)
# -
# ### Down-sample features
#
# Each time, we downsample the features by half.
# This can be achieved by a simple pooling layer with pooling size 2.
# We may also stack two convolution, batch normalization and ReLU blocks
# before the pooling layer to make the network deeper.
# + attributes={"classes": [], "id": "", "n": "5"}
def down_sample(num_filters):
"""stack two Conv-BatchNorm-Relu blocks and then a pooling layer
to halve the feature size"""
out = nn.HybridSequential()
for _ in range(2):
out.add(nn.Conv2D(num_filters, 3, strides=1, padding=1))
out.add(nn.BatchNorm(in_channels=num_filters))
out.add(nn.Activation('relu'))
out.add(nn.MaxPool2D(2))
return out
blk = down_sample(10)
blk.initialize()
x = nd.zeros((2, 3, 20, 20))
print('Before', x.shape, 'after', blk(x).shape)
# -
# ### Manage predictions from multiple layers
#
# A key property of SSD is that predictions are made
# at multiple layers with shrinking spatial size.
# Thus, we have to handle predictions from multiple feature layers.
# One idea is to concatenate them along the channel dimension,
# with each entry predicting a corresponding value (class or box) for each default anchor.
# We use the class predictor as an example; the box predictor follows the same rule.
# + attributes={"classes": [], "id": "", "n": "6"}
# a certain feature map with 20x20 spatial shape
feat1 = nd.zeros((2, 8, 20, 20))
print('Feature map 1', feat1.shape)
cls_pred1 = class_predictor(5, 10)
cls_pred1.initialize()
y1 = cls_pred1(feat1)
print('Class prediction for feature map 1', y1.shape)
# down-sample
ds = down_sample(16)
ds.initialize()
feat2 = ds(feat1)
print('Feature map 2', feat2.shape)
cls_pred2 = class_predictor(3, 10)
cls_pred2.initialize()
y2 = cls_pred2(feat2)
print('Class prediction for feature map 2', y2.shape)
# + attributes={"classes": [], "id": "", "n": "7"}
def flatten_prediction(pred):
return nd.flatten(nd.transpose(pred, axes=(0, 2, 3, 1)))
def concat_predictions(preds):
return nd.concat(*preds, dim=1)
flat_y1 = flatten_prediction(y1)
print('Flatten class prediction 1', flat_y1.shape)
flat_y2 = flatten_prediction(y2)
print('Flatten class prediction 2', flat_y2.shape)
print('Concat class predictions', concat_predictions([flat_y1, flat_y2]).shape)
# -
# ### Body network
#
# The body network is used to extract features from the raw pixel inputs. Common choices follow the architectures of state-of-the-art convolutional neural networks for image classification. For demonstration purposes, we just stack several downsampling blocks to form the body network.
# + attributes={"classes": [], "id": "", "n": "8"}
from mxnet import gluon
def body():
"""return the body network"""
out = nn.HybridSequential()
for nfilters in [16, 32, 64]:
out.add(down_sample(nfilters))
return out
bnet = body()
bnet.initialize()
x = nd.zeros((2, 3, 256, 256))
print('Body network', [y.shape for y in bnet(x)])
# -
# ### Create a toy SSD model
#
# Now, let's create a toy SSD model that takes images of resolution $256 \times 256$ as input.
# + attributes={"classes": [], "id": "", "n": "9"}
def toy_ssd_model(num_anchors, num_classes):
"""return SSD modules"""
downsamples = nn.Sequential()
class_preds = nn.Sequential()
box_preds = nn.Sequential()
downsamples.add(down_sample(128))
downsamples.add(down_sample(128))
downsamples.add(down_sample(128))
for scale in range(5):
class_preds.add(class_predictor(num_anchors, num_classes))
box_preds.add(box_predictor(num_anchors))
return body(), downsamples, class_preds, box_preds
print(toy_ssd_model(5, 2))
# -
# ### Forward
#
# Given an input and the model, we can run the forward pass.
# + attributes={"classes": [], "id": "", "n": "10"}
def toy_ssd_forward(x, body, downsamples, class_preds, box_preds, sizes, ratios):
# extract feature with the body network
x = body(x)
# for each scale, add anchors, box and class predictions,
# then compute the input to next scale
default_anchors = []
predicted_boxes = []
predicted_classes = []
for i in range(5):
default_anchors.append(MultiBoxPrior(x, sizes=sizes[i], ratios=ratios[i]))
predicted_boxes.append(flatten_prediction(box_preds[i](x)))
predicted_classes.append(flatten_prediction(class_preds[i](x)))
if i < 3:
x = downsamples[i](x)
elif i == 3:
# simply use the pooling layer
x = nd.Pooling(x, global_pool=True, pool_type='max', kernel=(4, 4))
return default_anchors, predicted_classes, predicted_boxes
# -
# ### Put all things together
# + attributes={"classes": [], "id": "", "n": "11"}
from mxnet import gluon
class ToySSD(gluon.Block):
def __init__(self, num_classes, **kwargs):
super(ToySSD, self).__init__(**kwargs)
# anchor box sizes for 5 feature scales
self.anchor_sizes = [[.2, .272], [.37, .447], [.54, .619], [.71, .79], [.88, .961]]
# anchor box ratios for 5 feature scales
self.anchor_ratios = [[1, 2, .5]] * 5
self.num_classes = num_classes
with self.name_scope():
self.body, self.downsamples, self.class_preds, self.box_preds = toy_ssd_model(4, num_classes)
def forward(self, x):
default_anchors, predicted_classes, predicted_boxes = toy_ssd_forward(x, self.body, self.downsamples,
self.class_preds, self.box_preds, self.anchor_sizes, self.anchor_ratios)
# we want to concatenate anchors, class predictions, box predictions from different layers
anchors = concat_predictions(default_anchors)
box_preds = concat_predictions(predicted_boxes)
class_preds = concat_predictions(predicted_classes)
# it is better to have class predictions reshaped for softmax computation
class_preds = nd.reshape(class_preds, shape=(0, -1, self.num_classes + 1))
return anchors, class_preds, box_preds
# -
# ### Outputs of ToySSD
# + attributes={"classes": [], "id": "", "n": "12"}
# instantiate a ToySSD network with 2 classes
net = ToySSD(2)
net.initialize()
x = nd.zeros((1, 3, 256, 256))
default_anchors, class_predictions, box_predictions = net(x)
print('Outputs:', 'anchors', default_anchors.shape, 'class prediction', class_predictions.shape, 'box prediction', box_predictions.shape)
# -
# ## Dataset
#
# For demonstration purposes, we'll train our model to detect Pikachu in the wild.
# We generated a synthetic toy dataset by rendering images from open-sourced 3D Pikachu models.
# The dataset consists of 1000 Pikachus with random pose/scale/position placed onto random background images.
# The exact locations are recorded as ground-truth for training and validation.
#
# 
#
#
# ### Download dataset
# + attributes={"classes": [], "id": "", "n": "13"}
from mxnet.test_utils import download
import os.path as osp
def verified(file_path, sha1hash):
import hashlib
sha1 = hashlib.sha1()
with open(file_path, 'rb') as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
matched = sha1.hexdigest() == sha1hash
if not matched:
print('Found hash mismatch in file {}, possibly due to incomplete download.'.format(file_path))
return matched
url_format = 'https://apache-mxnet.s3-accelerate.amazonaws.com/gluon/dataset/pikachu/{}'
hashes = {'train.rec': 'e6bcb6ffba1ac04ff8a9b1115e650af56ee969c8',
'train.idx': 'dcf7318b2602c06428b9988470c731621716c393',
'val.rec': 'd6c33f799b4d058e82f2cb5bd9a976f69d72d520'}
for k, v in hashes.items():
fname = 'pikachu_' + k
target = osp.join('data', fname)
url = url_format.format(k)
if not osp.exists(target) or not verified(target, v):
print('Downloading', target, url)
download(url, fname=fname, dirname='data', overwrite=True)
# -
# ### Load dataset
# + attributes={"classes": [], "id": "", "n": "14"}
import mxnet.image as image
data_shape = 256
batch_size = 32
def get_iterators(data_shape, batch_size):
class_names = ['pikachu']
num_class = len(class_names)
train_iter = image.ImageDetIter(
batch_size=batch_size,
data_shape=(3, data_shape, data_shape),
path_imgrec='./data/pikachu_train.rec',
path_imgidx='./data/pikachu_train.idx',
shuffle=True,
mean=True,
rand_crop=1,
min_object_covered=0.95,
max_attempts=200)
val_iter = image.ImageDetIter(
batch_size=batch_size,
data_shape=(3, data_shape, data_shape),
path_imgrec='./data/pikachu_val.rec',
shuffle=False,
mean=True)
return train_iter, val_iter, class_names, num_class
train_data, test_data, class_names, num_class = get_iterators(data_shape, batch_size)
batch = train_data.next()
print(batch)
# -
# ### Illustration
#
# Let's display one image loaded by ImageDetIter.
# + attributes={"classes": [], "id": "", "n": "15"}
import numpy as np
img = batch.data[0][0].asnumpy() # grab the first image, convert to numpy array
img = img.transpose((1, 2, 0)) # we want channel to be the last dimension
img += np.array([123, 117, 104])
img = img.astype(np.uint8) # use uint8 (0-255)
# draw bounding boxes on image
for label in batch.label[0][0].asnumpy():
if label[0] < 0:
break
print(label)
xmin, ymin, xmax, ymax = [int(x * data_shape) for x in label[1:5]]
rect = plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False, edgecolor=(1, 0, 0), linewidth=3)
plt.gca().add_patch(rect)
plt.imshow(img)
plt.show()
# -
# ## Train
#
# ### Losses
#
# Network predictions will be penalized for incorrect class predictions and wrong box deltas.
# + attributes={"classes": [], "id": "", "n": "16"}
from mxnet.contrib.ndarray import MultiBoxTarget
def training_targets(default_anchors, class_predicts, labels):
class_predicts = nd.transpose(class_predicts, axes=(0, 2, 1))
z = MultiBoxTarget(*[default_anchors, labels, class_predicts])
box_target = z[0] # box offset target for (x, y, width, height)
box_mask = z[1] # mask is used to ignore box offsets we don't want to penalize, e.g. negative samples
cls_target = z[2] # cls_target is an array of labels for all anchor boxes
return box_target, box_mask, cls_target
# -
# Pre-defined losses are provided in the `gluon.loss` package; however, we can also define losses manually.
#
# First, we need a Focal Loss for class predictions.
# + attributes={"classes": [], "id": "", "n": "17"}
class FocalLoss(gluon.loss.Loss):
def __init__(self, axis=-1, alpha=0.25, gamma=2, batch_axis=0, **kwargs):
super(FocalLoss, self).__init__(None, batch_axis, **kwargs)
self._axis = axis
self._alpha = alpha
self._gamma = gamma
def hybrid_forward(self, F, output, label):
output = F.softmax(output)
pt = F.pick(output, label, axis=self._axis, keepdims=True)
loss = -self._alpha * ((1 - pt) ** self._gamma) * F.log(pt)
return F.mean(loss, axis=self._batch_axis, exclude=True)
# cls_loss = gluon.loss.SoftmaxCrossEntropyLoss()
cls_loss = FocalLoss()
print(cls_loss)
# -
# Next, we need a SmoothL1Loss for box predictions.
# + attributes={"classes": [], "id": "", "n": "18"}
class SmoothL1Loss(gluon.loss.Loss):
def __init__(self, batch_axis=0, **kwargs):
super(SmoothL1Loss, self).__init__(None, batch_axis, **kwargs)
def hybrid_forward(self, F, output, label, mask):
loss = F.smooth_l1((output - label) * mask, scalar=1.0)
return F.mean(loss, self._batch_axis, exclude=True)
box_loss = SmoothL1Loss()
print(box_loss)
# -
# ### Evaluation metrics
#
# Here, we define two metrics that we'll use to evaluate our performance while training.
# You're already familiar with accuracy unless you've been naughty and skipped straight to object detection.
# We use the accuracy metric to assess the quality of the class predictions.
# Mean absolute error (MAE) is just the L1 distance, introduced in our [linear algebra chapter](../chapter01_crashcourse/linear-algebra.ipynb).
# We use this to determine how close the coordinates of the predicted bounding boxes are to the ground-truth coordinates.
# Because we are jointly solving both a classification problem and a regression problem, we need an appropriate metric for each task.
# + attributes={"classes": [], "id": "", "n": "19"}
cls_metric = mx.metric.Accuracy()
box_metric = mx.metric.MAE() # measure absolute difference between prediction and target
# + attributes={"classes": [], "id": "", "n": "20"}
### Set context for training
ctx = mx.gpu() # it may take too long to train using CPU
try:
_ = nd.zeros(1, ctx=ctx)
# pad label for cuda implementation
train_data.reshape(label_shape=(3, 5))
train_data = test_data.sync_label_shape(train_data)
except mx.base.MXNetError as err:
print('No GPU enabled, fall back to CPU, sit back and be patient...')
ctx = mx.cpu()
# -
# ### Initialize parameters
# + attributes={"classes": [], "id": "", "n": "21"}
net = ToySSD(num_class)
net.initialize(mx.init.Xavier(magnitude=2), ctx=ctx)
# -
# ### Set up trainer
# + attributes={"classes": [], "id": "", "n": "22"}
net.collect_params().reset_ctx(ctx)
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1, 'wd': 5e-4})
# -
# ### Start training
#
# Optionally, we load a pretrained model for demonstration purposes. Set `from_scratch = True` to train from scratch, which may take more than 30 minutes on a single capable GPU.
# + attributes={"classes": [], "id": "", "n": "23"}
epochs = 150 # set larger to get better performance
log_interval = 20
from_scratch = False # set to True to train from scratch
if from_scratch:
start_epoch = 0
else:
start_epoch = 148
pretrained = 'ssd_pretrained.params'
sha1 = 'fbb7d872d76355fff1790d864c2238decdb452bc'
url = 'https://apache-mxnet.s3-accelerate.amazonaws.com/gluon/models/ssd_pikachu-fbb7d872.params'
if not osp.exists(pretrained) or not verified(pretrained, sha1):
print('Downloading', pretrained, url)
download(url, fname=pretrained, overwrite=True)
net.load_params(pretrained, ctx)
# + attributes={"classes": [], "id": "", "n": "24"}
import time
from mxnet import autograd as ag
for epoch in range(start_epoch, epochs):
# reset iterator and tick
train_data.reset()
cls_metric.reset()
box_metric.reset()
tic = time.time()
# iterate through all batch
for i, batch in enumerate(train_data):
btic = time.time()
# record gradients
with ag.record():
x = batch.data[0].as_in_context(ctx)
y = batch.label[0].as_in_context(ctx)
default_anchors, class_predictions, box_predictions = net(x)
box_target, box_mask, cls_target = training_targets(default_anchors, class_predictions, y)
# losses
loss1 = cls_loss(class_predictions, cls_target)
loss2 = box_loss(box_predictions, box_target, box_mask)
# sum all losses
loss = loss1 + loss2
# backpropagate
loss.backward()
# apply
trainer.step(batch_size)
# update metrics
cls_metric.update([cls_target], [nd.transpose(class_predictions, (0, 2, 1))])
box_metric.update([box_target], [box_predictions * box_mask])
if (i + 1) % log_interval == 0:
name1, val1 = cls_metric.get()
name2, val2 = box_metric.get()
print('[Epoch %d Batch %d] speed: %f samples/s, training: %s=%f, %s=%f'
% (epoch, i, batch_size / (time.time() - btic), name1, val1, name2, val2))
# end of epoch logging
name1, val1 = cls_metric.get()
name2, val2 = box_metric.get()
print('[Epoch %d] training: %s=%f, %s=%f'%(epoch, name1, val1, name2, val2))
print('[Epoch %d] time cost: %f'%(epoch, time.time()-tic))
# we can save the trained parameters to disk
net.save_params('ssd_%d.params' % epochs)
# -
# ## Test
#
# Testing is similar to training, except that we don't need to compute gradients or training targets. Instead, we take the predictions from the network output and combine them to get the real detection results.
#
# ### Prepare the test data
# + attributes={"classes": [], "id": "", "n": "25"}
import numpy as np
def preprocess(im):
"""Takes an image and apply preprocess"""
# resize to data_shape
im = image.imresize(im, data_shape, data_shape)
# swap BGR to RGB
# im = im[:, :, (2, 1, 0)]
# convert to float before subtracting mean
im = im.astype('float32')
# subtract mean
im -= nd.array([123, 117, 104])
# organize as [batch-channel-height-width]
im = im.transpose((2, 0, 1))
im = im.expand_dims(axis=0)
return im
with open('../img/pikachu.jpg', 'rb') as f:
im = image.imdecode(f.read())
x = preprocess(im)
print('x', x.shape)
# -
# ### Network inference
#
# In a single line of code!
# + attributes={"classes": [], "id": "", "n": "26"}
# if pre-trained model is provided, we can load it
# net.load_params('ssd_%d.params' % epochs, ctx)
anchors, cls_preds, box_preds = net(x.as_in_context(ctx))
print('anchors', anchors)
print('class predictions', cls_preds)
print('box delta predictions', box_preds)
# -
# ### Convert predictions to real object detection results
# + attributes={"classes": [], "id": "", "n": "27"}
from mxnet.contrib.ndarray import MultiBoxDetection
# convert predictions to probabilities using softmax
cls_probs = nd.SoftmaxActivation(nd.transpose(cls_preds, (0, 2, 1)), mode='channel')
# apply shifts to anchors boxes, non-maximum-suppression, etc...
output = MultiBoxDetection(*[cls_probs, box_preds, anchors], force_suppress=True, clip=False)
print(output)
# -
# Each row in the output corresponds to a detection box, in the format [class_id, confidence, xmin, ymin, xmax, ymax].
#
# Most of the detection results are -1, indicating that they either have very small confidence scores or have been suppressed by non-maximum suppression.
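# Filtering out those rows is straightforward; a minimal NumPy sketch of the
# row format above (synthetic data and our own helper name, for illustration):

```python
import numpy as np

def valid_detections(out, thresh=0.5):
    # keep rows with a real class id and confidence above the threshold,
    # per the [class_id, confidence, xmin, ymin, xmax, ymax] row format
    out = np.asarray(out)
    keep = (out[:, 0] >= 0) & (out[:, 1] >= thresh)
    return out[keep]

fake = np.array([[0, 0.9, .1, .1, .4, .4],
                 [-1, 0.0, 0., 0., 0., 0.],    # suppressed row
                 [0, 0.2, .5, .5, .7, .7]])    # below threshold
kept = valid_detections(fake, thresh=0.45)
print(kept.shape[0])  # 1
```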
#
# ### Display results
# + attributes={"classes": [], "id": "", "n": "28"}
def display(img, out, thresh=0.5):
import random
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (10,10)
pens = dict()
plt.clf()
plt.imshow(img.asnumpy())
for det in out:
cid = int(det[0])
if cid < 0:
continue
score = det[1]
if score < thresh:
continue
if cid not in pens:
pens[cid] = (random.random(), random.random(), random.random())
scales = [img.shape[1], img.shape[0]] * 2
xmin, ymin, xmax, ymax = [int(p * s) for p, s in zip(det[2:6].tolist(), scales)]
rect = plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False,
edgecolor=pens[cid], linewidth=3)
plt.gca().add_patch(rect)
text = class_names[cid]
plt.gca().text(xmin, ymin-2, '{:s} {:.3f}'.format(text, score),
bbox=dict(facecolor=pens[cid], alpha=0.5),
fontsize=12, color='white')
plt.show()
display(im, output[0].asnumpy(), thresh=0.45)
# -
# ## Conclusion
#
# Detection is harder than classification: we want not only class probabilities, but also the locations of the different objects, including potentially small ones. Using a sliding window together with a good classifier might be an option; however, we have shown that with a properly designed convolutional neural network, we can do single-shot detection that is blazingly fast and accurate!
#
# For whinges or inquiries, [open an issue on GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)
| chapter_computer-vision/ssd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import print_function
import os
import argparse
from glob import glob
from PIL import Image
import numpy as np # used below for the learning-rate schedule; previously only implicitly available via utils
import tensorflow as tf
from model import lowlight_enhance
from utils import *
parser = argparse.ArgumentParser(description='')
parser.add_argument('--use_gpu', dest='use_gpu', type=int, default=1, help='gpu flag, 1 for GPU and 0 for CPU')
parser.add_argument('--gpu_idx', dest='gpu_idx', default="0", help='GPU idx')
parser.add_argument('--gpu_mem', dest='gpu_mem', type=float, default=0.5, help="0 to 1, gpu memory usage")
parser.add_argument('--phase', dest='phase', default='train', help='train or test')
parser.add_argument('--epoch', dest='epoch', type=int, default=100, help='number of total epoches')
parser.add_argument('--batch_size', dest='batch_size', type=int, default=16, help='number of samples in one batch')
parser.add_argument('--patch_size', dest='patch_size', type=int, default=48, help='patch size')
parser.add_argument('--start_lr', dest='start_lr', type=float, default=0.001, help='initial learning rate for adam')
parser.add_argument('--eval_every_epoch', dest='eval_every_epoch', default=20, help='evaluating and saving checkpoints every # epoch')
parser.add_argument('--checkpoint_dir', dest='ckpt_dir', default='./checkpoint', help='directory for checkpoints')
parser.add_argument('--sample_dir', dest='sample_dir', default='./sample', help='directory for evaluating outputs')
parser.add_argument('--save_dir', dest='save_dir', default='./test_results', help='directory for testing outputs')
parser.add_argument('--test_dir', dest='test_dir', default='./data/test/low', help='directory for testing inputs')
parser.add_argument('--decom', dest='decom', default=0, help='decom flag, 0 for enhanced results only and 1 for decomposition results')
args = parser.parse_args()
def lowlight_train(lowlight_enhance):
if not os.path.exists(args.ckpt_dir):
os.makedirs(args.ckpt_dir)
if not os.path.exists(args.sample_dir):
os.makedirs(args.sample_dir)
lr = args.start_lr * np.ones([args.epoch])
lr[20:] = lr[0] / 10.0
train_low_data = []
train_high_data = []
train_low_data_names = glob('./data/our485/low/*.png') + glob('./data/syn/low/*.png')
train_low_data_names.sort()
train_high_data_names = glob('./data/our485/high/*.png') + glob('./data/syn/high/*.png')
train_high_data_names.sort()
assert len(train_low_data_names) == len(train_high_data_names)
print('[*] Number of training data: %d' % len(train_low_data_names))
for idx in range(len(train_low_data_names)):
low_im = load_images(train_low_data_names[idx])
train_low_data.append(low_im)
high_im = load_images(train_high_data_names[idx])
train_high_data.append(high_im)
eval_low_data = []
eval_high_data = []
eval_low_data_name = glob('./data/eval/low/*.*')
for idx in range(len(eval_low_data_name)):
eval_low_im = load_images(eval_low_data_name[idx])
eval_low_data.append(eval_low_im)
lowlight_enhance.train(train_low_data, train_high_data, eval_low_data, batch_size=args.batch_size, patch_size=args.patch_size, epoch=args.epoch, lr=lr, sample_dir=args.sample_dir, ckpt_dir=os.path.join(args.ckpt_dir, 'Decom'), eval_every_epoch=args.eval_every_epoch, train_phase="Decom")
lowlight_enhance.train(train_low_data, train_high_data, eval_low_data, batch_size=args.batch_size, patch_size=args.patch_size, epoch=args.epoch, lr=lr, sample_dir=args.sample_dir, ckpt_dir=os.path.join(args.ckpt_dir, 'Relight'), eval_every_epoch=args.eval_every_epoch, train_phase="Relight")
def lowlight_test(lowlight_enhance):
if args.test_dir is None:
print("[!] please provide --test_dir")
exit(0)
if not os.path.exists(args.save_dir):
os.makedirs(args.save_dir)
test_low_data_name = glob(os.path.join(args.test_dir) + '/*.*')
test_low_data = []
test_high_data = []
for idx in range(len(test_low_data_name)):
test_low_im = load_images(test_low_data_name[idx])
test_low_data.append(test_low_im)
lowlight_enhance.test(test_low_data, test_high_data, test_low_data_name, save_dir=args.save_dir, decom_flag=args.decom)
def main(_):
if args.use_gpu:
print("[*] GPU\n")
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_idx
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_mem)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
model = lowlight_enhance(sess)
if args.phase == 'train':
lowlight_train(model)
elif args.phase == 'test':
lowlight_test(model)
else:
print('[!] Unknown phase')
exit(0)
else:
print("[*] CPU\n")
with tf.Session() as sess:
model = lowlight_enhance(sess)
if args.phase == 'train':
lowlight_train(model)
elif args.phase == 'test':
lowlight_test(model)
else:
print('[!] Unknown phase')
exit(0)
if __name__ == '__main__':
tf.app.run()
| main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
import scipy as sp
import pandas as pd
# Sometimes for my own understanding I like to build predictive models (almost) completely manually.
#
# I enjoy gaining a more visceral understanding by doing so.
#
# The example below is an implementation of ARIMA (AutoRegressive Integrated Moving Average).
# ### this was hacked together, don't change the dimensions of the matrices
# +
t = np.arange(0, 60)
amplitude = .3
frequency = .1
noise = .1
slope = -.08
x = amplitude * np.sin((2*np.pi*frequency*t)) + slope*t + noise*np.random.randn(t.shape[0])
plt.plot(t, x)
# -
# ### ARIMA by differencing
# +
x_diff_1 = x[1:] - x[0:-1]
plt.plot(t[1:], x_diff_1)
# -
# ### first derivative to make it a stationary system
# +
x_lag_1 = x_diff_1[1:] - x_diff_1[0:-1]
x_lag_1 = x_lag_1[1:].reshape(-1, 1)
x_lag_2 = x_diff_1[2:] - x_diff_1[0: -2]
x_lag_2 = x_lag_2.reshape(-1, 1)
assert x_lag_2.shape == x_lag_1.shape
x_lags = np.concatenate((x_lag_1, x_lag_2), axis=1)
x_now = x_diff_1[x_diff_1.shape[0] - x_lags.shape[0]:].reshape(-1, 1)
x_lags = np.concatenate((x_lags, (x_lags[:, 0]**2).reshape(-1, 1)), axis=1)
x_lags = np.concatenate((x_lags, (x_lags[:, 1]**2).reshape(-1, 1)), axis=1)
x_lags = np.concatenate((x_lags, (x_lags[:, 0] * x_lags[:, 1] * 2).reshape(-1, 1)), axis=1)
x_lags_w_ones = np.concatenate((x_lags, np.ones((x_lags.shape[0], 1))), axis=1)
# -
# Using the normal equation to find the betas
# solving $X\beta = y$
#
# with $X^TX\beta = X^Ty$
x_t_x = x_lags_w_ones.T.dot(x_lags_w_ones)
x_t_y = x_lags_w_ones.T.dot(x_now)
# $(X^TX)^{-1}X^Ty = \beta$
# who has time to invert a matrix by hand anyway?
betas = np.linalg.inv(x_t_x).dot(x_t_y)
betas
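# explicitly inverting $X^TX$ works on this toy problem, but `np.linalg.lstsq`
# solves the same least-squares system without forming the inverse, which is
# more stable when $X$ is ill-conditioned; a quick check on synthetic data
# (variable names here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.concatenate((rng.standard_normal((50, 3)), np.ones((50, 1))), axis=1)  # bias column, as above
true_betas = np.array([[1.5], [-2.0], [0.5], [3.0]])
y = X.dot(true_betas)

betas_ne = np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))   # normal equation, as in the cell above
betas_ls, *_ = np.linalg.lstsq(X, y, rcond=None)       # QR/SVD-based solve
print(np.allclose(betas_ne, betas_ls))  # True
```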
# just for differences
plt.plot(t[3:], x_now, label='Actual')
x_predicted = x_lags_w_ones.dot(betas)
plt.plot(t[3:], x_predicted, label='Predicted')
plt.legend()
# but we need to integrate (cumulatively sum) to get back to predictions of the series itself rather than just its differences
# +
plt.plot(t[4:], x[4:], label='Actual')
plt.plot(t[4:], np.cumsum(x_predicted[1: ]), label='Predicted')
plt.legend()
# -
| ARIMA_FromScratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
class SobelFilter:
def filter_v(self, img):
ksize = 3
pad = ksize // 2
K = np.array([ [1, 2, 1], [0, 0, 0], [-1, -2, -1] ], dtype=np.float32)
H, W = img.shape
input_image = np.pad(img, (1, 1), 'edge').astype(np.float32)
output_image = input_image.copy()
for i in range(H):
for j in range(W):
                output_image[pad+i, pad+j] = np.sum(K * input_image[i:i+ksize, j:j+ksize])  # Sobel correlates with a sum, not a mean
output_image = output_image[pad:pad+H, pad:pad+W]
return output_image
def filter_h(self, img):
ksize = 3
pad = ksize // 2
K = np.array([ [1, 0, -1], [2, 0, -2], [1, 0, -1] ], dtype=np.float32)
H, W = img.shape
input_image = np.pad(img, (1, 1), 'edge').astype(np.float32)
output_image = input_image.copy()
for i in range(H):
for j in range(W):
                output_image[pad+i, pad+j] = np.sum(K * input_image[i:i+ksize, j:j+ksize])  # Sobel correlates with a sum, not a mean
output_image = output_image[pad:pad+H, pad:pad+W]
return output_image
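One way to sanity-check a hand-rolled filter like the one above is to correlate its kernel with a synthetic step edge; the response should be large across the edge and zero in flat regions. A standalone NumPy sketch (independent of the class):

```python
import numpy as np

K = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=np.float32)  # horizontal Sobel kernel

# Synthetic image: dark left half, bright right half -> a vertical edge
img = np.zeros((5, 6), dtype=np.float32)
img[:, 3:] = 1.0

def correlate_at(img, K, i, j):
    # 3x3 correlation with the window's top-left corner at (i, j)
    return np.sum(K * img[i:i+3, j:j+3])

edge_response = correlate_at(img, K, 1, 1)  # window straddles the edge
flat_response = correlate_at(img, K, 1, 3)  # window lies in the uniform bright region
print(abs(edge_response) > abs(flat_response))  # True
```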
class GaussianFilter:
def apply(self, img, ksize=3, sigma=3):
gaussian_filter = np.zeros((ksize, ksize), dtype=np.float32)
khalf = (ksize - 1) // 2
for i in range(ksize):
for j in range(ksize):
y = i - khalf
x = j - khalf
a = -(y*y+x*x)/(2*(sigma**2))
gaussian_filter[i, j] = 1/(2*np.pi*(sigma**2)) * np.exp(a)
gaussian_filter /= np.sum(gaussian_filter)
input_img = np.pad(img, (khalf, khalf), "edge")
output_img = np.zeros_like(img, dtype=np.float32)
H, W = img.shape[:2]
for i in range(H):
for j in range(W):
output_img[i, j] = np.sum(input_img[i:i+ksize, j:j+ksize] * gaussian_filter)
return output_img
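The normalization in `apply` is worth checking on its own: after dividing by its sum, a Gaussian kernel must sum to 1 for any sigma, so the filter preserves overall brightness. A standalone sketch of the same kernel construction:

```python
import numpy as np

def gaussian_kernel(ksize=3, sigma=3.0):
    khalf = (ksize - 1) // 2
    yy, xx = np.mgrid[-khalf:khalf + 1, -khalf:khalf + 1]
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return k / k.sum()  # normalize so the weights sum to exactly 1

k = gaussian_kernel(5, 1.5)
print(np.isclose(k.sum(), 1.0))  # True
print(k[2, 2] == k.max())        # True: the peak sits at the center
```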
class HessianMatrix:
def __init__(self, Ix, Iy):
self.mat_dict = {}
self.mat_dict["Ix"] = Ix
self.mat_dict["Iy"] = Iy
self.mat_dict["Ix2"] = Ix**2
self.mat_dict["Iy2"] = Iy**2
self.mat_dict["Ixy"] = Ix*Iy
def get_matrix(self, y, x):
return [
[ self.mat_dict["Ix2"][y, x], self.mat_dict["Ixy"][y, x] ],
[ self.mat_dict["Ixy"][y, x], self.mat_dict["Iy2"][y, x] ]
]
def determinant(self):
return self.mat_dict["Ix2"]*self.mat_dict["Iy2"] - self.mat_dict["Ixy"]**2
def trace(self):
return self.mat_dict["Ix2"] + self.mat_dict["Iy2"]
def apply_filter(self, mat_name, filter_func, **kwargs):
self.mat_dict[mat_name] = filter_func(self.mat_dict[mat_name], **kwargs)
def __getitem__(self, mat_name):
return self.mat_dict[mat_name]
class CornerDetection:
def __init__(self):
self.sobel = SobelFilter()
self.gaussian = GaussianFilter()
def hessian(self, img):
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
H, W = img_gray.shape
Ix = self.sobel.filter_h(img_gray).astype(np.float32)
Iy = self.sobel.filter_v(img_gray).astype(np.float32)
hessian_mat = HessianMatrix(Ix, Iy)
det = np.pad(hessian_mat.determinant(), 1)
max_det = det.max()
output_img = np.array((img_gray, img_gray, img_gray), dtype=np.uint8)
output_img = np.transpose(output_img, (1, 2, 0))
mask = np.zeros((H, W), dtype=np.uint8)
for y in range(H):
for x in range(W):
max_value = det[y:y+3, x:x+3].max()
if det[y+1, x+1] == max_value and det[y+1, x+1] > max_det * 0.1:
output_img[y, x] = [0, 0, 255] # red
mask[y, x] = 255
return output_img, mask
def harris_part1(self, img, k_gaussian=3, sigma=3, k_harris=0.04, th=0.1):
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
H, W = img_gray.shape
Ix = self.sobel.filter_h(img_gray).astype(np.float32)
Iy = self.sobel.filter_v(img_gray).astype(np.float32)
hessian_mat = HessianMatrix(Ix, Iy)
hessian_mat.apply_filter("Ix2", self.gaussian.apply, ksize=k_gaussian, sigma=sigma)
hessian_mat.apply_filter("Iy2", self.gaussian.apply, ksize=k_gaussian, sigma=sigma)
hessian_mat.apply_filter("Ixy", self.gaussian.apply, ksize=k_gaussian, sigma=sigma)
det = hessian_mat.determinant()
tra = hessian_mat.trace()
R = det - k_harris * (tra**2)
return hessian_mat, R
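The Harris response `R = det(M) - k*trace(M)**2` computed in `harris_part1` separates corners from edges: R is positive when both eigenvalues of the structure tensor are large, and negative when only one is. A tiny numeric check on hand-picked tensors (illustrative values):

```python
import numpy as np

def harris_response(M, k=0.04):
    det = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
    tra = M[0, 0] + M[1, 1]
    return det - k * tra**2

corner = np.array([[10.0, 0.0], [0.0, 10.0]])  # strong gradients in both directions
edge   = np.array([[10.0, 0.0], [0.0,  0.1]])  # strong gradient in one direction only
flat   = np.array([[ 0.1, 0.0], [0.0,  0.1]])  # hardly any gradient

print(harris_response(corner) > 0)       # True: corner
print(harris_response(edge) < 0)         # True: edge
print(abs(harris_response(flat)) < 0.1)  # True: flat region, near zero
```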
class Solver:
def __init__(self):
self.cd = CornerDetection()
def problem_81(self, img):
output_img, mask = self.cd.hessian(img)
plt.subplot(1, 2, 1)
plt.imshow(output_img)
plt.subplot(1, 2, 2)
plt.imshow(mask, cmap="gray")
plt.show()
def problem_82(self, img, k_gaussian=3, sigma=3, k_harris=0.04, th=0.1):
hessian_mat, _ = self.cd.harris_part1(img, k_gaussian, sigma, k_harris, th)
mat_names = ["Ix2", "Iy2", "Ixy"]
plt.figure(figsize=(10, 5))
for k, v in enumerate(mat_names):
plt.subplot(1, 3, k+1)
plt.imshow(np.clip(hessian_mat[v], 0, 255).astype(np.uint8), cmap="gray")
plt.title(v)
plt.show()
def problem_83(self, img, k_gaussian=3, sigma=3, k_harris=0.04, th=0.1):
hessian_mat, R = self.cd.harris_part1(img, k_gaussian, sigma, k_harris, th)
H, W = img.shape[:2]
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
output_img = np.array((img_gray, img_gray, img_gray))
output_img = output_img.transpose((1, 2, 0))
output_img[R >= np.max(R) * th, :] = [0, 0, 255]
plt.imshow(cv2.cvtColor(output_img, cv2.COLOR_BGR2RGB))
plt.show()
input_img = cv2.imread("../thorino.jpg")
solver = Solver()
solver.problem_83(input_img, k_gaussian=3, sigma=3, k_harris=0.04, th=0.1)
| Question_81_90/solutions_py/solution_083.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="vZYmUUGRuIg1"
# # Spark Preparation
# We check if we are in Google Colab. If this is the case, install all necessary packages.
#
# To run Spark in Colab, we first install the dependencies in the Colab environment: Apache Spark 3.2.1 with Hadoop 3.2, Java 8, and findspark to locate Spark on the system. These tools can be installed directly from the notebook.
# Learn more from [A Must-Read Guide on How to Work with PySpark on Google Colab for Data Scientists!](https://www.analyticsvidhya.com/blog/2020/11/a-must-read-guide-on-how-to-work-with-pyspark-on-google-colab-for-data-scientists/)
# + id="ExYymIWJuIg_"
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
# + id="gezOG6MiuIhB"
if IN_COLAB:
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !wget -q https://dlcdn.apache.org/spark/spark-3.2.1/spark-3.2.1-bin-hadoop3.2.tgz
# !tar xf spark-3.2.1-bin-hadoop3.2.tgz
# !mv spark-3.2.1-bin-hadoop3.2 spark
# !pip install -q findspark
# + id="r7JUdnC7uIhC"
if IN_COLAB:
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark"
# + [markdown] id="8OtyMyTOuIhD"
# # Start a Local Cluster
# Use findspark.init() to start a local cluster. If you plan to use a remote cluster, skip findspark.init() and change cluster_url accordingly.
# + id="-wMS2LmVuIhE"
import findspark
findspark.init()
# + id="3HWQm8NYuIhE"
cluster_url = 'local'
# + id="X1Umd4VauIhF"
from pyspark.sql import SparkSession
# + id="IW713i9wuIhG"
spark = SparkSession.builder\
.master(cluster_url)\
.appName('SparkSQL')\
.getOrCreate()
# + [markdown] id="MKX-Oo1VuIhH"
# # Spark SQL Data Preparation
#
# First, we read a CSV file. We can provide options such as delimiter and header. We then rename the column names to remove dots ('.') in the names.
# + id="ZjeU4JVduIhI" outputId="63049ffe-10ea-4581-9e5d-4be49b30836a" colab={"base_uri": "https://localhost:8080/"}
# !wget https://github.com/kaopanboonyuen/2110446_DataScience_2021s2/raw/main/code/week9_spark/bank-additional-full.csv -O bank-additional-full.csv
# + id="OZICg5xDuIhI"
path = 'bank-additional-full.csv'
# + id="nz59EECLuIhJ"
df = spark.read.option("delimiter", ";").option("header", True).csv(path)
# + id="l2KOjusluIhJ" outputId="26c80d23-c43d-42d8-edcc-941d87c9a8ce" colab={"base_uri": "https://localhost:8080/"}
df.columns
# + id="m2XB0Dg6uIhK"
cols = [c.replace('.', '_') for c in df.columns]
df = df.toDF(*cols)
# + id="e1GqVfxSuIhK" outputId="80c8dbd0-970d-4ea5-ed0f-0cce41f1f619" colab={"base_uri": "https://localhost:8080/"}
df.columns
# + [markdown] id="zTzlvMoquIhL"
# Check out data and schema
# + id="lRiJKu4euIhL" outputId="7a158d8e-04f8-4fea-cbd2-d37be37c75af" colab={"base_uri": "https://localhost:8080/"}
df.show(5)
# + id="WQhMsyzwuIhL" outputId="fcf45d09-4bfd-4c11-bf46-c81e76dbcbb4" colab={"base_uri": "https://localhost:8080/"}
df.printSchema()
# + [markdown] id="7Hz1j9hyuIhM"
# Spark SQL does not guess data types when reading a CSV; every column comes in as a string. To convert to the proper data type, we cast each column using **'cast'** and replace the original column using **'withColumn'**.
# + id="2Oj3iIXuuIhM"
df = df.withColumn('age', df.age.cast('int'))
# + id="5qXqo3QGuIhN"
from pyspark.sql.functions import col
# + id="N4tY780nuIhN"
cols = ['age', 'duration', 'campaign', 'pdays', 'previous', 'nr_employed']
for c in cols:
df = df.withColumn(c, col(c).cast('int'))
# + id="Vxb5wKe4uIhO"
cols = ['emp_var_rate', 'cons_price_idx', 'cons_conf_idx', 'euribor3m']
for c in cols:
df = df.withColumn(c, col(c).cast('double'))
# + [markdown] id="JwKe1YCXuIhP"
# Cast and also rename the column y to label
# + id="9KuStuONuIhP"
df = df.withColumn('label', df.y.cast('boolean'))
# + id="PW0kUHSAuIhP" outputId="e85916f8-6481-482e-d512-224565976e27" colab={"base_uri": "https://localhost:8080/"}
df.printSchema()
# + [markdown] id="UhaCsVP_uIhP"
# # Spark SQL Commands
#
# We can select some columns using **'select'** and filter rows using **'filter'**. Note that we can perform basic math on columns.
# + id="o4tT-7x1uIhQ" outputId="9424ca46-42c7-4a03-ae43-b4fdcb358172" colab={"base_uri": "https://localhost:8080/"}
df.select(df['job'], df['education'], df['housing']).show(5)
# + id="PLp6ISdcuIhQ" outputId="8c130739-7a43-48b8-c9a9-35a23be75f3e" colab={"base_uri": "https://localhost:8080/"}
df.select(df['age'], df['duration'], df['pdays'], df['age']*2, df['duration']+df['pdays']).show(10)
# + id="CkmrTgqauIhR" outputId="00d2af70-1424-41ff-f853-4de2027e52eb" colab={"base_uri": "https://localhost:8080/"}
df.filter(df['duration'] < 100).show(5)
# + id="k8sIBlkKuIhR" outputId="e2e26a62-970a-46f9-d003-ba7fc2a70816" colab={"base_uri": "https://localhost:8080/"}
df.filter((df['age'] > 60) & (df['age'] <= 65)).select('age', 'marital').show(5)
# + [markdown] id="rRwiFe17uIhR"
# # Aggregate and Groupby Functions
# We can use several built-in aggregate functions. We can also use groupby for group operations.
# + id="cqjTgj-buIhS"
from pyspark.sql.functions import avg, min, max, countDistinct
# + id="afPR_WrMuIhS" outputId="e4d42927-645c-4b9f-b360-b5ee0849dc4c" colab={"base_uri": "https://localhost:8080/"}
df.select(avg('age'), min('age'), max('duration')).show()
# + [markdown] id="kfxUqhI9uIhS"
# The groupby function allows us to work with data in groups.
# + id="jAnGd3KduIhZ" outputId="f91c7843-42e8-4a23-f077-c435bec8a900" colab={"base_uri": "https://localhost:8080/"}
df.groupby('marital').count().show()
# + id="LOnsxfl3uIhZ" outputId="1c7b13e9-e28b-44e4-a355-b2b295d28ea4" colab={"base_uri": "https://localhost:8080/"}
df.groupby('marital', 'education').agg({'age': 'min'}).show()
# + [markdown] id="WuZQ3tI-uIhZ"
# # User-Defined Function
# We can create user-defined functions using udf.
# + id="by7kXr7ruIhZ"
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
# + id="MvCWGL-_uIha"
def agegroup_mapping(age):
if age < 25:
return 'young'
if age < 55:
return 'adult'
return 'senior'
to_agegroup = udf(agegroup_mapping, StringType())
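Because the mapping is plain Python, it can be sanity-checked before wrapping it in a udf — in particular the boundaries at 25 and 55:

```python
def agegroup_mapping(age):
    if age < 25:
        return 'young'
    if age < 55:
        return 'adult'
    return 'senior'

# 24 is still young; 25 and 54 are adult; 55 is already senior
print(agegroup_mapping(24), agegroup_mapping(25), agegroup_mapping(54), agegroup_mapping(55))
# young adult adult senior
```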
# + id="EWktrSiwuIha" outputId="8ed8629f-fdc1-4ed0-baef-9a28f52640a1" colab={"base_uri": "https://localhost:8080/"}
df.select('age', to_agegroup('age')).show(5)
# + id="wP3fEpZtuIhb" outputId="759f9db8-a132-4c1c-9d46-d6128207cb90" colab={"base_uri": "https://localhost:8080/"}
new_df = df.withColumn('agegroup', to_agegroup(df.age))
new_df.select(new_df['age'], new_df['agegroup']).show(10)
| code/backup/3_SparkSQL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 ('base')
# language: python
# name: python3
# ---
# +
# Install necessary libraries
# %pip install --upgrade pip
try:
import jax
except:
# For cuda version, see https://github.com/google/jax#installation
# %pip install --upgrade "jax[cpu]"
import jax
try:
import optax
except:
# %pip install --upgrade git+https://github.com/deepmind/optax.git
import optax
try:
import jaxopt
except:
# %pip install --upgrade git+https://github.com/google/jaxopt.git
import jaxopt
try:
import flax
except:
# %pip install --upgrade git+https://github.com/google/flax.git
import flax
try:
import distrax
except:
# %pip install --upgrade git+https://github.com/deepmind/distrax.git
import distrax
try:
import blackjax
except:
# %pip install --upgrade git+https://github.com/blackjax-devs/blackjax.git
import blackjax
try:
import jsl
except:
# %pip install git+https://github.com/probml/jsl
import jsl
try:
import rich
except:
# %pip install rich
import rich
# +
import abc
from dataclasses import dataclass
import functools
import itertools
from typing import Any, Callable, NamedTuple, Optional, Union, Tuple
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import inspect
import inspect as py_inspect
from rich import inspect as r_inspect
from rich import print as r_print
def print_source(fname):
r_print(py_inspect.getsource(fname))
# +
def print_source_old(fname):
print('source code of ', fname)
#txt = inspect.getsource(fname)
(lines, line_num) = inspect.getsourcelines(fname)
for line in lines:
print(line.strip('\n'))
# +
import jsl
import jsl.hmm.hmm_numpy_lib as hmm_lib_np
#import jsl.hmm.hmm_lib as hmm_lib_jax
normalize = hmm_lib_np.normalize_numpy
print_source(normalize)
#print_source(hmm_lib_np.normalize_numpy)
| _build/html/_sources/chapters/imports.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# SciPy is an open-source Python library of algorithms and data tools.
# SciPy includes modules for optimization, linear algebra, integration, interpolation, special functions, the fast Fourier transform, signal and image processing, ordinary differential equation solvers, and other computations common in science and engineering.
# Working with distance metrics
# We study distance vectors in Euclidean and non-Euclidean spaces.
# Euclidean space
# * Lr-norm distance
# * Cosine distance
#
# Non-Euclidean space
# * Jaccard distance
# * Hamming distance
# In NumPy, array represents a general N-dimensional array, while Matrix is specialized for linear algebra.
# With arrays, the operator * performs element-wise multiplication, and the .dot() function computes the dot product.
# Working with distance metrics
import numpy as np
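A quick reminder of the NumPy semantics used below — on arrays, `*` is element-wise multiplication while `.dot()` is the dot product (this check is written in Python 3, while the notebook kernel is Python 2):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(a * b)     # element-wise: [ 4 10 18]
print(a.dot(b))  # dot product: 32
```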
# Define Euclidean distance
def euclidean_distance(x,y):
    if len(x)==len(y):
        return np.sqrt(np.sum(np.power((x-y),2)))
    else:
        print 'inputs must have the same length'
        return None
# Define Lr-norm distance
def lrNorm_distance(x,y,power):
    if len(x)==len(y):
        return np.power(np.sum(np.power((x-y),power)),(1/(1.0*power)))
    else:
        print 'inputs must have the same length'
        return None
# Define cosine distance (1 minus the cosine similarity)
def cosine_distance(x,y):
    if len(x)==len(y):
        return 1 - np.dot(x,y)/np.sqrt(np.dot(x,x)*np.dot(y,y))
    else:
        print 'inputs must have the same length'
        return None
def jaccard_distance(x,y):
    set_x = set(x)
    set_y = set(y)
    # float division: plain / truncates to an int under Python 2
    return 1 - len(set_x.intersection(set_y))/float(len(set_x.union(set_y)))
# Define Hamming distance
def hamming_distance(x,y):
    diff = 0
    if len(x) == len(y):
        for cha1,cha2 in zip(x,y):
            if cha1!=cha2:
                diff +=1
        return diff
    else:
        print 'inputs must have the same length'
        return None
# Main: exercise the functions defined above
if __name__ == "__main__":
    # Sample data: two identical points
    x = np.asarray([1,2,3])
    y = np.asarray([1,2,3])
    # Print the Euclidean distance
    print euclidean_distance(x,y)
    # With r=2, lrNorm_distance gives the Euclidean distance
    print lrNorm_distance(x,y,2)
    # r=1 gives the Manhattan (city-block) distance
    print lrNorm_distance(x,y,1)
    # Compute the cosine distance
    x = [1,1]
    y = [1,0]
    print 'cosine distance'
    print cosine_distance(x,y)
    # Sample data for the Jaccard distance
    x = [1,2,3]
    y = [1,1,1]
    print jaccard_distance(x,y)
    # Sample data for the Hamming distance (compared position by position)
    x = '11001'
    y = '11011'
    print hamming_distance(x,y)
| Data_Mining/distance_verctor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # %load_ext autoreload
# # %autoreload 2
# # %matplotlib inline
import os
from pathlib import Path
import warnings
import numpy as np
import nibabel as nb
import pandas as pd
import matplotlib as mpl
mpl.use('pgf')
from matplotlib import pyplot as plt
from matplotlib import gridspec
import seaborn as sn
import palettable
from niworkflows.data import get_template
from nilearn.image import concat_imgs, mean_img
from nilearn import plotting
warnings.simplefilter('ignore')
DATA_HOME = Path(os.getenv('FMRIPREP_DATA_HOME', os.getcwd())).resolve()
DS030_HOME = DATA_HOME / 'ds000030' / '1.0.3'
DERIVS_HOME = DS030_HOME / 'derivatives'
ATLAS_HOME = get_template('MNI152NLin2009cAsym')
ANALYSIS_HOME = DERIVS_HOME / 'fmriprep_vs_feat_2.0-oe'
fprep_home = DERIVS_HOME / 'fmriprep_1.0.8' / 'fmriprep'
feat_home = DERIVS_HOME / 'fslfeat_5.0.10' / 'featbids'
out_folder = Path(os.getenv('FMRIPREP_OUTPUTS') or '').resolve()
# Load MNI152 nonlinear, asymmetric 2009c atlas
atlas = nb.load(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_T1w.nii.gz'))
mask1mm = nb.load(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_brainmask.nii.gz')).get_data() > 0
mask2mm = nb.load(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-02_brainmask.nii.gz')).get_data() > 0
data = atlas.get_data()
data[~mask1mm] = data[~mask1mm].max()
atlas = nb.Nifti1Image(data, atlas.affine, atlas.header)
# +
# sn.set_style("whitegrid", {
# 'ytick.major.size': 5,
# 'xtick.major.size': 5,
# })
# sn.set_context("notebook", font_scale=1.5)
# pgf_with_custom_preamble = {
# 'ytick.major.size': 0,
# 'xtick.major.size': 0,
# 'font.size': 30,
# 'font.sans-serif': ['HelveticaLTStd-Light'],
# 'font.family': 'sans-serif', # use serif/main font for text elements
# 'text.usetex': False, # use inline math for ticks
# }
# mpl.rcParams.update(pgf_with_custom_preamble)
pgf_with_custom_preamble = {
'text.usetex': True, # use inline math for ticks
'pgf.rcfonts': False, # don't setup fonts from rc parameters
'pgf.texsystem': 'xelatex',
'verbose.level': 'debug-annoying',
"pgf.preamble": [
r"""\usepackage{fontspec}
\setsansfont{HelveticaLTStd-Light}[
Extension=.otf,
BoldFont=HelveticaLTStd-Bold,
ItalicFont=HelveticaLTStd-LightObl,
BoldItalicFont=HelveticaLTStd-BoldObl,
]
\setmainfont{HelveticaLTStd-Light}[
Extension=.otf,
BoldFont=HelveticaLTStd-Bold,
ItalicFont=HelveticaLTStd-LightObl,
BoldItalicFont=HelveticaLTStd-BoldObl,
]
\setmonofont{Inconsolata-dz}
""",
r'\renewcommand\familydefault{\sfdefault}',
# r'\setsansfont[Extension=.otf]{Helvetica-LightOblique}',
# r'\setmainfont[Extension=.ttf]{DejaVuSansCondensed}',
# r'\setmainfont[Extension=.otf]{FiraSans-Light}',
# r'\setsansfont[Extension=.otf]{FiraSans-Light}',
]
}
mpl.rcParams.update(pgf_with_custom_preamble)
# -
def mean_std_map(pipe_home, meanmask, force=False, lazy=False, maskval=1000):
pipe_std = pipe_home / 'summary_stdev.nii.gz'
pipe_mean = pipe_home / 'summary_means.nii.gz'
if force or not pipe_mean.is_file():
print('Forced or %s not found' % pipe_mean)
all_mus = []
if lazy:
all_mus = [nb.load(str(f)) for f in pipe_home.glob(
'sub-*/func/sub-*_task-stopsignal_bold_space-MNI152NLin2009cAsym_avgpreproc.nii.gz')]
if not all_mus:
print('Generating means file')
pipe_files = list(pipe_home.glob(
'sub-*/func/sub-*_task-stopsignal_bold_space-MNI152NLin2009cAsym_preproc.nii.gz'))
all_mus = []
for f in pipe_files:
mean = mean_img(str(f))
data = mean.get_data()
sigma = np.percentile(data[meanmask], 50) / maskval
data /= sigma
all_mus.append(nb.Nifti1Image(data, mean.affine, mean.header))
meannii = concat_imgs(all_mus, auto_resample=False)
meannii.to_filename(str(pipe_mean))
force = True
if force or not pipe_std.is_file():
print('Generating standard deviation map')
meannii = nb.load(str(pipe_mean))
nb.Nifti1Image(meannii.get_data().std(3), meannii.affine, meannii.header).to_filename(str(pipe_std))
return pipe_mean, pipe_std
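The core of `mean_std_map` — scale each subject's mean image so its median inside a mask equals `maskval`, stack the results, and take the voxel-wise standard deviation — can be sketched with plain arrays (hypothetical shapes, no NIfTI I/O or nibabel involved):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, shape = 4, (8, 8, 8)
mask = np.zeros(shape, dtype=bool)
mask[2:6, 2:6, 2:6] = True  # stand-in for the WM mask
maskval = 1000.0

means = []
for _ in range(n_subjects):
    img = rng.uniform(500, 1500, size=shape)        # a subject's mean EPI
    sigma = np.percentile(img[mask], 50) / maskval  # in-mask median / 1000
    means.append(img / sigma)                       # intensity-normalized mean

stack = np.stack(means, axis=-1)  # analogous to concat_imgs along a 4th axis
std_map = stack.std(axis=-1)      # voxel-wise std across subjects

# After scaling, every subject's in-mask median equals maskval
print(all(np.isclose(np.percentile(m[mask], 50), maskval) for m in means))  # True
print(std_map.shape == shape)  # True
```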
# +
# Use the WM mask to normalize intensities of EPI means
meanmask = nb.load(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-02_class-WM_probtissue.nii.gz')).get_data() > 0.9
# Calculate average and std
fprep_mean, fprep_std = mean_std_map(fprep_home, meanmask)
feat_mean, feat_std = mean_std_map(feat_home, meanmask)
# Trick to avoid nilearn zooming in
feat_nii = nb.load(str(feat_std))
fd = feat_nii.get_data()
fd[0, 0, :] = 50
fd[0, -1, :] = 50
fd[-1, 0, :] = 50
fd[-1, -1, :] = 50
nb.Nifti1Image(fd, feat_nii.affine, feat_nii.header).to_filename('newfeat.nii.gz')
# +
df = pd.read_csv(str(ANALYSIS_HOME / 'smoothness.csv'))
plt.clf()
fig = plt.gcf()
_ = fig.set_size_inches(15, 2 * 3.1)
# gs = gridspec.GridSpec(2, 4, width_ratios=[38, 7, 60, 10], height_ratios=[1, 1], hspace=0.0, wspace=0.03)
gs = gridspec.GridSpec(2, 3, width_ratios=[42, 9, 64], height_ratios=[1, 1], hspace=0.0, wspace=0.03)
a_ax1 = plt.subplot(gs[0, 0])
a_ax2 = plt.subplot(gs[1, 0])
fmriprep_smooth = df[df.pipeline.str.contains('fmriprep')][['fwhm_pre', 'fwhm_post']]
feat_smooth = df[df.pipeline.str.contains('feat')][['fwhm_pre', 'fwhm_post']]
cols = palettable.tableau.ColorBlind_10.hex_colors
sn.distplot(fmriprep_smooth.fwhm_post, color=cols[0], ax=a_ax2,
axlabel='Smoothing', label='fMRIPrep')
sn.distplot(feat_smooth.fwhm_post, color=cols[1], ax=a_ax2,
axlabel='Smoothing', label=r'\texttt{feat}')
sn.distplot(fmriprep_smooth.fwhm_pre, color=cols[0], ax=a_ax1,
axlabel='Smoothing', label='fMRIPrep')
sn.distplot(feat_smooth.fwhm_pre, color=cols[1], ax=a_ax1,
axlabel='Smoothing', label=r'\texttt{feat}')
a_ax2.set_xlim([3, 8.8])
a_ax2.set_xticks([])
a_ax2.set_xticklabels([])
a_ax2.xaxis.tick_bottom()
a_ax2.grid(False)
a_ax2.set_xlabel('')
a_ax2.set_ylim([-1.1, 9.9])
a_ax2.set_yticks([])
a_ax2.spines['left'].set_visible(False)
a_ax1.set_ylabel(r'\noindent\parbox{4.8cm}{\centering\textbf{Before smoothing} fraction of images}',
size=13)
a_ax1.yaxis.set_label_coords(-0.1, 0.4)
a_ax2.set_ylabel(r'\noindent\parbox{4.8cm}{\centering\textbf{After smoothing} fraction of images}',
size=13)
a_ax2.yaxis.set_label_coords(-0.1, 0.6)
# ax4.spines['bottom'].set_position(('outward', 20))
a_ax2.invert_yaxis()
a_ax2.spines['top'].set_visible(False)
a_ax2.spines['bottom'].set_visible(False)
a_ax2.spines['left'].set_visible(False)
a_ax2.spines['right'].set_visible(False)
a_ax1.set_xlim([3, 8.8])
a_ax1.set_ylim([-0.6, 10.4])
a_ax1.grid(False)
a_ax1.set_xlabel('(mm)')
a_ax1.xaxis.set_label_coords(0.95, 0.1)
a_ax1.set_yticks([])
a_ax1.set_yticklabels([])
a_ax1.set_xticks([3, 4, 5, 6 , 7, 8])
a_ax1.set_xticklabels([3, 4, 5, 6 , 7, 8])
a_ax1.tick_params(axis='x', zorder=100, direction='inout')
a_ax1.spines['left'].set_visible(False)
a_ax1.spines['right'].set_visible(False)
a_ax1.spines['top'].set_visible(False)
a_ax1.spines['bottom'].set_visible(True)
a_ax1.zorder = 100
# a_ax2.xaxis.set_label_position('top')
# a_ax2.xaxis.set_label_coords(0.45, 0.95)
a_ax1.annotate(
r'\noindent\parbox{6.8cm}{\centering\textbf{Estimated smoothness} full~width~half~maximum~(mm)}',
xy=(0.15, 0.8), xycoords='axes fraction', xytext=(.0, .0),
textcoords='offset points', va='center', color='k', size=13,
)
legend = a_ax2.legend(ncol=2, loc='upper center', bbox_to_anchor=(0.5, 0.45), prop={'size': 15})
legend.get_frame().set_facecolor('w')
legend.get_frame().set_edgecolor('none')
# a_ax2.annotate(
# r'\noindent\parbox{15cm}{Panels A, B present statistics derived from N=257 biologically independent participants}',
# xy=(-0.2, 0.03), xycoords='axes fraction', xytext=(.0, .0),
# textcoords='offset points', va='center', color='k', size=11,
# )
###### PANEL B
b_ax1 = plt.subplot(gs[0, 2])
b_ax2 = plt.subplot(gs[1, 2])
thres = 20
vmin = 50
vmax = 200
disp = plotting.plot_anat(str(fprep_std), display_mode='z', annotate=False,
cut_coords=[-5, 10, 20], cmap='cividis', threshold=thres, vmin=vmin, vmax=vmax,
axes=b_ax1)
disp.annotate(size=12, left_right=True, positions=True)
disp.add_contours(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_class-CSF_probtissue.nii.gz'), colors=['k'], levels=[0.8])
disp.add_contours(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_class-WM_probtissue.nii.gz'), colors=['w'], levels=[0.8], linewidths=[1], alpha=0.7)
disp.add_contours(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_brainmask.nii.gz'), colors=['k'], levels=[0.8], linewidths=[3], alpha=.7)
disp = plotting.plot_anat('newfeat.nii.gz', display_mode='z', annotate=False,
cut_coords=[-5, 10, 20], cmap='cividis', threshold=thres, vmin=vmin, vmax=vmax,
axes=b_ax2)
disp.annotate(size=12, left_right=False, positions=False)
disp.annotate(size=12, left_right=False, positions=False, scalebar=True,
loc=3, size_vertical=2, label_top=False, frameon=True, borderpad=0.1)
disp.add_contours(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_class-CSF_probtissue.nii.gz'), colors=['k'], levels=[0.8])
disp.add_contours(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_class-WM_probtissue.nii.gz'), colors=['w'], levels=[0.8], linewidths=[1], alpha=0.7)
disp.add_contours(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_brainmask.nii.gz'), colors=['k'], levels=[0.8], linewidths=[3], alpha=.7)
b_ax1.annotate(
'fMRIPrep',
xy=(0., .5), xycoords='axes fraction', xytext=(-20, .0),
textcoords='offset points', va='center', color='k', size=15,
rotation=90)
b_ax2.annotate(
r'\texttt{feat}',
xy=(0., .5), xycoords='axes fraction', xytext=(-20, .0),
textcoords='offset points', va='center', color='k', size=12,
rotation=90)
# inner_grid = gridspec.GridSpecFromSubplotSpec(1, 2, width_ratios=[1, 15],
# subplot_spec=gs[:, -1], wspace=0.01)
# b_ax3 = fig.add_subplot(inner_grid[0])
# gradient = np.hstack((np.zeros((50,)), np.linspace(0, 1, 120), np.ones((130,))))[::-1]
# gradient = np.vstack((gradient, gradient))
# b_ax3.imshow(gradient.T, aspect='auto', cmap=plt.get_cmap('cividis'))
# b_ax3.xaxis.set_ticklabels([])
# b_ax3.xaxis.set_ticks([])
# b_ax3.yaxis.set_ticklabels([])
# b_ax3.yaxis.set_ticks([])
# b_ax4 = fig.add_subplot(inner_grid[1])
# sn.distplot(nb.load(str(fprep_std)).get_data()[mask2mm], label='fMRIPrep',
# vertical=True, ax=b_ax4, kde=False, norm_hist=True)
# sn.distplot(nb.load(str(feat_std)).get_data()[mask2mm], label=r'\texttt{feat}', vertical=True,
# color='darkorange', ax=b_ax4, kde=False, norm_hist=True)
# # plt.gca().set_ylim((0, 300))
# plt.legend(prop={'size': 15}, edgecolor='none')
# b_ax4.xaxis.set_ticklabels([])
# b_ax4.xaxis.set_ticks([])
# b_ax4.yaxis.set_ticklabels([])
# b_ax4.yaxis.set_ticks([])
# plt.axis('off')
# b_ax3.axis('off')
# b_ax4.axis('off')
a_ax1.set_title('A', fontdict={'fontsize': 24}, loc='left', x=-0.2);
b_ax1.set_title('B', fontdict={'fontsize': 24}, loc='left');
plt.savefig(str(out_folder / 'figure03.pdf'),
format='pdf', bbox_inches='tight', pad_inches=0.2, dpi=300)
# +
coords = [-27, 0, 7]
thres = 20
vmin = 50
vmax = 200
# Plot
plt.clf()
fig = plt.figure(figsize=(20,10))
plotting.plot_anat('newfeat.nii.gz', cut_coords=coords, colorbar=True, cmap='cividis',
threshold=thres, vmin=vmin, vmax=vmax, title='feat',
axes=plt.subplot(2,2,1)
);
plotting.plot_anat(str(fprep_std), cut_coords=coords, colorbar=True, cmap='cividis',
threshold=thres, vmin=vmin, vmax=vmax, title='fmriprep',
axes=plt.subplot(2,2,3)
);
plotting.plot_glass_brain(str(feat_std), threshold=200, colorbar=True, title='feat',
axes=plt.subplot(2,2,2));
plotting.plot_glass_brain(str(fprep_std), threshold=200, colorbar=True, title='fmriprep',
axes=plt.subplot(2,2,4));
# -
| 02 - Figure 3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python_defaultSpec_1600024692573
# ---
# # Lab Two
# ---
#
# For this lab we're going to get into logic
#
# Our Goals are:
# - Using Conditionals
# - Using Loops
# - Creating a Function
# - Using a Class
# + tags=[]
# Create an if statement
has_bread = True
has_chicken = True
if has_bread and has_chicken:
print("I can make a chicken sandwich")
# + tags=[]
# Create an if else statement
passed_class = True
if passed_class:
print("Good job, you passed!")
else:
print("Better luck next time")
# + tags=[]
# Create an if elif else statement
covered_spread = False
patriots_won = True
won_bet = False
if covered_spread:
print("This is awesome!")
elif patriots_won and won_bet:
    print("At least the Patriots won...")
else:
print("This day is awful")
# + tags=[]
# Create a for loop using range(). Go from 0 to 9. Print out each number.
for value in range(10):
print(value)
# + tags=[]
# Create a for loop iterating through this list and printing out the value.
arr = ['Blue', 'Yellow', 'Red', 'Green', 'Purple', 'Magenta', 'Lilac']
for item in arr:
print(item)
# Get the length of the list above and print it.
print(len(arr))
# + tags=[]
# Create a while loop that ends after 6 times through. Print something for each pass.
import random
secret_number = random.randint(0,6)
position = 0
while position != secret_number:
print("Looking for the secret number")
position += 1
print("We found the secret number, it was:", secret_number)
# + tags=[]
# Create a function to add 2 numbers together. Print out the number
def add_numbers(a, b):
    return a + b

print(add_numbers(5, 7))
# + tags=[]
# Create a function that tells you if a number is odd or even and print the result.
def odd_or_even(n):
    return "odd" if n % 2 else "even"

print(odd_or_even(7))
# + tags=[]
# Initialize an instance of the following class. Use a variable to store the object and then call the info function to print out the attributes.
class Dog(object):
def __init__(self, name, height, weight, breed):
self.name = name
self.height = height
self.weight = weight
self.breed = breed
def info(self):
print("Name:", self.name)
print("Weight:", str(self.weight) + " Pounds")
print("Height:", str(self.height) + " Inches")
        print("Breed:", self.breed)

# Instantiate the class, store the object, and call info() to print its attributes
my_dog = Dog("Rex", 15, 25, "Beagle")
my_dog.info()
| JupyterNotebooks/Labs/Lab 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Synthetic Dataset 1b: ReLU
# + code_folding=[0]
# Import libraries and modules
import numpy as np
import pandas as pd
import xgboost as xgb
from xgboost import plot_tree
from sklearn.metrics import r2_score, classification_report, confusion_matrix, \
roc_curve, roc_auc_score, plot_confusion_matrix, f1_score, \
balanced_accuracy_score, accuracy_score, mean_squared_error, \
log_loss
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression, LinearRegression, SGDClassifier, \
Lasso, lasso_path
from sklearn.preprocessing import StandardScaler, LabelBinarizer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn_pandas import DataFrameMapper
import scipy
from scipy import stats
import os
import shutil
from pathlib import Path
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import albumentations as A
from albumentations.pytorch import ToTensorV2
import cv2
import itertools
import time
import tqdm
import copy
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.models as models
from torch.utils.data import Dataset
import PIL
import joblib
import json
# import mysgd
# + code_folding=[0]
# Import user-defined modules
import sys
import imp
sys.path.append('/Users/arbelogonzalezw/Documents/ML_WORK/LIBS/Lockout_copy/')
import tools_general as tg
import tools_pytorch as tp
import lockout as ld
imp.reload(tg)
imp.reload(tp)
imp.reload(ld)
# -
# ## Generate and save data
# + code_folding=[0]
# Generate train, valid, & test datasets
torch.manual_seed(42)
samples = 500
n_features = 100
pi = torch.Tensor([np.pi])
A1 = 2.0
A2 = -3.0
A3 = 4.0
xtrain = torch.rand(samples,n_features)
ytrain = torch.zeros(samples)
ytrain[:] = A1*xtrain[:,0] + A2*xtrain[:,1] + A3*xtrain[:,2]
torch.relu_(ytrain)
xvalid = torch.rand(samples,n_features)
yvalid = torch.zeros(samples)
yvalid[:] = A1*xvalid[:,0] + A2*xvalid[:,1] + A3*xvalid[:,2]
torch.relu_(yvalid)
xtest = torch.rand(samples,n_features)
ytest = torch.zeros(samples)
ytest[:] = A1*xtest[:,0] + A2*xtest[:,1] + A3*xtest[:,2]
torch.relu_(ytest)
y_std = ytrain.std()
print("MEAN of 'ytrain' before adding noise =", ytrain.mean().item())
print("STD of 'ytrain' before adding noise =", y_std.item())
y_std = 1.0*y_std
y_mean = 0.0
print("\nGaussian noise added to 'ytrain with:")
print("- mean =", y_mean)
print("- std =", y_std.item())
ynoise1 = torch.normal(mean=y_mean, std=y_std, size=(samples, 1))
ytrain[:] += ynoise1[:,0]
ynoise2 = torch.normal(mean=y_mean, std=y_std, size=(samples, 1))
yvalid[:] += ynoise2[:,0]
# + code_folding=[0]
# Convert to Pandas DataFrames
cols_X = [str(i) for i in range(1, n_features+1)]
df_xtrain = pd.DataFrame(xtrain.numpy(), columns=cols_X)
df_xvalid = pd.DataFrame(xvalid.numpy(), columns=cols_X)
df_xtest = pd.DataFrame(xtest.numpy(), columns=cols_X)
cols_X = df_xtrain.columns.tolist()
cols_Y = ['target']
df_ytrain = pd.DataFrame(ytrain.numpy(), columns=cols_Y)
df_yvalid = pd.DataFrame(yvalid.numpy(), columns=cols_Y)
df_ytest = pd.DataFrame(ytest.numpy(), columns=cols_Y)
# + code_folding=[0]
# Save data set
tg.save_data(df_xtrain, df_xtrain, df_xvalid, df_xtest,
df_ytrain, df_ytrain, df_yvalid, df_ytest, 'dataset_b/')
tg.save_list(cols_X, 'dataset_b/X.columns')
tg.save_list(cols_Y, 'dataset_b/Y.columns')
#
print("- xtrain size: {}".format(df_xtrain.shape))
print("- xvalid size: {}".format(df_xvalid.shape))
print("- xtest size: {}".format(df_xtest.shape))
# -
# ## Load Data
# + code_folding=[0]
# Select type of processor to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device == torch.device('cuda'):
print("- Type of processor to be used: 'gpu'")
# !nvidia-smi
else:
print("- Type of processor to be used: 'cpu'")
# Choose device
# torch.cuda.set_device(6)
# + code_folding=[0]
# Read data
_, x_train, x_valid, x_test, _, y_train, y_valid, y_test = tp.load_data_reg('dataset_b/')
cols_X = tg.read_list('dataset_b/X.columns')
cols_Y = tg.read_list('dataset_b/Y.columns')
# + code_folding=[0]
# Normalize data
xtrain, xvalid, xtest, ytrain, yvalid, ytest = tp.normalize_xy(x_train, x_valid, x_test,
y_train, y_valid, y_test)
# + code_folding=[0]
# Create dataloaders
dl_train, dl_valid, dl_test = tp.make_DataLoaders(xtrain, xvalid, xtest, ytrain, yvalid, ytest,
tp.dataset_tabular, batch_size=10000)
# + code_folding=[0]
# NN architecture with its corresponding forward method
class MyNet(nn.Module):
# .Network architecture
def __init__(self, features, layer_sizes):
super(MyNet, self).__init__()
self.fc1 = nn.Linear(features, layer_sizes[0], bias=False)
self.relu = nn.ReLU(inplace=True)
self.bias = nn.Parameter(torch.randn(layer_sizes[0]), requires_grad=True)
# .Forward function
def forward(self, x):
x = self.fc1(x)
x = self.relu(x)
x = x + self.bias
return x
# + code_folding=[0]
# Instantiate model
n_features = len(cols_X)
n_layers = [1]
model = MyNet(n_features, n_layers)
model.eval()
# -
# ## Unregularized
# + code_folding=[]
# TRAIN FORWARD
lockout_unconstraint = ld.Lockout(model, lr=5e-3, loss_type=1, optim_id=1,
save_weights=(True, 'fc1.weight'))
lockout_unconstraint.train(dl_train, dl_valid, epochs=10000, early_stop=20, tol_loss=1e-6,
train_how="unconstraint", reset_weights=True)
lockout_unconstraint.path_data.plot(x="iteration",
y=['train_loss', 'valid_loss'],
figsize=(8,6))
plt.show()
# + code_folding=[0]
# Save model, data
tp.save_model(lockout_unconstraint.model_best_valid, 'outputs_b/model_forward_valid_min.pth')
tp.save_model(lockout_unconstraint.model_last, 'outputs_b/model_forward_last.pth')
lockout_unconstraint.path_data.to_csv('outputs_b/data_forward.csv')
lockout_unconstraint.weight_iters.to_csv('outputs_b/w_vs_iters_forward.csv', header=None, index=False)
# + code_folding=[0]
# Accuracy
mm = MyNet(n_features, n_layers)
mm.load_state_dict(torch.load('./outputs_b/model_forward_valid_min.pth'))
mm.eval()
xtrain = xtrain.to(device)
ypred = mm(xtrain)
r2 = r2_score(ytrain.detach().numpy(), ypred.detach().numpy())
print("Train R2 = {:.4f}".format(r2))
xvalid = xvalid.to(device)
ypred = mm(xvalid)
r2 = r2_score(yvalid.detach().numpy(), ypred.detach().numpy())
print("Valid R2 = {:.4f}".format(r2))
xtest = xtest.to(device)
ypred = mm(xtest)
r2 = r2_score(ytest.detach().numpy(), ypred.detach().numpy())
print("Test R2 = {:.4f}".format(r2))
# + code_folding=[]
# Weight importance (layer 1)
mm = MyNet(n_features, n_layers)
mm.load_state_dict(torch.load('./outputs_b/model_forward_valid_min.pth'))
mm.eval()
importance = tp.get_features_importance(mm, 'fc1.weight')
fig, axes = plt.subplots(figsize=(9,6))
x_pos = np.arange(len(importance))
axes.bar(x_pos, importance, zorder=2)
# axes.set_xticks(x_pos)
# axes.set_xticklabels(feature_importance_sorted1.index[idx], rotation='vertical')
axes.set_xlim(-1,len(x_pos))
axes.set_ylabel('Importance')
axes.set_title('Feature Importance (Forward): layer 1')
axes.grid(True, zorder=1)
plt.tight_layout()
plt.savefig('outputs_b/feature_importance_forward.pdf', bbox_inches='tight')
plt.show()
print("Non zero features: {}".format(len(importance)))
# + code_folding=[0]
# Plot weights vs iters
ww_iter = pd.read_csv('outputs_b/w_vs_iters_forward.csv', header=None)
ncols = ww_iter.shape[1]
iters = ww_iter.index.tolist()
fig, axes = plt.subplots(figsize=(9,6))
for i in range(ncols):
if i < 3:
axes.plot(iters, ww_iter[i], label="w{}".format(i+1), linewidth=3)
else:
axes.plot(iters, ww_iter[i])
axes.set_xlabel("iteration")
axes.set_title("Forward: Linear")
axes.legend()
axes.grid(True, zorder=2)
plt.savefig("outputs_b/w_vs_iters_forward.pdf", bbox_inches='tight')
plt.show()
# -
# ## Lockout
# +
# TRAIN WITH LOCKOUT
model = MyNet(n_features, n_layers)
model.load_state_dict(torch.load('./outputs_b/model_forward_last.pth'))
model.eval()
regul_type = [('fc1.weight', 1)]
regul_path = [('fc1.weight', True)]
lockout_reg = ld.Lockout(model, lr=5e-3,
regul_type=regul_type,
regul_path=regul_path,
loss_type=1, tol_grads=1e-2,
save_weights=(True, 'fc1.weight'))
# -
lockout_reg.train(dl_train, dl_valid, dl_test, epochs=20000, early_stop=20, tol_loss=1e-5,
train_how="decrease_t0")
# + code_folding=[0]
# Save model, data
tp.save_model(lockout_reg.model_best_valid, 'outputs_b/model_lockout_valid_min.pth')
tp.save_model(lockout_reg.model_last, 'outputs_b/model_lockout_last.pth')
lockout_reg.path_data.to_csv('outputs_b/data_lockout.csv')
lockout_reg.weight_iters.to_csv('outputs_b/w_vs_iters_lockout.csv', header=None, index=False)
# + code_folding=[0]
# Plot unconstrained + lockout loss vs iteration
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15,6))
df1 = pd.read_csv('outputs_b/data_forward.csv')
df2 = pd.read_csv('outputs_b/data_lockout.csv')
axes[0].set_ylim(0.3, 1.3)
axes[0].plot(df1["iteration"], df1["train_loss"], label="Training", linewidth=4)
axes[0].plot(df1["iteration"], df1["valid_loss"], label="Validation", linewidth=4)
axes[0].legend(fontsize=16)
axes[0].set_xlabel("iteration", fontsize=16)
axes[0].set_ylabel("Mean Squared Error", fontsize=16)
axes[0].set_yticks(np.arange(0.3, 1.4, 0.3))
axes[0].tick_params(axis='both', which='major', labelsize=14)
axes[0].set_title("Unregularized (ReLU): Best Validation Loss = {:.2f}".format(df1["valid_loss"].min()),
fontsize=16)
axes[0].grid(True, zorder=2)
axes[1].set_ylim(0.3, 1.3)
axes[1].plot(df2["iteration"], df2["train_loss"], label="Training", linewidth=4)
axes[1].plot(df2["iteration"], df2["valid_loss"], label="Validation", linewidth=4)
axes[1].legend(fontsize=16)
axes[1].set_xlabel("iteration", fontsize=16)
axes[1].set_yticks(np.arange(0.3, 1.4, 0.3))
axes[1].tick_params(axis='both', which='major', labelsize=14)
axes[1].set_yticklabels([])
axes[1].set_xticks(np.linspace(0, 20000, 5, endpoint=True))
axes[1].set_title("Lockout (ReLU): Best Validation Loss = {:.2f}".format(df2["valid_loss"].min()),
fontsize=16)
axes[1].grid(True, zorder=2)
plt.tight_layout()
plt.savefig("outputs_b/loss_vs_iter_b.pdf", bbox_inches='tight')
plt.show()
# + code_folding=[0]
# Plot unconstrained + lockout loss vs iteration
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15,6))
df1 = pd.read_csv('outputs_b/data_forward.csv')
df2 = pd.read_csv('outputs_b/data_lockout.csv')
axes[0].set_ylim(0.3, 1.3)
axes[0].plot(df1["iteration"], df1["train_loss"], label="Training", linewidth=4)
axes[0].plot(df1["iteration"], df1["valid_loss"], label="Validation", linewidth=4, color="tab:orange")
axes[0].plot(958, df1["valid_loss"].min(), "o", linewidth=4, markersize=11, color="black",
label="Validation Minimum: {:.2f}".format(df1["valid_loss"].min()))
axes[0].legend(fontsize=16)
axes[0].set_xlabel("iteration", fontsize=16)
axes[0].set_ylabel("Mean Squared Error", fontsize=16)
axes[0].set_yticks(np.arange(0.3, 1.4, 0.3))
axes[0].tick_params(axis='both', which='major', labelsize=14)
axes[0].set_title("Unregularized (ReLU)",
fontsize=16)
axes[0].grid(True, zorder=2)
axes[1].set_ylim(0.3, 1.3)
axes[1].plot(df2["iteration"], df2["train_loss"], label="Training", linewidth=4)
axes[1].plot(df2["iteration"], df2["valid_loss"], label="Validation", linewidth=4, color="tab:orange")
axes[1].plot(15700, df2["valid_loss"].min(), "o", linewidth=4, markersize=11,
color="black",
label="Validation Minimum: {:.2f}".format(df2["valid_loss"].min()))
axes[1].legend(fontsize=16)
axes[1].set_xlabel("iteration", fontsize=16)
axes[1].set_yticks(np.arange(0.3, 1.4, 0.3))
axes[1].tick_params(axis='both', which='major', labelsize=14)
axes[1].set_yticklabels([])
axes[1].set_xticks(np.linspace(0, 20000, 5, endpoint=True))
axes[1].set_title("Lockout (ReLU)",
fontsize=16)
axes[1].grid(True, zorder=2)
plt.tight_layout()
plt.savefig("outputs_b/loss_vs_iter_b.pdf", bbox_inches='tight')
plt.show()
# + code_folding=[0]
# Plot weights vs iters
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15,7))
# Forward
ww_iter = pd.read_csv('outputs_b/w_vs_iters_forward.csv', header=None)
ncols = ww_iter.shape[1]
iters = ww_iter.index.tolist()
for i in range(ncols):
if i < 3:
axes[0].plot(iters, ww_iter[i], label=r"$\omega_{}$".format(i+1), linewidth=4)
else:
axes[0].plot(iters, ww_iter[i])
axes[0].set_xlabel("iteration", fontsize=16)
axes[0].set_ylabel("Coefficient", fontsize=16)
axes[0].set_title("Unregularized (ReLU)", fontsize=16)
axes[0].tick_params(axis='both', which='major', labelsize=14)
axes[0].legend(fontsize=16)
axes[0].grid(True, zorder=2)
# lockout
ww_iter = pd.read_csv('outputs_b/w_vs_iters_lockout.csv', header=None)
ncols = ww_iter.shape[1]
iters = ww_iter.index.tolist()
for i in range(ncols):
if i < 3:
axes[1].plot(iters, ww_iter[i], label=r"$\omega_{}$".format(i+1), linewidth=4)
else:
axes[1].plot(iters, ww_iter[i])
axes[1].set_xlabel("iteration", fontsize=16)
axes[1].set_title("Lockout (ReLU)", fontsize=16)
axes[1].legend(fontsize=16)
axes[1].set_yticklabels([])
axes[1].set_xticks(np.linspace(0, 20000, 5, endpoint=True))
axes[1].tick_params(axis='both', which='major', labelsize=14)
axes[1].grid(True, zorder=2)
axes[1].plot([15308, 15308],[-.68, .78], linewidth=3, color='black')
plt.tight_layout()
plt.savefig("outputs_b/w_vs_iters_b.pdf", bbox_inches='tight')
plt.show()
# + code_folding=[]
# Features importance (layer 1)
mm = MyNet(n_features, n_layers)
mm.load_state_dict(torch.load('./outputs_b/model_lockout_valid_min.pth'))
mm.eval()
importance = tp.get_features_importance(mm, 'fc1.weight')
idx = list(importance.index+1)
string_labels = []
for i in idx:
string_labels.append(r"$x_{}{}{}$".format('{',i,'}'))
fig, axes = plt.subplots(figsize=(9,6))
x_pos = np.arange(len(importance))
axes.bar(x_pos[0], importance.iloc[0], zorder=2, color="tab:green")
axes.bar(x_pos[1], importance.iloc[1], zorder=2, color="tab:orange")
axes.bar(x_pos[2], importance.iloc[2], zorder=2, color="tab:blue")
axes.bar(x_pos[3:], importance.iloc[3:], zorder=2, color="gray")
axes.set_xticks(x_pos)
axes.set_xticklabels(string_labels)
axes.set_xlim(-1,len(x_pos))
axes.tick_params(axis='both', which='major', labelsize=14)
axes.set_ylabel('Importance', fontsize=16)
axes.set_xlabel('feature', fontsize=16)
axes.set_title('Lockout (ReLU)', fontsize=16)
axes.grid(True, zorder=1)
plt.tight_layout()
plt.savefig('outputs_b/feature_importance_lockout_b.pdf', bbox_inches='tight')
plt.show()
print("Non zero features: {}".format(len(importance)))
# + code_folding=[0]
# Accuracy
mm = MyNet(n_features, n_layers)
mm.load_state_dict(torch.load('outputs_b/model_lockout_valid_min.pth'))
mm.eval()
print("Lockout:")
xtrain = xtrain.to(device)
ypred = mm(xtrain)
r2 = r2_score(ytrain.numpy(), ypred.detach().numpy())
print("Train R2 = {:.3f}".format(r2))
xvalid = xvalid.to(device)
ypred = mm(xvalid)
r2 = r2_score(yvalid.numpy(), ypred.detach().numpy())
print("Valid R2 = {:.3f}".format(r2))
xtest = xtest.to(device)
ypred = mm(xtest)
r2 = r2_score(ytest.numpy(), ypred.detach().numpy())
print("Test R2 = {:.3f}".format(r2))
# + code_folding=[0]
# Accuracy
mm = MyNet(n_features, n_layers)
mm.load_state_dict(torch.load('./outputs_b/model_forward_valid_min.pth'))
mm.eval()
print("Early Stopping:")
xtrain = xtrain.to(device)
ypred = mm(xtrain)
r2 = r2_score(ytrain.detach().numpy(), ypred.detach().numpy())
print("Train R2 = {:.3f}".format(r2))
xvalid = xvalid.to(device)
ypred = mm(xvalid)
r2 = r2_score(yvalid.detach().numpy(), ypred.detach().numpy())
print("Valid R2 = {:.3f}".format(r2))
xtest = xtest.to(device)
ypred = mm(xtest)
r2 = r2_score(ytest.detach().numpy(), ypred.detach().numpy())
print("Test R2 = {:.3f}".format(r2))
# + code_folding=[0]
# Error
mm = MyNet(n_features, n_layers)
mm.load_state_dict(torch.load('outputs_b/model_lockout_valid_min.pth'))
mm.eval()
print("Lockout:")
xvalid = xvalid.to(device)
ypred = mm(xvalid)
r2 = r2_score(yvalid.numpy(), ypred.detach().numpy())
r2 = np.sqrt(1.0 - r2)
print("Valid Error = {:.3f}".format(r2))
xtest = xtest.to(device)
ypred = mm(xtest)
r2 = r2_score(ytest.numpy(), ypred.detach().numpy())
r2 = np.sqrt(1.0 - r2)
print("Test Error = {:.3f}".format(r2))
# + code_folding=[0]
# Error
mm = MyNet(n_features, n_layers)
mm.load_state_dict(torch.load('./outputs_b/model_forward_valid_min.pth'))
mm.eval()
print("Early Stopping:")
xvalid = xvalid.to(device)
ypred = mm(xvalid)
r2 = r2_score(yvalid.detach().numpy(), ypred.detach().numpy())
r2 = np.sqrt(1.0 - r2)
print("Valid Error = {:.3f}".format(r2))
xtest = xtest.to(device)
ypred = mm(xtest)
r2 = r2_score(ytest.detach().numpy(), ypred.detach().numpy())
r2 = np.sqrt(1.0 - r2)
print("Test Error = {:.3f}".format(r2))
| Synthetic_Data1/lockout01_b.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.1
# language: julia
# name: julia-1.3
# ---
# # Chapter 2 Random Variables Part V
#
# #### *<NAME>*
#
# Feb 15, 2020 *Rev 1*
# ## Mathematical Expectations
# A probability function provides full information about a random variable by telling us what values it can take and the probability (or probability density) attached to each, but when a random variable has not yet been realized, what should we *expect* it to be?
#
# If we are to use one number to describe multiple numbers, we may choose their arithmetic mean; for a random variable, the analogous summary is the expectation, or mean.
# ### Definition 11 Expectation
# The expectation of a random variable $X$ is denoted as $E(X)$ or $\mu_{X}$.
#
# For a discrete random variable $X$ with PMF $f_{X}(x)$, the expectation of $X$ is
# $$E(X)=\sum_{x \in \Omega_{X}}xf_{X}(x)$$
#
# For a continuous random variable $X$ with PDF $f_{X}(x)$, the expectation of $X$ is
# $$E(X)=\int_{-\infty}^{+\infty}xf_{X}(x) \: dx $$
#
# Note that the expectation may fail to exist when the summation (or, in the continuous case, the integral) does not converge.
#
# The expectation of a random variable measures the center point of all possible values weighted by their associated probabilities, or the value we can expect from the random variable.
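# As a quick numeric check of the discrete definition (a Python sketch, although this notebook itself uses Julia), the expectation of a fair six-sided die works out to 3.5:

```python
# Expectation of a discrete random variable: a fair six-sided die.
# E(X) = sum over x of x * f_X(x), with f_X(x) = 1/6 for x = 1..6.
pmf = {x: 1/6 for x in range(1, 7)}
expectation = sum(x * p for x, p in pmf.items())
print(expectation)  # 3.5, up to floating-point rounding
```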
# ### Intuition of Expectation
# The expectation of a random variable tells us where its balance point lies. To better understand this statement, picture a see-saw with graduations marked on it.
# * If there is a 20 *kg* weight at -2 and a 20 *kg* weight at 2, then with the fulcrum at 0 the see-saw is balanced.
# * If there is a 20 *kg* weight at -2 and a 30 *kg* weight at 1, where should the fulcrum be to keep the see-saw balanced? The net moment about 0 is $-2 \times 20 + 1 \times 30 = -10$, so the fulcrum must sit at $-10/(20+30) = -0.2$.
# * The same idea applies to multiple weights at multiple positions.
#
# Since a random variable takes values on the real line with probabilities assigned to them, we can treat the values as positions and the associated probabilities as weights. Checking the definition of expectation, we find it is exactly the balance point described here.
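# The balance-point analogy can be checked directly. A small Python sketch: the fulcrum position is simply the weight-weighted mean of the positions.

```python
def balance_point(positions, weights):
    # Fulcrum position = weighted mean of positions
    # (weights play the role of unnormalized probabilities).
    return sum(p * w for p, w in zip(positions, weights)) / sum(weights)

print(balance_point([-2, 2], [20, 20]))  # 0.0
print(balance_point([-2, 1], [20, 30]))  # -0.2
```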
# ### Properties of Expectation
# 1. *(Linearity of Expectation)* Expectation is a linear transformation. If $f_1(\cdot)$ and $f_2(\cdot)$ are two measurable mappings of $X$, and two constants $a, b \in (-\infty,+\infty)$, then the expectation of $af_1(X)+bf_2(X)$ is
# $$E[af_1(X)+bf_2(X)]=aE(f_1(X))+bE(f_2(X))$$
# In particular, $E[f(X)+a]=E[f(X)]+a$, which means that if every weight on the see-saw is moved in one direction by a certain distance, the balance point moves in the same direction by the same distance.
# 2. If there are two random variables $X$ and $Y$, independent or not, the expected value of $X \pm Y$ satisfies
# $$E(X \pm Y)=E(X) \pm E(Y)$$
# It indicates that if we combine two see-saws over the real line, adding (removing) the weights of one see-saw to (from) the other at the corresponding graduations, the balance point of the new see-saw is the sum (difference) of the balance points of the original see-saws.
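# Linearity can be verified numerically on a small PMF. A Python sketch with an arbitrary made-up discrete distribution:

```python
pmf = {-1: 0.2, 0: 0.3, 2: 0.5}  # an arbitrary discrete pmf (probabilities sum to 1)

def E(g, pmf):
    """Expectation of g(X) for a discrete pmf."""
    return sum(g(x) * p for x, p in pmf.items())

a, b = 3.0, -2.0
lhs = E(lambda x: a * x**2 + b * x, pmf)                    # E[a f1(X) + b f2(X)]
rhs = a * E(lambda x: x**2, pmf) + b * E(lambda x: x, pmf)  # a E[f1(X)] + b E[f2(X)]
print(lhs, rhs)  # both equal 5.0 here, up to rounding
```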
# ## Variance
# Using the mean to describe a bunch of numbers is straightforward, but it can be insufficient. For example, $[0,0,0,0]$ has the same mean as $[1,2,3,-6]$; the two bunches balance at the same point on the real line, but the former is four identical numbers while the latter is far more spread out. The same holds for a random variable.
#
# **Variance** of a random variable $X$ measures how spread out the realizations of $X$ are around its mean $E(X)$.
# ### Definition 12 Variance
# The variance of a random variable $X$, denoted as $\sigma_{X}^2$ is defined as
# $$\sigma_{X}^2=E(X-\mu_{X})^2$$
#
# For a discrete random variable $X$ with PMF $f_{X}(x)$,
# $$\sigma_{X}^2=E(X-\mu_{X})^2=\sum_{x \in \Omega_{X}}(x-\mu_{X})^2 f_{X}(x)$$
#
# For a continuous random variable $X$ with PDF $f_{X}(x)$,
# $$\sigma_{X}^2=E(X-\mu_{X})^2=\int_{-\infty}^{+\infty}(x-\mu_{X})^2f_{X}(x)\:dx$$
#
# where $\Omega_{X}$ denotes the support of $X$.
#
# The term $X-\mu_{X}$ is called the **deviation from the mean**, it evaluates the distance between $X$'s realization and its mean. We can see that the variance is the **expectation of the squared deviation from the mean**.
#
# ---
# Note that the variance is the expectation of a square, so it is not on the same scale as the mean; taking the square root of $\sigma_{X}^2$ yields
# $$\sigma_{X}=\sqrt{\sigma_{X}^2}$$
# which is called the **standard deviation** of $X$. It measures, on average, how far the realizations of the random variable lie from its mean.
#
# The variance is defined as the expectation of the squared distance between the random variable and its mean, so it is always non-negative: zero variance indicates that the realizations of a random variable are perfectly concentrated, and higher variance means its realizations fall further from its mean.
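# Continuing the die example in a Python sketch, the variance is the probability-weighted average of the squared deviations from the mean:

```python
support = range(1, 7)
p = 1/6  # fair six-sided die

mu = sum(x * p for x in support)             # mean: 3.5
var = sum((x - mu)**2 * p for x in support)  # variance: 35/12, about 2.92
std = var ** 0.5                             # standard deviation, about 1.71
print(mu, var, std)
```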
using Plots
using Distributions
using StatsPlots
plt = plot(0)
plt1 = plot!(Normal(0,1),
legend=true, label="E=0; V=1",
ylim=(0,0.4),
fill=0, α=0.6,
color=:lightblue)
plt2 = plot!(Normal(0,2),
legend=true, label="E=0; V=2",
ylim=(0,0.4),
fill=0, α=0.6,
color=:orange)
plt3 = plot!(Normal(3,2),
legend=true, label="E=3; V=2",
ylim=(0,0.4),
fill=0, α=0.6,
color=:green)
# The graph above draws three normal distributions.
# * The blue one and the orange one have the same mean, but the orange one has higher variance, making it more spread out and hence shorter and fatter.
# * The orange one and the green one have the same variance, but the green one has a higher mean, so they have the same shape, with the green one shifted towards the right because of its higher mean.
# The mean measures the center point of the random variable, whereas the variance measures how widely its values spread around that center.
#
# The mean is a location parameter for the distribution of $X$ because it indicates the balance point that all possible outcomes scatter around, and the variance is a scale parameter for the distribution of $X$ because it indicates the "shape" of the distribution, whether tall *(concentrated)* or short *(spread out)*.
# ## Moments
# To build intuition for the mean and variance of a random variable, we said the mean describes the balance point of a bunch of numbers and the variance describes how spread out they are. Do these fully characterize a bunch of numbers? Take another example: $[1,2,3,-6]$ and $[-1,-2,-3,6]$ have the same mean and variance, but three of the former four numbers are positive while three of the latter are negative.
#
# We can construct a new metric, say, $d$, such that
# $$d=E(X^3)$$
# When more weight is placed on the negative part of the see-saw than on the positive part, $d$ is negative, and vice versa.
#
# Or another metric, $c$, such that
# $$c=E[(X-\mu_{X})^3]$$
# When more weight is placed on values smaller than the mean than on values bigger than the mean, $c$ is negative, and vice versa.
#
# With these metrics we split the real line into two parts, at $0$ or at the mean depending on our inclination, and compare the probability mass associated with each part. The definition of $d$ is the expectation of $X$ to the third power, and the definition of $c$ is the expectation of the *deviation from the mean to the third power*.
#
# By constructing expectations of higher powers of the deviation from the mean, we can in fact capture new aspects of the distribution of a random variable. These are what we call **moments**.
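# The sign behaviour of $c$ can be seen on the two example lists. A Python sketch treating the four numbers as equally likely outcomes:

```python
def central_moment(values, k):
    # k-th central moment of an equally-likely empirical distribution.
    mu = sum(values) / len(values)
    return sum((v - mu) ** k for v in values) / len(values)

xs = [1, 2, 3, -6]
ys = [-1, -2, -3, 6]
print(central_moment(xs, 2), central_moment(ys, 2))  # same variance: 12.5 and 12.5
print(central_moment(xs, 3), central_moment(ys, 3))  # opposite signs: -45.0 and 45.0
```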
# ### Definition 13 *k*-th Moment and *k*-th central Moment
# Depending on whether we choose $0$ or the mean as the benchmark central position, and on the power to which the expected deviation is raised, there are two kinds of moments: the *k*-th moment and the *k*-th central moment.
#
# The *k*-th moment of random variable $X$ with PMF or PDF $f_{X}(x)$ is
#
# $$
# E\left(X^{k}\right)=\left\{\begin{array}{ll}
# {\sum_{x \in \Omega_{X}} x^{k} f_{X}(x),} & {\text { if } X \text { is a DRV }} \\
# {\int_{-\infty}^{\infty} x^{k} f_{X}(x) d x,} & {\text { if } X \text { is a CRV }}
# \end{array}\right.
# $$
# where $\Omega_{X}$ is the support of $X$.
#
# The *k*-th central moment of random variable $X$ with PMF or PDF $f_{X}(x)$ is
#
# $$
# E\left(X-\mu_{X}\right)^{k}=\left\{\begin{array}{ll}
# {\sum_{x \in \Omega_{X}}\left(x-\mu_{X}\right)^{k} f_{X}(x),} & {\text { if } X \text { is a DRV, }} \\
# {\int_{-\infty}^{\infty}\left(x-\mu_{X}\right)^{k} f_{X}(x) d x,} & {\text { if } X \text { is a CRV. }}
# \end{array}\right.
# $$
#
# In fact, $E(X)$ is the $1^{st}$ order moment of $X$, and $\sigma_{X}^2$ is the $2^{nd}$ order central moment of $X$. The metrics $d$ and $c$ we defined before are $3^{rd}$ order moment and $3^{rd}$ central moment of $X$ respectively.
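# The closing remark can be checked in code. A Python sketch with a Bernoulli(0.3) variable: the 1st moment reproduces the mean and the 2nd central moment reproduces the variance.

```python
pmf = {0: 0.7, 1: 0.3}  # Bernoulli(0.3)

def moment(pmf, k):
    # k-th moment: E(X^k) for a discrete pmf.
    return sum(x**k * p for x, p in pmf.items())

def central_moment(pmf, k):
    # k-th central moment: E[(X - mu)^k].
    mu = moment(pmf, 1)
    return sum((x - mu)**k * p for x, p in pmf.items())

print(moment(pmf, 1))          # mean: p = 0.3
print(central_moment(pmf, 2))  # variance: p(1-p) = 0.21, up to rounding
```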
| Chapter 2 Random Variables/Chapter 2 Random Variables Part V.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
target_image = 'sample_pics/G1.png'
src_dir_path = 'sample_pics'
# ---
# +
import sys
import argparse
import cv2
import yaml
import numpy as np
import os
import os.path as pth
from FaceBoxes import FaceBoxes
from TDDFA import TDDFA
from utils.functions import get_suffix
from utils.pose import calc_pose
# -
config = 'configs/mb1_120x120.yml'
onnx = True
mode = 'cpu'
# +
cfg = yaml.load(open(config), Loader=yaml.SafeLoader)
if onnx:
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
os.environ['OMP_NUM_THREADS'] = '4'
from FaceBoxes.FaceBoxes_ONNX import FaceBoxes_ONNX
from TDDFA_ONNX import TDDFA_ONNX
face_boxes = FaceBoxes_ONNX()
tddfa = TDDFA_ONNX(**cfg)
else:
gpu_mode = mode == 'gpu'
tddfa = TDDFA(gpu_mode=gpu_mode, **cfg)
face_boxes = FaceBoxes()
# +
def get_pose_ypr(img_fp, tddfa):
img = cv2.imread(img_fp)
boxes = face_boxes(img)
n = len(boxes)
if n == 0:
print(f'No face detected, exit')
sys.exit(-1)
area_list = [(x2-x1)*(y2-y1) for x1, y1, x2, y2, _ in boxes]
largest_area_idx = np.argmax(area_list)
param_lst, roi_box_lst = tddfa(img, [boxes[largest_area_idx]])
P, pose = calc_pose(param_lst[0])
return pose # yaw, pitch, roll
def extract_pose(target_image):
return get_pose_ypr(target_image, tddfa)
# +
target_pose_array = np.array(extract_pose(target_image))
src_path_array = np.array([
pth.join(src_dir_path, each_file) for each_file in os.listdir(src_dir_path)
if each_file.lower().endswith('.png') or each_file.lower().endswith('.jpg')
])
src_poses_array = np.array([extract_pose(src_path) for src_path in src_path_array])
mse_dist_array = ((src_poses_array-target_pose_array)**2).mean(axis=1)
# -
top10_close_filename_list = src_path_array[np.argsort(mse_dist_array)][:10]
top10_close_filename_list
| extract_top_10_close_pose_pic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Plotting
#
# We will plot with 3 datasets this week. Let's load them.
# +
import datetime
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pandas_datareader as pdr # IF NECESSARY, from terminal: pip install pandas_datareader
import seaborn as sns
from numpy.random import default_rng
# these three are used to open the CCM dataset:
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen
pd.set_option("display.max_rows", 10) # display option for pandas
# more here: https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html
# -
# ## Load macro_data
# +
# LOAD DATA AND CONVERT TO ANNUAL
start = 1990 # pandas datareader can infer these are years
end = 2018
macro_data = pdr.data.DataReader(['CAUR','MIUR','PAUR', # unemployment
'LXXRSA','DEXRSA','WDXRSA', # case shiller index in LA, Detroit, DC (no PA available!)
'MEHOINUSCAA672N','MEHOINUSMIA672N','MEHOINUSPAA672N'], #
'fred', start, end)
macro_data = macro_data.resample('Y').first() # gets the first observation for each variable in a given year
# CLEAN UP THE FORMATING SOMEWHAT
macro_data.index = macro_data.index.year
macro_data.columns=pd.MultiIndex.from_tuples([
('Unemployment','CA'),('Unemployment','MI'),('Unemployment','PA'),
('HouseIdx','CA'),('HouseIdx','MI'),('HouseIdx','PA'),
('MedIncome','CA'),('MedIncome','MI'),('MedIncome','PA')
])
# +
year_state_tall = macro_data.stack().reset_index().rename(columns={'level_1':'state'}).sort_values(['state','DATE'])
year_state_wide = macro_data
# one level names
year_state_wide.columns=[
'Unemployment_CA','Unemployment_MI','Unemployment_PA',
'HouseIdx_CA','HouseIdx_MI','HouseIdx_PA',
'MedIncome_CA','MedIncome_MI','MedIncome_PA'
]
# -
# ## And load CCM data
#
# First, load the data
# +
url = 'https://github.com/LeDataSciFi/ledatascifi-2022/blob/main/data/CCM_cleaned_for_class.zip?raw=true'
#firms = pd.read_stata(url)
# <-- that code would work, but GH said it was too big and
# forced me to zip it, so here is the work around to download it:
with urlopen(url) as request:
data = BytesIO(request.read())
with ZipFile(data) as archive:
with archive.open(archive.namelist()[0]) as stata:
ccm = pd.read_stata(stata)
# + [markdown] tags=[]
# ## Sidebar: Here's a fun EDA hack:
#
# https://github.com/pandas-profiling/pandas-profiling#examples
#
# Notes
# - Slow with huge datasets
# - Doesn't work with MultiIndex column names (must be "one level")
# +
# install new package (run this one time only)
# # !pip install pandas-profiling[notebook]
# +
from pandas_profiling import ProfileReport
# create the report:
# profile = ProfileReport(macro_data, title="Pandas Profiling Report")
# profile
### THIS WON'T RUN ON macro_data yet:
### NEED TO ADJUST THIS DATASET TO RUN A PROFILE
### COLUMN NAMES NEED TO BE SINGLE LEVEL STRINGS (NOT MULTIINDEX COL NAMES)
### OR CONVERT TO TALL SHAPE
# -
# From the `year_state` data (wide or tall):
#
# - Q0. How has median income has evolved over time for PA?
# - 920am: Wasti and Lana
# - 1045am: Jake and Cole
# - Q1. How have unemployment changes evolved over time for PA?
# - Q2. How have unemployment changes evolved over time for all states (view as one var)?
# - Q3. How have unemployment changes evolved over time for all states (separately)
# - Q4. How does unemployment changes vary with median income growth?
#
# From the `ccm` data:
#
# - Q5. Plot the distribution of R&D (`xrd_a`). Bonuses:
# - deal with outliers
# - add a title
# - change the x and y axis titles
# - Q6: Compare R&D and CAPX. Bonuses:
# - don't plot outliers
# - avoid oversaturated plot
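# For the "unemployment changes" questions, the change variable itself can be built with a grouped diff. A minimal sketch on a made-up stand-in frame (the real `year_state_tall` comes from the FRED download above; the numbers here are illustrative only):

```python
import pandas as pd

# Tiny stand-in for `year_state_tall`; the column names mirror the real frame.
df = pd.DataFrame({
    'DATE':  [1990, 1991, 1992, 1990, 1991, 1992],
    'state': ['PA', 'PA', 'PA', 'CA', 'CA', 'CA'],
    'Unemployment': [5.0, 6.0, 7.5, 5.5, 7.0, 8.4],
})

# Year-over-year unemployment change within each state.
df = df.sort_values(['state', 'DATE'])
df['unemp_change'] = df.groupby('state')['Unemployment'].diff()

# A tall frame like this plugs straight into seaborn, e.g.:
# sns.lineplot(data=df, x='DATE', y='unemp_change', hue='state')
print(df)
```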
# +
# ccm['xrd_a'].describe()
sns.displot(data=ccm.query('xrd_a < .5 & xrd_a > 0')['xrd_a'], # reduced the data to make plot more visible
kde=False)
# plt.title('R&D Ratio Vs. Frequency', fontsize=18)
# plt.xlabel('R&D Ratio', fontsize =14)
# plt.ylabel('Frequency', fontsize = 14)
| handouts/Plotting exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tjido/woodgreen/blob/master/Woodgreen_Week_6_Capstone_Project_Data_Analysis_%26_Visualization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="6q335toNRrPF"
# <h1>Fireside Analytics Tutorial - Woodgreen Summer Camp_Week 6</h1> <h2>Capstone Project</h2>
#
#
# <h4>Data science is the process of ethically acquiring, engineering, analyzing, visualizing and ultimately, creating value with data.
#
# <p>In this tutorial, participants will gain experience analyzing a data set in a Python cloud environment using a Jupyter notebook in Google Colab.</p> </h4>
# <p>For more information about this tutorial or other tutorials by Fireside Analytics, contact: <EMAIL></p>
#
# <h3><strong>Capstone Project Instructions</h3>
# <div class="alert alert-block alert-info" style="margin-top: 20px">
# <ol>
# <li>Run this Python code and interpret the results</li>
# <Li>What do you learn from the data?</li>
# <li>What courses should Fireside Analytics offer next summer?</li>
# </ol>
# </div>
# <br>
# <hr>
# + [markdown] id="FlwuUW6L6fV3"
# **Recap: This page you are reading is not regular website, it is an interactive computer programming environment called a Colab notebook that lets you write and execute code in Python.**
# + [markdown] id="EC4QnPjS41xc"
# # 1. How does a computer work?
# + [markdown] id="rgDnpC-zc0kD"
# ## A computer is a machine composed of hardware and software components. A computer receives data through an input device based on the instructions it is given and, after processing the data, sends the results back through an output device. How does this come together to make the computer work?
#
# ## Once the data has been received by the computer through any of the input devices, the central processing unit (CPU), with the help of other components, takes over and processes the data into meaningful information, which is then sent back through an output device such as a monitor, speaker or printer.
#
# ## To better imagine how a computer works, knowing what’s inside will make it easier. Here are the main components of a computer:
#
# INPUT DEVICES
# 1. Keyboard
# 2. Mouse
# 3. Touch screen
# 4. Scanner
#
# PROCESSES
# 1. <b> CPU - Central Processing Unit </b> - The most important component in a computer. It handles most of the operations that make the computer function, processing instructions and sending signals out to other components.
#
# 2. <b> RAM – Random Access Memory </b> is a computer component where the operating system and software applications store data so that the CPU can access it quickly. Everything stored in RAM is lost if the computer is shut off.
#
# 3. <b> HDD – Hard Disk Drive </b> is the component where photos, apps, and documents are kept. Although HDDs are still in use, we now have much faster types of storage devices, such as <b>solid state drives (SSD)</b>, that are also more reliable.
#
# 4. <b> Motherboard </b>– There is no acronym for this component, but without it there can't be a computer. The motherboard acts as the home for all other components, allows them to communicate with each other, and gives them power in order to function.
#
# OUTPUT DEVICES
# 1. Monitor/Screen
# 2. Printer
# 3. Speaker
#
# + [markdown] id="fX1ZV9H9RrPH"
# # 2. What is "data"?
# + [markdown] id="G0Mqvml-i8br"
# ## Data is information processed or stored by a computer. This information may be in the form of text documents, images, audio clips, software programs, or other types of data. Computer data may be processed by the computer's CPU and is stored in files and folders on the computer's hard disk.
# ---
# Computers use binary - the digits 0 and 1 - to store data. A binary digit, or bit, is the smallest unit of data in computing. It is represented by a 0 or a 1. Binary numbers are made up of binary digits (bits), e.g. the binary number 1001.
#
# Computers convert text and other data into binary by using an assigned ASCII value. Once the ASCII value is known, that value can be converted into binary.
#
# Combinations of ones and zeros in a computer represent whole words, numbers, symbols, and even pictures in the real world.
#
#
# Information stored in ones and zeros, in bits and bytes, is data!
#
# * The letter a = 0110 0001
# * The letter b = 0110 0010
# * The letter A = 0100 0001
# * The letter B = 0100 0010
# * The symbol @ = 0100 0000
#
#
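# As a quick check (not part of the original survey materials), Python's built-in `ord()` and `format()` functions can show the binary code for any character, sketching the ASCII-to-binary conversion described above:

```python
# ord() gives a character's ASCII/Unicode code point;
# format(..., "08b") writes that number as 8 binary digits
for ch in ["a", "b", "A", "B"]:
    print(ch, "=", format(ord(ch), "08b"))
```

Running this reproduces the list above, e.g. `a = 01100001`.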
# + [markdown] id="BV854zZSq7uD"
# # 3. Analyzing Survey Data
# + [markdown] id="wuwMUpcaRrPI"
# ### Recap: Data science is the process of ethically acquiring, engineering, analyzing, visualizing and ultimately, creating value with data.
#
# Data is useful if it can tell us something, if we can learn something from the charts and visualizations. In the chart below we learn that more people knew how a computer works than did not.
# We can also say, 60% of the people who responded to the survey said they knew how a computer works.
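# The 60% figure comes from simple arithmetic on the response counts — a small sketch (the counts here mirror the survey responses charted below):

```python
# 3 "yes" responses and 2 "no" responses, as in the bar chart below
responses = {"yes": 3, "no": 2}
pct_yes = 100 * responses["yes"] / sum(responses.values())
print(pct_yes)  # 3 out of 5 responses, i.e. 60.0 percent
```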
# + [markdown] id="OldE5zdcRrP6"
# ## Recap: Remember this? Survey responses to question 2 ('yes' or 'no')
# + id="9_4z2CeGRrRm"
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
data = [3,2]
labels = ['yes', 'no']
plt.xticks(range(len(data)), labels)
plt.xlabel('Responses')
plt.ylabel('Number of People')
plt.title('Woodgreen Data Science Program: Survey Results for Question 2: "Do you know how a computer works?"')
plt.bar(range(len(data)), data)
plt.show()
# + [markdown] id="7x52iv9rJmjW"
# <H3> <b>
# Let's analyse the responses recorded in a Google Survey Form </b></H3>
# + id="2DKDc5rfwZrt"
# Now let's import a survey response csv file and analyze the data
import pandas as pd
# + id="mJI5oJYGfZ6f"
url = 'https://raw.githubusercontent.com/tjido/woodgreen/master/Survey%20Data%20(Responses)%20-%20Form%20Responses%201.csv'
dataset = pd.read_csv(url)
dataset
# + id="ac_qeE0OxsNE"
# Let's have a look at the top 5 records in our dataset
# By default head and tail methods display 5 records in the dataset
dataset.head()
# + id="Wucn2j6TxZd0"
dataset.tail()
# + id="rJQDkFVadOzI"
# To have a look at the last 10 records in our dataset
dataset.tail(10)
# + id="SgLldo6udO2X"
# To find out how many responses are recorded in our survey form, we need to find the number of records in the dataset
dataset.shape
# + id="qnAu56RzdO59"
# To find out the Names of fields
dataset.columns
# + [markdown] id="7LgDTrShKoO_"
# <h4>Let's have a look into the statistical information of the dataset </h4>
# + id="jn_wksy0dO91"
dataset.describe()
# + [markdown] id="oZlZ7nl3LHvi"
# The statistical information is available only for the numeric data.
# count - The total number of responses recorded in that particular field. In the survey form, a few fields are mandatory, so the count of those fields will be equal to the total number of records. For example, "How old are you?" is a mandatory field in our survey form, so the count of that field is equal to the total number of responses.
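# To illustrate the point about counts and optional fields, here is a toy frame (not the survey data) where an optional field has a missing value:

```python
import pandas as pd
import numpy as np

# "Age" is optional in this toy frame, so one respondent left it blank
toy = pd.DataFrame({"Name": ["Ann", "Bo", "Cy"], "Age": [25.0, 41.0, np.nan]})
print(len(toy))            # 3 rows in total
print(toy["Age"].count())  # count() skips the missing value, so 2
```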
# + id="VSXnX99JdPBk"
# Change the column names to make our analysis easier
dataset=dataset.rename(columns={'(1) What is your first name?':'First_Name', '(2) How old are you?':'Age',
'(3) Do you know how a computer works?':'How_Computer_Works',
'(4) Do you know what data science is?':'What_is_DataScience',
'(5) What do you mainly use the internet for?':'Internet_Purpose',
'(6) What technology job do you think is exciting?':'Interesting_Technology',
'(7) How good are your data analysis skills?':'DataAnalysis_Skills',
'(8) Any additional comments regarding data analysis or data science?':'Comments'})
# + id="yoKilgZCdPFo"
dataset.head(2)
# + id="cdqk-1c_OcZq"
# From this we can find out which columns contain missing values
dataset.count()
# + [markdown] id="xE77qJfeN0Wu"
# Checking the datatypes for each column
# + id="4bt8G21rdPJP"
dataset.dtypes
# + id="HHdvh7NkdPMZ"
# The first column, Timestamp, does not contribute any information to the dataset, so let's drop it
dataset=dataset.drop(['Timestamp'],axis=1)
# + id="gPCAwsIgPQoL"
dataset.head(3)
# + id="OFgUJawbPlKk"
# Let's find out the ages of the people who participated in our survey using a bar chart
a = dataset['Age'].value_counts()
b=a.plot(kind='bar', figsize=(25,8), title="Which age group is interested in participating in our survey?", color='green')
b.set_xlabel("Age_Group", fontsize=20)
b.set_ylabel("Number of people", fontsize=20)
# + [markdown] id="DiIfXlgrUmQ1"
# This graph is tough to analyse, as the value counts for every distinct entry in the age field are displayed. It would be much easier to analyse if the ages were grouped into "20-30", "30-40", "40-50", and so on.
#
# So let's create bins for these age groups and then plot the graph!
# + id="iK11gZp4PlNX"
dataset['Age_Group'] = pd.cut(dataset['Age'],bins=[20,30,40,50,60,70], labels=["20-30","30-40","40-50","50-60","60-70"])
print(dataset)
# + [markdown] id="gai_mS05sYm6"
# Now we have a new column, "Age_Group"
# + id="4U3iozGCPlQQ"
a = dataset['Age_Group'].value_counts()
b=a.plot(kind='bar', figsize=(20,8), title="Which Age group participated in our Survey", color='Red', fontsize=15)
b.set_xlabel("Age_Group", fontsize=15)
b.set_ylabel("Number of people", fontsize=15)
# + id="oowXrZDHPlTL"
# How many people know how a computer works
A=dataset['How_Computer_Works'].value_counts() # Value_counts provides the frequency of each entry in a field or column
A.plot(kind='bar', figsize=(10,4), title="How many people know how a computer works?", color='Cyan', fontsize=15)
# + id="C-MN6HupPlWE"
# Create a Pie-chart to see how many people know about Data Science
h = dataset['What_is_DataScience'].value_counts()
j=h.plot(kind='pie', figsize=(15,8), title="How many people out of the total responses know 'What is Data science?'")
j.set_ylabel("", fontsize=20)
# + [markdown] id="DYZi2EM7btZa"
# <h4> Find out which technology is the most interesting according to the responses </h4>
# + id="cMDQjTT9Plee"
dataset['Interesting_Technology']
# + id="_9qW8JMVPlY2"
# Creating a wordcloud to visualize the interesting technology
from wordcloud import WordCloud
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
wordcloud = WordCloud(background_color="white").generate(' '.join(dataset['Interesting_Technology']))
plt.figure( figsize=(10,10) )
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# + id="hW_uMV8ckj8r"
# The field "Internet Purpose" is not a mandatory field in our survey, so it has missing values.
# WordCloud doesn't work with columns containing missing values, so let's remove the missing values and then create the word cloud
x=dataset['Internet_Purpose'].dropna()
# + id="TKPLUGngPlb2"
# Creating wordcloud to see the purpose of internet
# The default background color of word cloud is black
wordcloud = WordCloud().generate(' '.join(x))
plt.figure( figsize=(10,10) )
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# + [markdown] id="t6RyBvlbmMkF"
# <H1><B> Something interesting to think about: do older people know more about data science than younger people?</B></H1>
# + id="KUX9xRajPlhZ"
crosstab1=pd.crosstab(dataset['Age_Group'], dataset['What_is_DataScience'])
crosstab1
# + id="JTkR4YR9PlkI"
sorted_ct = pd.DataFrame(crosstab1.sort_values(by = ['Yes','No'],ascending = [False,False] ))
sorted_ct.plot(kind="bar", figsize=(15,8), title="Data Science influence on Age_Group", fontsize=15)
# + [markdown] id="q8iTpX7_AaJl"
# # Analyze the data and make changes to the charts to learn more.
#
# + [markdown] id="PHZO8QQjGrVz"
# ## Practice
# * What can you say to Fireside Analytics about the people that took the survey?
# * Should Fireside Analytics make data science courses for younger people?
# * Make sure the labels and headings of your charts make sense
# * Submit your write up with pictures from your charts
# + [markdown] id="ozJX_TD9D7mU"
# # Conclusion
#
# 1. Data science is the process of ethically acquiring, engineering, analyzing, visualizing and, ultimately, creating value with data.
# 2. Computer programming is a set of instructions we give a computer and computers must process the instructions in 'binary', i.e., in ones and zeros.
# 3. Anything 'digital' is data.
# 4. Data can be used to answer questions, solve problems and create value in the world.
# 5. Data science and python skills are useful, no matter what careers learners choose to pursue.
#
#
#
#
#
# + [markdown] id="sipRcJ84f_bZ"
# # Contact Information
# + [markdown] id="w3tU4uFDbQ3y"
# Congratulations, you have completed a tutorial in the Python Programming language!
#
#
#
# Fireside Analytics Inc. |
# Instructor: <NAME> (Twitter: @tjido) |
# Woodgreen Community Services Summer Camp 2020 |
# Contact: <EMAIL> or [www.firesideanalytics.com](https://www.firesideanalytics.com)
#
# Never stop learning!
#
#
| Woodgreen_Week_6_Capstone_Project_Data_Analysis_&_Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <style>
# @font-face {
# font-family: CharisSILW;
# src: url(files/CharisSIL-R.woff);
# }
# @font-face {
# font-family: CharisSILW;
# font-style: italic;
# src: url(files/CharisSIL-I.woff);
# }
# @font-face {
# font-family: CharisSILW;
# font-weight: bold;
# src: url(files/CharisSIL-B.woff);
# }
# @font-face {
# font-family: CharisSILW;
# font-weight: bold;
# font-style: italic;
# src: url(files/CharisSIL-BI.woff);
# }
#
# div.cell, div.text_cell_render{
# max-width:1000px;
# }
#
# h1 {
# text-align:center;
# font-family: Charis SIL, CharisSILW, serif;
# }
#
# .rendered_html {
# font-size: 130%;
# line-height: 1.3;
# }
#
# .rendered_html li {
# line-height: 2;
# }
#
# .rendered_html h1{
# line-height: 1.3;
# }
#
# .rendered_html h2{
# line-height: 1.2;
# }
#
# .rendered_html h3{
# line-height: 1.0;
# }
#
# .text_cell_render {
# font-family: Charis SIL, CharisSILW, serif;
# line-height: 145%;
# }
#
# li li {
# font-size: 85%;
# }
# </style>
# # End-to-End Data Science in Python
#
# <img src="scikit-learn.png" />
# ## Introduction
#
# This is the workbook for the "End-to-End Data Analysis in Python" workshop
# at the Open Data Science Conference 2015, in beautiful San Francisco.
# This notebook contains starter code only; the goal is that we will fill in the
# gaps together as we progress through the workshop. If, however, you're doing this
# asynchronously or you get stuck, you can reference the solutions workbook.
#
# The objective is to complete the "Pump it Up: Mining the Water Table" challenge
# on [drivendata.org](https://www.drivendata.org/competitions/7/); the objective here is to predict
# African wells that are non-functional or in need of repair. Per the rules of the
# competition, you should register for an account with drivendata.org, at which point you
# can download the training set values and labels. We will be working with those datasets
# during this workshop. You should download those files to the directory in which this
# notebook lives, and name them wells_features.csv and wells_labels.csv (to be consistent
# with our nomenclature). You are also encouraged to continue developing your solution
# after this workshop, and/or to enter your solution in the competition on the drivendata
# website!
#
# ### Code requirements
# Here's the environment you'll need to work with this code base:
#
# * python 3 (2.x may work with minor changes, but no guarantees)
# * pandas
# * scikit-learn
# * numpy
#
# # First Draft of an Analysis
#
# +
import pandas as pd
import numpy as np
# DataFrame.from_csv was removed from pandas; read_csv with index_col=0 is the replacement
features_df = pd.read_csv("wells_features.csv", index_col=0)
labels_df = pd.read_csv("wells_labels.csv", index_col=0)
print( labels_df.head(20) )
# -
# One nice feature of ipython notebooks is it's easy to make small changes to code and
# then re-execute quickly, to see how things change. For example, printing the first 5 lines
# of the labels dataframe (which is the default) isn't really ideal here, since there's a label
# ("functional needs repair") which doesn't appear in the first five lines. Type 20 in the
# parentheses labels_df.head(), so it now reads labels_df.head(20), and press shift-enter to
# rerun the code. See the difference?
#
# Now take a quick look at the features, again by calling .head() (set up for you in the code box
# below, or add your own code to the code box above). You can print or as few
# rows as you like. Take a quick look at the data--approximately how many features are there?
# Are they all numeric, or will you have to do work to transform non-numeric features into
# numbers?
print( features_df.head() )
# ### Transforming string labels into integers
# The machine learning algorithms downstream are not going to handle it well if the class labels
# used for training are strings; instead, we'll want to use integers. The mapping that we'll use
# is that "non functional" will be transformed to 0, "functional needs repair" will be 1, and
# "functional" becomes 2.
#
# There are a number of ways to do this; the framework below uses applymap() in pandas.
# [Here's](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.applymap.html)
# the documentation for applymap(); in the code below, you should fill in the function body for
# label_map(y) so that if y is "functional", label_map returns 2; if y is "functional needs
# repair" then it should return 1, and "non functional" is 0.
# There's a print statement there to help you confirm that the label transformation is working
# properly.
#
# As an aside, you could also use apply() here if you like. The difference between apply()
# and applymap() is that applymap() operates on a whole dataframe while apply() operates on a series
# (or you can think of it as operating on one column of your dataframe). Since labels_df only has
# one column (aside from the index column), either one will work here.
#
def label_map(y):
if y=="functional":
return 2
elif y=="functional needs repair":
return 1
else:
return 0
labels_df = labels_df.applymap(label_map)
print( labels_df.head() )
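# To make the apply()/applymap() distinction concrete, here is a toy frame (separate from the wells data) showing that both routes give the same result on a one-column DataFrame:

```python
import pandas as pd

toy = pd.DataFrame({"status_group": ["functional", "non functional", "functional needs repair"]})
mapping = {"functional": 2, "functional needs repair": 1, "non functional": 0}

via_applymap = toy.applymap(mapping.get)            # element-wise over the whole DataFrame
via_apply = toy["status_group"].apply(mapping.get)  # element-wise over a single Series

print(via_applymap["status_group"].tolist())  # [2, 0, 1]
print(via_apply.tolist())                     # [2, 0, 1]
```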
# ### Transforming string features into integers
#
# Now that the labels are ready, we'll turn our attention to the features. Many of the features
# are categorical, where a feature can take on one of a few discrete values, which are not ordered.
# Fill in the function body of ``transform_feature( df, column )`` below so that it takes our ``features_df`` and
# the name of a column in that dataframe, and returns the same dataframe but with the indicated
# feature encoded with integers rather than strings.
#
# We've provided code to wrap your transformer function in a loop iterating through all the columns that should
# be transformed.
#
# Last, add a line of code at the bottom of the block below that removes the ``date_recorded`` column from ``features_df``. Time-series information like dates and times need special treatment, which we won't be going into today.
# +
def transform_feature( df, column_name ):
unique_values = set( df[column_name].tolist() )
transformer_dict = {}
for ii, value in enumerate(unique_values):
transformer_dict[value] = ii
def label_map(y):
return transformer_dict[y]
df[column_name] = df[column_name].apply( label_map )
return df
### list of column names indicating which columns to transform;
### this is just a start! Use some of the print( labels_df.head() )
### output upstream to help you decide which columns get the
### transformation
names_of_columns_to_transform = ["funder", "installer", "wpt_name", "basin", "subvillage",
"region", "lga", "ward", "public_meeting", "recorded_by",
"scheme_management", "scheme_name", "permit",
"extraction_type", "extraction_type_group",
"extraction_type_class",
"management", "management_group",
"payment", "payment_type",
"water_quality", "quality_group", "quantity", "quantity_group",
"source", "source_type", "source_class",
"waterpoint_type", "waterpoint_type_group"]
for column in names_of_columns_to_transform:
features_df = transform_feature( features_df, column )
print( features_df.head() )
### remove the "date_recorded" column--we're not going to make use
### of time-series data today
features_df.drop("date_recorded", axis=1, inplace=True)
print(features_df.columns.values)
# -
# Ok, a couple last steps to get everything ready for sklearn. The features and labels are taken out of their dataframes and put into a numpy.ndarray and list, respectively.
X = features_df.to_numpy()  # as_matrix() was removed from pandas; to_numpy() is the replacement
y = labels_df["status_group"].tolist()
# ### Predicting well failures with logistic regression
#
# The cheapest and easiest way to train on one portion of your dataset and test on another, and to get a measure of model quality at the same time, is to use ``sklearn.model_selection.cross_val_score()``. This splits your data into 3 equal portions, trains on two of them, and tests on the third. This process repeats 3 times. That's why 3 numbers get printed in the code block below.
#
# You don't have to add anything to the code block, it's ready to go already. However, use it for reference in the next part of the tutorial, where you will be looking at other sklearn algorithms.
#
# Heads up: it can be a little slow. This took a minute or two to evaluate on my MacBook Pro.
# +
import sklearn.linear_model
import sklearn.model_selection  # sklearn.cross_validation was removed; model_selection replaces it
clf = sklearn.linear_model.LogisticRegression()
score = sklearn.model_selection.cross_val_score( clf, X, y, cv=3 )
print( score )
# -
# ### Comparing logistic regression to tree-based methods
#
# We have a baseline logistic regression model for well failures. Let's compare to a couple of other classifiers, a decision tree classifier and a random forest classifier, to see which one seems to do the best.
#
# Code this up on your own. You can use the code in the box above as a kind of template, and just drop in the new classifiers. The sklearn documentation might also be helpful:
# * [Decision tree classifier](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html)
# * [Random forest classifier](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)
#
# We will talk about all three of these models more in the next part of the tutorial.
# +
import sklearn.tree
import sklearn.ensemble
import sklearn.model_selection
clf = sklearn.tree.DecisionTreeClassifier()
score = sklearn.model_selection.cross_val_score( clf, X, y, cv=3 )
print( score )
clf = sklearn.ensemble.RandomForestClassifier()
score = sklearn.model_selection.cross_val_score( clf, X, y, cv=3 )
# -
# Congratulations! You have a working data science setup, in which you have:
# * read in data
# * transformed features and labels to make the data amenable to machine learning
# * made a train/test split (this was done implicitly when you called ``cross_val_score``)
# * evaluated several models for identifying wells that are failed or in danger of failing
# ## Paying down technical debt and tuning the models
#
# We got things running really fast, which is great, but at the cost of being a little quick-and-dirty about some details. First, we got the features encoded as integers, but they really should be dummy variables. Second, it's worth going through the models a little more thoughtfully, to try to understand their performance and if there's any more juice we can get out of them.
#
# ### One-hot encoding to make dummy variables
# A problem with representing categorical variables as integers is that integers are ordered, while categories are not. The standard way to deal with this is to use dummy variables; one-hot encoding is a very common way of dummying. Each possible category becomes a new boolean feature. For example, if our dataframe looked like this:
# ``index country
# 1 "United States"
# 2 "Mexico"
# 3 "Mexico"
# 4 "Canada"
# 5 "United States"
# 6 "Canada"``
# then after dummying it will look something like this:
# ``index country_UnitedStates country_Mexico country_Canada
# 1 1 0 0
# 2 0 1 0
# 3 0 1 0
# 4 0 0 1
# 5 1 0 0
# 6 0 0 1``
# Hopefully the origin of the name is clear--each variable is now encoded over several boolean columns, one of which is true (hot) and the others are false.
#
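# As an aside (not part of the original workshop code), pandas ships a one-liner for exactly this, ``pd.get_dummies`` — a quick sketch on the toy countries data above. We'll still build an sklearn-based encoder next, since it fits the sklearn workflow:

```python
import pandas as pd

countries = pd.DataFrame({"country": ["United States", "Mexico", "Mexico", "Canada"]})
# one boolean column per category, exactly one "hot" entry per row
dummies = pd.get_dummies(countries["country"], prefix="country")
print(dummies.astype(int))
```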
# Now we'll write a hot-encoder function that takes the data frame and the title of a column, and returns the same data frame but one-hot encoding performed on the indicated feature.
#
# Protip: sklearn has a [one-hot encoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) function available that will be your friend here.
# +
import sklearn.preprocessing
def hot_encoder(df, column_name):
column = df[column_name].tolist()
column = np.reshape( column, (len(column), 1) ) ### needs to be an N x 1 numpy array
enc = sklearn.preprocessing.OneHotEncoder()
enc.fit( column )
new_column = enc.transform( column ).toarray()
column_titles = []
### making titles for the new columns, and appending them to dataframe
for ii in range( len(new_column[0]) ):
this_column_name = column_name+"_"+str(ii)
df[this_column_name] = new_column[:,ii]
return df
# -
# Now we'll take the ``names_of_columns_to_transform`` list that you populated above with categorical variables, and use that to loop through columns that will be one-hot encoded.
#
# One note before you code that up: one-hot encoding comes with the baggage that it makes your dataset bigger--sometimes a lot bigger. In the countries example above, one column that encoded the country has now been expanded out to three columns. You can imagine that this can sometimes get really, really big (imagine a column encoding all the counties in the United States, for example).
#
# There are some columns in this example that will really blow up the dataset, so we'll remove them before proceeding with the one-hot encoding.
# +
print(features_df.columns.values)
features_df.drop( "funder", axis=1, inplace=True )
features_df.drop( "installer", axis=1, inplace=True )
features_df.drop( "wpt_name", axis=1, inplace=True )
features_df.drop( "subvillage", axis=1, inplace=True )
features_df.drop( "ward", axis=1, inplace=True )
names_of_columns_to_transform.remove("funder")
names_of_columns_to_transform.remove("installer")
names_of_columns_to_transform.remove("wpt_name")
names_of_columns_to_transform.remove("subvillage")
names_of_columns_to_transform.remove("ward")
for feature in names_of_columns_to_transform:
features_df = hot_encoder( features_df, feature )
print( features_df.head() )
# -
# Now that the features are a little fixed up, I'd invite you to rerun the models, and see if the cross_val_score goes up as a result. It is also a great chance to take some of the theory discussion from the workshop and play around with the parameters of your models, and see if you can increase their scores that way. There's a blank code box below where you can play around.
# +
import sklearn.model_selection
X = features_df.to_numpy()
y = labels_df["status_group"].tolist()
clf = sklearn.ensemble.RandomForestClassifier()
score = sklearn.model_selection.cross_val_score( clf, X, y, cv=3 )
print(score)
# -
# ## End-to-end workflows using Pipeline and GridSearchCV
#
# So far we have made a nice workflow using a few ideas assembled in a script-like workflow. A few spots remain where we can tighten things up though:
#
# * the best model, the random forest, has a lot of parameters that we'd have to work through if we really wanted to tune it
# * after dummying, we have _lots_ of features, probably only a subset of which are really offering any discriminatory power (this is a version of the bias-variance tradeoff)
# * maybe there's a way to make the code more streamlined (hint: there is)
#
# We will solve all these with two related and lovely tools in sklearn: Pipeline and GridSearchCV.
#
# Pipeline in sklearn is a tool for chaining together multiple pieces of a workflow into a single coherent analysis. In our example, we will chain together a tool for feature selection, which will address the second point, and then feed our optimized feature set into the random forest model, all in a few lines of code (which addresses the third point).
#
# To get to the first point, about finding the best parameters--that's where the magic of GridSearchCV comes in. But first we need to get the feature selector and pipeline up and running, so let's do that now.
#
# In ``sklearn.feature_selection`` there is a useful tool, ``SelectKBest`` [(link)](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html) that you should use. By default, this will select the 10 best features; that seems like it might be too few features to do well on this problem, so change the number of features to 100.
# +
import sklearn.feature_selection
select = sklearn.feature_selection.SelectKBest(k=100)
selected_X = select.fit_transform(X, y)
print( selected_X.shape )
# -
# ### Pipeline
#
# After selecting the 100 best features, the natural next step would be to run our random forest again to see if it does a little better with fewer features. So we would have ``SelectKBest`` doing selection, with the output of that process going straight into a classifier. A [Pipeline](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) packages the transformation step of ``SelectKBest`` with the estimation step of ``RandomForestClassifier`` into a coherent workflow.
#
# Why might you want to use ``Pipeline`` instead of keeping the steps separate?
#
# * makes code more readable
# * don't have to worry about keeping track data during intermediate steps, for example between transforming and estimating
# * makes it trivial to move ordering of the pipeline pieces, or to swap pieces in and out
# * *Allows you to do GridSearchCV on your workflow*
#
# This last point is, in my opinion, the most important. We will get to it very soon, but first let's get a pipeline up and running that does ``SelectKBest`` followed by ``RandomForestClassifier``.
#
# In the code box below, I've also set up a slightly better training/testing structure, where I am explicitly splitting the data into training and testing sets which we'll use below. The training/testing split before was handled automatically in ``cross_val_score,`` but we'll be using a different evaluation metric from here forward, the classification report, which requires us to handle the train/test split ourselves.
#
# Note: when you do ``SelectKBest``, you might see a warning about a bunch of features that are constant. This isn't a problem. It's giving you a heads up that the indicated features don't show any variation, which could be a signal that something is wrong or that ``SelectKBest`` might be doing something unexpected.
# +
import sklearn.pipeline
import sklearn.model_selection
import sklearn.metrics  # needed for classification_report below
select = sklearn.feature_selection.SelectKBest(k=100)
clf = sklearn.ensemble.RandomForestClassifier()
steps = [('feature_selection', select),
         ('random_forest', clf)]
pipeline = sklearn.pipeline.Pipeline(steps)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.33, random_state=42)
### fit your pipeline on X_train and y_train
pipeline.fit( X_train, y_train )
### call pipeline.predict() on your X_test data to make a set of test predictions
y_prediction = pipeline.predict( X_test )
### test your predictions using sklearn.metrics.classification_report()
report = sklearn.metrics.classification_report( y_test, y_prediction )
### and print the report
print(report)
# -
# ### Reading the classification report
#
# A brief aside--we've switched from ``cross_val_score`` to ``classification_report`` for evaluation, mostly to show you two different ways of evaluating a model. The classification report has the advantage of giving you a lot more information, which helps if (for example) one class is more important to get right than the others (say you're trying to zero in on non-functional wells, so finding those correctly is more important than getting the functional wells right).
#
# For more information, the [sklearn docs](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) on ``classification_report`` are, like all the sklearn docs, incredibly helpful. For interpreting the various metrics, [this page](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html) may also help.
#
# ### GridSearchCV
# We're in the home stretch now. When we decided to select the 100 best features, setting that number to 100 was kind of a hand-wavey decision. Similarly, the RandomForestClassifier that we're using right now has all its parameters set to their default values, which might not be optimal.
#
# So, a straightforward thing to do now is to try different values of ``k`` and any ``RandomForestClassifier`` parameters we want to tune (for the sake of concreteness, let's play with n_estimators and min_samples_split). Trying lots of values for each of these free parameters is tedious, and there can sometimes be interactions between the choices you make in one step and the optimal value for a downstream step. In other words, to avoid local optima, you should try all the combinations of parameters, and not just vary them independently. So if you want to try 5 different values each for ``k``, ``n_estimators`` and ``min_samples_split``, that means 5 x 5 x 5 = 125 different combinations to try. Not something you want to do by hand.
#
# ``GridSearchCV`` allows you to construct a grid of all the combinations of parameters, tries each combination, and then reports back the best combination/model.
# +
import sklearn.model_selection  # sklearn.grid_search was removed; GridSearchCV now lives in model_selection
import sklearn.metrics
parameters = dict(feature_selection__k=[100, 200],
                  random_forest__n_estimators=[50],
                  random_forest__min_samples_split=[4])
cv = sklearn.model_selection.GridSearchCV(pipeline, param_grid=parameters)
print(pipeline.named_steps)
cv.fit(X_train, y_train)
y_predictions = cv.predict(X_test)
report = sklearn.metrics.classification_report( y_test, y_predictions )
### and print the report
print(report)
# -
| notebook/african_wells_solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python3.6
# ---
# # Jupyter Basics
#
# This notebook will walk you through some of the basic features and cell types in Jupyter.
#
# # Markdown Cells
# Below this cell is an example of a Markdown cell, with the title "Edit this cell".
#
# <div class="alert alert-info"><ol>
# <li>Double click the cell below to see its Markdown text.</li>
#
# <li>Once you see the Markdown, click inside it to enable the buttons to its left.</li>
# <li>Switch to the rich text editor by clicking on the File-looking icon next to the cell's <b>Play</b> button.</li>
#
# <li>Click the file icon to see this text in the rich text editor instead.Perform some edits to the text using the rich text editor.</li>
#
# <li>Finally click on this cell's play button to go close the editor.</li>
# </div>
# <h1>Edit this cell</h1>
#
# <p>Follow the instructions above to edit this cell a;<u>sdlfaisdjf</u></p>
#
# # Create a new Markdown Cell
# <div class="alert alert-info"><ol>
# <li>Go to the <b>"Insert"</b> menu for this page. Select <b>"Insert cell below"</b></li>
# <li>Change the cell to a Markdown cell. Go to the <b>"Cell"</b> menu and select <b>"Cell Type"</b> and then <b>"Markdown"</b></li>
# <li>Put some text in the new cell and <b>"Play"</b> it to see your changes</li>
# </ol></div>
# Note that the new cell is inserted next to the cell that currently has the cursor in it, not necessarily the one you are looking at.
#
# # Code Cells
#
# Code cells are used to execute code in your kernel. Since this is a Python notebook we can enter python in a code cell and have the kernel execute it.
# <div class="alert alert-info">
# Play the two code cells below
# </div>
#
# The result of playing a code cell is usually displayed right beneath the cell.
print("Hello World")
x = 1 + 1
x
# # Cells and Ordering
#
# Cells appear in a notebook in a specific order, and generally it's best to play them in that same order. If you jump around the notebook playing cells, the results can be confusing.
#
# <div class="alert alert-info">Play the following 3 cells one at a time experimenting with different orderings</div>
# When you start, x probably equals 2 from the cell above...
x = x + 2
print(x)
x = 10
# # Running cells
#
# When a cell is running, you will see a <b>[*]</b> next to it. The kernel runs only one cell at a time. If you hit the play button several times for a slow cell while it is already running, the cell will simply be queued and run (slowly) once for each click.
#
# <div class="alert alert-info">Run the cell below. Notice the <b>[*]</b> beside the cell while it is executing.</div>
# + genepattern={"output_variable": "", "param_values": {"column_name": "TCGA-A7-A0CE-11.htseq", "dataframe": " ", "gct": " "}, "show_code": false, "type": "uibuilder"}
import time
print("Sleeping for 10 seconds")
time.sleep(10)
print("Execution complete")
# -
| 2017-12-15_CCMI_workshop/notebooks/2017-12-15_04_CCMI_Jupyter_Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''azure-ml'': conda)'
# name: python3710jvsc74a57bd0997d78c58e50c7e82caf235e89e34cacc8682ee4ffa6d82aeb88f0bb088fa105
# ---
# # Work with Data
#
# Data is the foundation on which machine learning models are built. Managing data centrally in the cloud, and making it accessible to teams of data scientists who are running experiments and training models on multiple workstations and compute targets is an important part of any professional data science solution.
#
# In this notebook, you'll explore two Azure Machine Learning objects for working with data: *datastores*, and *datasets*.
# ## Connect to your workspace
#
# To get started, connect to your workspace.
#
# > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
# +
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
for compute_name in ws.compute_targets:
    compute = ws.compute_targets[compute_name]
    print("(", compute.status.state, ") ", compute.name, ":", compute.type)
# -
# ## Work with datastores
#
# In Azure ML, *datastores* are references to storage locations, such as Azure Storage blob containers. Every workspace has a default datastore - usually the Azure storage blob container that was created with the workspace. If you need to work with data that is stored in different locations, you can add custom datastores to your workspace and set any of them to be the default.
#
# ### View datastores
#
# Run the following code to determine the datastores in your workspace:
# +
# Get the default datastore
default_ds = ws.get_default_datastore()
# Enumerate all datastores, indicating which is the default
for ds_name in ws.datastores:
    print(ds_name, "- Default =", ds_name == default_ds.name)
# -
# You can also view and manage datastores in your workspace on the **Datastores** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com).
#
# ### Upload data to a datastore
#
# Now that you have determined the available datastores, you can upload files from your local file system to a datastore so that it will be accessible to experiments running in the workspace, regardless of where the experiment script is actually being run.
default_ds.upload_files(files=['../data/diabetes.csv', '../data/diabetes2.csv'], # Upload the csv files in ../data
                        relative_root='../data/', # Set the relative root path
                        target_path='diabetes-data/', # Put them in a folder path in the datastore
                        overwrite=True, # Replace existing files of the same name
                        show_progress=True)
# ## Work with datasets
#
# Azure Machine Learning provides an abstraction for data in the form of *datasets*. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be *tabular* or *file*-based.
#
# ### Create a tabular dataset
#
# Let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *tabular* dataset.
# +
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
# -
# As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common python techniques.
#
# ### Create a file Dataset
#
# The dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files.
# +
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
    print(file_path)
# -
# ### Register datasets
#
# Now that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.
#
# We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes files**.
# +
# Register the tabular dataset
try:
    tab_data_set = tab_data_set.register(workspace=ws,
                                         name='diabetes dataset',
                                         description='diabetes data',
                                         tags={'format': 'CSV'},
                                         create_new_version=True)
except Exception as ex:
    print(ex)
# Register the file dataset
try:
    file_data_set = file_data_set.register(workspace=ws,
                                           name='diabetes file dataset',
                                           description='diabetes files',
                                           tags={'format': 'CSV'},
                                           create_new_version=True)
except Exception as ex:
    print(ex)
print('Datasets registered')
# -
# You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
    dataset = Dataset.get_by_name(ws, dataset_name)
    print("\t", dataset.name, '(version', dataset.version, ")")
# The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:
#
# ```python
# dataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)
# ```
# # Train a model from a tabular dataset
#
# Now that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as *inputs* in the estimator being used to run the [diabetes_training_tabular script](../scripts/diabetes_training_tabular.py).
#
# **Note**: In the script, the dataset is passed as a parameter (or argument). In the case of a tabular dataset, this argument will contain the ID of the registered dataset; so you could write code in the script to get the experiment's workspace from the run context, and then get the dataset using its ID; like this:
#
# ```python
# run = Run.get_context()
# ws = run.experiment.workspace
# dataset = Dataset.get_by_id(ws, id=args.training_dataset_id)
# diabetes = dataset.to_pandas_dataframe()
# ```
#
# However, Azure Machine Learning runs automatically identify arguments that reference named datasets and add them to the run's **input_datasets** collection, so you can also retrieve the dataset from this collection by specifying its "friendly name" (which as you'll see shortly, is specified in the argument definition in the script run configuration for the experiment). This is the approach taken in the [diabetes_training_tabular script](../scripts/diabetes_training_tabular.py).
#
# Now you can run a script as an experiment, defining an argument for the training dataset, which is read by the script.
#
# > **Note**: The **Dataset** class depends on some components in the **azureml-dataprep** package, which includes optional support for **pandas** that is used by the **to_pandas_dataframe()** method. So you need to include this package in the environment where the training experiment will be run.
#
# ## Get the training dataset
# +
diabetes_ds = ws.datasets.get("diabetes dataset")
dataset_input = diabetes_ds.as_named_input('training_data')
script_name = 'diabetes_training_tabular.py'
model_tag = 'Tabular dataset'
# -
# # Train a model from a file dataset
#
# You've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?
#
# When you're using a file dataset, the dataset argument passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.
#
# Just as with tabular datasets, you can retrieve a file dataset from the **input_datasets** collection by using its friendly name. You can also retrieve it from the script argument, which in the case of a file dataset contains a mount path to the files (rather than the dataset ID passed for a tabular dataset).
#
# Next we need to change the way we pass the dataset to the script - it needs to define a path from which the script can read the files. You can use either the **as_download** or **as_mount** method to do this. Using **as_download** causes the files in the file dataset to be downloaded to a temporary location on the compute where the script is being run, while **as_mount** creates a mount point from which the files can be streamed directly from the datastore.
#
# You can combine the access method with the **as_named_input** method to include the dataset in the **input_datasets** collection in the experiment run (if you omit this, for example by setting the argument to `diabetes_ds.as_mount()`, the script will be able to access the dataset mount point from the script arguments, but not from the **input_datasets** collection).
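# The glob-and-concatenate pattern described above can be sketched as follows. Since an Azure ML mount point isn't available here, a temporary folder with two tiny CSV files stands in for the path the training script would receive; the folder and the `Age` column are made up for illustration:

```python
import glob
import os
import tempfile

import pandas as pd

# Stand-in for the dataset mount point the script would receive as an
# argument (the real path comes from Azure ML at run time).
data_folder = tempfile.mkdtemp()
pd.DataFrame({'Age': [50, 31]}).to_csv(os.path.join(data_folder, 'diabetes.csv'), index=False)
pd.DataFrame({'Age': [22]}).to_csv(os.path.join(data_folder, 'diabetes2.csv'), index=False)

# The pattern from the text: list the CSV files in the folder, read each
# one, and concatenate them into a single dataframe.
all_files = glob.glob(os.path.join(data_folder, '*.csv'))
diabetes = pd.concat((pd.read_csv(f) for f in sorted(all_files)), ignore_index=True)
print(diabetes.shape[0])  # 3
```

# In the real training script, `data_folder` would come from the script argument that holds the dataset's download or mount path.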
#
# ## Get the training dataset
# +
diabetes_ds = ws.datasets.get("diabetes file dataset")
dataset_input = diabetes_ds.as_named_input('training_files').as_download()
script_name = 'diabetes_training_file.py'
model_tag = 'File dataset'
# -
# # Preparing the Experiment Run
# +
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Create a Python environment for the experiment
experiment_env = Environment("diabetes-classification")
# Ensure the required packages are installed (we need scikit-learn, Azure ML defaults, and Azure ML dataprep)
packages = CondaDependencies.create(conda_packages=['scikit-learn','pip'],
                                    pip_packages=['azureml-defaults','azureml-dataprep[pandas]'])
experiment_env.python.conda_dependencies = packages
# -
# ## Running on Local Machine
experiment_env.python.user_managed_dependencies = True
computeTarget = "local" # Default
# ## Running on a Compute Instance in Azure
experiment_env.python.user_managed_dependencies = False # Default Value
computeTarget = ws.compute_targets["basic-instance"]#Environment.get(workspace=ws, name='basic-instance')
# ## Create a Script Config
#
# > **Note:** The **--input-data** argument passes the dataset as a *named input* that includes a *friendly name* for the dataset, which is used by the script to read it from the **input_datasets** collection in the experiment run. The string value in the **--input-data** argument is actually the registered dataset's ID. As an alternative approach, you could simply pass `diabetes_ds.id`, in which case the script can access the dataset ID from the script arguments and use it to get the dataset from the workspace, but not from the **input_datasets** collection.
# Create a script config
script_config = ScriptRunConfig(source_directory="../scripts",
                                script=script_name,
                                arguments=['--regularization', 0.1, # Regularization rate parameter
                                           '--input-data', dataset_input], # Reference to dataset
                                compute_target=computeTarget, # Compute target; default = local machine
                                environment=experiment_env)
# ## Submit the experiment
#
# The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.
#
# When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
# ## Wait for Run End
print("Run ID: ",run.wait_for_completion()["runId"])
# ## Register the trained model
#
# As with any training experiment, you can retrieve the trained model and register it in your Azure Machine Learning workspace.
# +
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
                   tags={'Training context': model_tag},
                   properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# -
for model in Model.list(ws):
    print(model.name, 'version:', model.version)
    for tag_name in model.tags:
        tag = model.tags[tag_name]
        print('\t', tag_name, ':', tag)
    for prop_name in model.properties:
        prop = model.properties[prop_name]
        print('\t', prop_name, ':', prop)
    print('\n')
# > **More Information**: For more information about training with datasets, see [Training with Datasets](https://docs.microsoft.com/azure/machine-learning/how-to-train-with-datasets) in the Azure ML documentation.
| notebooks/03 - Working with Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf2
# language: python
# name: tf2
# ---
# + [markdown] colab_type="text" id="wuu5HXaWLWSM"
# ## One Hot Encoding of text
#
# This notebook implements one-hot encoding from scratch.
#
# In real-world projects, one mostly uses scikit-learn's implementation of one-hot encoding.
# +
# To install only the requirements of this notebook, uncomment the lines below and run this cell
# ===========================
# !pip install scikit-learn==0.21.3
# ===========================
# +
# To install the requirements for the entire chapter, uncomment the lines below and run this cell
# ===========================
# try :
# import google.colab
# # !curl https://raw.githubusercontent.com/practical-nlp/practical-nlp/master/Ch3/ch3-requirements.txt | xargs -n 1 -L 1 pip install
# except ModuleNotFoundError :
# # !pip install -r "ch3-requirements.txt"
# ===========================
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="GvndsBIzLWSQ" outputId="8935d592-e9a6-450c-b4e3-6ea022d31abb"
documents = ["Dog bites man.", "Man bites dog.", "Dog eats meat.", "Man eats food."]
processed_docs = [doc.lower().replace(".","") for doc in documents]
processed_docs
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="dWJcOZLBLWSW" outputId="b8a5ceaa-5e1c-4d64-d246-509f5f7e1daf"
#Build the vocabulary
vocab = {}
count = 0
for doc in processed_docs:
    for word in doc.split():
        if word not in vocab:
            count = count + 1
            vocab[word] = count
print(vocab)
# + colab={} colab_type="code" id="4pesdRwpLWSc"
#Get one hot representation for any string based on this vocabulary.
#If the word exists in the vocabulary, its representation is returned.
#If not, a list of zeroes is returned for that word.
def get_onehot_vector(somestring):
    onehot_encoded = []
    for word in somestring.split():
        temp = [0]*len(vocab)
        if word in vocab:
            temp[vocab[word]-1] = 1  # -1 because list indexing starts at 0 while vocabulary IDs start at 1
        onehot_encoded.append(temp)
    return onehot_encoded
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="JELqSh4gLWSg" outputId="eb503558-33ce-48d8-db1e-a7c9cecde69d"
print(processed_docs[1])
get_onehot_vector(processed_docs[1]) #one hot representation for a text from our corpus.
# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="PVQExJUGLWSm" outputId="6e3581d4-10d5-4a21-9612-0c334f895e2a"
get_onehot_vector("man and dog are good")
#one hot representation for a random text, using the above vocabulary
# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="_xb8azVwLWSs" outputId="6c4e6bf5-d4b9-45cd-ffcd-85c2d323e67a"
get_onehot_vector("man and man are good")
# + [markdown] colab_type="text" id="ANj41SQ4L7xI"
# ## One-hot encoding using scikit-learn
# ##### We encode our corpus as a one-hot numeric array using scikit-learn's OneHotEncoder.
# ##### We will demonstrate:
#
# * One-Hot Encoding: each word w in the corpus vocabulary is given a unique integer ID (wid) between 1 and |V|, where V is the corpus vocabulary. Each word is then represented by a |V|-dimensional binary vector of 0s and 1s.
#
# * Label Encoding: each word w in our corpus is converted into a numeric value between 0 and n-1, where n is the number of unique words in our corpus.
#
# ##### Links to the official documentation of both can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) and [here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) respectively.
# + colab={} colab_type="code" id="sAPkk-fZLh4W"
S1 = 'dog bites man'
S2 = 'man bites dog'
S3 = 'dog eats meat'
S4 = 'man eats food'
# + colab={"base_uri": "https://localhost:8080/", "height": 139} colab_type="code" id="OYCRHl5SLWSy" outputId="ef8252e6-9f6a-4ec3-a2d5-76c154621b41"
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
data = [S1.split(), S2.split(), S3.split(), S4.split()]
values = data[0]+data[1]+data[2]+data[3]
print("The data: ",values)
#Label Encoding
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(values)
print("Label Encoded:",integer_encoded)
#One-Hot Encoding
onehot_encoder = OneHotEncoder()
onehot_encoded = onehot_encoder.fit_transform(data).toarray()
print("Onehot Encoded Matrix:\n",onehot_encoded)
# -
| Ch3/01_OneHotEncoding.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Importing required libraries
# +
from sklearn.ensemble import RandomForestRegressor
import pandas as pd
import numpy as np
from sklearn import metrics
# -
# # Initializing Feature and Response matrices stored as .CSV files earlier
# ## Training Dataset
# +
X_train = np.loadtxt(r'R:\Ryerson\Misc\Datasets\Preprocessed Data Files\New + pastDroughtValues + FeatureSelection\30_Day_Window_Scaled + pastDroughtValues + FeatureSelection\X_train_30_day_window_scaled_+_pastDroughtValues_+_FeatureSelection.csv', delimiter=',')
y_target_train = np.loadtxt(r'R:\Ryerson\Misc\Datasets\Preprocessed Data Files\New + pastDroughtValues + FeatureSelection\30_Day_Window_Scaled + pastDroughtValues + FeatureSelection\y_target_train_30_day_window.csv', delimiter=',')
# +
print(X_train.shape)
print(y_target_train.shape)
# -
# ## Validation Dataset
# +
X_valid = np.loadtxt(r'R:\Ryerson\Misc\Datasets\Preprocessed Data Files\New + pastDroughtValues + FeatureSelection\30_Day_Window_Scaled + pastDroughtValues + FeatureSelection\X_valid_30_day_window_scaled_+_pastDroughtValues_+_FeatureSelection.csv', delimiter=',')
y_target_valid = np.loadtxt(r'R:\Ryerson\Misc\Datasets\Preprocessed Data Files\New + pastDroughtValues + FeatureSelection\30_Day_Window_Scaled + pastDroughtValues + FeatureSelection\y_target_valid_30_day_window.csv', delimiter=',')
# +
print(X_valid.shape)
print(y_target_valid.shape)
# -
# ## Testing Dataset
# +
X_test = np.loadtxt(r'R:\Ryerson\Misc\Datasets\Preprocessed Data Files\New + pastDroughtValues + FeatureSelection\30_Day_Window_Scaled + pastDroughtValues + FeatureSelection\X_test_30_day_window_scaled_+_pastDroughtValues_+_FeatureSelection.csv', delimiter=',')
y_target_test = np.loadtxt(r'R:\Ryerson\Misc\Datasets\Preprocessed Data Files\New + pastDroughtValues + FeatureSelection\30_Day_Window_Scaled + pastDroughtValues + FeatureSelection\y_target_test_30_day_window.csv', delimiter=',')
# +
print(X_test.shape)
print(y_target_test.shape)
# -
# ##### _Concatenating X_valid & X_test into a single test set_
X_Test = np.concatenate((X_valid, X_test), axis=0)
X_Test.shape
# ##### _Concatenating y_target_valid & y_target_test into a single array_
y_Target = np.concatenate((y_target_valid, y_target_test), axis=0)
y_Target.shape
# # Training a Random Forest with 100 Trees
rf_100_trees = RandomForestRegressor(n_estimators=100, random_state=10)
rf_100_trees.fit(X_train, y_target_train)
yPredicted_rf100 = rf_100_trees.predict(X_Test)
# ## Printing Metrics
print('Testing Metrics:')
print('Mean Absolute Error:', metrics.mean_absolute_error(y_Target, yPredicted_rf100))
print('Mean Squared Error:', metrics.mean_squared_error(y_Target, yPredicted_rf100))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_Target, yPredicted_rf100)))
print('R^2 Score:', metrics.r2_score(y_Target, yPredicted_rf100))
# # Training a Random Forest with 500 Trees
rf_500_trees = RandomForestRegressor(n_estimators=500, random_state=12)
rf_500_trees.fit(X_train, y_target_train)
yPredicted_rf500 = rf_500_trees.predict(X_Test)
print('Testing Metrics:')
print('Mean Absolute Error:', metrics.mean_absolute_error(y_Target, yPredicted_rf500))
print('Mean Squared Error:', metrics.mean_squared_error(y_Target, yPredicted_rf500))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_Target, yPredicted_rf500)))
print('R^2 Score:', metrics.r2_score(y_Target, yPredicted_rf500))
# # Training a Random Forest with 1000 Trees
# +
rf_1000_trees = RandomForestRegressor(n_estimators=1000, random_state=42)
rf_1000_trees.fit(X_train, y_target_train)
# -
yPredicted_rf1000 = rf_1000_trees.predict(X_Test)
# ## Printing Metrics
print('Testing Metrics:')
print('Mean Absolute Error:', metrics.mean_absolute_error(y_Target, yPredicted_rf1000))
print('Mean Squared Error:', metrics.mean_squared_error(y_Target, yPredicted_rf1000))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_Target, yPredicted_rf1000)))
print('R^2 Score:', metrics.r2_score(y_Target, yPredicted_rf1000))
| [Experiment 4]: 30-day short-term drought prediction with Past Drought Values and Selected Features/[Experiment 4]_Random_Forest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LucasColas/Coding-Problems/blob/main/Island%20Perimeter/Island_Perimeter.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="oainrI9-I079"
# Island Perimeter
#
# From [LeetCode](https://leetcode.com/problems/island-perimeter/)
# + colab={"base_uri": "https://localhost:8080/"} id="muRcryen-y05" outputId="0877309a-e17c-4224-96da-b4520a7a0de1"
def Perimeter(grid):
    perimeter = 0
    for i, row in enumerate(grid):
        for j, cell in enumerate(row):
            if cell == 1:
                # Each land cell contributes 4 edges...
                perimeter += 4
                # ...minus 1 for every adjacent land cell (up, down, left, right).
                if i > 0 and grid[i-1][j] == 1:
                    perimeter -= 1
                if i < len(grid)-1 and grid[i+1][j] == 1:
                    perimeter -= 1
                if j > 0 and row[j-1] == 1:
                    perimeter -= 1
                if j < len(row)-1 and row[j+1] == 1:
                    perimeter -= 1
    return perimeter

Perimeter([[1,0]])
| Island Perimeter/Island_Perimeter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# #%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import rc
#******************************************************
rc('font',**{'family':'serif','serif':['Times']})
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
rc('text', usetex=True)
#LaTeX
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
#*******************************************************
# %matplotlib notebook
# +
#test 1
chain1=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__1.txt')
chain2=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__2.txt')
chain3=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__3.txt')
chain4=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__4.txt')
chain5=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__5.txt')
chain6=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__6.txt')
chain7=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__7.txt')
chain8=np.loadtxt('../chains/N1_nierika/nrk_11-9_f2.4/2018-11-09_250000__8.txt')
# +
#first run with 240,000 steps
chain11=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__1.txt')
chain12=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__2.txt')
chain13=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__3.txt')
chain14=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__4.txt')
chain15=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__5.txt')
chain16=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__6.txt')
chain17=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__7.txt')
chain18=np.loadtxt('./../chains/N1_nierika/planck_bao_jla/nrk_12-1_cluster1_f1.7/2018-12-01_800000__8.txt')
l11=np.arange(len(chain11))
# -
chain1.shape
print(chain1[:,0])
l11=np.arange(len(chain11))
l12=np.arange(len(chain12))
l13=np.arange(len(chain13))
l14=np.arange(len(chain14))
l15=np.arange(len(chain15))
l16=np.arange(len(chain16))
l17=np.arange(len(chain17))
l18=np.arange(len(chain18))
# Step indices for chains 1-3 (used by the trace plots below)
l1 = np.arange(len(chain1))
l2 = np.arange(len(chain2))
l3 = np.arange(len(chain3))
# +
fig, ax = plt.subplots(figsize=(9,7))
# Param 1
ax.plot(l1,chain1[:,2],label=r'$H_0$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,3],label=r'$10^{2} \omega_{b}$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,4],'b',label=r'$\omega_{cdm}$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,5],label=r'$10^{9} A_{s}$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,6],label=r'$n_s$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,7],label=r'$\tau_{reio}$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,8],label=r'$b0_{fld}$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,9],label=r'$b1_{fld}$',alpha=1,linewidth=1)
ax.plot(l1,chain1[:,10],label=r'$\Omega_{fld}$',alpha=1,linewidth=1)
#--------------------------------------------------------------------------------------
ax.legend(loc='best', fancybox=True, framealpha=1)
ax.set_xlabel(r'steps',fontsize=15)
ax.set_ylabel(r'parameter value',fontsize=15)
#ax.set_ylim(bottom=10**-5)
#ax.set_xlim(left=2*10**-5,right=1)
ax.grid(which='major',ls=":", c='black',alpha=0.4)
ax.set_title(r'9-parameter exploration MCMC, JLA+BAO, chain 1');
plt.savefig("plot1_1_jlabao_oct12_240k.pdf")
# +
fig, ax = plt.subplots(figsize=(9,7))
# Param 1
ax.plot(l2,chain2[:,2],label=r'$H_0$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,3],label=r'$10^{2} \omega_{b }$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,4],'b',label=r'$\omega_{cdm}$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,5],label=r'$10^{9} A_{s}$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,6],label=r'$n_s$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,7],label=r'$\tau_{reio}$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,8],label=r'$b0_{fld}$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,9],label=r'$b1_{fld}$',alpha=1,linewidth=1)
ax.plot(l2,chain2[:,10],label=r'$\Omega_{fld}$',alpha=1,linewidth=1)
#--------------------------------------------------------------------------------------
ax.legend(loc='best', fancybox=True, framealpha=1)
ax.set_xlabel(r'steps',fontsize=15)
ax.set_ylabel(r'parameter value',fontsize=15)
#ax.set_ylim(bottom=10**-5)
#ax.set_xlim(left=2*10**-5,right=1)
ax.grid(which='major',ls=":", c='black',alpha=0.4)
ax.set_title(r'9-parameter exploration MCMC, JLA+BAO, chain 2');
plt.savefig("plot1_2_jlabao_oct12_240k.pdf")
# +
fig, ax = plt.subplots(figsize=(9,7))
ax.plot(l3,chain3[:,2],label=r'$H_0$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,3],label=r'$10^{2} \omega_{b }$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,4],'b',label=r'$\omega_{cdm}$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,5],label=r'$10^{9} A_{s}$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,6],label=r'$n_s$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,7],label=r'$\tau_{reio}$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,8],label=r'$b0_{fld}$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,9],label=r'$b1_{fld}$',alpha=1,linewidth=1)
ax.plot(l3,chain3[:,10],label=r'$\Omega_{fld}$',alpha=1,linewidth=1)
#--------------------------------------------------------------------------------------
ax.legend(loc='best', fancybox=True, framealpha=1)
ax.set_xlabel(r'steps',fontsize=15)
ax.set_ylabel(r'parameter value',fontsize=15)
#ax.set_ylim(bottom=10**-5)
#ax.set_xlim(left=2*10**-5,right=1)
ax.grid(which='major',ls=":", c='black',alpha=0.4)
ax.set_title(r'9-parameter exploration MCMC, JLA+BAO, chain 3');
plt.savefig("plot1_3_jlabao_oct12_240k.pdf")
# +
fig, ax = plt.subplots(figsize=(9,7))
ax.plot(l11,chain11[:,6],'b',label=r'chain 1',alpha=1,linewidth=1)
#ax.plot(l12,chain12[:,6],'b',label=r'chain 2',alpha=1,linewidth=1)
#ax.plot(l13,chain13[:,6],'b',label=r'chain 3',alpha=1,linewidth=1)
#ax.plot(l14,chain14[:,6],'b',label=r'chain 4',alpha=1,linewidth=1)
#ax.plot(l15,chain15[:,6],'b',label=r'chain 5',alpha=1,linewidth=1)
#ax.plot(l16,chain16[:,6],'b',label=r'chain 6',alpha=1,linewidth=1)
ax.plot(l17,chain17[:,6],'b',label=r'chain 7',alpha=1,linewidth=1)
ax.plot(l18,chain18[:,6],'b',label=r'chain 8',alpha=1,linewidth=1)
#--------------------------------------------------------------------------------------
ax.legend(loc='best', fancybox=True, framealpha=1)
ax.set_xlabel(r'steps',fontsize=15)
ax.set_ylabel(r'$10^{-9} A_{s}$',fontsize=15)
#ax.set_ylim(bottom=10**-5)
#ax.set_xlim(left=2*10**-5,right=1)
ax.grid(which='major',ls=":", c='black',alpha=0.4)
ax.set_title(r'exploration quality for 1 parameter using all 8 (120k-step) chains');
#plt.savefig("plot2_1_jlabao_oct12_240k.pdf")
# +
fig, ax = plt.subplots(figsize=(9,7))
ax.plot(l1,chain1[:,6],'b',label=r'chain 1',alpha=1,linewidth=1)
#--------------------------------------------------------------------------------------
ax.legend(loc='best', fancybox=True, framealpha=1)
ax.set_xlabel(r'steps',fontsize=15)
ax.set_ylabel(r'$10^{-9} A_{s}$',fontsize=15)
#ax.set_ylim(bottom=10**-5)
#ax.set_xlim(left=2*10**-5,right=1)
ax.grid(which='major',ls=":", c='black',alpha=0.4)
ax.set_title(r'exploration quality for 1 parameter using ONE chain');
plt.savefig("plot3_1_jlabao_oct12_240k.pdf")
# -
print(chain1)
print(chain1[0,:])
print(chain1[:,0])
# +
promedios = []
for i in range(len(chain1)):  # for each row of the chain1 array...
    fila_i = chain1[i, :]  # pick out the row
    j = 2
    suma = 0
    while j < 13:  # sweep the elements of the row from column 2 up to column 12
        suma = suma + fila_i[j]
        j = j + 1
    promedio = suma / 11  # average over the 11 summed columns
    print(promedio)
    promedios.append(promedio)
print(promedios)
# -
lprom=np.arange(len(promedios))
print(lprom)
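The explicit row loop above is equivalent to a single NumPy reduction over columns 2 through 12; a sketch with a dummy array standing in for `chain1` (the real chain's column count isn't shown here, so the 13-column shape is illustrative):

```python
import numpy as np

# Hypothetical stand-in for chain1: 4 samples, 13 columns
chain_demo = np.arange(4 * 13, dtype=float).reshape(4, 13)

# Mean of columns 2..12 for every row, in one vectorized call
promedios_demo = chain_demo[:, 2:13].mean(axis=1)

# Cross-check against an explicit per-row sum
manual = [sum(row[2:13]) / 11 for row in chain_demo]
assert np.allclose(promedios_demo, manual)
```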
# +
fig, ax = plt.subplots(figsize=(9,7))
ax.plot(lprom,promedios,'b',alpha=1,linewidth=1)
#--------------------------------------------------------------------------------------
#ax.legend(loc='best', fancybox=True, framealpha=1)
ax.set_xlabel(r'steps',fontsize=15)
ax.set_ylabel(r'sample mean',fontsize=15)
#ax.set_ylim(bottom=10**-5)
#ax.set_xlim(left=2*10**-5,right=1)
ax.grid(which='major',ls=":", c='black',alpha=0.4)
ax.set_title(r'sample mean of chain 1');
# -
# file: jupyter/mcmc_convergence_&_exploration_test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dev
# language: python
# name: dev
# ---
# # Question 5: Longest Palindromic Substring
#
#
# [longest-palindromic-substring](https://leetcode.com/problems/longest-palindromic-substring/)
from nbdev.showdoc import *
from nbdev.imports import *
# Given a string s, find the longest palindromic substring in s. You may assume that the maximum length of s is 1000.
#
# Example 1:
#
# ```bash
# Input: "babad"
# Output: "bab"
# Note: "aba" is also a valid answer.
# Example 2:
#
# Input: "cbbd"
# Output: "bb"
# ```
# ## Brute Force O(n^3)
def isPalindrome(s):
for i in range(len(s)//2):
if s[i] != s[len(s)-i-1]:
return False
return True
assert isPalindrome("")
assert isPalindrome("a")
assert isPalindrome("aba")
assert isPalindrome("abba")
assert not isPalindrome("abbb")
assert not isPalindrome("abb")
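For reference, the same check can be written with Python's slice reversal; this variant (an addition, not from the original notebook) trades an extra reversed copy of the string for brevity:

```python
def is_palindrome_slice(s: str) -> bool:
    # A string is a palindrome iff it equals its own reverse
    return s == s[::-1]

assert is_palindrome_slice("abba")
assert is_palindrome_slice("")
assert not is_palindrome_slice("abb")
```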
def longestPalindromeBruteForce(s: str) -> str:
longest = ""
for i in range(len(s)):
for j in range(i+1,len(s)+1):
if isPalindrome(s[i:j]):
if (j-i) > len(longest):
longest = s[i:j]
return longest
assert longestPalindromeBruteForce("babad") == "bab"
assert longestPalindromeBruteForce("orgeeksskeegfo") == "geeksskeeg"
assert longestPalindromeBruteForce("abcbabcbabcba") == 'abcbabcbabcba'
# ## Center Character O(n^2)
# +
import math
def longestPalindromeCenterString(s: str) -> str:
longest = ""
for i in range(2*len(s)+1):
start = math.floor(i/2)
stop = math.ceil(i/2)
while start >= 0 and stop < len(s):
if s[start] != s[stop]:
break
if stop-start + 1 > len(longest):
longest = s[start:stop+1]
start -= 1
stop += 1
return longest
# -
assert longestPalindromeCenterString("a") == "a"
assert longestPalindromeCenterString("aba") == "aba"
assert longestPalindromeCenterString("babad") == "bab"
assert longestPalindromeCenterString("orgeeksskeegfo") == "geeksskeeg"
assert longestPalindromeCenterString("abcbabcbabcba") == 'abcbabcbabcba'
# ## Memoize O(n^2)
def longestPalindromeMemo(s: str) -> str:
longest = (0,0)
memo = {}
def helper(i,j):
if i == j or j < i:
return True
if (i,j) in memo:
return memo[(i,j)]
if s[i] != s[j]:
memo[(i,j)] = False
elif helper(i+1, j-1):
memo[(i,j)] = True
else:
memo[(i,j)] = False
return memo[(i,j)]
for i in range(len(s)):
for j in range(i, len(s)):
if helper(i,j):
if j - i > longest[1] - longest[0] :
longest = (i,j)
return s[longest[0]:longest[1]+1]
assert longestPalindromeMemo("a") == "a"
assert longestPalindromeMemo("aba") == "aba"
assert longestPalindromeMemo("babad") == "bab"
assert longestPalindromeMemo("orgeeksskeegfo") == "geeksskeeg"
assert longestPalindromeMemo("abcbabcbabcba") == 'abcbabcbabcba'
# +
def longestPalindromeMemo2(s: str) -> str:
memo = {}
longest = (0,0)
count = 0
def update(i,j,evaluate):
nonlocal longest
if evaluate:
if j - i > longest[1] - longest[0]:
longest = (i, j)
def helper(i,j):
if i == j or i > j:
return True
if (i,j) in memo:
return memo[(i,j)]
if s[i] != s[j]:
memo[(i,j)] = False
memo[(i+1, j)] = helper(i+1, j)
update(i+1, j, memo[(i+1,j)])
memo[(i,j-1)] = helper(i,j-1)
update(i,j-1, memo[i, j-1])
else:
memo[(i,j)] = helper(i+1, j-1)
update(i, j, memo[i,j])
return memo[(i,j)]
helper(0,len(s)-1)
return s[longest[0]:longest[1]+1]
# -
assert longestPalindromeMemo2("a") == "a"
assert longestPalindromeMemo2("aba") == "aba"
assert longestPalindromeMemo2("babad") == "aba"
assert longestPalindromeMemo2("orgeeksskeegfo") == 'geeksskeeg'
assert longestPalindromeMemo2("abcbabcbabcba") == 'abcbabcbabcba'
def longestPalindromeMemo3(s: str) -> str:
memo = {}
def helper(i,j):
        if (i,j) in memo:
            return memo[(i,j)]
if i == j:
memo[(i,j)] = True
return True
if j < i:
return True
if s[i] != s[j]:
memo[(i+1,j)] = helper(i+1, j)
memo[(i, j-1)] = helper(i,j-1)
memo[(i, j)] = False
else:
memo[(i,j)] = helper(i+1, j-1)
return memo[(i,j)]
helper(0,len(s)-1)
longest = 0
longest_idxs = (0,0)
for idxs ,is_palindrome in memo.items():
if is_palindrome:
if idxs[1] - idxs[0] > longest_idxs[1]-longest_idxs[0]:
longest_idxs = idxs
return s[longest_idxs[0]:longest_idxs[1]+1]
assert longestPalindromeMemo3("a") == "a"
assert longestPalindromeMemo3("aba") == "aba"
assert longestPalindromeMemo3("babad") == "aba"
assert longestPalindromeMemo3("orgeeksskeegfo") == 'geeksskeeg'
assert longestPalindromeMemo3("abcbabcbabcba") == 'abcbabcbabcba'
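As a variant on the hand-rolled memo dictionary (an addition, not part of the original notebook), `functools.lru_cache` can manage the cache automatically. Note this sketch returns the first longest palindrome, e.g. "bab" rather than "aba" for "babad":

```python
from functools import lru_cache

def longest_palindrome_lru(s: str) -> str:
    @lru_cache(maxsize=None)
    def is_pal(i: int, j: int) -> bool:
        # Empty and single-character spans are palindromes
        if j - i < 1:
            return True
        return s[i] == s[j] and is_pal(i + 1, j - 1)

    best = (0, 0)
    for i in range(len(s)):
        for j in range(i, len(s)):
            if is_pal(i, j) and j - i > best[1] - best[0]:
                best = (i, j)
    return s[best[0]:best[1] + 1]

assert longest_palindrome_lru("babad") == "bab"
assert longest_palindrome_lru("cbbd") == "bb"
assert longest_palindrome_lru("orgeeksskeegfo") == "geeksskeeg"
```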
# ## Bottom Up O(n^2)
# +
def longestPalindromeBottomUp(s: str) -> str:
result = [[0] * len(s) for _ in range(len(s))]
indices = (0,0)
longest = 0
for i in range(len(s)):
for j in range(len(s)-i):
row = j
col = j+i
if row == col:
result[row][col] = 1
elif i == 1:
if s[row] == s[col]:
result[row][col] = 2
else:
if result[row+1][col-1] > 0 and s[row] == s[col]:
result[row][col] = i+1
if result[row][col] > longest:
indices = (row,col+1)
return s[indices[0]:indices[1]]
# -
assert longestPalindromeBottomUp("a") == "a"
assert longestPalindromeBottomUp("aba") == "aba"
assert longestPalindromeBottomUp("babad") == "aba"
assert longestPalindromeBottomUp("orgeeksskeegfo") == 'geeksskeeg'
assert longestPalindromeBottomUp("abcbabcbabcba") == 'abcbabcbabcba'
# file: nbs/longest-palindrome.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/abey79/vsketch/blob/master/examples/google_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="fie8YrVfRT-U"
# ### info
# + [markdown] colab_type="text" id="y4u_kroEQFkP"
# This notebook is a basic setup for running [vsketch](https://github.com/abey79/vsketch) on Google Colab. Execute the following line once to setup vsketch in the notebook.
#
# Notebook made by [dark fractures.](https://www.darkfractures.com)
# + colab={} colab_type="code" id="ojK154g7bGlq"
# !pip install git+https://github.com/abey79/vsketch#egg=vsketch[colab]
# + [markdown] colab_type="text" id="B8G7B0v0QToC"
# ### sketch
# + colab={} colab_type="code" id="BvYUlUXXbsod"
# example sketch, taken from the 'detail.ipynb' notebook in 'examples'.
import math
import numpy as np
import vsketch
vsk = vsketch.Vsketch()
vsk.size("a5", landscape=True)
vsk.scale("1cm")
# high level of detail
vsk.detail("0.1mm")
vsk.circle(0, 0, 1)
vsk.circle(0, 0, 2)
with vsk.pushMatrix():
vsk.scale(4)
vsk.circle(0, 0, 1)
# rough level of detail
vsk.translate(7, 0)
vsk.detail("5mm")
vsk.circle(0, 0, 1)
vsk.circle(0, 0, 2)
with vsk.pushMatrix():
vsk.scale(4)
vsk.circle(0, 0, 1)
# hardly usable level of detail
vsk.translate(7, 0)
vsk.detail("2cm")
vsk.circle(0, 0, 1)
vsk.circle(0, 0, 2)
with vsk.pushMatrix():
vsk.scale(4)
vsk.circle(0, 0, 1)
vsk.display()
vsk.save("detail.svg")
# file: examples/_notebooks/google_colab.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 9.0
# language: sage
# name: sagemath
# ---
from itertools import product
def is_neutral(G):
    one = G.one()
    for g in G:
        if g * one != g or one * g != g:
            return False
    return True
def is_inverse(G):
    one = G.one()
    for g in G:
        if g * (g ** -1) != one:
            return False
    return True
def is_associativity(G):
    # product covers triples with repeated elements, which permutations would miss
    for g, h, x in product(G, repeat=int(3)):
        if (g * h) * x != g * (h * x):
            return False
    return True
G = SymmetricGroup(3)
if is_neutral(G) and is_inverse(G) and is_associativity(G):
    print("G satisfies the group axioms!")
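The same axiom checks can be reproduced in plain Python for a group that does not need Sage; a sketch using Z/5Z under addition (the modulus 5 is an arbitrary illustrative choice):

```python
from itertools import product

n = 5
G = list(range(n))                 # elements of Z/5Z
op = lambda a, b: (a + b) % n      # group operation: addition mod n
identity = 0
inverse = lambda a: (-a) % n

# Neutral element, inverses, and associativity, mirroring the Sage functions;
# product(G, repeat=3) also covers triples with repeated elements
assert all(op(g, identity) == g and op(identity, g) == g for g in G)
assert all(op(g, inverse(g)) == identity for g in G)
assert all(op(op(g, h), x) == op(g, op(h, x)) for g, h, x in product(G, repeat=3))
print("All group axioms hold for Z/5Z")
```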
# file: Lab 1. General.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Data Science)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:470317259841:image/datascience-1.0
# ---
# <h1> <b> Comprehend Primitives and Pre-Built APIs </b></h1>
# Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text. We will explore 6 pre-trained APIs: Identifying Named Entities, Extracting Key Phrases, Identifying the Dominant Language, Determining Emotional Sentiment, Determining Syntax, and Detecting Personally Identifiable Information (PII).
#
# - Selected kernel: Python3 (Data Science)
# - IAM setting: ComprehendFullAccess
import boto3
import pprint
import pandas as pd
import numpy as np
# initialize the Comprehend client with boto3
comprehend = boto3.client(service_name='comprehend')
#sample text we will be using with Comprehend
sample_text = '''
Hello <NAME>. Your AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0000 has a minimum payment of $24.53 that is due by July 31st.
Based on your autopay settings, we will withdraw your payment on the due date from your bank account XXXXXX1111 with the routing number XXXXX0000.
Your latest statement was mailed to 100 Main Street, Anytown, WA 98121.
After your payment is received, you will receive a confirmation text message at 206-555-0100.
If you have questions about your bill, AnyCompany Customer Service is available by phone at 206-555-0199 or email at <EMAIL>.
'''
#
# <h1>Identifying Named Entities</h1>
#
# A named entity is a real-world object (persons, places, locations, organizations, etc.) that can be denoted with a proper name. Amazon Comprehend can extract named entities from a document or text. This can be useful, for example, for indexing, document labeling or search. For more information, see Detect Entities. The API used to extract these entities is the DetectEntities API. For each entity detected, Amazon Comprehend returns both the type, for instance "Person" or "Date", as well as a confidence score which indicates how confident the model is in this detection. In your implementation you can use this confidence score to set threshold values.
#
# <h4>Important Terminologies</h4>
# <img src="comprehend_terminology.png" alt="comprehend_terminology" width="1000"/>
#detect and print entities
detected_entities = comprehend.detect_entities(Text=sample_text, LanguageCode='en')
pprint.pprint(detected_entities['Entities'][0:5])
# The entity detection output contains five elements. BeginOffset and EndOffset give the character positions of the detected text in the document; for example, <NAME> starts at character 7 and ends at character 16. Score is the confidence of the prediction, Text is the text that was classified, and Type is the entity type that was detected. This response pattern is common across nearly all Amazon Comprehend API commands. You can also build <b>Custom Entity Detection</b>, which we will touch on in a later workshop.
#
# <h4>Explain Entity Output</h4>
# <img src="entity_output.png" alt="entity_output" width="1000"/>
#displaying values in a more human-readable way
detected_entities_df = pd.DataFrame([[entity['Text'], entity['Type'], entity['Score']] for entity in detected_entities['Entities']],
                                    columns=['Text', 'Type', 'Score'])
display(detected_entities_df)
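As the paragraph above suggests, the confidence score can drive a simple threshold filter. A minimal sketch over hypothetical entity records shaped like the DetectEntities response (the 0.9 cutoff and the sample values are illustrative, not real API output):

```python
# Hypothetical records shaped like detected_entities['Entities']
entities = [
    {"Text": "AnyCompany", "Type": "ORGANIZATION", "Score": 0.98},
    {"Text": "July 31st", "Type": "DATE", "Score": 0.95},
    {"Text": "24.53", "Type": "QUANTITY", "Score": 0.62},
]

THRESHOLD = 0.9  # illustrative cutoff; tune per use case
confident = [e for e in entities if e["Score"] >= THRESHOLD]
assert [e["Text"] for e in confident] == ["AnyCompany", "July 31st"]
```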
#
# <h1>Detecting Key Phrases</h1>
#
# Amazon Comprehend can extract **key noun phrases** that appear in a document. For example, a document about a basketball game might return the names of the teams, the name of the venue, and the final score. This can be used, for example, for indexing or summarization. For more information, see Detect Key Phrases.
#
# The API used to extract these key phrases is the DetectKeyPhrases API.
#
# Amazon Comprehend returns the key phrases, as well as a confidence score which indicates how confident the model is in this detection. In your implementation you can use this confidence score to set threshold values.
#
#Call detect key phrases API
detected_key_phrases = comprehend.detect_key_phrases(Text=sample_text, LanguageCode='en')
pprint.pprint(detected_key_phrases['KeyPhrases'][0:3])
#displaying values in a more human-readable way
detected_key_phrases_df = pd.DataFrame([ [entity['Text'], entity['Score']] for entity in detected_key_phrases['KeyPhrases']],
columns=['Text', 'Score'])
display(detected_key_phrases_df)
#
# <h1>Identifying the Dominant Language</h1>
#
# Amazon Comprehend identifies the dominant language in a document. Amazon Comprehend can currently identify many languages. This can be useful as a first step before further processing, for example when phone call transcripts can be in different languages. For more information, including which languages can be identified, see Detect the Dominant Language.
#
# The API used to identify the dominant language is the DetectDominantLanguage API.
#
# Amazon Comprehend returns the dominant language, as well as a confidence score which indicates how confident the model is in this detection. In your implementation you can use this confidence score to set threshold values. If more than one language is detected, it will return each detected language and its corresponding confidence score.
#
#Calling the detect dominant language API
detected_language = comprehend.detect_dominant_language(Text=sample_text)
pprint.pprint(detected_language)
#Making it more human readable
detected_language_df = pd.DataFrame([ [code['LanguageCode'], code['Score']] for code in detected_language['Languages']],
columns=['Language Code', 'Score'])
display (detected_language_df)
# <h1>Determining Emotional Sentiment</h1>
#
# Amazon Comprehend determines the emotional sentiment of a document. Sentiment can be positive, neutral, negative, or mixed. For more information, see Determine Sentiment. This can be useful for example to analyze the content of reviews or transcripts from call centres. For more information, see Detecting Sentiment.
#
# The API used to extract the emotional sentiment is the DetectSentiment API.
#
# Amazon Comprehend returns the different sentiments and the related confidence score for each of them, which indicates how confident the model is in this detection. The sentiment with the highest confidence score can be seen as the predominant sentiment in the text.
#
#calling detect_sentiment
detected_sentiment = comprehend.detect_sentiment(Text=sample_text, LanguageCode='en')
pprint.pprint(detected_sentiment)
#Finding the predominant sentiment and making it more human readable
predominant_sentiment = detected_sentiment['Sentiment']
detected_sentiments_df = pd.DataFrame([[sentiment, detected_sentiment['SentimentScore'][sentiment]] for sentiment in detected_sentiment['SentimentScore']],
                                      columns=['Sentiment', 'Score'])
#Sentiment across the document
display(detected_sentiments_df)
#Predominant sentiment
display(predominant_sentiment)
#
# <h1>Detecting Personally Identifiable Information (PII)</h1>
#
# Amazon Comprehend analyzes documents to detect personal data that could be used to identify an individual, such as an address, bank account number, or phone number. This can be useful, for example, for information extraction and indexing, and to comply with legal requirements around data protection. For more information, see Detect Personally Identifiable Information (PII).
#
# Amazon Comprehend can help you identify the location of individual PII in your document or help you label documents that contain PII.
# Identify the location of PII in your text documents
#
# Amazon Comprehend can help you identify the location of individual PII in your document. Select "Offsets" in the Personally identifiable information (PII) analysis mode.
#
# The API used to identify the location of individual PII is the DetectPiiEntities API.
#
# Amazon Comprehend returns the different PII and the related confidence score for each of them, which indicates how confident the model is in this detection.
#
#Calling the Detect PII API
detected_pii_entities = comprehend.detect_pii_entities(Text=sample_text, LanguageCode='en')
pprint.pprint(detected_pii_entities)
#Make more human readable
detected_pii_entities_df = pd.DataFrame([ [entity['Type'], entity['Score']] for entity in detected_pii_entities['Entities']],
columns=['Type', 'Score'])
display (detected_pii_entities_df)
#
# <h2>Label text documents with PII</h2>
#
# Amazon Comprehend can help you label documents that contain PII. Select "Labels" in the Personally identifiable information (PII) analysis mode.
#
# The API used to label documents that contain PII is the ContainsPiiEntities API.
#
# Amazon Comprehend returns the different PII labels and the related confidence score for each of them, which indicates how confident the model is in this detection. These labels indicate the presence of these types of PII in the document.
#
#Labelling text in a document
detected_pii_labels = comprehend.contains_pii_entities(Text=sample_text, LanguageCode='en')
pprint.pprint(detected_pii_labels)
#Make more human readable
detected_pii_labels_df = pd.DataFrame([ [entity['Name'], entity['Score']] for entity in detected_pii_labels['Labels']],
columns=['Name', 'Score'])
display (detected_pii_labels_df)
# file: Module 2 - Comprehend NLP/Comprehend Walkthrough.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
# +
class ChessBoard:
    blue = [0, 0, 1]
    red = [1, 0, 0]

    def __init__(self):
        self.board = np.ones([8, 8, 3])
        self.red_pos = None
        self.blue_pos = None
        for row, col in np.ndindex(8, 8):
            if row % 2 != col % 2:
                self.board[row, col] = [0, 0, 0]

    def add_red(self, row, col):
        """Place the red queen at (row, col)."""
        self.board[row, col] = ChessBoard.red
        self.red_pos = (row, col)

    def add_blue(self, row, col):
        """Place the blue queen at (row, col)."""
        self.board[row, col] = ChessBoard.blue
        self.blue_pos = (row, col)

    def render(self):
        plt.imshow(self.board)

    def is_under_attack(self):
        red_row, red_col = self.red_pos
        blue_row, blue_col = self.blue_pos
        # Queens attack along shared rows, columns, and diagonals
        if red_row == blue_row:
            return True
        if red_col == blue_col:
            return True
        if abs(red_row - blue_row) == abs(red_col - blue_col):
            return True
        return False
# -
chess_board = ChessBoard()
# chess_board.board
# +
chess_board.render()
# -
# check vertical attack
board1 = ChessBoard()
board1.add_red(1,7)
board1.add_blue(6,7)
board1.render()
print(board1.is_under_attack())
assert board1.is_under_attack() == True
print('success! yes, it is a vertical attack')
# +
## check horizontal attack
board2 = ChessBoard()
board2.add_red(2,2)
board2.add_blue(2,6)
board2.render()
print(board2.is_under_attack())
assert board2.is_under_attack() == True
print('success! yes, it is a horizontal attack')
# +
## check diagonal attack
board3 = ChessBoard()
board3.add_red(0,7)
board3.add_blue(7,0)
board3.render()
print(board3.is_under_attack())
assert board3.is_under_attack() == True
print('success! yes, it is a diagonal attack')
# +
#check if there is not under attack
board4 = ChessBoard()
board4.add_red(2,3)
board4.add_blue(3,6)
board4.render()
print(board4.is_under_attack())
assert board4.is_under_attack() == False
print('success! it is not under attack')
# -
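The same geometric test can also be expressed as a standalone function, which makes it easy to unit-test outside the class; a small sketch (the function name is my own, not part of the notebook):

```python
def queens_attack(r1, c1, r2, c2):
    """Two queens attack each other iff they share a row, a column,
    or a diagonal (equal absolute row and column differences)."""
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

assert queens_attack(1, 7, 6, 7)        # same column (vertical)
assert queens_attack(2, 2, 2, 6)        # same row (horizontal)
assert queens_attack(0, 7, 7, 0)        # diagonal
assert not queens_attack(2, 3, 3, 6)    # no line of attack
```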
# file: chess_board.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# This example illustrates how to detect red blood cells using YOLO.
# # Initialization
# + code_folding=[]
from keras.models import Sequential, Model
from keras.layers import Reshape, Activation, Conv2D, Input, MaxPooling2D, BatchNormalization, Flatten, Dense, Lambda
from keras.layers.advanced_activations import LeakyReLU
from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard
from keras.optimizers import SGD, Adam, RMSprop
from keras.layers.merge import concatenate
import matplotlib.pyplot as plt
import keras.backend as K
import tensorflow as tf
import imgaug as ia
from imgaug import augmenters as iaa
from tqdm import tqdm_notebook
import numpy as np
import json
import pickle
import os, cv2
from preprocessing import parse_annotation, BatchGenerator
from utils import WeightReader, decode_netout, draw_boxes
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""
# %matplotlib inline
# +
LABELS = ['RBC']
IMAGE_H, IMAGE_W = 416, 416
GRID_H, GRID_W = 13 , 13
BOX = 5
CLASS = len(LABELS)
CLASS_WEIGHTS = np.ones(CLASS, dtype='float32')
OBJ_THRESHOLD = 0.3
NMS_THRESHOLD = 0.3
ANCHORS = [0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828]
NO_OBJECT_SCALE = 1.0
OBJECT_SCALE = 5.0
COORD_SCALE = 1.0
CLASS_SCALE = 1.0
BATCH_SIZE = 16
WARM_UP_BATCHES = 100
TRUE_BOX_BUFFER = 50
# -
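ANCHORS above is a flattened list of five prior box sizes; reshaping it makes the (width, height) pairs, expressed in grid-cell units, explicit (the same `np.reshape(ANCHORS, [1,1,1,BOX,2])` appears later in the loss function):

```python
import numpy as np

ANCHORS = [0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434,
           7.88282, 3.52778, 9.77052, 9.16828]

# Five (width, height) anchor priors in grid-cell units
anchor_pairs = np.reshape(ANCHORS, (5, 2))
assert anchor_pairs.shape == (5, 2)
assert np.allclose(anchor_pairs[0], [0.57273, 0.677385])
```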
wt_path = 'yolo.weights'
# # Explore the dataset
generator_config = {
'IMAGE_H' : IMAGE_H,
'IMAGE_W' : IMAGE_W,
'GRID_H' : GRID_H,
'GRID_W' : GRID_W,
'BOX' : BOX,
'LABELS' : LABELS,
'CLASS' : len(LABELS),
'ANCHORS' : ANCHORS,
'BATCH_SIZE' : BATCH_SIZE,
'TRUE_BOX_BUFFER' : 50,
}
# +
image_path = '/home/andy/data/dataset/JPEGImages/'
annot_path = '/home/andy/data/dataset/Annotations/'
all_imgs, seen_labels = parse_annotation(annot_path, image_path)
# add extensions to image name
for img in all_imgs:
img['filename'] = img['filename'] + '.jpg'
# -
# **Sanity check: show a few images with ground truth boxes overlaid**
batches = BatchGenerator(all_imgs, generator_config)
image = batches[0][0][0][0]
plt.imshow(image.astype('uint8'))
# **Split the dataset into the training set and the validation set**
def normalize(image):
return image/255.
# +
train_valid_split = int(0.8*len(all_imgs))
train_batch = BatchGenerator(all_imgs[:train_valid_split], generator_config)
valid_batch = BatchGenerator(all_imgs[train_valid_split:], generator_config, norm=normalize)
# -
# # Construct the network
# the function to implement the organization (reorg) layer (thanks to github.com/allanzelener/YAD2K)
def space_to_depth_x2(x):
return tf.space_to_depth(x, block_size=2)
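For intuition, here is a NumPy sketch (an addition, not from the original notebook) of what `space_to_depth` does to an NHWC tensor: every 2x2 spatial patch is folded into the channel axis, halving height and width and quadrupling channels. The exact element ordering inside the channel axis may differ in detail from TensorFlow's:

```python
import numpy as np

def space_to_depth_np(x, block=2):
    # x has shape (batch, height, width, channels)
    n, h, w, c = x.shape
    x = x.reshape(n, h // block, block, w // block, block, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h // block, w // block, block * block * c)

# Matches the skip connection below: 26x26x64 -> 13x13x256
x = np.zeros((1, 26, 26, 64))
assert space_to_depth_np(x).shape == (1, 13, 13, 256)
```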
# + code_folding=[]
input_image = Input(shape=(IMAGE_H, IMAGE_W, 3))
true_boxes = Input(shape=(1, 1, 1, TRUE_BOX_BUFFER , 4))
# Layer 1
x = Conv2D(32, (3,3), strides=(1,1), padding='same', name='conv_1', use_bias=False)(input_image)
x = BatchNormalization(name='norm_1')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 2
x = Conv2D(64, (3,3), strides=(1,1), padding='same', name='conv_2', use_bias=False)(x)
x = BatchNormalization(name='norm_2')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 3
x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_3', use_bias=False)(x)
x = BatchNormalization(name='norm_3')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 4
x = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_4', use_bias=False)(x)
x = BatchNormalization(name='norm_4')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 5
x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_5', use_bias=False)(x)
x = BatchNormalization(name='norm_5')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 6
x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_6', use_bias=False)(x)
x = BatchNormalization(name='norm_6')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 7
x = Conv2D(128, (1,1), strides=(1,1), padding='same', name='conv_7', use_bias=False)(x)
x = BatchNormalization(name='norm_7')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 8
x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_8', use_bias=False, input_shape=(416,416,3))(x)
x = BatchNormalization(name='norm_8')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 9
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_9', use_bias=False)(x)
x = BatchNormalization(name='norm_9')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 10
x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_10', use_bias=False)(x)
x = BatchNormalization(name='norm_10')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 11
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_11', use_bias=False)(x)
x = BatchNormalization(name='norm_11')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 12
x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_12', use_bias=False)(x)
x = BatchNormalization(name='norm_12')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 13
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_13', use_bias=False)(x)
x = BatchNormalization(name='norm_13')(x)
x = LeakyReLU(alpha=0.1)(x)
skip_connection = x
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 14
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_14', use_bias=False)(x)
x = BatchNormalization(name='norm_14')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 15
x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_15', use_bias=False)(x)
x = BatchNormalization(name='norm_15')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 16
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_16', use_bias=False)(x)
x = BatchNormalization(name='norm_16')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 17
x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_17', use_bias=False)(x)
x = BatchNormalization(name='norm_17')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 18
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_18', use_bias=False)(x)
x = BatchNormalization(name='norm_18')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 19
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_19', use_bias=False)(x)
x = BatchNormalization(name='norm_19')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 20
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_20', use_bias=False)(x)
x = BatchNormalization(name='norm_20')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 21
skip_connection = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_21', use_bias=False)(skip_connection)
skip_connection = BatchNormalization(name='norm_21')(skip_connection)
skip_connection = LeakyReLU(alpha=0.1)(skip_connection)
skip_connection = Lambda(space_to_depth_x2)(skip_connection)
x = concatenate([skip_connection, x])
# Layer 22
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_22', use_bias=False)(x)
x = BatchNormalization(name='norm_22')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 23
x = Conv2D(BOX * (4 + 1 + CLASS), (1,1), strides=(1,1), padding='same', name='conv_23')(x)
output = Reshape((GRID_H, GRID_W, BOX, 4 + 1 + CLASS))(x)
# small hack to allow true_boxes to be registered when Keras build the model
# for more information: https://github.com/fchollet/keras/issues/2790
output = Lambda(lambda args: args[0])([output, true_boxes])
model = Model([input_image, true_boxes], output)
# -
model.summary()
# # Load pretrained weights
# **Load the weights originally provided by YOLO**
weight_reader = WeightReader(wt_path)
# +
weight_reader.reset()
nb_conv = 23
for i in range(1, nb_conv+1):
conv_layer = model.get_layer('conv_' + str(i))
if i < nb_conv:
norm_layer = model.get_layer('norm_' + str(i))
size = np.prod(norm_layer.get_weights()[0].shape)
beta = weight_reader.read_bytes(size)
gamma = weight_reader.read_bytes(size)
mean = weight_reader.read_bytes(size)
var = weight_reader.read_bytes(size)
        norm_layer.set_weights([gamma, beta, mean, var])
if len(conv_layer.get_weights()) > 1:
bias = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
kernel = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel, bias])
else:
kernel = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel])
# -
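Darknet serializes a convolution kernel as (out_channels, in_channels, height, width), while Keras stores (height, width, in_channels, out_channels); the reshape-then-transpose in the loader above converts between the two. A shape-only sketch (the 3x3x64x128 kernel size is an arbitrary example):

```python
import numpy as np

keras_shape = (3, 3, 64, 128)                 # (h, w, c_in, c_out) as Keras stores it
flat = np.zeros(int(np.prod(keras_shape)))    # darknet weights arrive as a flat buffer

# Same steps as the loader above: fill in reversed order, then transpose back
kernel = flat.reshape(list(reversed(keras_shape)))  # (c_out, c_in, w, h)
kernel = kernel.transpose([2, 3, 1, 0])             # -> (h, w, c_in, c_out)
assert kernel.shape == (3, 3, 64, 128)
```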
# **Randomize weights of the last layer**
# +
layer = model.layers[-4] # the last convolutional layer
weights = layer.get_weights()
new_kernel = np.random.normal(size=weights[0].shape)/(GRID_H*GRID_W)
new_bias = np.random.normal(size=weights[1].shape)/(GRID_H*GRID_W)
layer.set_weights([new_kernel, new_bias])
# -
# # Perform training
# **Loss function**
# $$\begin{multline}
# \lambda_\textbf{coord}
# \sum_{i = 0}^{S^2}
# \sum_{j = 0}^{B}
# L_{ij}^{\text{obj}}
# \left[
# \left(
# x_i - \hat{x}_i
# \right)^2 +
# \left(
# y_i - \hat{y}_i
# \right)^2
# \right]
# \\
# + \lambda_\textbf{coord}
# \sum_{i = 0}^{S^2}
# \sum_{j = 0}^{B}
# L_{ij}^{\text{obj}}
# \left[
# \left(
# \sqrt{w_i} - \sqrt{\hat{w}_i}
# \right)^2 +
# \left(
# \sqrt{h_i} - \sqrt{\hat{h}_i}
# \right)^2
# \right]
# \\
# + \sum_{i = 0}^{S^2}
# \sum_{j = 0}^{B}
# L_{ij}^{\text{obj}}
# \left(
# C_i - \hat{C}_i
# \right)^2
# \\
# + \lambda_\textrm{noobj}
# \sum_{i = 0}^{S^2}
# \sum_{j = 0}^{B}
# L_{ij}^{\text{noobj}}
# \left(
# C_i - \hat{C}_i
# \right)^2
# \\
# + \sum_{i = 0}^{S^2}
# L_i^{\text{obj}}
# \sum_{c \in \textrm{classes}}
# \left(
# p_i(c) - \hat{p}_i(c)
# \right)^2
# \end{multline}$$
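# The confidence target below multiplies the objectness label by the IoU
# (intersection over union) between a predicted box and a ground-truth box. As
# a quick standalone illustration of that term (plain NumPy, not part of the
# training graph; the `(center_x, center_y, w, h)` box format matches the loss
# code):

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (center_x, center_y, width, height)."""
    a_xy, a_wh = np.array(box_a[:2]), np.array(box_a[2:])
    b_xy, b_wh = np.array(box_b[:2]), np.array(box_b[2:])
    # convert centers and sizes to min/max corners
    a_min, a_max = a_xy - a_wh / 2., a_xy + a_wh / 2.
    b_min, b_max = b_xy - b_wh / 2., b_xy + b_wh / 2.
    # intersection rectangle, clipped to zero when the boxes don't overlap
    inter_wh = np.maximum(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0.)
    intersect = inter_wh[0] * inter_wh[1]
    union = a_wh[0] * a_wh[1] + b_wh[0] * b_wh[1] - intersect
    return intersect / union

print(iou((1., 1., 2., 2.), (1., 1., 2., 2.)))  # identical boxes: 1.0
print(iou((0., 0., 2., 2.), (2., 2., 2., 2.)))  # boxes that only touch: 0.0
```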
# + code_folding=[0.0]
def custom_loss(y_true, y_pred):
mask_shape = tf.shape(y_true)[:4]
cell_x = tf.to_float(tf.reshape(tf.tile(tf.range(GRID_W), [GRID_H]), (1, GRID_H, GRID_W, 1, 1)))
cell_y = tf.transpose(cell_x, (0,2,1,3,4))
cell_grid = tf.tile(tf.concat([cell_x,cell_y], -1), [BATCH_SIZE, 1, 1, 5, 1])
coord_mask = tf.zeros(mask_shape)
conf_mask = tf.zeros(mask_shape)
class_mask = tf.zeros(mask_shape)
seen = tf.Variable(0.)
total_recall = tf.Variable(0.)
"""
Adjust prediction
"""
### adjust x and y
pred_box_xy = tf.sigmoid(y_pred[..., :2]) + cell_grid
### adjust w and h
pred_box_wh = tf.exp(y_pred[..., 2:4]) * np.reshape(ANCHORS, [1,1,1,BOX,2])
### adjust confidence
pred_box_conf = tf.sigmoid(y_pred[..., 4])
### adjust class probabilities
pred_box_class = y_pred[..., 5:]
"""
Adjust ground truth
"""
### adjust x and y
true_box_xy = y_true[..., 0:2] # relative position to the containing cell
### adjust w and h
    true_box_wh = y_true[..., 2:4] # number of cells across, horizontally and vertically
### adjust confidence
true_wh_half = true_box_wh / 2.
true_mins = true_box_xy - true_wh_half
true_maxes = true_box_xy + true_wh_half
pred_wh_half = pred_box_wh / 2.
pred_mins = pred_box_xy - pred_wh_half
pred_maxes = pred_box_xy + pred_wh_half
intersect_mins = tf.maximum(pred_mins, true_mins)
intersect_maxes = tf.minimum(pred_maxes, true_maxes)
intersect_wh = tf.maximum(intersect_maxes - intersect_mins, 0.)
intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]
true_areas = true_box_wh[..., 0] * true_box_wh[..., 1]
pred_areas = pred_box_wh[..., 0] * pred_box_wh[..., 1]
union_areas = pred_areas + true_areas - intersect_areas
iou_scores = tf.truediv(intersect_areas, union_areas)
true_box_conf = iou_scores * y_true[..., 4]
### adjust class probabilities
true_box_class = tf.argmax(y_true[..., 5:], -1)
"""
Determine the masks
"""
### coordinate mask: simply the position of the ground truth boxes (the predictors)
coord_mask = tf.expand_dims(y_true[..., 4], axis=-1) * COORD_SCALE
    ### confidence mask: penalize predictors + penalize boxes with low IOU
# penalize the confidence of the boxes, which have IOU with some ground truth box < 0.6
true_xy = true_boxes[..., 0:2]
true_wh = true_boxes[..., 2:4]
true_wh_half = true_wh / 2.
true_mins = true_xy - true_wh_half
true_maxes = true_xy + true_wh_half
pred_xy = tf.expand_dims(pred_box_xy, 4)
pred_wh = tf.expand_dims(pred_box_wh, 4)
pred_wh_half = pred_wh / 2.
pred_mins = pred_xy - pred_wh_half
pred_maxes = pred_xy + pred_wh_half
intersect_mins = tf.maximum(pred_mins, true_mins)
intersect_maxes = tf.minimum(pred_maxes, true_maxes)
intersect_wh = tf.maximum(intersect_maxes - intersect_mins, 0.)
intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]
true_areas = true_wh[..., 0] * true_wh[..., 1]
pred_areas = pred_wh[..., 0] * pred_wh[..., 1]
union_areas = pred_areas + true_areas - intersect_areas
iou_scores = tf.truediv(intersect_areas, union_areas)
best_ious = tf.reduce_max(iou_scores, axis=4)
conf_mask = conf_mask + tf.to_float(best_ious < 0.6) * (1 - y_true[..., 4]) * NO_OBJECT_SCALE
    # penalize the confidence of the boxes that are responsible for the corresponding ground truth boxes
conf_mask = conf_mask + y_true[..., 4] * OBJECT_SCALE
### class mask: simply the position of the ground truth boxes (the predictors)
class_mask = y_true[..., 4] * tf.gather(CLASS_WEIGHTS, true_box_class) * CLASS_SCALE
"""
Warm-up training
"""
no_boxes_mask = tf.to_float(coord_mask < COORD_SCALE/2.)
seen = tf.assign_add(seen, 1.)
true_box_xy, true_box_wh, coord_mask = tf.cond(tf.less(seen, WARM_UP_BATCHES),
lambda: [true_box_xy + (0.5 + cell_grid) * no_boxes_mask,
true_box_wh + tf.ones_like(true_box_wh) * np.reshape(ANCHORS, [1,1,1,BOX,2]) * no_boxes_mask,
tf.ones_like(coord_mask)],
lambda: [true_box_xy,
true_box_wh,
coord_mask])
"""
Finalize the loss
"""
nb_coord_box = tf.reduce_sum(tf.to_float(coord_mask > 0.0))
nb_conf_box = tf.reduce_sum(tf.to_float(conf_mask > 0.0))
nb_class_box = tf.reduce_sum(tf.to_float(class_mask > 0.0))
loss_xy = tf.reduce_sum(tf.square(true_box_xy-pred_box_xy) * coord_mask) / (nb_coord_box + 1e-6) / 2.
loss_wh = tf.reduce_sum(tf.square(true_box_wh-pred_box_wh) * coord_mask) / (nb_coord_box + 1e-6) / 2.
loss_conf = tf.reduce_sum(tf.square(true_box_conf-pred_box_conf) * conf_mask) / (nb_conf_box + 1e-6) / 2.
loss_class = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=true_box_class, logits=pred_box_class)
loss_class = tf.reduce_sum(loss_class * class_mask) / (nb_class_box + 1e-6)
loss = loss_xy + loss_wh + loss_conf + loss_class
nb_true_box = tf.reduce_sum(y_true[..., 4])
nb_pred_box = tf.reduce_sum(tf.to_float(true_box_conf > 0.5) * tf.to_float(pred_box_conf > 0.3))
"""
Debugging code
"""
current_recall = nb_pred_box/(nb_true_box + 1e-6)
total_recall = tf.assign_add(total_recall, current_recall)
loss = tf.Print(loss, [tf.zeros((1))], message='Dummy Line \t', summarize=1000)
loss = tf.Print(loss, [loss_xy], message='Loss XY \t', summarize=1000)
loss = tf.Print(loss, [loss_wh], message='Loss WH \t', summarize=1000)
loss = tf.Print(loss, [loss_conf], message='Loss Conf \t', summarize=1000)
loss = tf.Print(loss, [loss_class], message='Loss Class \t', summarize=1000)
loss = tf.Print(loss, [loss], message='Total Loss \t', summarize=1000)
loss = tf.Print(loss, [current_recall], message='Current Recall \t', summarize=1000)
loss = tf.Print(loss, [total_recall/seen], message='Average Recall \t', summarize=1000)
return loss
# -
# **Setup a few callbacks and start the training**
# + code_folding=[]
early_stop = EarlyStopping(monitor='val_loss',
min_delta=0.001,
patience=3,
mode='min',
verbose=1)
checkpoint = ModelCheckpoint('weights_blood.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=1)
# +
#model.load_weights('weights_blood.h5')
# +
tb_counter = len([log for log in os.listdir(os.path.expanduser('~/logs/')) if 'blood' in log]) + 1
tensorboard = TensorBoard(log_dir=os.path.expanduser('~/logs/') + 'blood' + '_' + str(tb_counter),
histogram_freq=0,
write_graph=True,
write_images=False)
optimizer = Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#optimizer = SGD(lr=1e-4, decay=0.0005, momentum=0.9)
#optimizer = RMSprop(lr=1e-5, rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(loss=custom_loss, optimizer=optimizer)
model.fit_generator(generator = train_batch,
steps_per_epoch = len(train_batch),
epochs = 100,
verbose = 1,
validation_data = valid_batch,
validation_steps = len(valid_batch),
callbacks = [early_stop, checkpoint, tensorboard],
max_queue_size = 3)
# -
# # Perform detection on image
# +
#model.load_weights("weights_blood.h5")
dummy_array = np.zeros((1,1,1,1,TRUE_BOX_BUFFER,4))
# +
image = cv2.imread('/home/andy/data/dataset/JPEGImages/BloodImage_00032.jpg')
#image = cv2.imread('/home/andy/data/coco/val2014/COCO_val2014_000000000196.jpg')
#image = cv2.imread(all_imgs[train_valid_split:][28]['filename'])
plt.figure(figsize=(10,10))
input_image = cv2.resize(image, (416, 416))
input_image = input_image / 255.
input_image = input_image[:,:,::-1]
input_image = np.expand_dims(input_image, 0)
netout = model.predict([input_image, dummy_array])
boxes = decode_netout(netout[0],
obj_threshold=0.5,
nms_threshold=NMS_THRESHOLD,
anchors=ANCHORS,
nb_class=CLASS)
image = draw_boxes(image, boxes, labels=LABELS)
plt.imshow(image[:,:,::-1]); plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
#
# Artifact Correction with SSP
# ============================
#
# This tutorial explains how to estimate Signal-Space Projection (SSP) vectors
# for the correction of ECG and EOG artifacts.
#
# See `sphx_glr_auto_examples_io_plot_read_proj.py` for how to read
# and visualize already present SSP projection vectors.
#
# +
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# -
# Compute SSP projections
# -----------------------
#
# First let's do ECG.
#
#
# +
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=0, average=True)
print(projs)
ecg_projs = projs[-2:]
mne.viz.plot_projs_topomap(ecg_projs)
# -
# Now let's do EOG. Here we compute an EEG projector, and need to pass
# the measurement info so the topomap coordinates can be created.
#
#
# +
projs, events = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=1, average=True)
print(projs)
eog_projs = projs[-3:]
mne.viz.plot_projs_topomap(eog_projs, info=raw.info)
# -
# Apply SSP projections
# ---------------------
#
# MNE handles projections at the level of the measurement info, so to
# register them, populate the list found in the ``raw.info['projs']`` field.
#
#
raw.info['projs'] += eog_projs + ecg_projs
# That was it. MNE will now apply the projections on demand at any later
# stage, so watch out for ``proj`` parameters in functions, or apply them
# explicitly with the ``.apply_proj`` method.
#
#
# Demonstrate SSP cleaning on some evoked data
# --------------------------------------------
#
#
# +
events = mne.find_events(raw, stim_channel='STI 014')
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
# this can be highly data dependent
event_id = {'auditory/left': 1}
epochs_no_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
proj=False, baseline=(None, 0), reject=reject)
epochs_no_proj.average().plot(spatial_colors=True, time_unit='s')
epochs_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj=True,
baseline=(None, 0), reject=reject)
epochs_proj.average().plot(spatial_colors=True, time_unit='s')
# -
# Looks cool, right? However, it is often not clear how many components you
# should keep, and unfortunately this can have bad consequences, as can be
# seen interactively using the delayed SSP mode:
#
#
# +
evoked = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
proj='delayed', baseline=(None, 0),
reject=reject).average()
# set time instants in seconds (from 50 to 150ms in a step of 10ms)
times = np.arange(0.05, 0.15, 0.01)
fig = evoked.plot_topomap(times, proj='interactive', time_unit='s')
# -
# Now you should see checkboxes. Remove a few SSP projectors and watch how
# the auditory pattern suddenly drops off.
#
#
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# +
# Exercise - Building a confidence interval
x = c(177, 122, 128, 191, 180, 142, 197, 196, 67, 160, 167, 138, 107, 188, 102, 116, 138, 114, 188, 176, 148, 175, 169, 203, 135, 142, 168, 181, 168, 150, 132, 196, 88, 177, 164, 118, 178, 102, 156, 114)
# +
media = mean(x)
raiz = sqrt(40)
desvio = sd(x)
# z for 90%, 95% and 99% confidence
z1 = 1.645
z2 = 1.96
z3 = 2.576
# +
x1 = media - desvio*z1/raiz
x2 = media + desvio*z1/raiz
x1
x2
# +
x1 = media - desvio*z2/raiz
x2 = media + desvio*z2/raiz
x1
x2
# +
x1 = media - desvio*z3/raiz
x2 = media + desvio*z3/raiz
x1
x2
# -
qnorm(0.95)
qnorm(0.975)
qnorm(0.995)
# z for an 85% confidence level
qnorm(0.925)
# +
# Student's t exercise
xt = c(-0.04, -0.19, 0.14, -0.09, -0.14, 0.19, 0.04, 0.09)
# -
t = qt(0.95, 7)
# +
x1 = mean(xt) - sd(xt)*t/sqrt(8)
x2 = mean(xt) + sd(xt)*t/sqrt(8)
x1
x2
# +
# Exercise on the difference of means
A = c(5.36, 16.57, 0.62, 1.41, 0.64, 7.76, 3.64, 2.48, 12.1)
B = c(19.12, 3.52, 3.38, 2.5, 3.6, 1.74)
# -
difMedia = mean(A) - mean(B)
S = sqrt(var(A)/9 + var(B)/6)
difMedia
v = ((var(A)/9 + var(B)/6)^2)/(1/8*(var(A)/9)^2+1/5*(var(B)/6)^2) - 2
v
x1 = difMedia - S*qt(0.95, v)
x2 = difMedia + S*qt(0.95, v)
x1
x2
# +
# The difference is not significant
# -
# Sample size
s = 5
z = 1.96
m = 20
r = 5
n = ((100*s*z)/(r*m))^2
n
# +
# Therefore the required sample size is N = 97
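# The same sample-size formula can be cross-checked in Python (a standalone
# sketch of the calculation above; `r` is the acceptable error expressed as a
# percentage of the mean `m`):

```python
import math

s = 5      # sample standard deviation
z = 1.96   # z value for 95% confidence
m = 20     # sample mean
r = 5      # acceptable error, as a percentage of the mean

n = ((100 * s * z) / (r * m)) ** 2
print(n)             # about 96.04
print(math.ceil(n))  # round up to a whole number of samples: 97
```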
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Python data types and basic syntax
#
# - [Basic data types](#Basic-data-types)
# - [Strings](#Strings)
# - [Numbers](#Numbers)
# - [Booleans](#Booleans)
# - [Variable assignment](#Variable-assignment)
# - [String methods](#String-methods)
# - [Comments](#Comments)
# - [The print() function](#The-print()-function)
# - [Doing math in Python](#Doing-math-in-Python)
# - [Collections of data](#Collections-of-data)
# - [Lists](#Lists)
# - [Dictionaries](#Dictionaries)
# - [`for` loops](#for-loops)
# - [`if` statements](#if-statements)
# - [List comprehensions](#List-comprehensions)
# - [Dealing with errors](#Dealing-with-errors)
# ### Basic data types
# Just like Excel and other data processing software, Python recognizes a variety of data types, including three we'll focus on here:
# - Strings (text)
# - Numbers (integers, numbers with decimals and more)
# - Booleans (`True` and `False`).
#
# You can use the [`type()`](https://docs.python.org/3/library/functions.html#type) function to check the data type of a value.
# #### Strings
#
# A string is a group of characters -- letters, numbers, whatever -- enclosed within single or double quotes (doesn't matter as long as they match). The code in these notebooks uses single quotes. (The Python style guide doesn't recommend one over the other: ["Pick a rule and stick to it."](https://www.python.org/dev/peps/pep-0008/#string-quotes))
#
# If your string _contains_ apostrophes or quotes, you have two options: _Escape_ the offending character with a backslash `\`:
#
# ```python
# 'Isn\'t it nice here?'
# ```
#
# ... or change the surrounding punctuation:
#
# ```python
# "Isn't it nice here?"
# ```
#
# The style guide recommends the latter over the former.
#
# When you check the `type()` of a string, Python will return `str`.
#
# Calling [`str()`](https://docs.python.org/3/library/stdtypes.html#str) on a value will return the string version of that value (see example below).
'Investigative Reporters & Editors'
type('hello!')
'45'
str(45)
'True'
# If you "add" strings together with a plus sign `+`, it will concatenate them:
'IRE' + ' & ' + 'NICAR'
# #### Numbers
#
# Python recognizes a variety of numeric data types. Two of the most common are integers (whole numbers) and floats (numbers with decimals).
#
# Calling `int()` on a piece of numeric data, or a number stored as a string, will attempt to coerce it to an integer; calling `float()` will try to convert it to a float.
12
12.4
type(12)
int('35')
type(12.4)
int(35.6)
float('35.6')
float(46)
# #### Booleans
#
# Just like in Excel, which has `TRUE` and `FALSE` data types, Python has boolean data types. They are `True` and `False` -- note that only the first letter is capitalized, and you don't surround them with quotes.
#
# Boolean values are returned when you're evaluating a logical statement.
4 > 6
'ell' in 'Hello'
type(True)
# ### Variable assignment
#
# The `=` sign assigns a value to a variable name that you choose. Later, you can retrieve that value by referencing its variable name. Variable names can be pretty much anything you want ([as long as you follow some basic rules](https://thehelloworldprogram.com/python/python-variable-assignment-statements-rules-conventions-naming/)).
#
# In a Jupyter notebook, any value assigned to a variable will be available once you _run_ the cell. Otherwise it won't be available.
#
# This can be a tricky concept at first! For more detail, [here's a pretty good explainer from Digital Ocean](https://www.digitalocean.com/community/tutorials/how-to-use-variables-in-python-3).
my_name = 'Cody'
my_name
# You can also _reassign_ a different value to a variable name, though it's (usually) better practice to create a new variable.
my_name = 'Jacob'
my_name
# For reference, here's a list of "Python keywords" that you should not use as variable names:
# +
import keyword
print(keyword.kwlist)
# -
# ### String methods
#
# Let's go back to strings for a second. String objects have a number of useful [methods](https://docs.python.org/3/library/stdtypes.html#string-methods) -- let's use an example string to demonstrate a few common ones.
my_cool_string = ' Hello, Perth!'
# `upper()` converts the string to uppercase:
my_cool_string.upper()
# `lower()` converts to lowercase:
my_cool_string.lower()
# `replace()` will replace a piece of text with other text that you specify:
my_cool_string.replace('Perth', 'Alice Springs')
# `count()` will count the number of occurrences of a character or group of characters:
my_cool_string.count('H')
# Note that `count()` is case-sensitive. If your task is "count all the h's," convert your original string to upper or lowercase first:
my_cool_string.upper().count('H')
# [`split()`](https://docs.python.org/3/library/stdtypes.html#str.split) will split the string into a [_list_](#Lists) (more on these in a second) on a given delimiter (if you don't specify a delimiter, it'll default to splitting on a space):
my_cool_string.split()
my_cool_string.split(',')
my_cool_string.split('Per')
# `strip()` removes whitespace from either side of your string (but not internal whitespace):
my_cool_string.strip()
# You can use a cool thing called "method chaining" to combine methods -- just tack 'em onto the end. Let's say we wanted to strip whitespace from our string _and_ make it uppercase:
my_cool_string.strip().upper()
# Notice, however, that our original string is unchanged:
my_cool_string
# Why? Because we haven't assigned the results of anything we've done to a variable. A common thing to do, especially when you're cleaning data, would be to assign the results to a new variable:
my_cool_string_clean = my_cool_string.strip().upper()
my_cool_string_clean
# ### Comments
# A line with a comment -- a note that you don't want Python to interpret -- starts with a `#` sign. These are notes to collaborators and to your future self about what's happening at this point in your script, and why.
#
# Typically you'd put this on the line right above the line of code you're commenting on:
# coercing this to an int because we don't need any decimal precision
avg_settlement = 40827348.34328237
int(avg_settlement)
# Multi-line comments are sandwiched between triple quotes (or triple apostrophes):
#
# `'''
# this
# is a long
# comment
# '''`
#
# or
#
# `"""
# this
# is a long
# comment
# """`
#
# Here's a comment I used in a script once:
'''
Given a price, a base year index and the current year index, this will return the adjusted value
See: https://www.bls.gov/cpi/factsheets/cpi-math-calculations.pdf#page=2
Ctrl+F for "constant dollars"
'''
# ### The `print()` function
#
# So far, we've just been running the notebook cells to get the last value returned by the code we write. Using the [`print()`](https://docs.python.org/3/library/functions.html#print) function is a way to print specific things in your script to the screen.
#
# To print multiple things on the same line separated by a space, use a comma between each entry you'd like to print. (You can also use the `sep` keyword argument to specify a delimiter other than a space.)
print('Hello!')
print(my_name)
print('Hello', my_name)
print('Hello', my_name, sep='|')
# ### Doing math in Python
#
# You can do [basic math](https://www.digitalocean.com/community/tutorials/how-to-do-math-in-python-3-with-operators) in Python. You can also do [more advanced math](https://docs.python.org/3/library/math.html).
4+2
10-9
5*10
1000/10
# ** raises a number to the power of another number
5**2
# % returns the remainder of a division problem
100 % 8
# divmod() returns the quotient ~and~ the remainder
divmod(100, 8)
# +
# because divmod() returns a tuple, you can access each piece of the equation
# using bracket indexing (more on this in the next section)
divmod(100, 8)[0]
# -
divmod(100, 8)[1]
# ## Collections of data
#
# Now we're going to talk about two ways you can use Python to group data into a collection: lists and dictionaries.
# ### Lists
#
# A _list_ is a comma-separated list of items inside square brackets: `[]`.
#
# Here's a list of ingredients, each one a string, that together makes up a salsa recipe.
salsa_ingredients = ['tomato', 'onion', 'jalapeño', 'lime', 'cilantro']
# To get an item out of a list, you'd refer to its numerical position in the list -- its _index_ (1, 2, 3, etc.) -- inside square brackets immediately following your reference to that list. In Python, as in many other programming languages, counting starts at 0. That means the first item in a list is item `0`.
salsa_ingredients[0]
salsa_ingredients[1]
# You can use _negative indexing_ to grab things from the right-hand side of the list -- and in fact, `[-1]` is a common idiom for getting "the last item in a list" when it's not clear how many items are in your list.
salsa_ingredients[-1]
# If you wanted to get a slice of multiple items out of your list, you'd use colons (just like in Excel, kind of!).
#
# If you wanted to get the first three items, you'd do this:
salsa_ingredients[0:3]
# You could also have left off the initial 0 -- when you leave out the first number, Python defaults to "the first item in the list." In the same way, if you leave off the last number, Python defaults to "the last item in the list."
salsa_ingredients[:3]
# Note, too, that this slice is giving us items 0, 1 and 2. The `3` in our slice is the first item we _don't_ want. That can be kind of confusing at first. Let's try a few more:
# everything in the list except the first item
salsa_ingredients[1:]
# the second, third and fourth items
salsa_ingredients[1:4]
# the last two items
salsa_ingredients[-2:]
# To see how many items are in a list, use the `len()` function:
len(salsa_ingredients)
# To add an item to a list, use the [`append()`](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists) method:
salsa_ingredients
salsa_ingredients.append('mayonnaise')
salsa_ingredients
# Haha _gross_. To remove an item from a list, use the `pop()` method. If you don't specify the index number of the item you want to pop out, it will default to "the last item."
salsa_ingredients.pop()
salsa_ingredients
# You can use the [`in` and `not in`](https://docs.python.org/3/reference/expressions.html#membership-test-operations) expressions, among others, to test membership in a list (they'll return a boolean):
'lime' in salsa_ingredients
'cilantro' not in salsa_ingredients
# ### Dictionaries
#
# A _dictionary_ is a comma-separated list of key/value pairs inside curly brackets: `{}`. Let's make an entire salsa recipe:
salsa = {
'ingredients': salsa_ingredients,
'instructions': 'Chop up all the ingredients and cook them for awhile.',
'oz_made': 12
}
# To retrieve a value from a dictionary, you'd refer to the name of its key inside square brackets `[]` immediately after your reference to the dictionary:
salsa['oz_made']
salsa['ingredients']
# You can also use the `get()` method to retrieve an item from a dictionary. The benefit of using `get()` instead of square brackets: Your script won't throw an error if the key doesn't exist, and this is sometimes what you want. Instead, it will return `None`.
salsa.get('instructions')
print(salsa['cooking_duration'])
print(salsa.get('cooking_duration'))
# To add a new key/value pair to a dictionary, assign a new key to the dictionary inside square brackets and set the value of that key with `=`:
salsa['tastes_great'] = True
salsa
# To delete a key/value pair out of a dictionary, use the `del` command and reference the key:
del salsa['tastes_great']
salsa
# ### Indentation
#
# Whitespace matters in Python. Sometimes you'll need to indent bits of code to make things work. This can be confusing! `IndentationError`s are common even for experienced programmers. (FWIW, Jupyter will try to be helpful and insert the correct amount of "significant whitespace" for you.)
#
# You can use tabs or spaces, just don't mix them. [The Python style guide](https://www.python.org/dev/peps/pep-0008/) recommends indenting your code in groups of four spaces, so that's what we'll use.
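# For example, the body of an `if` statement (covered in more detail below)
# must be indented; Python uses the indentation itself to know where the block
# begins and ends:

```python
temperature = 35

if temperature > 30:
    # these indented lines run only when the condition above is true
    message = 'it is hot today'
    print(message)

# back at the left margin: this line always runs, after the block
print('done checking')
```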
# ### `for` loops
#
# You would use a `for` loop to iterate over a collection of things. The statement begins with the keyword `for` (lowercase), then a temporary `variable_name` of your choice to represent the items in the thing you're looping over, then the Python keyword `in`, then the collection you're looping over (or its variable name), then a colon, then the indented block of code with instructions about what to do with each item in the collection.
#
# Let's say we have a list of numbers, `ls`.
ls = [1, 2, 3, 4, 5, 6]
# We could loop over the list and print out each number:
for number in ls:
print(number)
# We could print out each number _times 6_:
for number in ls:
print(number*6)
# ... anything that happens in that indented block will be applied to each element in the list -- whatever you need to do inside your loop.
#
# Note too that the variable name `number` in our loop is totally arbitrary. This also would work, though it would probably confuse anyone else reading your script:
for banana in ls:
print(banana)
# It can be hard, at first, to figure out what's a "Python word" and what's a variable name that you get to define. This comes with practice.
# Strings are iterable, too. Let's loop over the letters in a sentence:
# +
sentence = "This one's a few stubbies short of a six-pack"
for letter in sentence:
print(letter)
# -
# To this point: Because strings are iterable, like lists, you can use the same kinds of methods:
# get the first five characters
sentence[:5]
# get the length of the sentence
len(sentence)
'stubbies' in sentence
# You can iterate over dictionaries, too. (One note: in versions of Python before 3.7, dictionaries _didn't keep track of the order that items were added_; modern Python preserves insertion order.)
#
# When you're looping over a dictionary, the variable name in your `for` loop will refer to the keys. Let's loop over our `salsa` dictionary from up above to see what I mean.
for key in salsa:
print(key)
# To get the _value_ of a dictionary item in a for loop, you'd need to use the key to retrieve it from the dictionary:
for key in salsa:
print(key, '=>', salsa[key])
# ### `if` statements
# Just like in Excel, you can use the "if" keyword to handle conditional logic.
#
# These statements begin with the keyword `if` (lowercase), then the condition to evaluate, then a colon, then a new line with a block of indented code to execute if the condition resolves to `True`.
if 4 < 6:
print('4 is less than 6')
# You can also add an `else` statement (and a colon) with an indented block of code you want to run if the condition resolves to `False`.
if 4 > 6:
print('4 is greater than 6?!')
else:
print('4 is not greater than 6.')
# If you need to, you can add multiple conditions with `elif`.
# +
HOME_SCORE = 6
AWAY_SCORE = 8
if HOME_SCORE > AWAY_SCORE:
print('we won!')
elif HOME_SCORE == AWAY_SCORE:
print('we tied!')
else:
print('we lost!')
# -
# ### List comprehensions
#
# Sometimes, you want to _do something_ to a list of data but you need to save the results of your operation under a new variable. A common scenario would be filtering a list or transforming the items somehow.
#
# A list comprehension happens inside square brackets and includes the keywords `for` and `in`. It also has placeholder variable names (that you define) to stand in for each item in your list.
#
# ```python
# [item for item in your_list]
# ```
#
# Let's say you want to strip whitespace and upcase every item in your list.
my_gross_list = [' McDonalds Corp.', 'ARBYS ', ' wendys', ' tHe KrUsTy KrAb ']
# You could do something like this:
my_clean_list = [x.upper().strip() for x in my_gross_list]
my_clean_list
# You can also add _conditions_ to your list comprehensions. Let's say we want to keep only list items that end with 'S' (using a string method called [`endswith()`](https://docs.python.org/3/library/stdtypes.html#str.endswith)):
ends_with_s = [x for x in my_clean_list if x.endswith('S')]
ends_with_s
# ### Dealing with errors
#
# Run the code in the following cell:
print(salsa_ingredients[0])
print(salsa_ingredients[-1])
print(salsa_ingredients[100])
# Errors like this are extremely common and happen to literally every person who writes code. It's just part of the work. They can be frustrating! But there is a strategy for solving them.
#
# First thing: Read error messages (called "tracebacks") from the bottom up. We're getting something called an `IndexError`, and it's saying "list index out of range."
#
# Moving upward: The error message points to the offending line of code: 3.
#
# Maybe, from here, we can figure out the error. (Answer: We don't have 100 items in our list.) If not, I would Google the exact text of the error on the first line we read, and maybe the word "python": ["IndexError: list index out of range" python](https://www.google.com/search?q=%22IndexError%3A+list+index+out+of+range%22+python). You'll get _very_ acquainted with StackOverflow.
#
# 👉 To dive deeper into debugging errors, [check out this notebook](Debugging%20strategies.ipynb).
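# If you can anticipate an error, one option (beyond fixing the code) is to
# handle it with a `try`/`except` block -- a pattern not used in this notebook,
# but a natural next step:

```python
salsa_ingredients = ['tomato', 'onion', 'jalapeño', 'lime', 'cilantro']

def safe_get(items, index):
    """Return the item at `index`, or a fallback message if it doesn't exist."""
    try:
        return items[index]
    except IndexError:
        # this branch runs only when the lookup above raises an IndexError
        return 'no such item!'

print(safe_get(salsa_ingredients, 0))    # tomato
print(safe_get(salsa_ingredients, 100))  # no such item!
```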