# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Machine learning
# The entire machine learning part (regression + classification) will be written here
# +
import numpy as np
import geopandas as gpd
import pandas as pd
# scikit-learn functions
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_validate
from sklearn.linear_model import Ridge
from sklearn.preprocessing import OneHotEncoder, StandardScaler, RobustScaler
from sklearn.metrics import matthews_corrcoef, r2_score, accuracy_score, confusion_matrix, ConfusionMatrixDisplay  # plot_confusion_matrix was removed in scikit-learn 1.2
from sklearn.model_selection import train_test_split
# custom lib
import Supporto_ML as supp_ML
# +
"""
Prima task: prevedere l'attività di twitter a livello provinciale
"""
# Importo i dati dal database finale (vedi Import Dati)
data = pd.read_csv('data/processed/MachineLearningDB.csv')
# Smisto target e features
target_mattina = data['TargetDay']
target_sera = data['TargetNight']
data.drop(columns=['TargetDay', 'TargetNight'], inplace=True)
#data.drop(columns=['Weekday'], inplace=True)
#Restringere database di features, molte delle features ipotizzate contano poco
"""
data=data[["Tweet1m", "Tavg1m", "Rainmax1m", "Rainavg1m", "Electro1m", "Tweet1n",
"Tavg1n", "Rainmax1n", "Rainavg1n", "Electro1n"]]
"""
data=data[["Tweet1m", "Tweet2m", "Tweet1n", "Tweet2n"]]
# -
# Logistic Regressor (at the regional level)
print("Day")
supp_ML.logistic_regressor_fittato(data, target_mattina)
print("Evening")
supp_ML.logistic_regressor_fittato(data, target_sera)
# # Osservazioni:
# 1) Very high variance in the r2 value; I have very little data for this estimate (also, the year-end and holiday data skew everything) \
# 2) Removing features helps quite a bit \
# 3) Including the weekday **seems** to worsen the estimate (probably because, until we fix it, the one-hot encoder hits some integer-valued columns)
#
# Overall the predictor seems better than random when using only the tweet counts of the previous days as features, but it is still far from a good predictor
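# The `Supporto_ML` helpers are not shown in this notebook; the following is a hypothetical sketch of the kind of scaled, cross-validated logistic-regression pipeline such a helper might wrap (synthetic data, illustrative names only — not the actual implementation):

```python
# Hypothetical sketch (NOT the actual Supporto_ML implementation):
# a scaled, cross-validated logistic regression on synthetic data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))             # e.g. tweet counts from previous days
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # binary high/low-activity target

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
cv = cross_validate(pipe, X, y, cv=5, scoring="accuracy")
print(round(cv["test_score"].mean(), 3))
```

# On this linearly separable toy target the mean accuracy is well above chance; on the real, small Twitter dataset the scores are far noisier, as noted above.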
# +
# Random Forest Regressor
supp_ML.Random_Forest_Regressor_CV(data, target_mattina)
supp_ML.Random_Forest_Regressor_CV(data, target_sera)
# -
"""
Adesso mi occupo di fare la parte di classificazione: voglio individuare quali siano le circoscrizioni di Trento che hanno
il più alto numero di interazioni sociali basandomi sui dati delle giornate precedenti.
Il dataset sarà uguale a quello precedente con la differenza che la parte di conteggi delle X sarà divisa per circoscrizione
Dal momento che ci sono 12 circoscrizioni i parametri di input saranno
[interazioni sociali giorno i-2 divisi per circoscrizione][interazioni sociali giorno i-1 divisi per circoscrizione] + altri parametri
Per quanto riguarda i target, considereremo la circoscrizione con il maggior numero di tweets.
Inizialmente provo a lavorare con un Classificatore random forest, dal momento che abbiamo pochi elementi
"""
# Random Forest Classifier
supp_ML.Random_Forest_Classifier_Circoscrizione(data)
# Source: SocialPulse(noCC)/Machine_learning.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Running a multi-member hydrological ensemble on the Raven Server
#
# Here we use birdy's WPS client to launch the GR4JCN and HMETS hydrological models on the server and analyze the outputs.
# +
from birdy import WPSClient
from example_data import TESTDATA
import datetime as dt
from urllib.request import urlretrieve
import xarray as xr
import numpy as np
from matplotlib import pyplot as plt
url = "http://localhost:9099/wps"
wps = WPSClient(url)
# +
# The model parameters for gr4jcn and hmets. These can be a string of comma-separated values, a list, an array or a named tuple.
gr4jcn ='0.529, -3.396, 407.29, 1.072, 16.9, 0.947'
hmets = '9.5019, 0.2774, 6.3942, 0.6884, 1.2875, 5.4134, 2.3641, 0.0973, 0.0464, 0.1998, 0.0222, -1.0919, ' \
'2.6851, 0.3740, 1.0000, 0.4739, 0.0114, 0.0243, 0.0069, 310.7211, 916.1947'
# Forcing files. Raven uses the same forcing files for all and extracts the information it requires for each model.
ts=TESTDATA['raven-gr4j-cemaneige-nc-ts']
# Model configuration parameters.
config = dict(
start_date=dt.datetime(2000, 1, 1),
end_date=dt.datetime(2002, 1, 1),
area=4250.6,
elevation=843.0,
latitude=54.4848,
longitude=-123.3659,
)
# Launch the WPS to get the multi-model results. Note the "gr4jcn" and "hmets" keys.
resp=wps.raven_multi_model(ts=str(ts),gr4jcn=gr4jcn,hmets=hmets, **config)
# And get the response
# With `asobj` set to False, only the reference to the output is returned in the response.
# Setting `asobj` to True will retrieve the actual files and copy them locally.
[hydrograph, storage, solution, diagnostics] = resp.get(asobj=True)
# -
# # Here the code crashes on my instance with Errno:
# [Errno -51] NetCDF: Unknown file format: b'/tmp/tmp77v34gl4'
#
# Perhaps the NetCDF reader is not working correctly? xarray should be installed. This needs to be verified.
print(diagnostics)
# The `hydrograph` and `storage` outputs are netCDF files storing the time series. These files are opened by default using `xarray`, which provides convenient and powerful time series analysis and plotting tools.
hydrograph.q_sim
hydrograph.q_sim.plot()
print("Max: ", hydrograph.q_sim.max())
print("Mean: ", hydrograph.q_sim.mean())
print("Monthly means: ", hydrograph.q_sim.groupby('time.month').mean())
# Source: docs/source/notebooks/multi_model_simulation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sea Snail (Abalone) Age and Gender Prediction
# # NAME: <NAME> <br>PRN: 18030142032
#
# <b>Abalone </b> is a common name for any of a group of small to very large sea snails, marine gastropod molluscs in the family Haliotidae. Other common names are ear shells, sea ears, and muttonfish or muttonshells in Australia, ormer in Great Britain, abalone in South Africa, and pāua in New Zealand. These sea snails are quite endangered, in South Africa at least, but can be found in Australia, Great Britain and New Zealand.
# The age of abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings through a microscope. <br>
# The gender of a live abalone is determined by holding it out of the water with the holes along the bottom.<br>
# Other measurements, which are easier to obtain, are used to predict the age.
# <b><u>Table of Contents</u></b>
# - DataSet
# - Dependencies
# - Data Exploration
# - Wrangling- Cleaning & Preprocessing
# - Visualization
# - Observation
# - Predictive Analysis using Classification Algorithms
# - Algorithms Application on dataset for Age and Gender Prediction
# - Conclusion
# ### <br><br><br><br><u>Dataset Location:</u> https://archive.ics.uci.edu/ml/datasets/abalone
# #### Dataset Description
# + active=""
# Name Data-Type Measure. Description
# ---- --------- ----- -----------
# Sex nominal M, F, and I (infant)
# Length continuous mm Longest shell measurement
# Diameter continuous mm perpendicular to length
# Height continuous mm with meat in shell
# Whole weight continuous grams whole abalone
# Shucked weight continuous grams weight of meat
# Viscera weight continuous grams gut weight (after bleeding)
# Shell weight continuous grams after being dried
# Rings integer +1.5 gives the age in years
# -
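# As the last row of the table states, the ring count plus 1.5 gives the age in years; a one-line helper makes the conversion explicit:

```python
# Age in years = ring count + 1.5 (from the dataset description above)
def rings_to_age(rings: float) -> float:
    return rings + 1.5

print(rings_to_age(9))  # an abalone with 9 rings is 10.5 years old
```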
# ### <br><br><br><br>Import Dependencies
# <b>Libraries Used:</b> Numpy, Pandas, Seaborn, Matplotlib, Sklearn
# +
# data analysis
import numpy as np
import pandas as pd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# machine learning algorithms
from sklearn.svm import SVC, LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report,accuracy_score,confusion_matrix
import warnings
warnings.filterwarnings('ignore')
# -
# ### <br><br><br><br>Data Loading from CSV and Exploration
data = pd.read_csv('./sea-snail-abalone.csv')
print("The dimension of dataset (row,column):",data.shape)
print("\nThere are 9 Columns of the given dataset which are as follows :\n",list(data.columns))
data.info()
# The information shows that some values are missing from the 'Height' and 'Viscera weight' columns, and that the data types of 'Diameter' and 'Shucked weight' are not numeric.<br><br>
data.describe() # DataSet Description
# ## <br><br><br><br>Data Wrangling: Editing, Cleaning and Preprocessing
data.columns = data.columns.str.replace(' ', '_') # replacing whitespaces to underscore
data.columns = data.columns.str.lower() # changing from camel-case to lower case.
data.head()
data.dtypes # Data Types for each column before conversion.
# Checking whether any column having the PERCENTAGE of NULL Values
print(data.isnull().sum()/data.shape[0] * 100)
# +
data.diameter = data.diameter.str.replace('?', '0', regex=False)  # Replacing unwanted entries
data.diameter = pd.to_numeric(data.diameter)  # Data type conversion of the column Diameter
data.loc[data.diameter == 0, 'diameter'] = data.diameter.mean()  # A diameter cannot be zero: updating placeholder zeros with the column mean
data.height = data.height.fillna(data.height.mean())  # Replacing NaN values with the column mean
data.shucked_weight = data.shucked_weight.str.replace('?', '0', regex=False)  # Replacing unwanted entries
data.shucked_weight = pd.to_numeric(data.shucked_weight)  # Data type conversion of the column Shucked_weight
data.loc[data.shucked_weight == 0, 'shucked_weight'] = data.shucked_weight.mean()  # Updating placeholder zeros with the column mean
data.viscera_weight = data.viscera_weight.fillna(data.viscera_weight.mean())  # Replacing NaN values with the column mean
# -
data.dtypes # Data Types for each column after conversion.
# PERCENTAGE of NULL Values
print(data.isnull().sum()/data.shape[0] * 100)
# Verifying that all the snail measurements are strictly positive
print((data.length <= 0).any())
print((data.diameter <= 0).any())
print((data.height <= 0).any())
print((data.whole_weight <= 0).any())
print((data.shucked_weight <= 0).any())
print((data.viscera_weight <= 0).any())
print((data.shell_weight <= 0).any())
print((data.rings <= 0).any())
# The 'Length', 'Diameter', 'Height' and weight columns cannot be negative or zero. If any of these values is zero or negative, the data about that abalone is not correct: a living being (especially a sea snail) has all of these properties.
# ## <br><br><br><br> Data Visualization
# <b>Sex:</b> The sex variable is categorised into three parts: M for Male, F for Female and I for Infant (not adult).
data.groupby(['sex']).count().plot(kind='bar', legend=False, color='lightcoral')
plt.title('Distribution of Sex Ratio')
plt.show()
# The bar plot of category counts shows that the dataset is balanced with respect to the gender attribute. <br><br>
sns.countplot(x='rings', data=data)
plt.title('Distribution of Rings')
plt.show()
# <b>Rings:</b> Abalone with Rings between 7-11 have the most observations.<br><br>
sns.boxplot(data=data, x='sex', y='rings')
plt.show()
# - Distribution between Male and Female is similar
# - Most of the Rings both for Male and Female are between 7 and 12
# - Infants have mostly from 5 to 10 Rings
data.hist(figsize=(20,10), grid = False, layout=(3,3), bins = 30)
plt.show()
# These histograms show the skewness of the data.<br><br><br><br>
# ### Generating HeatMap to find out the correlation between each column
plt.figure(figsize=(10, 10))
corr = data.corr()
sns.heatmap(corr, annot=True)
plt.show()
# ### The HeatMap shows the following features:-
# - Length is highly correlated with Diameter<br>
# - Whole weight is highly correlated with all the features except Rings<br>
# - From all the features excluding Rings, Height is least correlated with other features<br>
# - Rings feature has the highest correlation with Shell Weight followed by Height, Length and Diameter<br>
# <br><br>
# ## Correlation Observation
# Helper function to scatter-plot the correlation between any two columns
def getAnyCorrelation(xcol, ycol):
    fig, ax = plt.subplots(1)
    x = data[xcol]
    y = data[ycol]
    ax.scatter(x, y, marker="o", alpha=0.2)
    plt.ylabel(ycol)
    plt.xlabel(xcol)
    plt.title("Correlation between " + xcol + " & " + ycol, fontsize=16)
    plt.show()
# ### Weight are linearly correlated with other weight features
getAnyCorrelation('shucked_weight','viscera_weight')
getAnyCorrelation('shucked_weight','shell_weight')
getAnyCorrelation('viscera_weight','shell_weight')
# ### Length and Diameter shows linear correlation
getAnyCorrelation('length','diameter')
# ### Length, Diameter and Whole Weight shows linear correlation
getAnyCorrelation('diameter','whole_weight')
getAnyCorrelation('length','whole_weight')
# # <br><br><br><br>Classification Algorithms Used for Predictive Analysis:
#
# <b>1. K Nearest Neighbour</b> :- non-parametric, lazy learning algorithm. Purpose is to use a dataset in which the data points are separated into several classes to predict the classification of a new sample point. <br><br>
#
# <b>2. Support Vector Machine</b> :- a powerful and flexible class of supervised algorithms for both classification and regression. Produces significant accuracy with less computation power. <br> <br>
#
# <b>3. Naive Bayes</b> :- a group of extremely fast and simple classification algorithms that are often suitable for very high-dimensional datasets. Very useful as a quick-and-dirty baseline for a classification problem.<br>
#
# I am going to apply these three algorithms one by one to predict the Age and Gender of the Abalone
# # <br><br><br> 1. Age Prediction:
# Age is continuous data, so predicting it directly is a regression problem rather than a classification one. To turn it into a classification task, I binned the ring counts into age categories (young, adult and old in the code below). The age of the abalone can then be predicted with a classification model.
ds = data
# +
def sexConversion(x, t):
    if x == t:
        return 1
    else:
        return 0
ds['Sex_M'] = ds.sex.apply(lambda x: sexConversion(x,'M'))
ds['Sex_F'] = ds.sex.apply(lambda x: sexConversion(x,'F'))
ds['Sex_I'] = ds.sex.apply(lambda x: sexConversion(x,'I'))
# -
# #### Categorizing Rings into age groups (young, adult and old) for Age classification
bins = [8,9,11,ds['rings'].max()]
group_names = ['young','adult','old']
ds['rings'] = pd.cut(ds['rings'],bins, labels = group_names)
dictionary = {'young':0, 'teen':1, 'adult':2,'old':3}
ds['rings'] = ds['rings'].map(dictionary)
ds1 = ds.drop(['sex'], axis = 1)
# ## Splitting DataSet for Age Prediction
X = ds1.drop(['rings'], axis = 1)
y = ds1['rings']
y = y.fillna(y.mean())
y = y.astype(int)
X_train,X_test,y_train,y_test = train_test_split(X,y, test_size = 0.2,random_state=10)
# ### 1. Applying KNN Model
knn = KNeighborsClassifier(n_neighbors=2)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, y_train) * 100, 2)
acc_knn
print(classification_report(y_pred,y_test))
confusion_matrix(y_pred,y_test)
aa_knn = round(accuracy_score(y_pred,y_test)*100,2)
print('Accuracy of the Model:',aa_knn)
# ### 2. Applying Gaussian Naive Bayes Model
gaussian = GaussianNB()
gaussian.fit(X_train, y_train)
y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, y_train) * 100, 2)
acc_gaussian
print(classification_report(y_pred,y_test))
confusion_matrix(y_pred,y_test)
aa_gnb = round(accuracy_score(y_pred,y_test)*100,2)
print('Accuracy of the Model:',aa_gnb)
# ### 3. Applying SVM Model
svc = SVC(kernel='linear',gamma = 560)
svc.fit(X_train, y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, y_train) * 100, 2)
acc_svc
print(classification_report(Y_pred,y_test))
confusion_matrix(Y_pred,y_test)
aa_svm = round(accuracy_score(Y_pred,y_test)*100,2)
print('Accuracy of the Model:',aa_svm)
# ## Best Classification Model for Age Prediction
age_models = pd.DataFrame({
'Model': ['SVM', 'KNN', 'Naive Bayes'],
'A_Score': [acc_svc, acc_knn, acc_gaussian],
'Model_Acc_A':[aa_svm,aa_knn,aa_gnb]
})
age_models.sort_values(by='A_Score', ascending=False)
# KNN and SVM are clearly the best models, with about 80% model score and accuracy rate, which is fairly good.
# # <br><br><br>2. Gender Prediction:
#
ds2 = ds1.drop(['Sex_M','Sex_F','Sex_I'], axis = 1)
ds2['sex'] = ds.sex
# ## Splitting DataSet for Gender Prediction
# +
X = ds2.drop(['sex'], axis = 1)
y = ds2['sex']
dictionary = {'M':1,'F':2,'I':0}
y = y.map(dictionary)
X.rings = X.rings.fillna(X.rings.median())
X.rings = X.rings.astype(int)
X_train,X_test,y_train,y_test = train_test_split(X,y, test_size = 0.2,random_state=10)
# -
# ### 1. Applying SVM Model
svc = SVC(kernel='linear',gamma = "auto")
svc.fit(X_train, y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, y_train) * 100, 2)
acc_svc
print(classification_report(Y_pred,y_test))
confusion_matrix(Y_pred,y_test)
ga_svm = round(accuracy_score(Y_pred,y_test)*100,2)
print('Accuracy of the Model:',ga_svm)
# ### 2. Applying Gaussian Naive Bayes Model
gaussian = GaussianNB()
gaussian.fit(X_train, y_train)
y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, y_train) * 100, 2)
acc_gaussian
print(classification_report(y_pred,y_test))
confusion_matrix(y_pred,y_test)
ga_gnb = round(accuracy_score(y_pred,y_test)*100,2)
print('Accuracy of the Model:',ga_gnb)
# ### 3. Applying KNN Model
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, y_train) * 100, 2)
acc_knn
print(classification_report(y_pred,y_test))
confusion_matrix(y_pred,y_test)
ga_knn = round(accuracy_score(y_pred,y_test)*100,2)
print('Accuracy of the Model:',ga_knn)
# ## Best Classification Model for Gender Prediction
gender_models = pd.DataFrame({
'Model': ['SVM', 'KNN', 'Naive Bayes'],
'G_Score': [acc_svc, acc_knn, acc_gaussian],
'Model_Acc_G':[ga_svm,ga_knn,ga_gnb]
})
gender_models.sort_values(by='G_Score', ascending=False)
# SVM and Naive Bayes are average models here, with about 53% accuracy.
# KNN has a lower model score than SVM and Naive Bayes but is more accurate than either of them.
# # <br><br><br><br> Conclusion
m = pd.merge(age_models,gender_models,on='Model')
m
# SVM is clearly the best model for Age prediction, and KNN is the best model for Gender prediction. The model scores are about 79% (KNN, Age) and 87% (SVM, Gender), which is fairly good. However, the test accuracies are only about 56% and 51% for Age and Gender respectively. Naive Bayes is average on model score but the weakest model for both predictions.
# Source: Assignments/Python/Abalone-Predictive_Analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from bayes_opt import BayesianOptimization
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
# %matplotlib inline
# -
# # Target Function
#
# Let's create a target 1-D function with multiple local maxima to test and visualize how the [BayesianOptimization](https://github.com/fmfn/BayesianOptimization) package works. The target function we will try to maximize is the following:
#
# $$f(x) = e^{-(x - 2)^2} + e^{-\frac{(x - 6)^2}{10}} + \frac{1}{x^2 + 1}, $$ its maximum is at $x = 2$ and we will restrict the interval of interest to $x \in (-2, 10)$.
def target(x):
    return np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/(x**2 + 1)
# +
x = np.linspace(-2, 10, 1000)
y = target(x)
plt.plot(x, y)
# -
# # Create a BayesianOptimization Object
#
# Enter the target function to be maximized, its variable(s) and their corresponding ranges (see this [example](https://github.com/fmfn/BayesianOptimization/blob/master/examples/usage.py) for a multi-variable case). A minimum of 2 initial guesses is necessary to kick-start the algorithm; these can be either random or user-defined.
# In this example we will use the Upper Confidence Bound (UCB) as our utility function. It has the free parameter
# $\kappa$ which controls the balance between exploration and exploitation; we will set $\kappa=5$ which, in this case, makes the algorithm quite bold. Additionally we will use the cubic correlation in our Gaussian Process.
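# The UCB utility scores each candidate point as $\mu(x) + \kappa\,\sigma(x)$; a minimal standalone illustration of how $\kappa$ trades exploration against exploitation (this is not the package's internal code):

```python
import numpy as np

def ucb(mu, sigma, kappa=5.0):
    # Upper Confidence Bound: larger kappa rewards uncertainty (exploration),
    # smaller kappa rewards the predicted mean (exploitation).
    return np.asarray(mu) + kappa * np.asarray(sigma)

mu = np.array([0.2, 0.5, 0.1])        # GP posterior means at 3 candidate points
sigma = np.array([0.30, 0.05, 0.40])  # GP posterior standard deviations

print(np.argmax(ucb(mu, sigma)))             # bold kappa=5 picks the most uncertain point
print(np.argmax(ucb(mu, sigma, kappa=0.0)))  # kappa=0 picks the highest-mean point
```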
# # Plotting and visualizing the algorithm at each step
# ### Lets first define a couple functions to make plotting easier
# +
def posterior(bo, xmin=-2, xmax=10):
    bo.gp.fit(bo.X, bo.Y)
    mu, sigma2 = bo.gp.predict(np.linspace(xmin, xmax, 1000).reshape(-1, 1), eval_MSE=True)
    return mu, np.sqrt(sigma2)

def plot_gp(bo, x, y):
    fig = plt.figure(figsize=(16, 10))
    fig.suptitle('Gaussian Process and Utility Function After {} Steps'.format(len(bo.X)), fontdict={'size':30})
    gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
    axis = plt.subplot(gs[0])
    acq = plt.subplot(gs[1])
    mu, sigma = posterior(bo)
    axis.plot(x, y, linewidth=3, label='Target')
    axis.plot(bo.X.flatten(), bo.Y, 'D', markersize=8, label=u'Observations', color='r')
    axis.plot(x, mu, '--', color='k', label='Prediction')
    axis.fill(np.concatenate([x, x[::-1]]),
              np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),
              alpha=.6, fc='c', ec='None', label='95% confidence interval')
    axis.set_xlim((-2, 10))
    axis.set_ylim((None, None))
    axis.set_ylabel('f(x)', fontdict={'size':20})
    axis.set_xlabel('x', fontdict={'size':20})
    utility = bo.util.utility(x.reshape((-1, 1)), bo.gp, 0)
    acq.plot(x, utility, label='Utility Function', color='purple')
    acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,
             label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)
    acq.set_xlim((-2, 10))
    acq.set_ylim((0, np.max(utility) + 0.5))
    acq.set_ylabel('Utility', fontdict={'size':20})
    acq.set_xlabel('x', fontdict={'size':20})
    axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
    acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
# -
# Correlation length ~ $1 / \theta$
bo = BayesianOptimization(target, {'x': (-2, 10)})
gp_params = {'corr': 'cubic', 'theta0': 1e0, 'thetaL': 1e-1, 'thetaU': 1e1}
max_params = {'acq': 'ei', 'xi': 0.1}
# ### Five random points
#
# After we probe five points at random, we can fit a Gaussian Process and start the Bayesian optimization procedure. These initial points should give us a fairly uneventful posterior, with the uncertainty growing as we move further from the observations.
bo.maximize(init_points=5, n_iter=0, **max_params, **gp_params)
plot_gp(bo, x, y)
# ### Five further GP steps (after the five random points)
for _ in range(5):
    bo.maximize(init_points=0, n_iter=1, **max_params)
    plot_gp(bo, x, y)
plt.show()
x
# Source: Bayesian Optimization.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # BIG DATA ANALYSIS : Image Classification Using Machine Learning
# ---
# ## An image is really just a matrix
# +
import numpy as np
from PIL import Image
img = Image.open('data/yonsei-logo.png').convert('L')  # convert to grayscale
display(img)
# -
arr = np.array(img)
print(arr.shape)
arr
# # Handwritten digit recognition with the MNIST dataset
# +
from sklearn.datasets import fetch_openml
# download the data (as_frame=False keeps X as a NumPy array, which the index-based code below relies on)
mnist = fetch_openml('mnist_784', cache=False, as_frame=False)
# Features
X = mnist.data
# Label
y = mnist.target
print(X.shape, y.shape)
# -
# ## Picking one image and printing it as a matrix
for i in range(0, 784, 28):
    print(" ".join([str(int(x)).ljust(3) for x in X[0][i:i+28].tolist()]))
import matplotlib.pyplot as plt
# %matplotlib inline
# +
fig = plt.figure(figsize=(10,10))
for row in range(0,5):
    for col in range(1,6):
        index = row*5+col
        plt.subplot(5,5,index)
        img = X[index].reshape(28, 28)
        # matrix to image
        plt.imshow(img, cmap='gray')
plt.show()
# -
# # Building classification models
# +
from sklearn.metrics import accuracy_score, log_loss
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
# -
import pandas as pd
# +
# store the classifiers to be used in a list
classifiers = [
KNeighborsClassifier(),
LogisticRegression(),
SVC(),
DecisionTreeClassifier(),
RandomForestClassifier(),
MLPClassifier(),
GaussianNB()
]
# for now, train on only 10,000 samples
num_of_samples = 10000
X_train, X_test, y_train, y_test = \
X[:num_of_samples], X[-5000:] ,y[:num_of_samples], y[-5000:]
# -
# package for timing each model's training and prediction
import time
# +
log_cols=["Classifier", "Accuracy", "Log Loss"]
log = pd.DataFrame(columns=log_cols)
for clf in classifiers:
    print("="*30)
    name = clf.__class__.__name__
    print(name)
    start_time = time.time()
    clf.fit(X_train, y_train)
    interval = time.time() - start_time
    print("training-time:", round(interval, 3))
    start_time = time.time()
    train_predictions_1 = clf.predict(X_test)
    interval = time.time() - start_time
    print("predicting-time:", round(interval, 3))
    acc = accuracy_score(y_test, train_predictions_1)
    print("Accuracy: {:.4%}".format(acc))
    print("="*30)
# -
# # Dimensionality reduction with PCA
# <img src="data/pca_image.jpg"/>
from sklearn.decomposition import PCA
# project the 784 features down to 50
pca = PCA(n_components=50)
principalComponents = pca.fit_transform(X)
principalComponents.shape
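# To judge whether 50 components retain enough information, check `explained_variance_ratio_`; a self-contained sketch on synthetic low-rank data (the numbers for MNIST itself will differ):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# synthetic data: 10 latent factors embedded in 100 dimensions plus tiny noise
latent = rng.normal(size=(500, 10))
X_syn = latent @ rng.normal(size=(10, 100)) + 0.01 * rng.normal(size=(500, 100))

pca_syn = PCA(n_components=50).fit(X_syn)
retained = pca_syn.explained_variance_ratio_.sum()
print(round(retained, 4))  # close to 1.0 here, since the true rank is only 10
```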
# ## Testing the classification algorithms on the reduced features
# +
classifiers = [
KNeighborsClassifier(),
LogisticRegression(),
SVC(),
DecisionTreeClassifier(),
RandomForestClassifier(),
MLPClassifier(),
GaussianNB()
]
num_of_samples = 10000
X_train,X_test,y_train,y_test = \
principalComponents[:num_of_samples], principalComponents[-5000:], y[:num_of_samples], y[-5000:]
# -
# +
log_cols=["Classifier", "Accuracy", "Log Loss"]
log = pd.DataFrame(columns=log_cols)
for clf in classifiers:
    print("="*30)
    name = clf.__class__.__name__
    print(name)
    start_time = time.time()
    clf.fit(X_train, y_train)
    interval = time.time() - start_time
    print("training-time:", round(interval, 3))
    start_time = time.time()
    train_predictions_1 = clf.predict(X_test)
    interval = time.time() - start_time
    print("predicting-time:", round(interval, 3))
    acc = accuracy_score(y_test, train_predictions_1)
    print("Accuracy: {:.4%}".format(acc))
    print("="*30)
# -
# # Visualization after dimensionality reduction
#
print("y=",y[0])
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(X[:10000])
principalComponents
plt.scatter(principalComponents[:,0], principalComponents[:,1], c=y[:10000].astype(int)/10,alpha=0.5,cmap='rainbow')
# plt.legend()
plt.show()
# ## Visualization with t-SNE
#
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(X[:10000])
plt.scatter(tsne_results[:,0], tsne_results[:,1], c=y[:10000].astype(int)/10,alpha=0.5,cmap='rainbow')
# plt.legend()
plt.show()
# Source: practice/week-10/W10_1_image-classification.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as st
df = pd.read_csv("pb_gaussian_var.csv")
# +
list_of_n = df.n.unique()
print(list_of_n)
estimates_std = []
standard_error = []
for n_val in list_of_n:
    estimates_std.append(np.std(df.loc[df.n == n_val, 'estimator_mean']))
    standard_error.append(np.mean(df.loc[df.n == n_val, 'estimator_variance']))
# +
true_mean = 190
true_sd = 30
true_var = true_sd**2
point_estimates = []
for n_val in list_of_n:
    point_estimates.append(np.mean(df.loc[df.n == n_val, 'estimator_mean']))
# The plot cuts off the first data point, so hard-code a throwaway first data point
list_of_n_new = list_of_n.copy()
list_of_n_new=np.insert(list_of_n_new,0,0)
point_estimates_new = point_estimates.copy()
point_estimates_new.insert(0,900)
estimates_std_new = estimates_std.copy()
estimates_std_new.insert(0,0)
plt.xscale('log')
plt.axhline(y=true_var, color='green', linestyle=':', label = 'target')
plt.plot([str(e) for e in list_of_n_new], point_estimates_new,
marker='o', color='blue', label='mean of estimates')
plt.xlabel("n")
plt.ylabel("Mean of Estimator")
plt.legend()
plt.show()
plt.xscale('log')
plt.axhline(y=true_var, color='green', linestyle=':', label = 'target')
plt.errorbar([str(e) for e in list_of_n_new], point_estimates_new,
marker='o', color='blue', label='mean of estimates', yerr=estimates_std_new)
plt.xlabel("n")
plt.ylabel("Mean of Estimator")
plt.legend()
plt.show()
# -
plt.plot([str(e) for e in list_of_n], estimates_std,
marker='o', color='blue', label='standard deviation of point estimates')
plt.plot([str(e) for e in list_of_n], standard_error,
marker='o', color='magenta', label='average of standard error')
plt.plot([str(e) for e in list_of_n], [(1/(np.sqrt(e))) for e in list_of_n],
         marker='o', color='green', label='1/sqrt(n)')
plt.xlabel("size of n")
plt.legend()
plt.show()
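# The green $1/\sqrt{n}$ reference curve reflects the usual rate at which the spread of a mean-style estimator shrinks; a small self-contained check using the sample mean of Gaussian draws (true mean 190, sd 30, as above):

```python
import numpy as np

rng = np.random.default_rng(0)
ns = [100, 400, 1600]
spreads = []
for n in ns:
    # standard deviation of the sample mean across 2000 replications
    means = rng.normal(190.0, 30.0, size=(2000, n)).mean(axis=1)
    spreads.append(means.std())
# quadrupling n should roughly halve the spread (theory: 30/sqrt(n) = 3.0, 1.5, 0.75)
print([round(s, 2) for s in spreads])
```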
# +
def construct_ci(alpha):
    ci_success_over_n = []
    true_theta = 900
    index = 0
    for n_val in list_of_n:
        ci_success_trials = np.zeros(1000)  # T
        for i in range(1000):
            theta = df.loc[df.n == n_val, 'estimator_mean'][index]
            sigma_sq = df.loc[df.n == n_val, 'estimator_variance'][index]
            index += 1
            ci = st.norm.interval(alpha, loc=theta, scale=np.sqrt(sigma_sq))
            if ci[0] <= true_theta <= ci[1]:
                ci_success_trials[i] = 1
        ci_success_over_n.append(np.mean(ci_success_trials))
    # plt.xscale('log')
    plt.axhline(y=alpha, color='green', linestyle=':', label='target')
    plt.plot([str(e) for e in list_of_n], ci_success_over_n,
             marker='o', color='blue', label='CI success rate')
    plt.xlabel("n")
    plt.ylabel("CI Success Rate")
    plt.legend()
    plt.show()
    return ci_success_over_n
# -
print(construct_ci(.50))
print(construct_ci(.60))
print(construct_ci(.90))
# Source: without DP/parametric_bootstrap/gaussian_var_experiment.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chunking
# Demo tutorial for how to use nlp_toolkit to train sequence labeling model and predict new samples. The task we choose is noun phrases labeling.
#
# The dataset includes working experience texts from different cv, and we want to label noun phrases in given text.
#
# Available models:
#
# 1. WordRNN
# 2. CharRNN
# 3. IDCNN
from nlp_toolkit import Dataset, Labeler
# ## Data Processing
# ### Load config dict
import yaml
config = yaml.safe_load(open('../config_sequence_labeling.yaml'))
config['data']['inner_char'] = True
config['embed']['pre'] = True
# if we want to use pre_trained embeddings, we need a gensim-format embedding file
config['embed']['word']['path'] = '../data/embeddings/tencent_np_200d.txt'
# ### Load data
# if you are using pre-trained embeddings, the oov rate should not be over 5%
dataset = Dataset(fname='../data/cv_word.txt', task_type='sequence_labeling', mode='train', config=config)
for x, y in zip(dataset.texts['token'][0:10], dataset.labels[0:10]):
print(list(zip(x, y)))
# if you want to see the vocab and label index mapping dicts
# +
# dataset.transformer._word_vocab._token2id
# +
# dataset.transformer._label_vocab._token2id
# -
# ## Chunking Labeling
# ### Define Sequence Labeler
# available models: model_name_list = ['word_rnn', 'char_rnn', 'idcnn']
model_name='word_rnn'
seq_labeler = Labeler(model_name=model_name, dataset=dataset, seq_type='bucket')
# ### Train model
trained_model, history = seq_labeler.train()
# ### plot acc and loss
from nlp_toolkit import visualization as vs
vs.plot_loss_acc(history, 'sequence_labeling')
# ### 10-fold training
config['train']['train_mode'] = 'fold'
seq_labeler_new = Labeler(model_name=model_name, dataset=dataset, seq_type='bucket')
seq_labeler_new.train()
# ## Predict New Samples
# ### Load data and transformer
dataset = Dataset(fname='../data/cv_word_predict.txt',
task_type='sequence_labeling', mode='predict',
tran_fname='models/word_rnn_201812171047/transformer.h5')
# ### Load model
seq_labeler = Labeler('word_rnn', dataset)
seq_labeler.load(weight_fname='models/word_rnn_201812171047/model_weights_05_0.9339_0.8622.h5',
para_fname='models/word_rnn_201812171047/model_parameters.json')
# ### predict samples
# if you want the label probability of each token, set return_prob to True
# if you don't want to use the Dataset class to generate other features or clean texts (not recommended),
# just make one fake dict with the single key "token" whose value is the nested list of words
y_pred = seq_labeler.predict(dataset.texts, return_prob=False)
# prettier result printing
print([(x1, y1) for x1, y1 in zip(dataset.texts['token'][0], y_pred[0])])
# even prettier result rendering --> it will create a new html file; open it in a browser
from nlp_toolkit import visualization as vs
vs.entity_visualization(dataset.texts['token'], y_pred)
# ### Evaluate model
# +
dataset = Dataset(mode='eval', fname='../data/cv_word_basic.txt',
task_type='sequence_labeling',
tran_fname='models/word_rnn_201812182341/transformer.h5',
data_format='basic')
seq_labeler = Labeler('word_rnn', dataset)
seq_labeler.load(weight_fname='models/word_rnn_201812182341/model_weights_02_0.9286_0.8590.h5',
para_fname='models/word_rnn_201812182341/model_parameters.json')
# if you don't want to use the Dataset class to generate other features or clean texts (not recommended),
# just make one fake dict with the single key "token" whose value is the nested list of words.
seq_labeler.evaluate(dataset.texts, dataset.labels)
| examples/chunking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="wKR035iYD9hC"
import tensorflow as tf
# + id="amnS2qZGEM3n"
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# + colab={"base_uri": "https://localhost:8080/"} id="qgaXCBpKEmId" outputId="b93f0719-940b-417b-e9b8-abf6172fcacf"
len(x_train)
# + colab={"base_uri": "https://localhost:8080/"} id="cqXqo3IZFPC3" outputId="7c4e69ed-3661-4214-ac1e-dce092e31d30"
len(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="KaUEE0KfFa_x" outputId="2aa003bc-f805-4aa4-eb33-e9552bcd09be"
x_train.shape
# + colab={"base_uri": "https://localhost:8080/"} id="N4Xgqo50Fhqc" outputId="372d2c02-22cf-4781-846b-5502593e3f1a"
x_train[0].shape
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="10iInq8nFukf" outputId="576ebaae-0d11-4fc4-fa87-9a78046e5b71"
import matplotlib.pyplot as plt
plt.title(f'Label: {y_train[0]}')
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.axis('off')
# + colab={"base_uri": "https://localhost:8080/"} id="ZXhm6aPIJoqA" outputId="a4ccbf7b-a61b-4648-a935-7c6d3ff8c8e4"
# Add a channel dimension when using a CNN
# (28, 28) -> (28, 28, 1)
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
x_train.shape, x_test.shape
# + colab={"base_uri": "https://localhost:8080/"} id="Q23aOPHmGSsu" outputId="018f6008-6611-4caa-d507-0a2e6d95bdb0"
# Input: (28, 28)
# Output: (10,)
# model = tf.keras.models.Sequential([
# tf.keras.layers.Flatten(input_shape=(28, 28)),
# tf.keras.layers.Dense(128, activation='relu'),
# tf.keras.layers.Dropout(0.2),
# tf.keras.layers.Dense(10, activation='softmax')
# ])
# Input: (28, 28, 1)
# Output: (10,)
model = tf.keras.models.Sequential([
tf.keras.layers.Input((28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu', padding='same'),
tf.keras.layers.MaxPool2D((2, 2)),
tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="jvzx4chkHuyH" outputId="bd347e76-f58a-4d09-a9a0-0b21a6feeaca"
# If training is slow: Runtime -> Change runtime type -> select GPU
# (daily usage time limits apply)
history = model.fit(x_train, y_train, epochs=5)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="t1KbbdH_H9wl" outputId="9e3e06d7-ce43-49be-a9bf-c318960145fc"
plt.title('Accuracy')
plt.plot(history.history['accuracy'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="hZW49AWYIqdz" outputId="387d4384-d927-49c7-ffc7-2b8d4515e2ff"
plt.title('Loss')
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="G3qC3-T7I4th" outputId="2e158758-e81f-431d-9a9c-daf8c4213cc5"
loss, accuracy = model.evaluate(x_test, y_test, verbose=2)  # evaluate returns (loss, accuracy)
print(f"Loss: {loss:.2f} Accuracy: {accuracy*100:.1f}%")
# + id="PmGIql1IV3CM"
| notebooks/tensorflow-mnist-cnn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using the GrainSizeTools script with Jupyter notebooks: a simple example
# Set the inline mode for plots and run the script
# %matplotlib inline
# %run .../grain_size_tools/GrainSizeTools_script.py
areas = extract_column(file_path='.../data_set.txt')
diameters = area2diameter(areas, 0.5)
calc_grain_size(diameters, plot='lin')
Saltykov(diameters, numbins=15)
calc_shape(diameters)
calc_diffstress(32, phase='quartz', piezometer='Stipp_Tullis')
| DOCS/JN_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.9 64-bit
# metadata:
# interpreter:
# hash: 31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6
# name: python3
# ---
# + [markdown] id="T_du22iY_i3w"
# ### **This notebook covers sentiment analysis of restaurant reviews using a scraped dataset.**
# + [markdown] id="01vabqH3CFOG"
# Install required libraries
# + colab={"base_uri": "https://localhost:8080/"} id="6JpaP9fbCMI5" outputId="1d12aa4b-923f-4058-c5b3-5f3ac217f528"
# !pip install scikit-plot
# !pip install pyspellchecker
# + [markdown] id="bwoJKCWp__xf"
# Import required libraries
# + id="Ogxuogt_AEyV" colab={"base_uri": "https://localhost:8080/"} outputId="649a7c17-93eb-4743-b002-2e21855936cc"
import pandas as pd
import re
import string
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import nltk
from nltk import sent_tokenize, word_tokenize, WordPunctTokenizer
from nltk.stem import PorterStemmer,WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
from sklearn.svm import SVC, LinearSVC,NuSVC, libsvm
from sklearn.model_selection import train_test_split
from scikitplot.metrics import plot_confusion_matrix, roc_curve, auc
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfTransformer
from collections import Counter
import pickle
from imblearn.over_sampling import SMOTE
from imblearn.metrics import classification_report_imbalanced
import warnings
warnings.filterwarnings('ignore')
# + [markdown] id="3WP6E4mo_X2D"
# Load the data
# + colab={"base_uri": "https://localhost:8080/", "height": 289} id="nAz0mv_i_X2E" outputId="ede77a84-fc2e-4c76-9709-050a695e4118"
df = pd.read_csv('/content/sentiment_analysis.csv')
df = df.drop('Unnamed: 0',axis=1)
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="-jVox4dF_X2G" outputId="d0ea9fb3-5ce5-456e-acc6-04875f3a7873"
#Check columns
df.columns
# + colab={"base_uri": "https://localhost:8080/"} id="vZ7a-O5hmRNW" outputId="f7c5a85b-0bb2-44b4-f51b-c1e2ae614e84"
# General stats about datasets
df.info()
# + id="nb1c11DcFDPi"
# Change column names for better readability
df = df.rename(columns = {'comments':'Review'})
# + [markdown] id="kB_IuHo1DG-S"
# Among the columns, we have added **Review_Sentiment** & **Sentiment_Score**, which were extracted using the **Transformers** library from **Huggingface**.
# + [markdown] id="KxV0Fe4VDwI4"
# ### **Text Pre-processing**
# + [markdown] id="j_eJ5Mc7_X2G"
# We will do the text pre-processing, which involves **removing stop words & special characters, converting to lowercase, stemming & lemmatization**.
# + [markdown] id="svVYBBJLE3-W"
# Let's consider **Review** feature for text pre-processing.
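# In outline, the cleaning steps below chain together as in this minimal stdlib sketch (the toy stopword list is ours for illustration; the actual cells below use NLTK's full resources):

```python
import string

TOY_STOPWORDS = {"the", "a", "an", "is", "was", "and"}  # toy list; NLTK's is much larger

def preprocess(text: str) -> str:
    text = text.lower()                                               # lower casing
    text = text.translate(str.maketrans("", "", string.punctuation))  # remove punctuation
    tokens = [w for w in text.split() if w not in TOY_STOPWORDS]      # remove stopwords
    return " ".join(tokens)

print(preprocess("The food was AMAZING, and the service is great!"))  # -> food amazing service great
```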
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="ka2X7SFgbcBh" outputId="d136e490-2f12-4260-b3df-c9ae7aed120a"
# Strip byte-string literal residue (the characters [, ], b and ') from both ends
df['review_preprocessed'] = df['Review'].str.strip('[b''b]')
# Replace \n by ''
df = df.replace(r'\\n',' ', regex=True)
df[['Review','review_preprocessed']].head()
# + [markdown] id="oHKPZrHTVaCe"
# **Lower casing**
# + [markdown] id="MdC1MKbxVjHb"
# Lower casing is a common text preprocessing technique. The idea is to convert the input text into the same casing format so that 'text', 'Text' and 'TEXT' are treated the same way.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="yVHZnOgvVS1P" outputId="71d3da73-72b1-44c3-c4ce-ef3154e35e62"
df["review_preprocessed"] = df["review_preprocessed"].str.strip().str.lower()
df[['Review','review_preprocessed']].head()
# + [markdown] id="g5frIuQgWGnT"
# **Removal of Punctuations**
# + [markdown] id="emNjQdrRWMJn"
# Another common text preprocessing technique is to remove the punctuations from the text data. This is again a text standardization process that will help to treat 'hurray' and 'hurray!' in the same way.
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="oqjKbg5lWeSQ" outputId="b858429b-400a-49ac-8881-9c5c8f2721ae"
punct_to_remove = string.punctuation
def remove_punctuation(text):
"""custom function to remove the punctuation"""
return text.translate(str.maketrans('', '', punct_to_remove))
df['review_preprocessed'] = df['review_preprocessed'].apply(lambda review : remove_punctuation(review))
df[['Review','review_preprocessed']].head()
# + [markdown] id="1jZop__PqT0m"
# **Removal of stopwords**
# + [markdown] id="zJLv0R_3qa91"
# Stopwords are commonly occurring words in a language, like 'the', 'a' and so on. They can be removed from the text, as they don't provide valuable information for downstream tasks.
# + colab={"base_uri": "https://localhost:8080/", "height": 238} id="qSvhjFu1qtMW" outputId="7f6f712e-1ce2-414b-d87e-f3b70d5a64f8"
#Download stopwords from nltk libraries
nltk.download('stopwords')
STOPWORDS = set(nltk.corpus.stopwords.words('english'))
def remove_stopwords(text):
"""custom function to remove the stopwords"""
return " ".join([word for word in str(text).split() if word not in STOPWORDS])
df["review_preprocessed"] = df["review_preprocessed"].apply(lambda text: remove_stopwords(text))
df[['Review','review_preprocessed']].head()
# + [markdown] id="TLIIC-nvvGsd"
# **Lemmatization**
# + [markdown] id="45f9DerNva4n"
# Lemmatization is similar to stemming in reducing inflected words to their word stem, but differs in that it makes sure the root word (also called the lemma) is a valid word in the language.
# + [markdown] id="G6BmRpYlvj4P"
# Let us use the WordNetLemmatizer in nltk to lemmatize our reviews.
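# The contrast can be made concrete with a toy suffix-stripper versus a toy lemma dictionary (a pure-Python sketch we wrote for illustration; the cell below uses NLTK's real WordNetLemmatizer):

```python
def toy_stem(word: str) -> str:
    """Crude suffix stripping: the result need not be a real word."""
    for suffix in ("ies", "es", "s"):
        if word.endswith(suffix):
            return word[:-len(suffix)]
    return word

TOY_LEMMAS = {"studies": "study", "better": "good", "was": "be"}  # toy dictionary

def toy_lemmatize(word: str) -> str:
    """Dictionary lookup: the result is always a valid word (the lemma)."""
    return TOY_LEMMAS.get(word, word)

print(toy_stem("studies"))       # -> stud (not a real word)
print(toy_lemmatize("studies"))  # -> study (a real word)
```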
# + colab={"base_uri": "https://localhost:8080/", "height": 238} id="Fplj-pn9vnih" outputId="175ee8c8-f379-48c7-fb69-8216e5047c9c"
# Download wordnet from nltk library
nltk.download('wordnet')
lemmatizer = WordNetLemmatizer()
def lemmatize_words(text):
return " ".join([lemmatizer.lemmatize(word) for word in text.split()])
df["review_preprocessed"] = df["review_preprocessed"].apply(lambda text: lemmatize_words(text))
df[['Review','review_preprocessed']].head()
# + [markdown] id="EitzIB087QDb"
# **Slang Removal**
# + [markdown] id="bmU-eKWSE6HP"
# We are using a slang.txt file that maps each slang word to its meaning, one `slang=meaning` pair per line. The file was taken from [here](https://github.com/rishabhverma17/sms_slang_translator/blob/master/slang.txt).
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="EM-jo4g57VPB" outputId="54737781-2a4e-440f-f6f5-b6067311f471"
# open the file slang.txt
file = open("/content/slang.txt", "r")
slang = file.read()
# separate each line present in the file
slang = slang.split('\n')
# store the slang words and their meanings in two parallel lists (built once, not per review)
slang_word = []
meaning = []
for line in slang:
    temp = line.split("=")
    slang_word.append(temp[0])
    meaning.append(temp[-1])
def slang_removal(review):
    review_tokens = review.split()
    # replace each slang word with its meaning
    for i, word in enumerate(review_tokens):
        if word in slang_word:
            idx = slang_word.index(word)
            review_tokens[i] = meaning[idx]
    return " ".join(review_tokens)
df['review_preprocessed'] = df['review_preprocessed'].apply(lambda x: slang_removal(x))
df[['Review','review_preprocessed']].head()
# + [markdown] id="KhlxHFVRG0z1"
# **Tokenization**
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="Q-k_L5rFLhRn" outputId="64b3b260-4a5e-4e92-9a28-341a0a5f6d3d"
# Split each review sentence to words using review.split()
df['review_preprocessed'] = df['review_preprocessed'].apply(lambda review : review.split())
df[['Review','review_preprocessed']].head()
# + [markdown] id="Ibh4FyRPI6mq"
# ##### Let's see how a review looks before & after pre-processing
# + colab={"base_uri": "https://localhost:8080/", "height": 86} id="21iS95Mf_X2J" outputId="c301f6b4-c229-4c2d-dfb0-08bd859ef5d1"
# Before pre-processing
df.Review[0]
# + colab={"base_uri": "https://localhost:8080/"} id="tBCw60W_JsDg" outputId="44d57c90-1756-4cf6-b246-c15343cbd925"
# After pre-processing
df.review_preprocessed[0]
# + [markdown] id="Iyos0u3bK-6u"
# Pre-processing is done. Let's move on to building a sentiment analysis model using a Support Vector Machine (SVM).
# + colab={"base_uri": "https://localhost:8080/"} id="lmP5YqExo4ZM" outputId="97cc5683-beb9-439c-c736-64f130c3b826"
# Convert list of tokens back into a string
def convert_tostring(review_list):
return ' '.join(review_list)
df['review_preprocessed'] = df['review_preprocessed'].apply(lambda x:convert_tostring(x))
df['review_preprocessed'].head()
# + [markdown] id="lW5T61cPLPvf"
# **Build Sentiment analysis model**
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="WpeJJyNM_X2K" outputId="b96eae0c-036d-4a0c-e08f-e98a3458b5ae"
sa_df = df[['Review','review_preprocessed','Review_Sentiment','Restaurant_Name']]
sa_df.head()
# + [markdown] id="IORsHSAyLxeE"
# Let's see how our target variable is distributed in our dataset.
# + colab={"base_uri": "https://localhost:8080/", "height": 505} id="u3YpSYlxL84o" outputId="ecd2d253-43c9-4751-e150-846a4521e9a2"
# %%time
#Sentiment classes count
df_class=df['Review_Sentiment'].value_counts()
print('Count of Sentiment classes :\n',df_class)
#Percentage of Sentiment classes count
per_sentiment_class=df['Review_Sentiment'].value_counts()/len(df)*100
print('Percentage of count of Sentiment classes :\n',per_sentiment_class)
#Countplot and violin plot for Sentiment classes
fig,ax=plt.subplots(1,2,figsize=(16,5))
sns.countplot(df.Review_Sentiment.values,ax=ax[0],palette='husl')
sns.violinplot(x=df.Review_Sentiment.values,y=df.index.values,ax=ax[1],palette='husl')
sns.stripplot(x=df.Review_Sentiment.values,y=df.index.values,jitter=True,color='black',linewidth=0.5,size=0.5,alpha=0.5,ax=ax[1],palette='husl')
ax[0].set_xlabel('Review_Sentiment')
ax[1].set_xlabel('Review_Sentiment')
ax[1].set_ylabel('Index')
# + [markdown] id="0F6FpauUNKOq"
# **Take aways:**
# * We have imbalanced data: 81.8% of the samples belong to the positive class and 18.18% to the negative class.
# * Looking at the jitter in the violin plot, the sentiment classes are distributed uniformly over the indices of the dataframe, and we can again observe the imbalanced class distribution, with the positive class far more densely represented than the negative class.
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="qoU8zfrV_X2L" outputId="8d7ccfe3-88ae-45f7-b661-8c777b51d96d"
# General stats about Review_Sentiment feature
sa_df.describe()
# + [markdown] id="qGB_NO6P_X2M"
# Split the dataset into train & test sets
# + [markdown] id="etUd_Ge9_X2N"
# We will split dataset into train data (80 %) and test data (20 %)
# + id="Y997dCEW_X2N"
X_train, X_test, y_train, y_test = train_test_split(df['review_preprocessed'],df.Review_Sentiment, test_size=0.2, random_state = 42)
# + colab={"base_uri": "https://localhost:8080/"} id="hOBFyNNvXiPX" outputId="19a20263-f900-441e-9eb6-9278909e5fb7"
print('Dataset shapes')
print('X_train : {}'.format(X_train.shape))
print('X_test : {}'.format(X_test.shape))
print('y_train : {}'.format(y_train.shape))
print('y_test : {}'.format(y_test.shape))
# + [markdown] id="5_CpxAwk_X2O"
# Encode the Sentiment labels to 0 & 1
# + id="G3Pbf0sy_X2P"
#Encode using LabelBinarizer 0 -> Negative, 1-> Positive
lb=LabelBinarizer()
#Fit Review_Sentiment
encoder=lb.fit(y_train.values)
# Transform y_train, y_test
y_train = encoder.transform(y_train)
y_test = encoder.transform(y_test)
# + colab={"base_uri": "https://localhost:8080/"} id="yydsuo0uY_5u" outputId="5e68d576-2069-48c0-d328-f1f740a00465"
# Check y_train
y_train[:10]
# + colab={"base_uri": "https://localhost:8080/"} id="pfyZXsBHZJWP" outputId="9fddf100-67bb-4c15-f081-8cfdce508469"
# Check y_train
y_test[:10]
# + [markdown] id="4JprUtZjeKe7"
# ### **Feature extraction from Text**
# + [markdown] id="kwRAZUiCeT42"
# We are considering two approaches for extracting features from text for Sentiment analysis.
# * Bag of Words
# * TF-IDF
#
# **Bag of Words**:
# The Bag of Words model represents a text by the counts of its words, keeping the total number of occurrences of each word while ignoring grammar and word order.
#
# **TF-IDF model**:
# TF-IDF stands for "Term Frequency - Inverse Document Frequency". It is a technique used to quantify a word in a collection of documents: each word is assigned a weight that signifies its importance in the document and the corpus. It is widely used in information retrieval and text mining.
#
# We build two SVM models with each feature set: one on the unbalanced class data and one with balanced classes.
#
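# Both representations can be sketched in a few lines of plain Python. The smoothing details differ from scikit-learn's defaults, so treat this only as an illustration of the idea:

```python
import math
from collections import Counter

docs = [["food", "was", "great"], ["great", "service"], ["bad", "food"]]

# Bag of Words: raw term counts per document
bow = [Counter(doc) for doc in docs]

# TF-IDF: term frequency weighted by inverse document frequency
def tfidf(doc, all_docs):
    n = len(all_docs)
    weights = {}
    for term, count in Counter(doc).items():
        tf = count / len(doc)                       # term frequency
        df = sum(1 for d in all_docs if term in d)  # document frequency
        weights[term] = tf * math.log(n / df)       # rarer terms get higher weight
    return weights

print(tfidf(docs[0], docs))
```

# In the first document, 'was' (appearing in only one document) receives a higher weight than 'food' or 'great' (each appearing in two).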
# + [markdown] id="raoVD1Uxv7mn"
# ### **Building SVM models for Unbalanced class data**
# + [markdown] id="OQfQFm5g_X2Q"
# ### **Bag of Words model**
#
# To convert text to numerical vectors for the Bag of Words model, we use CountVectorizer followed by TfidfTransformer (so the raw counts are re-weighted by IDF) to extract features.
#
# + [markdown] id="TuokqK09qwSN"
# **Train SVM (Support Vector Machine) model using Bag of words features**
# + colab={"base_uri": "https://localhost:8080/"} id="HQVVSMdkqz0d" outputId="9b80b52f-eec9-46ea-9aed-7ece7cc04daf"
#training the support vector machine model
# Define a Pipeline
svm_bow = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SVC(kernel = 'linear',probability=True)),
])
# fit the SVM on bag-of-words features
svm_bow=svm_bow.fit(X_train,y_train)
print(svm_bow)
# + colab={"base_uri": "https://localhost:8080/"} id="dRZn9n0xqoKU" outputId="8fc46dca-eb28-4e0d-ceca-c3f694d1cbcf"
#Accuracy of the model
BOW_svm_score=svm_bow.score(X_train,y_train)
print('Accuracy of the trained BOW SVM model :',round(BOW_svm_score,3))
# + [markdown] id="-4M80dQnryiD"
# **Model performance on test data**
# + colab={"base_uri": "https://localhost:8080/"} id="FuLMjBPLr1Ic" outputId="f280b05e-627b-4448-f6e3-e5f12945391a"
#SVM model prediction for bag of words
svm_bow_predict=svm_bow.predict(X_test)
#Test accuracy for bag of words
svm_bow_score=accuracy_score(y_test,svm_bow_predict)
print("Test accuracy for SVM BoW model :", round(svm_bow_score,3))
# + colab={"base_uri": "https://localhost:8080/", "height": 530} id="TksU_6p3tdff" outputId="cc513c72-9df0-451e-e157-e98c607a8d1e"
#Confusion matrix
#Plot the confusion matrix
plot_confusion_matrix(y_test, svm_bow_predict, normalize=False,figsize=(15,8))
# + colab={"base_uri": "https://localhost:8080/"} id="1pUl6P-ltxfW" outputId="d3f359c1-60ff-42ee-e022-40c9421edaa1"
# Classification report
class_report= classification_report(y_test, svm_bow_predict)
print(class_report)
# + [markdown] id="MKcjMnipuTj4"
# **Plot ROC (Receiver Operating Characteristic) curve**
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="YLWT_L-MuSOr" outputId="5b60cc12-7fbd-4bde-f664-4b3ca7c5b64c"
#ROC_AUC curve
plt.figure()
false_positive_rate, recall, thresholds=roc_curve(y_test, svm_bow_predict)
roc_auc=auc(false_positive_rate,recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall,'b',label='ROC(area=%0.3f)' %roc_auc)
plt.legend()
plt.plot([0,1],[0,1],'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall(True Positive Rate)')
plt.xlabel('False Positive Rate')
plt.show()
print('AUC:',roc_auc)
# + [markdown] id="F-KDyf5evGmE"
# Insights:
# * An AUC of 0.755 means the model makes reasonably good predictions, but not the best. The recall for class 0 is low (0.54), so the model has difficulty predicting class 0 correctly because our training data contains far fewer class 0 samples than class 1, i.e., it is highly imbalanced.
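# AUC has a simple probabilistic reading: the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A pair-counting sketch (ties count half):

```python
def auc_score(y_true, y_score):
    """AUC via pair counting: P(positive ranked above negative)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

# An AUC of 0.755 therefore means a random positive review outranks a random negative one about 75.5% of the time.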
# + [markdown] id="rc94-Ib5_X2R"
# ### **Term Frequency-Inverse Document Frequency (TFIDF) model**
#
# It converts text documents to a matrix of TF-IDF features. We use TfidfVectorizer & TfidfTransformer to extract the TF-IDF features.
#
# + [markdown] id="gk6iLKpwxEcG"
# **Train SVM (Support Vector Machine) model using TF-IDF features**
# + colab={"base_uri": "https://localhost:8080/"} id="Hu-XBt3TxP4W" outputId="f2556ac9-db99-469e-98aa-92ee8ca8c481"
#training the support vector machine model
# Define a Pipeline
svm_tfidf = Pipeline([('vect', TfidfVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SVC(kernel = 'linear', probability=True)),
])
#fitting the svm on TF-IDF features
svm_tfidf=svm_tfidf.fit(X_train,y_train)
print(svm_tfidf)
# + colab={"base_uri": "https://localhost:8080/"} id="QTwmILzlxdLi" outputId="011cd93c-8020-44f9-c2a8-3dd4e2bca27f"
#Accuracy of the model
tfidf_svm_score=svm_tfidf.score(X_train,y_train)
print('Accuracy of the trained TF-IDF SVM model :',round(tfidf_svm_score,3))
# + [markdown] id="a0vjgdQz_X2S"
# **Model performance on test data**
# + colab={"base_uri": "https://localhost:8080/"} id="t1x4LfEl_X2T" outputId="ecd65f42-87be-48bb-d874-a35da67833e0"
##SVM model prediction using TF-IDF features
svm_tfidf_predict=svm_tfidf.predict(X_test)
#Test accuracy for TF-IDF features
svm_tfidf_score=accuracy_score(y_test,svm_tfidf_predict)
print("Test accuracy for SVM TF-IDF model :", round(svm_tfidf_score,3))
# + colab={"base_uri": "https://localhost:8080/", "height": 530} id="dX5F8QmqzF4K" outputId="d95d1f3e-07c8-41c4-9c3a-3ae8272d9aac"
#Confusion matrix
#Plot the confusion matrix
plot_confusion_matrix(y_test, svm_tfidf_predict, normalize=False,figsize=(15,8))
# + colab={"base_uri": "https://localhost:8080/"} id="AttmhU-vzdSi" outputId="5adba21d-246e-49b1-eb7c-9455fadb32c4"
# Classification report
class_report= classification_report(y_test, svm_tfidf_predict)
print(class_report)
# + [markdown] id="HkT4NJ53zqq5"
# **Plot ROC (Receiver Operating Characteristic) curve**
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="XmfttpDezyFZ" outputId="9607c095-ce71-4b1a-d58c-8ea22db89e79"
#ROC_AUC curve
plt.figure()
false_positive_rate, recall, thresholds=roc_curve(y_test, svm_tfidf_predict)
roc_auc=auc(false_positive_rate,recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall,'b',label='ROC(area=%0.3f)' %roc_auc)
plt.legend()
plt.plot([0,1],[0,1],'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall(True Positive Rate)')
plt.xlabel('False Positive Rate')
plt.show()
print('AUC:',roc_auc)
# + [markdown] id="TmPNQznaz8St"
# Insights:
# * An AUC of 0.74 means the model makes good guesses. This model also has difficulty predicting class 0 correctly because the training data contains far fewer class 0 samples than class 1, i.e., it is highly imbalanced.
# + [markdown] id="7PHrf8Lb0Drt"
# ### **Building SVM models for balanced class data**
# + [markdown] id="qMixp3mu0Nge"
# To compensate for the class imbalance in our dataset, we set **class_weight='balanced'** in the SVC algorithm.
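# Under the hood, scikit-learn's 'balanced' mode weights each class by n_samples / (n_classes * n_samples_in_class), so minority-class errors cost more in the loss. A quick stdlib check with a split like ours (~81.8% positive / ~18.2% negative):

```python
from collections import Counter

def balanced_weights(y):
    """Mimic scikit-learn's class_weight='balanced' formula."""
    counts = Counter(y)
    n, k = len(y), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

y = [1] * 818 + [0] * 182  # roughly the class split in this dataset
print(balanced_weights(y))  # minority class 0 gets the larger weight
```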
# + [markdown] id="bywHj0KM1HNY"
# **Bag of Words model**
# + [markdown] id="BctlUh2Y2o_9"
# **Train SVM (Support Vector Machine) model using Bag of words features**
# + colab={"base_uri": "https://localhost:8080/"} id="lFefFpXk28Sd" outputId="09421648-aa71-49e9-d492-353bc80e0dcc"
#training the support vector machine model
svm_balanced_bow = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SVC(kernel='linear', probability=True, class_weight='balanced')),
])
#fitting the svm for bag of words
svm_bow=svm_balanced_bow.fit(X_train,y_train)
print(svm_bow)
# + colab={"base_uri": "https://localhost:8080/"} id="MdBk1E3v3k6I" outputId="524ee6f5-7fad-447a-dedc-9f32bf399a75"
#Accuracy of the model with balanced class weights
BOW_svm_score=svm_balanced_bow.score(X_train,y_train)
print('Accuracy of the trained BOW SVM model for balanced class data :',round(BOW_svm_score,3))
# + [markdown] id="6u_hh-OH4RQk"
# **Model performance on test data**
# + colab={"base_uri": "https://localhost:8080/"} id="03X5suF_4Y73" outputId="c98abd02-ffd3-47ee-f743-72dbf98a22ab"
#SVM model prediction for bag of words
svm_bow_predict=svm_balanced_bow.predict(X_test)
#Test accuracy for bag of words
svm_bow_score=accuracy_score(y_test,svm_bow_predict)
print("Test accuracy for SVM BoW model for balanced classes:", round(svm_bow_score,3))
# + colab={"base_uri": "https://localhost:8080/", "height": 530} id="1kaYwvYr_X2a" outputId="276ad5dd-f19c-413c-d431-ad436ed9eb06"
#Confusion matrix
#Plot the confusion matrix
plot_confusion_matrix(y_test, svm_bow_predict, normalize=False,figsize=(15,8))
# + colab={"base_uri": "https://localhost:8080/"} id="PIa8L_Fy5htI" outputId="4b4c62f5-a94c-456c-ffd1-9f55e600f412"
# Classification report
class_report= classification_report(y_test, svm_bow_predict)
print(class_report)
# + [markdown] id="uCImJz0z5obQ"
# **Plot ROC curve**
# + colab={"base_uri": "https://localhost:8080/", "height": 71} id="gAAf_n8r5oIq" outputId="53e04b3d-e77e-4fe2-ac61-6d02d93b4d2d"
#ROC_AUC curve
plt.figure()
false_positive_rate, recall, thresholds=roc_curve(y_test, svm_bow_predict)
roc_auc=auc(false_positive_rate,recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall,'b',label='ROC(area=%0.3f)' %roc_auc)
plt.legend()
plt.plot([0,1],[0,1],'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall(True Positive Rate)')
plt.xlabel('False Positive Rate')
plt.show()
print('AUC:',roc_auc)
# + [markdown] id="X-pt-qj8xPBe"
# Insights:
# * Great: we see an improvement of 0.06 in AUC (from 0.755 to 0.815) after balancing the classes. This model is the best among all models so far.
# + [markdown] id="GtCtjON_RrrD"
# **TF-IDF model for balanced class data**
# + [markdown] id="KJj-f8R76WqB"
# **Train SVM (Support Vector Machine) model using TF-IDF features**
# + colab={"base_uri": "https://localhost:8080/"} id="_EAIbyyL6Qts" outputId="98b3a9ca-6d4b-4bd9-ce2e-fa30f8a850a7"
#training the support vector machine model
#Define a Pipeline
svm_balanced_tfidf = Pipeline([('vect', TfidfVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SVC(kernel='linear', probability=True, class_weight='balanced')),
])
#fitting the svm on TF-IDF features
svm_tfidf=svm_balanced_tfidf.fit(X_train, y_train)
print(svm_tfidf)
# + colab={"base_uri": "https://localhost:8080/"} id="yXms6Trw6kHh" outputId="22aa2521-5136-45ea-f298-169e7abf6a29"
#Accuracy of the model
tfidf_svm_score=svm_balanced_tfidf.score(X_train,y_train)
print('Accuracy of the trained TF-IDF SVM model for balanced class data:',round(tfidf_svm_score,3))
# + [markdown] id="9EV97xdC6p57"
# **Model performance on test data**
# + colab={"base_uri": "https://localhost:8080/"} id="fn8H3vKD6ryl" outputId="e0eb9f7e-0db1-4fe5-e04a-ae111537b98d"
##SVM model prediction using TF-IDF features
svm_tfidf_predict=svm_balanced_tfidf.predict(X_test)
#Test accuracy for TF-IDF features
svm_tfidf_score=accuracy_score(y_test,svm_tfidf_predict)
print("Test accuracy for SVM TF-IDF model for balanced class data:", round(svm_tfidf_score,3))
# + colab={"base_uri": "https://localhost:8080/", "height": 530} id="UNyVR8g56xke" outputId="4bc89f08-9f70-4a3c-8102-500e3d60ca53"
#Plot the confusion matrix
plot_confusion_matrix(y_test, svm_tfidf_predict, normalize=False,figsize=(15,8))
# + colab={"base_uri": "https://localhost:8080/"} id="EdaFJXMV7B0u" outputId="dbe835df-416d-4fa0-efd7-ddba602800ec"
# Classification report
class_report= classification_report(y_test, svm_tfidf_predict)
print(class_report)
# + [markdown] id="4vmQT2SG7GTw"
# **Plot ROC curve**
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="s7qZry5_7KEw" outputId="01077da2-908a-4b71-9726-bdac4d80d293"
#ROC_AUC curve
plt.figure()
false_positive_rate, recall, thresholds=roc_curve(y_test, svm_tfidf_predict)
roc_auc=auc(false_positive_rate,recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall,'b',label='ROC(area=%0.3f)' %roc_auc)
plt.legend()
plt.plot([0,1],[0,1],'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall(True Positive Rate)')
plt.xlabel('False Positive Rate')
plt.show()
print('AUC:',roc_auc)
# + [markdown] id="K0eZRsnlyIgX"
# Insights:
# * Good: the TF-IDF model also improves after balancing the classes, with the AUC rising from 0.738 to 0.795.
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="6e-Yssn_A9ya" outputId="5ec40324-f1ea-42e1-c63b-0396759da196"
Results={
'Model':['BOW SVM model','TF-IDF SVM model'],
'f1-score(balanced)':[0.89, 0.89], # f1-score is weighted average
'f1-score(unbalanced)':[0.88, 0.88],
'AUC(balanced)':[0.815, 0.795],
'AUC(unbalanced)':[0.755, 0.738]
}
results_df=pd.DataFrame(Results)
results_df
# + [markdown] id="iEjB3UchzmFS"
# Insights:
# * Among all models, BOW SVM model with balanced classes is the best with the AUC score of 0.815.
# * 2nd best is TF-IDF SVM model with balanced classes with the AUC score of 0.795.
# + [markdown] id="DsLb4yKQ0J8_"
# Hence, we will use the **BOW SVM model with balanced classes** for the final prediction & the **ranking of restaurants based on sentiment score. The ranking of restaurants will be done in RASA through a custom function**.
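# The ranking itself is just group-by-restaurant, average the positive score, and sort. A stdlib sketch of what the pandas cells below compute (function name and sample data are ours):

```python
from collections import defaultdict
from statistics import mean

def rank_restaurants(rows):
    """rows: (restaurant_name, positive_score) pairs -> names sorted by mean score."""
    by_name = defaultdict(list)
    for name, score in rows:
        by_name[name].append(score)
    return sorted(by_name, key=lambda name: mean(by_name[name]), reverse=True)

rows = [("A", 0.9), ("B", 0.4), ("A", 0.7), ("C", 0.95)]
print(rank_restaurants(rows))  # -> ['C', 'A', 'B']
```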
# + id="QZ31bQnY8aYi"
from joblib import dump, load
# + id="tcYq54BlzXen" colab={"base_uri": "https://localhost:8080/"} outputId="47cc001c-13ab-46d8-f55f-fabc71e113d5"
# Persist the BOW SVM model with balanced classes for prediction (via joblib)
filename = 'BOW_SVM_balanced_model.joblib'
dump(svm_balanced_bow, filename)
# + id="vGeqqgRY15Lo"
# Load SVM balanced BOW model
svm_bow_model = load('BOW_SVM_balanced_model.joblib')
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="M5E0-pWlbTDO" outputId="2b238194-feeb-481c-97c3-7b8ccc9a99d4"
df_final = sa_df.drop('Review',axis=1)
df_final.head()
# + colab={"base_uri": "https://localhost:8080/"} id="L6FhyEFTe6zs" outputId="bf0b9e8d-7838-4a70-f72d-0a60ec0c9f8d"
df_final.shape
# + id="LMPddH1ffABA"
pred_sentiment_score = svm_bow_model.predict_proba(df_final['review_preprocessed'])
# + id="7QltBNwSfp6h" colab={"base_uri": "https://localhost:8080/"} outputId="4e916b1c-d590-4ab9-c504-7a4b6a6da556"
pred_sentiment_score
# + id="50tWmGcf_fZP"
df_pred = pd.DataFrame(pred_sentiment_score, columns = ['Negative_score','Positive_score'])
# + id="nGTkoov6_0C_"
# Merge df_final & df_pred
df_final = df_final.merge(df_pred, left_index=True, right_index=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="qXUGMFWEAR5G" outputId="1a8588be-cf78-4de7-fe88-ae80980998b9"
df_final
# + id="1AuJqXP-AT00"
df_final = df_final.drop(['review_preprocessed','Review_Sentiment'], axis=1)
df_group = df_final.groupby('Restaurant_Name').mean()
# + id="zO7Z9K2vBJA7"
df_group = df_group.sort_values(by = ['Positive_score'], ascending=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 450} id="Zk7SPd-HBLvz" outputId="772da40b-7410-4791-cc1f-3af67f73e4b5"
df_group
# + id="hCCYx2w3BxBI"
df_restaurant = df_group.reset_index(drop=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="vd9_mQixCN8w" outputId="c261bc91-8cac-44b1-ae8d-585f665a63d8"
df_restaurant
# + id="Uhp87YJJCPlv"
df_restaurant.to_csv('Restaurants_with_sentiment_scores.csv', index=False)
# + id="iSmu9nMxCgu9"
| Sentiment_analysis_of_Restaurant_reviews_.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import psycopg2
import csv
# +
database = psycopg2.connect(dbname="", user="", password="", host="", port="")
cursor = database.cursor()
sql = (
"CREATE TABLE IF NOT EXISTS zipcode_countycode ("
" ZIP varchar(55),"
" COUNTY varchar(55),"
" RES_RATIO FLOAT,"
" BUS_RATIO FLOAT,"
" OTH_RATIO FLOAT,"
" TOT_RATIO FLOAT"
")"
)
cursor.execute(sql)
print("Table created successfully")
# +
#with open('ZIP_COUNTY_122016_short.csv', 'r') as f:
with open('ZIP_COUNTY_122016_short_utc.csv', newline='', encoding='utf-8') as f:
    # Create a reader to parse the tab-delimited CSV; each row is a [ZIP, COUNTY] list
    csv_data = csv.reader(f, delimiter='\t')
    i = 0
    for row in csv_data:
        #print(row[0])  # ZIP code
        cursor.execute("INSERT INTO zipcode_countycode (ZIP, COUNTY) VALUES (%s,%s)", row)
        #database.commit()
        i += 1
    print(i)
cursor.close()
#database.close()
database.commit()
database.close()
print("CSV data imported")
# -
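# For larger files, inserting row by row can be slow. A batched variant is sketched below: the parsing part is runnable on its own, while the `executemany` call (shown as a comment) assumes the same `zipcode_countycode` table and an open psycopg2 cursor.

```python
import csv
import io

# Stand-in for the real tab-delimited file: two (ZIP, COUNTY) rows
sample = "10001\t36061\n10002\t36061\n"

# Parse everything into a list of (ZIP, COUNTY) tuples first...
rows = [tuple(r[:2]) for r in csv.reader(io.StringIO(sample), delimiter='\t')]
print(rows)

# ...then hand the whole batch to the driver in a single call:
# cursor.executemany(
#     "INSERT INTO zipcode_countycode (ZIP, COUNTY) VALUES (%s, %s)", rows)
```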
| data_Ingestion/zipcode_countycode_data_upload.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A "Simple Program"
#
# +
# #!pip install pandas_datareader # uncomment and run this to install pandas data reader
# + tags=["remove_output"]
import pandas as pd
import numpy as np
import pandas_datareader as pdr # you might need to install this (see above)
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from datetime import datetime
# don't copy and paste these lists and number in several places
# (which is what the original "simple program" did... that's bad programming!)
# instead, create a var - if we change it here, it changes everywhere
stocks = ['GME','AMC','BB','BBBY','TSLA','GM','VZ']
start_yr = datetime(2021, 1, 1)
# load
stock_prices = pdr.get_data_yahoo(stocks, start=start_yr)
stock_prices = stock_prices.filter(like='Adj Close') # reduce to just columns with this in the name
stock_prices.columns = stocks # put their tickers as column names
print(stock_prices) # print
# -
# + tags=["remove_output"]
# this is wide data... so if we want to create a new variable, we have to do it once for each firm...
# what if we have 1000 firms? seems tough to do...
# make long/tidy:
stock_prices = stock_prices.stack().swaplevel().sort_index().reset_index()
stock_prices.columns = ['Firm','Date','Adj Close']
stock_prices # print - now that is formatted nicely, like CRSP!
# -
# + tags=["remove_output"]
# add return var.
# MAKE SURE YOU CREATE THE VARIABLES WITHIN EACH FIRM - use groupby
stock_prices['ret'] = stock_prices.groupby('Firm')['Adj Close'].pct_change()
stock_prices # print - the first ret for each firm should be missing...
# -
(stock_prices
.assign(ret=1+stock_prices.ret) # gross returns
.set_index(['Date','Firm'])['ret'].unstack() # convert to wide format
.cumprod()
.plot()
)
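# The cumulative-return arithmetic in the pipeline above can be sanity-checked on toy numbers (made-up returns, not real data):

```python
import pandas as pd

# Two periods of net returns for one firm: +10% then -5%
ret = pd.Series([0.10, -0.05])

# Gross returns compound multiplicatively: cumulative growth = (1 + r1)(1 + r2)
cum_gross = (1 + ret).cumprod()
print(cum_gross.iloc[-1])  # 1.10 * 0.95 = 1.045
```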
| lectures/stock_prices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# +
import os
import sys
from ggplot import *
pyrnafold_path = os.path.abspath(os.path.join('..'))
if pyrnafold_path not in sys.path:
sys.path.append(pyrnafold_path)
from pyrnafold.pyrnafold import trange_df, sig_positions
# -
# ls -lah ../data
# ## Build a dataframe
#
# `trange_df` parses `.txt` files containing base pairing probabilities for a given `trange` and builds a `DataFrame` that contains probability difference for each `T` relative to the lowest `T` in `trange`.
df = trange_df('../data/hHSR', trange=range(35, 45))
df
df.describe()
g = ggplot(df, aes(xmin='pos-1',xmax='pos', ymin=0, ymax='Diff')) \
+ geom_rect() \
+ facet_wrap('Temp')
print(g)
df[sig_positions(df, num_sigma=3)]
| notebooks/Folding analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parsing metrics into json
# ## Imports
import pandas as pd
from os.path import join
from tqdm.notebook import tqdm
# ## Load csv
df = pd.read_csv(join("metrics", "historia_przejazdow_2019-07.csv"), parse_dates=True)
df
# ## Remove unnecessary columns
df = df[["interval_start", "node", "degree", "in_degree", "out_degree", "pagerank"]]
# df = df.rename(columns={"interval_start":"s","rental_place":"o","return_place":"d","number_of_trips":"c"})
df['interval_start']= pd.to_datetime(df['interval_start'])
df
# ## Add day column
df["day"] = df["interval_start"].dt.day
df
# ## Change hour to a "minute in day" form
df["minute_in_day"] = df["interval_start"].dt.hour*60 + df["interval_start"].dt.minute
df
# ## Count the number of days in this month
days_in_month = df["interval_start"].dt.daysinmonth.max()
days_in_month
# ## Remove unnecessary columns and rename rest with {"node":"o", "degree":"k", "in_degree":"ik", "out_degree":"ok", "pagerank":"p"}
# +
df = df[["day", "minute_in_day", "node", "degree", "in_degree", "out_degree", "pagerank"]]
df = df.rename(columns={"node":"o", "degree":"k", "in_degree":"ik", "out_degree":"ok", "pagerank":"p"})
df
# -
# ## Example for a single day
(df[df.day==31])[["minute_in_day", "o", "k", "ik", "ok", "p"]].groupby('minute_in_day').apply(lambda g: g[["o", "k", "ik", "ok", "p"]].to_dict(orient='records')).to_dict()
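# The one-liner above is dense, so here is a toy example (made-up station names and values) showing the shape it produces: a dict keyed by `minute_in_day`, each value a list of per-station records.

```python
import pandas as pd

toy = pd.DataFrame({
    "minute_in_day": [0, 0, 20],
    "o": ["A", "B", "A"],   # station name
    "k": [1, 2, 3],         # degree
    "ik": [0, 1, 1],        # in-degree
    "ok": [1, 1, 2],        # out-degree
    "p": [0.1, 0.2, 0.3],   # pagerank
})
nested = (toy.groupby("minute_in_day")
             .apply(lambda g: g[["o", "k", "ik", "ok", "p"]]
                    .to_dict(orient="records"))
             .to_dict())
print(nested[0])   # two records at minute 0
print(nested[20])  # one record at minute 20
```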
# ## Loop for each day
# +
month_dict = {}
for day in range(1, days_in_month+1):
dict_for_current_day = (df[df.day==day])[["minute_in_day", "o", "k", "ik", "ok", "p"]].groupby('minute_in_day').apply(lambda g: g[["o", "k", "ik", "ok", "p"]].to_dict(orient='records')).to_dict()
month_dict[day] = dict_for_current_day
# -
# ## Save as json
# +
import json
with open('07.json', 'w') as fp:
json.dump(month_dict, fp)
# -
| Convert Metrics To Json.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # sKCSD tutorial
# In this tutorial we will cover three topics: data format for sKCSD estimation, sKCSD method and source visualization.
from kcsd import sKCSDcell, sKCSD, sample_data_path
from kcsd import utility_functions as utils
from kcsd.sKCSD_utils import LoadData
import numpy as np
import matplotlib.pyplot as plt
import os
data_fname = os.path.join(sample_data_path, "gang_7x7_200")
data = LoadData(data_fname)
# Data should be divided into three subdirectories, morphology, electrode_positions and LFP, each containing one file with the morphology, the electrode positions and the LFPs, respectively. LoadData currently supports only the swc morphology format. Electrode positions can be read either from a text file with a single column (all x positions, followed by all y positions, followed by all z positions) or from a text file with three columns (x, y, z for each electrode). LFPs should be a text file of shape n_electrodes x n_time_samples. LoadData also allows initializing an empty object and reading arbitrary data files from specific locations using the assign function:
# data1 = LoadData()
# data1.assign('morphology', path_to_morphology_file)
# data1.assign('electrode_positions_file', path_to_electrode_positions_file)
# data1.assign('LFP', path_to_LFP_file)
# Data used in this tutorial was generated with figures/skcsd_paper/run_LFPy.py (a script for running example simulations using LFPy). LFPy uses mV for voltage, ms for time and um for distance (or position). sKCSD (and KCSD) requires data in SI units, which is why the data used for sKCSD estimation needs to be rescaled.
data.LFP /= 1e3
data.morphology[:, 2:6] /= 1e6
data.ele_pos /= 1e6
# Let us visualise the current density.
#Other parameters
time = np.loadtxt(os.path.join(data_fname, 'tvec.txt'))
seglen = np.loadtxt(os.path.join(data_fname, 'seglength'))
ground_truth = np.loadtxt(os.path.join(data_fname, 'membcurr'))/seglen[:, None]*1e-3
vmin, vmax = ground_truth.min(), ground_truth.max()
if abs(vmin) > abs(vmax):
vmax = abs(vmin)
else:
vmin = -vmax
fig, ax = plt.subplots(1, 1)
ax.set_aspect('equal')
ax.set_xlabel('time (ms)')
ax.set_ylabel('#segment')
ax.imshow(ground_truth, origin="lower", aspect="auto", interpolation="none", cmap="bwr", vmin=vmin, vmax=vmax, extent=[time[0], time[-1], 1, len(seglen)])
# Corresponding voltage in the soma:
somav = np.loadtxt(os.path.join(data_fname, 'somav.txt'))
fig, ax = plt.subplots(1, 1)
ax.set_aspect('equal')
ax.set_xlabel('time (ms)')
ax.set_ylabel('voltage (mV)')
ax.plot(time, somav)
# To calculate sKCSD you must specify the source width (R) and the regularization parameter (lambd), which reflects the noise level. For neuron morphologies R will be of the order of microns. Here we set lambd = 1e-4, which amounts to assuming roughly 10% noise.
n_src = 25
R = 8e-6
lambd = 0.0001
ker = sKCSD(data.ele_pos,
data.LFP,
data.morphology,
n_src_init=n_src,
src_type='gauss',
lambd=lambd,
R_init=R,
exact=True,
sigma=0.3)
# skcsd.values(estimate) provides the potential or the current source density: use estimate="POT" for the potential and estimate="CSD" for the current density. By default skcsd.values() returns the CSD.
skcsd = ker.values(transformation="segments")
# Both the current density and the potential are calculated in the 1D space of the morphology loop. sKCSD.values(estimate, transformation) calculates either the CSD (estimate="CSD") or the potential (estimate="POT") and allows transformation either to segments or to a 3D morphology (for no transformation specify transformation=None). By default (sKCSD.values()) the potential and the current density are transformed to a 3D cube spanning from (xmin, ymin, zmin) to (xmax, ymax, zmax). To store the cell morphology, the morphology loop and the source positions, and to allow easy transformations between morphology (segments), morphology loop and the 3D cube, sKCSD uses a separate object, sKCSDcell. By default sKCSD uses the minimum and maximum neuron coordinates from the swc file.
fig, ax = plt.subplots(1, 2)
ax[1].set_aspect('equal')
ax[1].set_xlabel('time (ms)')
ax[1].imshow(skcsd, origin="lower", aspect="auto",
interpolation="none", cmap="bwr",
vmin=vmin, vmax=vmax,
extent=[time[0], time[-1], 1, len(seglen)])
ax[0].set_aspect('equal')
ax[0].set_xlabel('time (ms)')
ax[0].set_ylabel('#segment')
ax[0].imshow(ground_truth, origin="lower", aspect="auto",
interpolation="none", cmap="bwr",
vmin=vmin, vmax=vmax,
extent=[time[0], time[-1], 1, len(seglen)])
ax[0].set_title('Ground truth')
ax[1].set_title('Estimated current density')
# To calculate CSD in the 3D cube:
skcsd_3D = ker.values(transformation="3D")
# To visualize morphology you can use the sKCSD cell object:
morpho_z, extent = ker.cell.draw_cell2D(axis=2, segments=False)
extent = [extent[-2]*1e6, extent[-1]*1e6, extent[0]*1e6, extent[1]*1e6]
# draw_cell2D(axis) provides a 2D view of the 3D morphology together with the minimum and maximum coordinates of the 2D view (extent) along a specified axis: 0 for x, 1 for y and 2 for z (z is the default). For projections along the x-axis the extent equals [ymin, ymax, zmin, zmax]; along the y-axis, [xmin, xmax, zmin, zmax]; along the z-axis, [xmin, xmax, ymin, ymax]. With segments=True the cell is drawn using segment coordinates; with segments=False, using loop coordinates. If the number of sources (n_src) is low, these morphologies might differ.
fig, ax = plt.subplots(1, 1)
ax.imshow(morpho_z, origin="lower", aspect="auto", interpolation="none", extent=extent)
# Or create a new sKCSDcell object using morphology data and electrode positions:
n_src = 25
cell_itself = sKCSDcell(data.morphology, data.ele_pos, n_src)
morpho, extent = cell_itself.draw_cell2D(axis=2, segments=False)
extent = [extent[-2], extent[-1], extent[0], extent[1]]
extent = [x*1e6 for x in extent]
fig, ax = plt.subplots(1, 1)
ax.imshow(morpho, origin="lower", aspect="auto", interpolation="none", extent=extent)
ax.set_ylabel('x [um]')
ax.set_xlabel('y [um]')
idx = np.where(time == 40)[0][0]
csd = skcsd_3D[:, :, :, idx].sum(axis=(2))
print(csd.shape, extent)
ax.imshow(csd, origin="lower", aspect="auto", interpolation="none", extent=extent, vmin=vmin, vmax=vmax, alpha=.5, cmap=plt.cm.bwr_r)
# If sKCSDcell object is created by sKCSD the minimum and maximum 3D coordinates take values of the minimum and maximum coordinates of the morphology. When creating a separate sKCSDcell object one can specify custom xmin, ymin, zmin and xmax, ymax, zmax, which can later be convenient, when plotting CSD and electrode positions. Another useful parameter is tolerance, which specifies minimum width of the voxel (in any direction). By default tolerance is set to 2 um.
tolerance = 3e-6
xmin = data.ele_pos[:, 0].min() - 50e-6
xmax = data.ele_pos[:, 0].max() + 50e-6
ymin = data.ele_pos[:, 1].min() - 40e-6
ymax = data.ele_pos[:, 1].max() + 40e-6
zmin = data.ele_pos[:, 2].min() - 50e-6
zmax = data.ele_pos[:, 2].max() + 50e-6
cell_itself = sKCSDcell(data.morphology, data.ele_pos, n_src,
xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax,
zmin=zmin, zmax=zmax, tolerance=tolerance)
morpho_z, extent = cell_itself.draw_cell2D(axis=2, segments=False)
extent = [extent[-2], extent[-1], extent[0], extent[1]]
extent = [x*1e6 for x in extent]
fig, ax = plt.subplots(1, 1)
ax.imshow(morpho_z, origin="lower", aspect="auto", interpolation="none", extent=extent)
ax.set_ylabel('x [um]')
ax.set_xlabel('y [um]')
for i in range(len(data.ele_pos)):
pos_x, pos_y = data.ele_pos[i, 1]*1e6, data.ele_pos[i, 0]*1e6
ax.text(pos_x, pos_y, '*', ha="center", va="center", color="b")
# Let's visualize the CSD on top of the morphology at t = 40 ms. sKCSDcell objects can be used to rescale both the CSD and the potential to visualize them in a larger space spanned by (xmin, ymin, zmin) and (xmax, ymax, zmax). To transform these objects it is best to take the CSD in the morphology-loop space (ker.values(transformation=None)) and transform it using the transform_to_3D method of sKCSDcell.
fig, ax = plt.subplots(1, 1)
ax.imshow(morpho_z, origin="lower", aspect="auto", interpolation="none", extent=extent)
ax.set_ylabel('x [um]')
ax.set_xlabel('y [um]')
skcsd = ker.values(transformation=None)
skcsd_3D = cell_itself.transform_to_3D(skcsd, what="loop")
idx = np.where(time == 40)[0][0]
csd = skcsd_3D[:, :, :, idx].sum(axis=(2))
ax.imshow(csd, origin="lower", aspect="auto", interpolation="none", extent=extent, vmin=vmin, vmax=vmax, alpha=.5, cmap=plt.cm.bwr_r)
ax.set_ylabel('x [um]')
ax.set_xlabel('y [um]')
for i in range(len(data.ele_pos)):
pos_x, pos_y = data.ele_pos[i, 1]*1e6, data.ele_pos[i, 0]*1e6
ax.text(pos_x, pos_y, '*', ha="center", va="center", color="g")
| tutorials/skcsd_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--BOOK_INFORMATION-->
# <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
# *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by <NAME>; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
#
# *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
# <!--NAVIGATION-->
# < [Customizing Plot Legends](04.06-Customizing-Legends.ipynb) | [Contents](Index.ipynb) | [Multiple Subplots](04.08-Multiple-Subplots.ipynb) >
# # Customizing Colorbars
# Plot legends identify discrete labels of discrete points.
# For continuous labels based on the color of points, lines, or regions, a labeled colorbar can be a great tool.
# In Matplotlib, a colorbar is a separate axes that can provide a key for the meaning of colors in a plot.
# Because the book is printed in black-and-white, this section has an accompanying online supplement where you can view the figures in full color (https://github.com/jakevdp/PythonDataScienceHandbook).
# We'll start by setting up the notebook for plotting and importing the functions we will use:
import matplotlib.pyplot as plt
plt.style.use('classic')
# %matplotlib inline
import numpy as np
# As we have seen several times throughout this section, the simplest colorbar can be created with the ``plt.colorbar`` function:
# +
x = np.linspace(0, 10, 1000)
I = np.sin(x) * np.cos(x[:, np.newaxis])
plt.imshow(I)
plt.colorbar();
# -
# We'll now discuss a few ideas for customizing these colorbars and using them effectively in various situations.
# ## Customizing Colorbars
#
# The colormap can be specified using the ``cmap`` argument to the plotting function that is creating the visualization:
plt.imshow(I, cmap='gray');
# All the available colormaps are in the ``plt.cm`` namespace; using IPython's tab-completion will give you a full list of built-in possibilities:
# ```
# plt.cm.<TAB>
# ```
# But being *able* to choose a colormap is just the first step: more important is how to *decide* among the possibilities!
# The choice turns out to be much more subtle than you might initially expect.
# ### Choosing the Colormap
#
# A full treatment of color choice within visualization is beyond the scope of this book, but for entertaining reading on this subject and others, see the article ["Ten Simple Rules for Better Figures"](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003833).
# Matplotlib's online documentation also has an [interesting discussion](http://Matplotlib.org/1.4.1/users/colormaps.html) of colormap choice.
#
# Broadly, you should be aware of three different categories of colormaps:
#
# - *Sequential colormaps*: These are made up of one continuous sequence of colors (e.g., ``binary`` or ``viridis``).
# - *Divergent colormaps*: These usually contain two distinct colors, which show positive and negative deviations from a mean (e.g., ``RdBu`` or ``PuOr``).
# - *Qualitative colormaps*: these mix colors with no particular sequence (e.g., ``rainbow`` or ``jet``).
#
# The ``jet`` colormap, which was the default in Matplotlib prior to version 2.0, is an example of a qualitative colormap.
# Its status as the default was quite unfortunate, because qualitative maps are often a poor choice for representing quantitative data.
# Among the problems is the fact that qualitative maps usually do not display any uniform progression in brightness as the scale increases.
#
# We can see this by converting the ``jet`` colorbar into black and white:
# +
from matplotlib.colors import LinearSegmentedColormap
def grayscale_cmap(cmap):
"""Return a grayscale version of the given colormap"""
cmap = plt.cm.get_cmap(cmap)
colors = cmap(np.arange(cmap.N))
# convert RGBA to perceived grayscale luminance
# cf. http://alienryderflex.com/hsp.html
RGB_weight = [0.299, 0.587, 0.114]
luminance = np.sqrt(np.dot(colors[:, :3] ** 2, RGB_weight))
colors[:, :3] = luminance[:, np.newaxis]
return LinearSegmentedColormap.from_list(cmap.name + "_gray", colors, cmap.N)
def view_colormap(cmap):
"""Plot a colormap with its grayscale equivalent"""
cmap = plt.cm.get_cmap(cmap)
colors = cmap(np.arange(cmap.N))
cmap = grayscale_cmap(cmap)
grayscale = cmap(np.arange(cmap.N))
fig, ax = plt.subplots(2, figsize=(6, 2),
subplot_kw=dict(xticks=[], yticks=[]))
ax[0].imshow([colors], extent=[0, 10, 0, 1])
ax[1].imshow([grayscale], extent=[0, 10, 0, 1])
# -
view_colormap('jet')
# Notice the bright stripes in the grayscale image.
# Even in full color, this uneven brightness means that the eye will be drawn to certain portions of the color range, which will potentially emphasize unimportant parts of the dataset.
# It's better to use a colormap such as ``viridis`` (the default as of Matplotlib 2.0), which is specifically constructed to have an even brightness variation across the range.
# Thus it not only plays well with our color perception, but also will translate well to grayscale printing:
view_colormap('viridis')
# If you favor rainbow schemes, another good option for continuous data is the ``cubehelix`` colormap:
view_colormap('cubehelix')
# For other situations, such as showing positive and negative deviations from some mean, dual-color colorbars such as ``RdBu`` (*Red-Blue*) can be useful. However, as you can see in the following figure, it's important to note that the positive-negative information will be lost upon translation to grayscale!
view_colormap('RdBu')
# We'll see examples of using some of these color maps as we continue.
#
# There are a large number of colormaps available in Matplotlib; to see a list of them, you can use IPython to explore the ``plt.cm`` submodule. For a more principled approach to colors in Python, you can refer to the tools and documentation within the Seaborn library (see [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb)).
# ### Color limits and extensions
#
# Matplotlib allows for a large range of colorbar customization.
# The colorbar itself is simply an instance of ``plt.Axes``, so all of the axes and tick formatting tricks we've learned are applicable.
# The colorbar has some interesting flexibility: for example, we can narrow the color limits and indicate the out-of-bounds values with a triangular arrow at the top and bottom by setting the ``extend`` property.
# This might come in handy, for example, if displaying an image that is subject to noise:
# +
# make noise in 1% of the image pixels
speckles = (np.random.random(I.shape) < 0.01)
I[speckles] = np.random.normal(0, 3, np.count_nonzero(speckles))
plt.figure(figsize=(10, 3.5))
plt.subplot(1, 2, 1)
plt.imshow(I, cmap='RdBu')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(I, cmap='RdBu')
plt.colorbar(extend='both')
plt.clim(-1, 1);
# -
# Notice that in the left panel, the default color limits respond to the noisy pixels, and the range of the noise completely washes-out the pattern we are interested in.
# In the right panel, we manually set the color limits, and add extensions to indicate values which are above or below those limits.
# The result is a much more useful visualization of our data.
# ### Discrete Color Bars
#
# Colormaps are by default continuous, but sometimes you'd like to represent discrete values.
# The easiest way to do this is to use the ``plt.cm.get_cmap()`` function, and pass the name of a suitable colormap along with the number of desired bins:
plt.imshow(I, cmap=plt.cm.get_cmap('Blues', 6))
plt.colorbar()
plt.clim(-1, 1);
# The discrete version of a colormap can be used just like any other colormap.
# ## Example: Handwritten Digits
#
# For an example of where this might be useful, let's look at an interesting visualization of some hand written digits data.
# This data is included in Scikit-Learn, and consists of nearly 2,000 $8 \times 8$ thumbnails showing various hand-written digits.
#
# For now, let's start by downloading the digits data and visualizing several of the example images with ``plt.imshow()``:
# +
# load images of the digits 0 through 5 and visualize several of them
from sklearn.datasets import load_digits
digits = load_digits(n_class=6)
fig, ax = plt.subplots(8, 8, figsize=(6, 6))
for i, axi in enumerate(ax.flat):
axi.imshow(digits.images[i], cmap='binary')
axi.set(xticks=[], yticks=[])
# -
# Because each digit is defined by the hue of its 64 pixels, we can consider each digit to be a point lying in 64-dimensional space: each dimension represents the brightness of one pixel.
# But visualizing relationships in such high-dimensional spaces can be extremely difficult.
# One way to approach this is to use a *dimensionality reduction* technique such as manifold learning to reduce the dimensionality of the data while maintaining the relationships of interest.
# Dimensionality reduction is an example of unsupervised machine learning, and we will discuss it in more detail in [What Is Machine Learning?](05.01-What-Is-Machine-Learning.ipynb).
#
# Deferring the discussion of these details, let's take a look at a two-dimensional manifold learning projection of this digits data (see [In-Depth: Manifold Learning](05.10-Manifold-Learning.ipynb) for details):
# project the digits into 2 dimensions using IsoMap
from sklearn.manifold import Isomap
iso = Isomap(n_components=2)
projection = iso.fit_transform(digits.data)
# We'll use our discrete colormap to view the results, setting the ``ticks`` and ``clim`` to improve the aesthetics of the resulting colorbar:
# plot the results
plt.scatter(projection[:, 0], projection[:, 1], lw=0.1,
c=digits.target, cmap=plt.cm.get_cmap('cubehelix', 6))
plt.colorbar(ticks=range(6), label='digit value')
plt.clim(-0.5, 5.5)
# The projection also gives us some interesting insights on the relationships within the dataset: for example, the ranges of 5 and 3 nearly overlap in this projection, indicating that some hand written fives and threes are difficult to distinguish, and therefore more likely to be confused by an automated classification algorithm.
# Other values, like 0 and 1, are more distantly separated, and therefore much less likely to be confused.
# This observation agrees with our intuition, because 5 and 3 look much more similar than do 0 and 1.
#
# We'll return to manifold learning and to digit classification in [Chapter 5](05.00-Machine-Learning.ipynb).
# <!--NAVIGATION-->
# < [Customizing Plot Legends](04.06-Customizing-Legends.ipynb) | [Contents](Index.ipynb) | [Multiple Subplots](04.08-Multiple-Subplots.ipynb) >
| PythonDataScienceHandbook/notebooks/04.07-Customizing-Colorbars.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Practical Assignment 3. Gradient Descent by Hand
# ### About the assignment
#
# In this assignment you need to implement the training of linear regression using several variants of gradient descent.
#
#
# ### Grading and penalties
# Each task has a certain "cost" (indicated in parentheses next to the task). The maximum grade for this work is 10 points.
#
# The assignment cannot be submitted after the stated deadline. If an incomplete score is given because of errors, the grader may, at their discretion, allow you to fix the work under the conditions stated in the reply letter.
#
# The assignment must be completed individually. "Similar" solutions are considered plagiarism, and all students involved (including those who were copied from) cannot receive more than 0 points for it (see the course page for details on plagiarism). If you found a solution to any of the tasks (or a part of it) in an open source, you must provide a link to that source in a separate block at the end of your work (most likely you will not be the only one who found it, so to rule out suspicion of plagiarism a link to the source is required).
#
# An inefficient code implementation may negatively affect the grade.
#
# All answers must be accompanied by code or by comments on how they were obtained.
# ## Implementing gradient descent
# **Reminder:**
#
# MSE in matrix form: $Q(w) = \dfrac{1}{\ell} \left( y - Xw \right)^T \left( y - Xw \right)$ ($\ell$ is the number of objects)
#
# Its gradient is then: $\nabla_w Q = \dfrac{-2}{\ell} X^T \big( y - Xw \big)$
#
# **Bonus (0.5 points)** Derive this formula for the MSE gradient.
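# Before relying on the gradient formula, it can be verified numerically against central finite differences (a quick standalone check on random data):

```python
import numpy as np

rng = np.random.default_rng(0)
ell, d = 50, 3
X = rng.normal(size=(ell, d))
y = rng.normal(size=ell)
w = rng.normal(size=d)

def mse(w):
    r = y - X @ w
    return r @ r / ell

# Analytic gradient: -2/ell * X^T (y - Xw)
grad = -2 / ell * X.T @ (y - X @ w)

# Central finite differences, coordinate by coordinate
eps = 1e-6
num_grad = np.array([(mse(w + eps * e) - mse(w - eps * e)) / (2 * eps)
                     for e in np.eye(d)])
print(np.max(np.abs(grad - num_grad)))  # tiny, limited only by rounding
```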
# Implement linear regression with the MSE loss function, trained using:
#
# **Task 1 (1 point)** Gradient descent;
#
# **Task 2 (1.5 points)** Stochastic gradient descent;
#
# **Task 3 (2.5 points)** The Momentum method.
#
#
# In all items the following conditions must be met:
#
# * All computations must be vectorized;
# * Python loops may be used only for the iterations of gradient descent;
# * As the stopping criterion, use (both at once):
#
#     * a check of the Euclidean norm of the difference between the weights on two consecutive iterations (e.g., less than some small number of order $10^{-6}$, set by the `tolerance` parameter);
#     * reaching the maximum number of iterations (e.g., 10000, set by the `max_iter` parameter).
# * To verify that the optimization process actually converges, we will use the class attribute `loss_history`: after calling the `fit` method it must contain the values of the loss function for all iterations, starting from the first one (before the first step along the antigradient);
# * The weights may be initialized randomly or with a zero vector.
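# A minimal sketch of just the stopping logic described above (an illustration on a toy problem, not a solution to the tasks; the hypothetical helper `gd_weights` does plain full-batch MSE descent):

```python
import numpy as np

def gd_weights(X, y, eta=1e-2, tolerance=1e-6, max_iter=10000):
    """Full gradient descent on MSE, stopping on weight-change norm or max_iter."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        grad = -2 / len(y) * X.T @ (y - X @ w)
        w_new = w - eta * grad
        if np.linalg.norm(w_new - w) < tolerance:  # first criterion
            return w_new
        w = w_new
    return w  # second criterion: max_iter reached

# Toy problem with exact answer w = 2 (y = 2 * x)
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
print(gd_weights(X, y))  # close to [2.]
```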
#
#
# Below is a class template that should contain the implementation code for each of the methods.
# +
import numpy as np
from sklearn.base import BaseEstimator
class LinearReg(BaseEstimator):
def __init__(self, gd_type='stochastic',
tolerance=1e-4, max_iter=1000, w0=None, alpha=1e-3, eta=1e-2):
"""
gd_type: 'full' or 'stochastic' or 'momentum'
tolerance: for stopping gradient descent
max_iter: maximum number of steps in gradient descent
w0: np.array of shape (d) - init weights
eta: learning rate
alpha: momentum coefficient
"""
self.gd_type = gd_type
self.tolerance = tolerance
self.max_iter = max_iter
self.w0 = w0
self.alpha = alpha
self.w = None
self.eta = eta
self.loss_history = None # list of loss function values at each training iteration
def fit(self, X, y):
"""
X: np.array of shape (ell, d)
y: np.array of shape (ell)
---
output: self
"""
self.loss_history = []
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
return self
def predict(self, X):
if self.w is None:
raise Exception('Not trained yet')
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
pass
def calc_gradient(self, X, y):
"""
X: np.array of shape (ell, d) (ell can be equal to 1 if stochastic)
y: np.array of shape (ell)
---
output: np.array of shape (d)
"""
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
pass
def calc_loss(self, X, y):
"""
X: np.array of shape (ell, d)
y: np.array of shape (ell)
---
output: float
"""
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
pass
# +
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
# -
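# The stopping criteria and `loss_history` bookkeeping described above can be sketched, for the full-gradient case with an MSE loss, as follows (an illustrative standalone function, not the graded class implementation):

```python
import numpy as np

def gd_mse(X, y, eta=1e-2, tolerance=1e-6, max_iter=10000):
    """Full gradient descent for MSE with the two stopping criteria above."""
    w = np.zeros(X.shape[1])
    loss_history = [np.mean((X @ w - y) ** 2)]  # loss before the first step
    for _ in range(max_iter):
        grad = 2 / X.shape[0] * X.T @ (X @ w - y)  # vectorized MSE gradient
        w_new = w - eta * grad
        loss_history.append(np.mean((X @ w_new - y) ** 2))
        if np.linalg.norm(w_new - w) < tolerance:  # weight-change criterion
            w = w_new
            break
        w = w_new
    return w, loss_history

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w, hist = gd_mse(X, y, eta=0.1)
print(np.round(w, 2))  # should be close to w_true
```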
# **Task 4 (0 points)**.
# * Load the data from homework 2 ([train.csv](https://www.kaggle.com/c/nyc-taxi-trip-duration/data));
# * Split the sample into training and test sets in a 7:3 ratio with random_seed=0;
# * Transform the target variable `trip_duration` as $\hat{y} = \log{(y + 1)}$.
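# The split and target transform can be sketched as follows (synthetic data stands in here for `train.csv`; only the column name `trip_duration` comes from the task):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the taxi data: `trip_duration` is the target.
df = pd.DataFrame({
    "distance": np.random.default_rng(0).uniform(0.5, 20, size=1000),
})
df["trip_duration"] = (df["distance"] * 180).round()

X = df.drop(columns=["trip_duration"])
y = np.log(df["trip_duration"] + 1)            # y_hat = log(y + 1), i.e. np.log1p
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)       # 7:3 split, seed 0
print(X_train.shape, X_test.shape)
```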
# +
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
# -
# **Task 5 (3 points)**. Train and validate the models on the data from the previous step, and compare the quality of the methods using the MSE and $R^2$ metrics. Study how the parameters `max_iter` and `eta` (`max_iter`, `alpha`, and `eta` for Momentum) affect the optimization process. Does the behavior match your expectations?
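# The two metrics for the comparison are available in `sklearn.metrics` (a minimal sketch on toy predictions):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.1, 2.9, 6.8])  # e.g. predictions from one of the GD variants

mse = mean_squared_error(y_true, y_pred)  # mean of squared residuals
r2 = r2_score(y_true, y_pred)             # 1 - SS_res / SS_tot
print(mse, r2)
```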
# +
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
# -
# **Task 6 (2 points)**. Plot (in a single figure) the loss value versus the iteration number for full and stochastic gradient descent, as well as for full gradient descent with Momentum. Draw conclusions about the convergence speed of the different modifications of gradient descent.
#
# Remember, the plots should come out *pretty*!
# +
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
# -
# ### Bonus
# **Task 7 (2 points)**. Implement linear regression with the MSE loss trained with the
# [Adam](https://arxiv.org/pdf/1412.6980.pdf) method: add parameters to the model class if necessary, repeat steps 5 and 6, and compare the results.
# +
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
# -
# **Task 8 (2 points)**. Implement linear regression with the loss function
# $$ L(\hat{y}, y) = \log(\cosh(\hat{y} - y)),$$
#
# trained with gradient descent.
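# A useful identity for this task: $\frac{d}{dz}\log(\cosh(z)) = \tanh(z)$, so the averaged gradient with respect to $w$ is $\frac{1}{\ell} X^\top \tanh(Xw - y)$. A quick numerical check of this derivative (a sketch, not the graded solution):

```python
import numpy as np

def logcosh_loss(w, X, y):
    return np.mean(np.log(np.cosh(X @ w - y)))

def logcosh_grad(w, X, y):
    # since (log cosh z)' = tanh z
    return X.T @ np.tanh(X @ w - y) / X.shape[0]

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = rng.normal(size=50)
w = rng.normal(size=4)

# central finite-difference check of the analytic gradient
eps = 1e-6
num = np.array([(logcosh_loss(w + eps * e, X, y) - logcosh_loss(w - eps * e, X, y)) / (2 * eps)
                for e in np.eye(4)])
err = np.max(np.abs(num - logcosh_grad(w, X, y)))
print(err)  # should be tiny
```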
# +
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
# -
# **Task 9 (0.01 points)**. Insert a picture of your favorite meme into this Jupyter Notebook.
# ╰( ͡° ͜ʖ ͡° )つ──☆*:・゚
| hw/hw3_grad/homework-practice-03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/FedericoBarona/AIDA/blob/main/AIDA_58025_Barona_Revisiting_Pyhton.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="w9Ag3kdy7doj"
# # Revisiting Python
# © <NAME> (2021) <br>
# This notebook discusses basic and intermediate Python coding principles and design
#
# + [markdown] id="MXZE-Hun8A6A"
# ## Part 1: Programming logic and Design
#
# In this section, we will revisit Programming Logic and Design using the Python language.
#
# 
#
# Discussions about Python
# + [markdown] id="X-51zu9_8-Gn"
# 1.1 Variables and Printing Values
# + colab={"base_uri": "https://localhost:8080/"} id="8aGb24_R5yPU" outputId="608562d1-2d40-4d8a-bfe4-3d9eebaa3e68"
# This is a C++ line of code
# int x = 2;
# cout<<x;
# Now this is a Python line of code
x = 2
# print(x)
# print(type(x))
# print("The data type of x is ",type(x), "and its value is ",x.".")
# print(f"The data type of x is ",{type(x)}, "and its value is ",{x}.".")
print("The data type of x is {} and its value is {}.".format(type(x),x))
# + colab={"base_uri": "https://localhost:8080/"} id="juZGoM1bAr-3" outputId="f954423b-5232-48c4-e657-b9ba7980b4bc"
print("Hello world")
# + [markdown] id="NMX2FlDq8IXv"
# ## Part 2: Object Oriented Programming
# + id="zxFW3H_u8NcW"
# + [markdown] id="HCRlh9Q68N_R"
# ## Part 3: Data Structures and Algorithms
# + id="wR_0NqQp8S2f"
| AIDA_58025_Barona_Revisiting_Pyhton.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ML Pipeline Preparation
# This is the machine learning pipeline preparation, which will be restructured into train_classifier.py
# import libraries
import pandas as pd
from sqlalchemy import create_engine
import nltk
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
import re
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report
import joblib
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql("SELECT * FROM Response",engine)
X = df["message"]
y = df.iloc[:,4:]
# ### Tokenization function to process text data
# normalize text by lowercasing and removing punctuation,
# then tokenize words and lemmatize each word
def tokenize(text):
text = re.sub("[^a-zA-Z0-9]"," ",text)
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
stopwords_list = stopwords.words("english")
for token in tokens:
clean_token = lemmatizer.lemmatize(token).lower().strip()
if (clean_token not in stopwords_list): clean_tokens.append(clean_token)
return clean_tokens
# ### Build a machine learning pipeline
# Use CountVectorizer to turn word counts into vectors and TfidfTransformer to weight each word by how frequent and how relevant it is to the document.
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(LGBMClassifier()))
])
# ### Train pipeline
#
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=42)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
# ### Evaluate model
# y_pred = pd.DataFrame(y_pred,index=X_test.index, columns=y_test.columns)
# Though not shown here, LGBM gives better results than simply plugging RandomForest or KNeighbors into the pipeline
print(classification_report(y_test,y_pred,target_names=y_test.columns))
# ### Improve model
# Use grid search to find better parameters.
# first look at parameters in pipeline for fine-tuning
pipeline.get_params()
# +
parameters = {
'clf__estimator__colsample_bytree': [0.3,0.7,1.0],
'clf__estimator__min_child_samples': [20,100,250,500],
'vect__max_features': [None,3000,6000],
}
grid_search = GridSearchCV(pipeline, parameters,scoring='f1_micro')
grid_search.fit(X_train, y_train)
# -
# ### Evaluate grid search model
grid_search.best_params_
# Simple grid search gives the same parameters used above, so the evaluation result is also the same.
final_model = grid_search.best_estimator_
final_pred = final_model.predict(X_test)
print(classification_report(y_test,final_pred,target_names=y_test.columns))
# ### Export model
# Evaluation of the model from grid search doesn't show significantly better performance, so we save the original pipeline.
filename = "final_model.sav"
joblib.dump(pipeline, filename)
| ML_Pipeline_Preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
data = requests.get('http://www.gutenberg.org/cache/epub/8001/pg8001.html')
content = data.content
print(content[1163:2200])
# +
import re
from bs4 import BeautifulSoup
def strip_html_tags(text):
soup = BeautifulSoup(text, "html.parser")
[s.extract() for s in soup(['iframe', 'script'])]
stripped_text = soup.get_text()
    stripped_text = re.sub(r'[\r\n]+', '\n', stripped_text)  # note: '|' inside a character class is a literal pipe, not alternation
return stripped_text
clean_content = strip_html_tags(content)
print(clean_content[1163:2045])
# -
# # Tokenization
# ## Sentence Tokenization
# +
import nltk
from nltk.corpus import gutenberg
from pprint import pprint
import numpy as np
# loading text corpora
alice = gutenberg.raw(fileids='carroll-alice.txt')
sample_text = ("US unveils world's most powerful supercomputer, beats China. "
"The US has unveiled the world's most powerful supercomputer called 'Summit', "
"beating the previous record-holder China's Sunway TaihuLight. With a peak performance "
"of 200,000 trillion calculations per second, it is over twice as fast as Sunway TaihuLight, "
"which is capable of 93,000 trillion calculations per second. Summit has 4,608 servers, "
"which reportedly take up the size of two tennis courts.")
sample_text
# -
# Total characters in Alice in Wonderland
len(alice)
# First 100 characters in the corpus
alice[0:100]
# ### Default sentence tokenizer
# +
default_st = nltk.sent_tokenize
alice_sentences = default_st(text=alice)
sample_sentences = default_st(text=sample_text)
print('Total sentences in sample_text:', len(sample_sentences))
print('Sample text sentences :-')
print(np.array(sample_sentences))
print('\nTotal sentences in alice:', len(alice_sentences))
print('First 5 sentences in alice:-')
print(np.array(alice_sentences[0:5]))
# -
# ### Other languages sentence tokenization
# +
from nltk.corpus import europarl_raw
german_text = europarl_raw.german.raw(fileids='ep-00-01-17.de')
# Total characters in the corpus
print(len(german_text))
# First 100 characters in the corpus
print(german_text[0:100])
# +
# default sentence tokenizer
german_sentences_def = default_st(text=german_text, language='german')
# loading german text tokenizer into a PunktSentenceTokenizer instance
german_tokenizer = nltk.data.load(resource_url='tokenizers/punkt/german.pickle')
german_sentences = german_tokenizer.tokenize(german_text)
# -
# verify the type of german_tokenizer
# should be PunktSentenceTokenizer
print(type(german_tokenizer))
# check if results of both tokenizers match
# should be True
print(german_sentences_def == german_sentences)
# print first 5 sentences of the corpus
print(np.array(german_sentences[:5]))
# ### Using PunktSentenceTokenizer for sentence tokenization
punkt_st = nltk.tokenize.PunktSentenceTokenizer()
sample_sentences = punkt_st.tokenize(sample_text)
print(np.array(sample_sentences))
# ### Using RegexpTokenizer for sentence tokenization
SENTENCE_TOKENS_PATTERN = r'(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<![A-Z]\.)(?<=\.|\?|\!)\s'
regex_st = nltk.tokenize.RegexpTokenizer(
pattern=SENTENCE_TOKENS_PATTERN,
gaps=True)
sample_sentences = regex_st.tokenize(sample_text)
print(np.array(sample_sentences))
# ## Word Tokenization
# ### Default word tokenizer
default_wt = nltk.word_tokenize
words = default_wt(sample_text)
np.array(words)
# ### Treebank word tokenizer
treebank_wt = nltk.TreebankWordTokenizer()
words = treebank_wt.tokenize(sample_text)
np.array(words)
# ### TokTok Word Tokenizer
from nltk.tokenize.toktok import ToktokTokenizer
tokenizer = ToktokTokenizer()
words = tokenizer.tokenize(sample_text)
np.array(words)
# ### Regexp word tokenizer
TOKEN_PATTERN = r'\w+'
regex_wt = nltk.RegexpTokenizer(pattern=TOKEN_PATTERN,
gaps=False)
words = regex_wt.tokenize(sample_text)
np.array(words)
GAP_PATTERN = r'\s+'
regex_wt = nltk.RegexpTokenizer(pattern=GAP_PATTERN,
gaps=True)
words = regex_wt.tokenize(sample_text)
np.array(words)
word_indices = list(regex_wt.span_tokenize(sample_text))
print(word_indices)
print(np.array([sample_text[start:end] for start, end in word_indices]))
# ### Derived regex tokenizers
wordpunkt_wt = nltk.WordPunctTokenizer()
words = wordpunkt_wt.tokenize(sample_text)
np.array(words)
whitespace_wt = nltk.WhitespaceTokenizer()
words = whitespace_wt.tokenize(sample_text)
np.array(words)
# ## Building Tokenizers with NLTK and spaCy
# +
def tokenize_text(text):
sentences = nltk.sent_tokenize(text)
word_tokens = [nltk.word_tokenize(sentence) for sentence in sentences]
return word_tokens
sents = tokenize_text(sample_text)
np.array(sents)
# -
words = [word for sentence in sents for word in sentence]
np.array(words)
# +
import spacy
nlp = spacy.load('en_core', parse=True, tag=True, entity=True)  # older spaCy API; with current spaCy use spacy.load('en_core_web_sm')
text_spacy = nlp(sample_text)
# -
sents = np.array(list(text_spacy.sents))
sents
sent_words = [[word.text for word in sent] for sent in sents]
np.array(sent_words)
words = [word.text for word in text_spacy]
np.array(words)
# # Removing Accented Characters
# +
import unicodedata
def remove_accented_chars(text):
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore')
return text
remove_accented_chars('Sómě Áccěntěd těxt')
# -
# # Expanding Contractions
# +
from contractions import CONTRACTION_MAP
import re
def expand_contractions(text, contraction_mapping=CONTRACTION_MAP):
contractions_pattern = re.compile('({})'.format('|'.join(contraction_mapping.keys())),
flags=re.IGNORECASE|re.DOTALL)
def expand_match(contraction):
match = contraction.group(0)
first_char = match[0]
expanded_contraction = contraction_mapping.get(match)\
if contraction_mapping.get(match)\
else contraction_mapping.get(match.lower())
expanded_contraction = first_char+expanded_contraction[1:]
return expanded_contraction
expanded_text = contractions_pattern.sub(expand_match, text)
expanded_text = re.sub("'", "", expanded_text)
return expanded_text
# -
expand_contractions("Y'all can't expand contractions I'd think")
# # Removing Special Characters
# +
def remove_special_characters(text, remove_digits=False):
    pattern = r'[^a-zA-Z0-9\s]' if not remove_digits else r'[^a-zA-Z\s]'
text = re.sub(pattern, '', text)
return text
remove_special_characters("Well this was fun! What do you think? 123#@!",
remove_digits=True)
# -
# # Case conversion
text = 'The quick brown fox jumped over The Big Dog'
text.lower()
text.upper()
text.title()
# # Text Correction
# ## Correcting repeating characters
# +
old_word = 'finalllyyy'
repeat_pattern = re.compile(r'(\w*)(\w)\2(\w*)')
match_substitution = r'\1\2\3'
step = 1
while True:
# remove one repeated character
new_word = repeat_pattern.sub(match_substitution,
old_word)
if new_word != old_word:
print('Step: {} Word: {}'.format(step, new_word))
step += 1 # update step
# update old word to last substituted state
old_word = new_word
continue
else:
print("Final word:", new_word)
break
# +
from nltk.corpus import wordnet
old_word = 'finalllyyy'
repeat_pattern = re.compile(r'(\w*)(\w)\2(\w*)')
match_substitution = r'\1\2\3'
step = 1
while True:
# check for semantically correct word
if wordnet.synsets(old_word):
print("Final correct word:", old_word)
break
# remove one repeated character
new_word = repeat_pattern.sub(match_substitution,
old_word)
if new_word != old_word:
print('Step: {} Word: {}'.format(step, new_word))
step += 1 # update step
# update old word to last substituted state
old_word = new_word
continue
else:
print("Final word:", new_word)
break
# +
from nltk.corpus import wordnet
def remove_repeated_characters(tokens):
repeat_pattern = re.compile(r'(\w*)(\w)\2(\w*)')
match_substitution = r'\1\2\3'
def replace(old_word):
if wordnet.synsets(old_word):
return old_word
new_word = repeat_pattern.sub(match_substitution, old_word)
return replace(new_word) if new_word != old_word else new_word
correct_tokens = [replace(word) for word in tokens]
return correct_tokens
# -
sample_sentence = 'My schooool is realllllyyy amaaazingggg'
correct_tokens = remove_repeated_characters(nltk.word_tokenize(sample_sentence))
' '.join(correct_tokens)
# ## Correcting spellings
# +
import re, collections
def tokens(text):
"""
Get all words from the corpus
"""
return re.findall('[a-z]+', text.lower())
WORDS = tokens(open('big.txt').read())
WORD_COUNTS = collections.Counter(WORDS)
# top 10 words in corpus
WORD_COUNTS.most_common(10)
# +
def edits0(word):
"""
Return all strings that are zero edits away
from the input word (i.e., the word itself).
"""
return {word}
def edits1(word):
"""
Return all strings that are one edit away
from the input word.
"""
alphabet = 'abcdefghijklmnopqrstuvwxyz'
def splits(word):
"""
Return a list of all possible (first, rest) pairs
that the input word is made of.
"""
return [(word[:i], word[i:])
for i in range(len(word)+1)]
pairs = splits(word)
deletes = [a+b[1:] for (a, b) in pairs if b]
transposes = [a+b[1]+b[0]+b[2:] for (a, b) in pairs if len(b) > 1]
replaces = [a+c+b[1:] for (a, b) in pairs for c in alphabet if b]
inserts = [a+c+b for (a, b) in pairs for c in alphabet]
return set(deletes + transposes + replaces + inserts)
def edits2(word):
"""Return all strings that are two edits away
from the input word.
"""
return {e2 for e1 in edits1(word) for e2 in edits1(e1)}
# -
def known(words):
"""
Return the subset of words that are actually
in our WORD_COUNTS dictionary.
"""
return {w for w in words if w in WORD_COUNTS}
# +
# input word
word = 'fianlly'
# zero edit distance from input word
edits0(word)
# -
# returns null set since it is not a valid word
known(edits0(word))
# one edit distance from input word
edits1(word)
# get correct words from above set
known(edits1(word))
# two edit distances from input word
edits2(word)
# get correct words from above set
known(edits2(word))
candidates = (known(edits0(word)) or
known(edits1(word)) or
known(edits2(word)) or
[word])
candidates
def correct(word):
"""
Get the best correct spelling for the input word
"""
# Priority is for edit distance 0, then 1, then 2
# else defaults to the input word itself.
candidates = (known(edits0(word)) or
known(edits1(word)) or
known(edits2(word)) or
[word])
return max(candidates, key=WORD_COUNTS.get)
correct('fianlly')
correct('FIANLLY')
# +
def correct_match(match):
"""
Spell-correct word in match,
and preserve proper upper/lower/title case.
"""
word = match.group()
def case_of(text):
"""
Return the case-function appropriate
for text: upper, lower, title, or just str.:
"""
return (str.upper if text.isupper() else
str.lower if text.islower() else
str.title if text.istitle() else
str)
return case_of(word)(correct(word.lower()))
def correct_text_generic(text):
"""
Correct all the words within a text,
returning the corrected text.
"""
return re.sub('[a-zA-Z]+', correct_match, text)
# -
correct_text_generic('fianlly')
correct_text_generic('FIANLLY')
# +
from textblob import Word
w = Word('fianlly')
w.correct()
# -
w.spellcheck()
w = Word('flaot')
w.spellcheck()
# # Stemming
# +
# Porter Stemmer
from nltk.stem import PorterStemmer
ps = PorterStemmer()
ps.stem('jumping'), ps.stem('jumps'), ps.stem('jumped')
# -
ps.stem('lying')
ps.stem('strange')
# +
# Lancaster Stemmer
from nltk.stem import LancasterStemmer
ls = LancasterStemmer()
ls.stem('jumping'), ls.stem('jumps'), ls.stem('jumped')
# -
ls.stem('lying')
ls.stem('strange')
# Regex based stemmer
from nltk.stem import RegexpStemmer
rs = RegexpStemmer('ing$|s$|ed$', min=4)
rs.stem('jumping'), rs.stem('jumps'), rs.stem('jumped')
rs.stem('lying')
rs.stem('strange')
# Snowball Stemmer
from nltk.stem import SnowballStemmer
ss = SnowballStemmer("german")
print('Supported Languages:', SnowballStemmer.languages)
# stemming on German words
# autobahnen -> cars
# autobahn -> car
ss.stem('autobahnen')
# springen -> jumping
# spring -> jump
ss.stem('springen')
# +
def simple_stemmer(text):
ps = nltk.porter.PorterStemmer()
text = ' '.join([ps.stem(word) for word in text.split()])
return text
simple_stemmer("My system keeps crashing his crashed yesterday, ours crashes daily")
# -
# # Lemmatization
from nltk.stem import WordNetLemmatizer
wnl = WordNetLemmatizer()
# lemmatize nouns
print(wnl.lemmatize('cars', 'n'))
print(wnl.lemmatize('men', 'n'))
# lemmatize verbs
print(wnl.lemmatize('running', 'v'))
print(wnl.lemmatize('ate', 'v'))
# lemmatize adjectives
print(wnl.lemmatize('saddest', 'a'))
print(wnl.lemmatize('fancier', 'a'))
# ineffective lemmatization
print(wnl.lemmatize('ate', 'n'))
print(wnl.lemmatize('fancier', 'v'))
# +
import spacy
# use spacy.load('en') if you downloaded the language model 'en' directly after installing spaCy
nlp = spacy.load('en_core', parse=True, tag=True, entity=True)  # older spaCy API; with current spaCy use spacy.load('en_core_web_sm')
text = 'My system keeps crashing his crashed yesterday, ours crashes daily'
def lemmatize_text(text):
text = nlp(text)
text = ' '.join([word.lemma_ if word.lemma_ != '-PRON-' else word.text for word in text])
return text
lemmatize_text("My system keeps crashing! his crashed yesterday, ours crashes daily")
# +
from nltk.tokenize.toktok import ToktokTokenizer
tokenizer = ToktokTokenizer()
stopword_list = nltk.corpus.stopwords.words('english')
def remove_stopwords(text, is_lower_case=False, stopwords=stopword_list):
tokens = tokenizer.tokenize(text)
tokens = [token.strip() for token in tokens]
if is_lower_case:
filtered_tokens = [token for token in tokens if token not in stopwords]
else:
filtered_tokens = [token for token in tokens if token.lower() not in stopwords]
filtered_text = ' '.join(filtered_tokens)
return filtered_text
remove_stopwords("The, and, if are stopwords, computer is not")
# -
def normalize_corpus(corpus, html_stripping=True, contraction_expansion=True,
accented_char_removal=True, text_lower_case=True,
text_lemmatization=True, special_char_removal=True,
stopword_removal=True, remove_digits=True):
normalized_corpus = []
# normalize each document in the corpus
for doc in corpus:
# strip HTML
if html_stripping:
doc = strip_html_tags(doc)
# remove accented characters
if accented_char_removal:
doc = remove_accented_chars(doc)
# expand contractions
if contraction_expansion:
doc = expand_contractions(doc)
# lowercase the text
if text_lower_case:
doc = doc.lower()
# remove extra newlines
        doc = re.sub(r'[\r\n]+', ' ', doc)
# lemmatize text
if text_lemmatization:
doc = lemmatize_text(doc)
# remove special characters and\or digits
if special_char_removal:
# insert spaces between special characters to isolate them
special_char_pattern = re.compile(r'([{.(-)!}])')
doc = special_char_pattern.sub(" \\1 ", doc)
doc = remove_special_characters(doc, remove_digits=remove_digits)
# remove extra whitespace
doc = re.sub(' +', ' ', doc)
# remove stopwords
if stopword_removal:
doc = remove_stopwords(doc, is_lower_case=text_lower_case)
normalized_corpus.append(doc)
return normalized_corpus
{'Original': sample_text,
'Processed': normalize_corpus([sample_text])[0]}
| New-Second-Edition/Ch03 - Processing and Understanding Text/Ch03a - Text Wrangling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This workbook is intended for working with a single quarter of Kepler data at a time. It has not been developed to run sequentially, but rather is a space to play with different methods.
# +
import sys
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk
from matplotlib import colors
import matplotlib.cm as cmx
import matplotlib.gridspec as gridspec
import numpy as np
np.set_printoptions(threshold=sys.maxsize)
import pandas as pd
import seaborn as sns
import pickle
if sys.version_info[0] < 3:
import Tkinter as Tk
from tkFileDialog import askopenfilename,askdirectory,asksaveasfile
else:
import tkinter as Tk
from tkinter.filedialog import askopenfilename,askdirectory,asksaveasfile
sys.path.append('python')
from clusterOutliers import clusterOutliers
import keplerml
# -
# ## Import the features created with keplerml.py
# !!! ONLY OPEN PICKLED FILES YOU KNOW ARE SAFE !!!
#
# We prefer pickling over other methods for speed and data preservation, but for the examples here we provide features in a csv format since unpickling unknown files can run arbitrary code. While we provide pickled example data for consistency in how we have handled data, feel free to use the csv data for example purposes.
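# The csv-versus-pickle trade-off mentioned above can be seen with a toy roundtrip (a sketch; the file names and feature columns here are made up):

```python
import os
import tempfile

import pandas as pd

# Toy feature table standing in for a quarter's features.
feats = pd.DataFrame({"kic": [757076, 757099],
                      "amplitude": [0.01, 0.03]}).set_index("kic")

tmp = tempfile.mkdtemp()
feats.to_csv(os.path.join(tmp, "features.csv"))    # plain text: safe to open from anyone
feats.to_pickle(os.path.join(tmp, "features.p"))   # faster, dtype-preserving: trusted files only

from_csv = pd.read_csv(os.path.join(tmp, "features.csv"), index_col=0)
from_pickle = pd.read_pickle(os.path.join(tmp, "features.p"))
print(from_csv.equals(from_pickle))
```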
# +
# User defined
path_to_fits = "/home/dgiles/Documents/KeplerLCs/fitsFiles/Q8fitsfiles/" # path to fits file directory, fits files must be local
# remote access to light curves in progress
# We use a cluster outlier object to contain the analysis products for each quarter.
# The object contains methods for clustering and scoring using preset parameters.
# Currently, clusterOutlier objects can be created with a path to a csv or pickled dataframe, or directly from a dataframe.
# Via csv:
path_to_features = "./data/output/Q8_sample.csv"
Q8coo = clusterOutliers(feats=path_to_features,fitsDir=path_to_fits,output_file='Q8_sample.coo')
# Pickled dataframe:
#path_to_features = "./data/output/Q8_sample.p"
#Q8coo = clusterOutliers(path_to_features,fitsDir,'Q8_sample.coo')
# User imported dataframe:
#Q8_features = pd.read_csv(path_to_features,index_col=0)
#Q8coo = clusterOutliers(Q8_features,fitsDir,'Q8_sample.coo')
# Existing cluster outlier object:
#from clusterOutliers import load_coo
#load_coo('Q8_sample.coo')
# -
| Q_Workbook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Programming Bootcamp 2016
# # Lesson 7 Exercises - ANSWERS
# ---
# ---
# ## 1. Creating your function file (1pt)
#
# Up until now, we've created all of our functions inside the Jupyter notebook. However, in order to use your functions across different scripts, it's best to put them into a separate file. Then you can load this file from anywhere with a single line of code and have access to all your custom functions!
#
# **Do the following:**
# - Open up your plain-text editor.
# - Copy and paste all of the functions you created in lab6, question 3, into a blank file and save it as **my_utils.py** (don't forget the .py extension). Save it anywhere on your computer.
# - Edit the code below so that it imports your `my_utils.py` functions. You will need to change `'../utilities/my_utils.py'` to where you saved your file.
#
# Some notes:
# - You need to supply the path *relative to where this notebook is*. See the slides for some examples on how to specify paths.
# - You can use this method to import your functions from *anywhere* on your computer! In contrast, the regular `import` function will only find custom functions that are in the same directory as the current notebook/script.
# +
import imp
my_utils = imp.load_source('my_utils', '../utilities/my_utils.py') #CHANGE THIS PATH
# test that this worked
print "Test my_utils.gc():", my_utils.gc("ATGGGCCCAATGG")
print "Test my_utils.reverse_compl():", my_utils.reverse_compl("GGGGTCGATGCAAATTCAAA")
print "Test my_utils.read_fasta():", my_utils.read_fasta("horrible.fasta")
print "Test my_utils.rand_seq():", my_utils.rand_seq(23)
print "Test my_utils.shuffle_nt():", my_utils.shuffle_nt("AAAAAAGTTTCCC")
print "\nIf the above produced no errors, then you're good!"
# -
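# Note for Python 3 users: the `imp` module used above is deprecated (and removed in Python 3.12); `importlib.util` is the modern equivalent. A self-contained sketch (the stand-in module file and its `gc` function are written on the fly just so the example runs anywhere):

```python
import importlib.util
import os
import tempfile

# Write a tiny stand-in for my_utils.py so this sketch is self-contained.
module_path = os.path.join(tempfile.mkdtemp(), "my_utils.py")
with open(module_path, "w") as fh:
    fh.write("def gc(seq):\n"
             "    return float(seq.count('G') + seq.count('C')) / len(seq)\n")

# Load a module from an arbitrary file path, like imp.load_source did.
spec = importlib.util.spec_from_file_location("my_utils", module_path)
my_utils = importlib.util.module_from_spec(spec)
spec.loader.exec_module(my_utils)

print(my_utils.gc("ATGGGCCCAATGG"))
```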
# Feel free to use these functions (and any others you've created) to solve the problems below. You can see in the test code above how they can be accessed.
# ---
# ## 2. Command line arguments (6pts)
#
# **Note: Do the following in a SCRIPT, not in the notebook**. You can not use command line arguments within Jupyter notebooks.
#
# After testing your code as a script, copy and paste it here for grading purposes only.
# **(A)** Write a script that expects 4 arguments, and prints those four arguments to the screen. Test this script by running it (on the command line) as shown in the lecture. Copy and paste the code below once you have it working.
# +
import sys
print sys.argv[1]
print sys.argv[2]
print sys.argv[3]
print sys.argv[4]
# -
#
# **(B)** Write a script that expects 3 numerical arguments ("a", "b", and "c") from the command line.
# - Check that the correct number of arguments is supplied (based on the length of sys.argv)
# - If not, print an error message and exit
# - Otherwise, go on to add the three numbers together and print the result.
#
# Copy and paste your code below once you have it working.
#
# **Note: All command line arguments are read in as strings (just like with raw_input). To use them as numbers, you must convert them with float().**
# +
import sys
if len(sys.argv) == 4:
a = float(sys.argv[1])
b = float(sys.argv[2])
c = float(sys.argv[3])
else:
print "Incorrect args. Please enter three numbers."
sys.exit()
print a + b + c
# -
#
# **(C)** Here you will create a script that generates a random dataset of sequences.
#
# **Your script should expect the following command line arguments, in this order. Remember to convert strings to ints when needed:**
# 1. outFile - string; name of the output file the generated sequences will be printed to
# 1. numSeqs - integer; number of sequences to create
# 1. minLength - integer; minimum sequence length
# 1. maxLength - integer; maximum sequence length
#
# The script should read in these arguments and check if the correct number of arguments is supplied (exit if not).
#
# **If all looks good, then print the indicated number of randomly generated sequences as follows:**
# - the length of each individual sequence should be randomly chosen to be between minLength and maxLength (so that not all sequences are the same length)
# - each sequence should be given a unique ID (e.g. using a counter to make names like seq1, seq2, ...)
# - the output should be in fasta format (>seqID\nsequence\n)
# - the output should be printed to the indicated file
#
# Then, run your script to create a file called `fake.fasta` containing 100,000 random sequences of random length 50-500 nt.
#
# Copy and paste your code below once you have it working.
# +
import sys, random, imp
my_utils = imp.load_source('my_utils', '../utilities/my_utils.py')
if len(sys.argv) == 5:
outFile = sys.argv[1]
numSeqs = int(sys.argv[2])
minLength = int(sys.argv[3])
maxLength = int(sys.argv[4])
else:
print "Incorrect args. Please enter an outfile, numSeqs, minLength, maxLength."
sys.exit()
outs = open(outFile, 'w')
for i in range(numSeqs):
randLen = random.randint(minLength, maxLength)
randSeq = my_utils.rand_seq(randLen)
seqName = "seq" + str(i)
outs.write(">" + seqName + "\n" + randSeq + "\n")
outs.close()
# -
# ---
# ## 3. `time` practice (7pts)
#
# For the following problems, use the file you created in the previous problem (`fake.fasta`) and the `time.time()` function. (Note: there is also a copy of `fake.fasta` on Piazza if you need it.)
#
# **Note: Do not include the time it takes to read the file in your time calculation! Loading files can take a while.**
# **(A) Initial practice with timing.** Add code to the following cell to time how long it takes to run. Print the result.
# +
import time
start = time.time()
sillyList = []
for i in range(50000):
sillyList.append(sum(sillyList))
end = time.time()
print end - start
# -
#
# **(B) Counting characters.** Is it faster to use the built-in function `str.count()` or to loop through a string and count characters manually? Compare the two by counting all the A's in all the sequences in `fake.fasta` using each method and comparing how long they take to run.
#
# (You do not need to output the counts)
# +
# Method 1 (Manual counting)
seqDict = my_utils.read_fasta("fake.fasta")
start = time.time()
for seqID in seqDict:
seq = seqDict[seqID]
count = 0
for char in seq:
if char == "A":
count += 1
end = time.time()
print end - start
# +
# Method 2 (.count())
seqDict = my_utils.read_fasta("fake.fasta")
start = time.time()
for seqID in seqDict:
seq = seqDict[seqID]
count = seq.count("A")
end = time.time()
print end - start
# -
# *Which was faster?* **Method 2, str.count()**
#
# **(C) Replacing characters.** Is it faster to use the built-in function `str.replace()` or to loop through a string and replace characters manually? Compare the two by replacing all the T's with U's in all the sequences in `fake.fasta` using each method, and comparing how long they take to run.
#
# (You do not need to output the edited sequences)
# +
# Method 1 (Manual replacement)
seqDict = my_utils.read_fasta("fake.fasta")
start = time.time()
for seqID in seqDict:
    seq = seqDict[seqID]
    newSeq = ""
    for char in seq:
        if char == "T":
            newSeq += "U"
        else:
            newSeq += char
end = time.time()
print(end - start)
# +
# Method 2 (.replace())
seqDict = my_utils.read_fasta("fake.fasta")
start = time.time()
for seqID in seqDict:
    seq = seqDict[seqID]
    newSeq = seq.replace("T", "U")
end = time.time()
print(end - start)
# -
# *Which was faster?* **Method 2, str.replace()**
#
# **(D) Lookup speed in data structures.** Is it faster to get unique IDs using a list or a dictionary? Read in `fake.fasta`, ignoring everything but the header lines. Count the number of unique IDs (headers) using a list or dictionary, and compare how long each method takes to run.
#
# Be patient; this one might take a while to run!
# +
# Method 1 (list)
seqDict = my_utils.read_fasta("fake.fasta")
uniqueIDs = []
start = time.time()
for seqID in seqDict:
    if seqID not in uniqueIDs:
        uniqueIDs.append(seqID)
end = time.time()
print(end - start)
# +
# Method 2 (dictionary)
seqDict = my_utils.read_fasta("fake.fasta")
uniqueIDs = {}
start = time.time()
for seqID in seqDict:
    if seqID not in uniqueIDs:
        uniqueIDs[seqID] = True
end = time.time()
print(end - start)
# -
# *Which was faster?* **Method 2, dictionary**
#
# If you're curious, below is a brief explanation of the outcomes you should have observed:
#
# > (B) The built-in method should be much faster! Most built-in functions are pretty well optimized, so they will often (but not always) be faster.
#
# > (C) Again, the built-in function should be quite a bit faster.
#
# > (D) If you did this right, then the dictionary should be faster by several orders of magnitude. When you use a dictionary, Python jumps directly to where the requested key *should* be, if it were in the dictionary. This is very fast (it's an O(1) operation, for those who are familiar with the terminology). With lists, on the other hand, Python will scan through the whole list until it finds the requested element (or until it reaches the end). This gets slower and slower on average as you add more elements (it's an O(n) operation). Just something to keep in mind if you start working with very large datasets!
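# To see this concretely, here is a small standalone sketch (not part of the original lab) that times membership tests directly with the `timeit` module. Exact numbers vary by machine, but the list lookup should come out dramatically slower:

```python
import timeit

n = 100000
data_list = list(range(n))
data_set = set(data_list)

# Worst case for the list: the element we look up is at the very end,
# so Python scans all n items. The set hashes the value and jumps
# straight to where it is stored.
t_list = timeit.timeit(lambda: (n - 1) in data_list, number=100)
t_set = timeit.timeit(lambda: (n - 1) in data_set, number=100)

print("list membership:", t_list)
print("set membership: ", t_set)
```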
#
# ---
# ## 4. `os` and `glob` practice (6pts)
#
# Use `horrible.fasta` as a test fasta file for the following.
# **(A)** Write code that prompts the user (using `raw_input()`) for two pieces of information: an input file name (assumed to be a fasta file) and an output folder name (does not need to already exist). Then do the following:
# - Check if the input file exists
# - If it doesn't, print an error message
# - Otherwise, go on to check if the output folder exists
# - If it doesn't, create it
# > Note: I did this below with command line args instead of raw_input() to give more examples of using args
# +
import sys, os
# read in command line args
if len(sys.argv) == 3:
    inputFile = sys.argv[1]
    outputFolder = sys.argv[2]
else:
    print(">>Error: Incorrect args. Please provide an input file name and an output folder. Exiting.")
    sys.exit()
# check if input file / output directory exist
if not os.path.exists(inputFile):
    print(">>Error: input file (%s) does not exist. Exiting." % inputFile)
    sys.exit()
if not os.path.exists(outputFolder):
    print("Creating output folder (%s)" % outputFolder)
    os.mkdir(outputFolder)
# -
#
# **(B)** Add to the code above so that it also does the following after creating the output folder:
# - Read in the fasta file (**ONLY** if it exists)
# - Print each individual sequence to a separate file **in the specified output folder**.
# - The files should be named `<SEQID>.fasta`, where `<SEQID>` is the name of the sequence (from the fasta header)
# +
import sys, os, my_utils
# read in command line args
if len(sys.argv) == 3:
    inputFile = sys.argv[1]
    outputFolder = sys.argv[2]
else:
    print(">>Error: Incorrect args. Please provide an input file name and an output folder. Exiting.")
    sys.exit()
# check if input file / output directory exist
if not os.path.exists(inputFile):
    print(">>Error: input file (%s) does not exist. Exiting." % inputFile)
    sys.exit()
if not os.path.exists(outputFolder):
    print("Creating output folder (%s)" % outputFolder)
    os.mkdir(outputFolder)
# read in sequences from fasta file & print to separate output files
# you'll get an error for one of them because there's a ">" in the sequence id,
# which is not allowed in a file name. you can handle this however you want.
# here I used a try-except statement and just skipped the problematic file (with a warning message)
seqs = my_utils.read_fasta(inputFile)
for seqID in seqs:
    outFile = "%s/%s.fasta" % (outputFolder, seqID)
    outStr = ">%s\n%s\n" % (seqID, seqs[seqID])
    try:
        outs = open(outFile, 'w')
        outs.write(outStr)
        outs.close()
    except IOError:
        print(">>Warning: Could not print (%s) file. Skipping." % outFile)
# -
#
# **(C)** Now use `glob` to get a list of all files in the output folder from part (B) that have a .fasta extension. For each file, print just the file name (not the file path) to the screen.
# +
import sys, glob, os
# read in command line args
if len(sys.argv) == 2:
    folderName = sys.argv[1]
else:
    print(">>Error: Incorrect args. Please provide a folder name. Exiting.")
    sys.exit()
fastaList = glob.glob(folderName + "/*.fasta")
for filePath in fastaList:
    print(os.path.basename(filePath))
| class_materials/Useful_Python_modules/2016/lab7_exercises_ANSWERS_2016.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/skimaza/assist/blob/main/cnn_tl_detectron2_assist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="cUU2xGvKcgXP"
# # AI Strategic Management MBA: Understanding Deep Learning Principles for Executives
# # Convolutional Neural Network transfer-learning hands-on example
# # Detectron2 transfer learning
# - https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5A
# + [markdown] id="QHnVupBBn9eR"
# # Detectron2 Beginner's Tutorial
#
# <img src="https://dl.fbaipublicfiles.com/detectron2/Detectron2-Logo-Horz.png" width="500">
#
# Welcome to detectron2! This is the official colab tutorial of detectron2. Here, we will go through some basic usage of detectron2, including the following:
# * Run inference on images or videos, with an existing detectron2 model
# * Train a detectron2 model on a new dataset
#
# You can make a copy of this tutorial by "File -> Open in playground mode" and make changes there. __DO NOT__ request access to this tutorial.
#
# + [markdown] id="vM54r6jlKTII"
# # Install detectron2
# + id="FsePPpwZSmqt"
# !pip install pyyaml==5.1
# This is the current pytorch version on Colab. Uncomment this if Colab changes its pytorch version
# # !pip install torch==1.9.0+cu102 torchvision==0.10.0+cu102 -f https://download.pytorch.org/whl/torch_stable.html
# Install detectron2 that matches the above pytorch version
# See https://detectron2.readthedocs.io/tutorials/install.html for instructions
# !pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
# exit(0) # After installation, you need to "restart runtime" in Colab. This line can also restart runtime
# + id="9_FzH13EjseR"
# check pytorch installation:
import torch, torchvision
if torch.cuda.is_available():
    use_cuda = True
else:
    use_cuda = False
print('use_cuda?', use_cuda)
assert torch.__version__.startswith("1.9") # please manually install torch 1.9 if Colab changes its default version
# + id="ZyAvNCJMmvFF"
# Some basic setup:
# Setup detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
# + [markdown] id="Vk4gID50K03a"
# # Run a pre-trained detectron2 model
# + [markdown] id="JgKyUL4pngvE"
# We first download an image from the COCO dataset:
# + id="dq9GY37ml1kr"
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O input.jpg
im = cv2.imread("./input.jpg")
cv2_imshow(im)
# + [markdown] id="uM1thbN-ntjI"
# Then, we create a detectron2 config and a detectron2 `DefaultPredictor` to run inference on this image.
# + id="HUjkwRsOn1O0"
cfg = get_cfg()
# + id="OmZ9iNGKDlRR"
if not use_cuda:
    cfg.MODEL.DEVICE = 'cpu'
print(cfg.MODEL.DEVICE)
# + id="3AzjcQmzDLnq"
# add project-specific config (e.g., TensorMask) here if you're not running a model in detectron2's core library
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
# Find a model from detectron2's model zoo. You can use the https://dl.fbaipublicfiles... url as well
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
# + id="ZyOO8DbVDId9"
predictor = DefaultPredictor(cfg)
outputs = predictor(im)
# + id="7d3KxiHO_0gb"
# look at the outputs. See https://detectron2.readthedocs.io/tutorials/models.html#model-output-format for specification
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
# + id="8IRGo8d0qkgR"
# We can use `Visualizer` to draw the predictions on the image.
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
# + [markdown] id="b2bjrfb2LDeo"
# # Train on a custom dataset
# + [markdown] id="tjbUIhSxUdm_"
# In this section, we show how to train an existing detectron2 model on a custom dataset in a new format.
#
# We use [the balloon segmentation dataset](https://github.com/matterport/Mask_RCNN/tree/master/samples/balloon)
# which only has one class: balloon.
# We'll train a balloon segmentation model from an existing model pre-trained on COCO dataset, available in detectron2's model zoo.
#
# Note that the COCO dataset does not have the "balloon" category. We'll be able to recognize this new class in a few minutes.
#
# ## Prepare the dataset
# + id="4Qg7zSVOulkb"
# download, decompress the data
# !wget https://github.com/matterport/Mask_RCNN/releases/download/v2.1/balloon_dataset.zip
# !unzip balloon_dataset.zip > /dev/null
# + id="JOjjmj3OdWKZ"
# !ls -l balloon
# + [markdown] id="tVJoOm6LVJwW"
# Register the balloon dataset to detectron2, following the [detectron2 custom dataset tutorial](https://detectron2.readthedocs.io/tutorials/datasets.html).
# Here, the dataset is in its custom format, therefore we write a function to parse it and prepare it into detectron2's standard format. Users should write such a function when using a dataset in a custom format. See the tutorial for more details.
#
# + id="PIbAM2pv-urF"
# if your dataset is in COCO format, this cell can be replaced by the following three lines:
# from detectron2.data.datasets import register_coco_instances
# register_coco_instances("my_dataset_train", {}, "json_annotation_train.json", "path/to/image/dir")
# register_coco_instances("my_dataset_val", {}, "json_annotation_val.json", "path/to/image/dir")
from detectron2.structures import BoxMode
def get_balloon_dicts(img_dir):
    json_file = os.path.join(img_dir, "via_region_data.json")
    with open(json_file) as f:
        imgs_anns = json.load(f)
    dataset_dicts = []
    for idx, v in enumerate(imgs_anns.values()):
        record = {}
        filename = os.path.join(img_dir, v["filename"])
        height, width = cv2.imread(filename).shape[:2]
        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width
        annos = v["regions"]
        objs = []
        for _, anno in annos.items():
            assert not anno["region_attributes"]
            anno = anno["shape_attributes"]
            px = anno["all_points_x"]
            py = anno["all_points_y"]
            poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
            poly = [p for x in poly for p in x]
            obj = {
                "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": 0,
            }
            objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts
for d in ["train", "val"]:
    DatasetCatalog.register("balloon_" + d, lambda d=d: get_balloon_dicts("balloon/" + d))
    MetadataCatalog.get("balloon_" + d).set(thing_classes=["balloon"])
balloon_metadata = MetadataCatalog.get("balloon_train")
# + [markdown] id="6ljbWTX0Wi8E"
# To verify the data loading is correct, let's visualize the annotations of randomly selected samples in the training set:
#
# **Read a subset of the training data to verify that the annotations (labels) are set up correctly**
# + id="UkNbUzUOLYf0"
dataset_dicts = get_balloon_dicts("balloon/train")
# randomly pick just 3 samples to check
for d in random.sample(dataset_dicts, 3):
    print('\n', d['file_name'], d['image_id'], d['width'], d['height'], '-----------------------')
    img = cv2.imread(d["file_name"])
    visualizer = Visualizer(img[:, :, ::-1], metadata=balloon_metadata, scale=0.5)
    out = visualizer.draw_dataset_dict(d)
    cv2_imshow(out.get_image()[:, :, ::-1])
# + [markdown] id="wlqXIXXhW8dA"
# ## Train!
#
# Now, let's fine-tune a COCO-pretrained R50-FPN Mask R-CNN model on the balloon dataset. It takes ~2 minutes to train 300 iterations on a P100 GPU.
#
# + [markdown] id="aGqLkZDrevIF"
# ### Takes roughly 3 hours in Colab CPU mode
# + id="7unkuuiqLdqd"
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
if not use_cuda:
    cfg.MODEL.DEVICE = 'cpu'
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("balloon_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (ballon). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config sets the number of classes, but a few popular unofficial tutorials incorrectly use num_classes+1 here.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
# + id="hBXeH8UXFcqU"
# Look at training curves in tensorboard:
# %load_ext tensorboard
# %tensorboard --logdir output
# + [markdown] id="0e4vdDIOXyxF"
# ## Inference & evaluation using the trained model
# Now, let's run inference with the trained model on the balloon validation dataset. First, let's create a predictor using the model we just trained:
#
#
# + id="Ya5nEuMELeq8"
from detectron2.utils.visualizer import ColorMode
# + id="xJD75qmURDFj"
# Inference should use the config with parameters that are used in training
# cfg now already contains everything we've set previously. We changed it a little bit for inference:
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # path to the model we just trained
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set a custom testing threshold
predictor = DefaultPredictor(cfg)
#Then, we randomly select several samples to visualize the prediction results.
dataset_dicts = get_balloon_dicts("balloon/val")
for d in random.sample(dataset_dicts, 3):
    print('\n', d['file_name'], d['image_id'], d['width'], d['height'], '-----------------------')
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)  # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
    v = Visualizer(im[:, :, ::-1],
                   metadata=balloon_metadata,
                   scale=0.5,
                   instance_mode=ColorMode.IMAGE_BW  # remove the colors of unsegmented pixels. This option is only available for segmentation models
    )
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(out.get_image()[:, :, ::-1])
# + [markdown] id="kblA1IyFvWbT"
# # We can also evaluate its performance using the AP metric implemented in the COCO API.
# This gives an AP of ~70. Not bad!
# + id="h9tECBQCvMv3"
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
# + id="wIRi2robRPAx"
evaluator = COCOEvaluator("balloon_val", ("bbox", "segm"), False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "balloon_val")
print(inference_on_dataset(trainer.model, val_loader, evaluator))
# + [markdown] id="-sB8ppKgRqkc"
# # Testing other pretrained models
# + [markdown] id="DFWSDxLrOkem"
# ## Pose Key point detection
# + id="RYlNq3aUPgqM"
# Inference with a keypoint detection model
cfg = get_cfg() # get a fresh new config
if not use_cuda:
    cfg.MODEL.DEVICE = 'cpu'
cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set threshold for this model
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
dataset_dicts = get_balloon_dicts("balloon/val")
for d in random.sample(dataset_dicts, 3):
    print('\n', d['file_name'], d['image_id'], d['width'], d['height'], '-----------------------')
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)  # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
    v = Visualizer(im[:, :, ::-1],
                   metadata=balloon_metadata,
                   scale=0.5,
                   instance_mode=ColorMode.IMAGE_BW  # remove the colors of unsegmented pixels. This option is only available for segmentation models
    )
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(out.get_image()[:, :, ::-1])
# + id="BqlBonvrTxpt"
balloon_metadata
# + [markdown] id="INVI1pROT70H"
# ## The balloon dataset's metadata has no keypoint connection rules, so the predicted keypoints are displayed as-is
# + id="z9YnPhiLSPqY"
# Inference with a keypoint detection model
cfg = get_cfg() # get a fresh new config
if not use_cuda:
    cfg.MODEL.DEVICE = 'cpu'
cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set threshold for this model
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
dataset_dicts = get_balloon_dicts("balloon/val")
for d in random.sample(dataset_dicts, 3):
    print('\n', d['file_name'], d['image_id'], d['width'], d['height'], '-----------------------')
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)  # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
    v = Visualizer(im[:,:,::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(out.get_image()[:, :, ::-1])
# v = Visualizer(im[:, :, ::-1],
# metadata=balloon_metadata,
# scale=0.5,
# instance_mode=ColorMode.IMAGE_BW # remove the colors of unsegmented pixels. This option is only available for segmentation models
# )
# out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
# cv2_imshow(out.get_image()[:, :, ::-1])
# + id="X1cM5zuWTSGt"
cfg.DATASETS.TRAIN[0]
# + id="L2saZGngTPP5"
MetadataCatalog.get(cfg.DATASETS.TRAIN[0])
# + id="OxlcnnyjTbdY"
MetadataCatalog.get(cfg.DATASETS.TRAIN[0]).keypoint_connection_rules
# + [markdown] id="8IHPccwnUOSc"
# ## The COCO dataset configuration defines keypoint connection rules
# + [markdown] id="CyazSYeGOspe"
# # Panoptic segmentation: the background is segmented as well
# + id="roTj1N9F5uJ5"
# Inference with a panoptic segmentation model
cfg = get_cfg()
if not use_cuda:
    cfg.MODEL.DEVICE = 'cpu'
cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
predictor = DefaultPredictor(cfg)
dataset_dicts = get_balloon_dicts("balloon/val")
for d in random.sample(dataset_dicts, 3):
    print('\n', d['file_name'], d['image_id'], d['width'], d['height'], '-----------------------')
    im = cv2.imread(d["file_name"])
    panoptic_seg, segments_info = predictor(im)["panoptic_seg"]
    v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
    out = v.draw_panoptic_seg_predictions(panoptic_seg.to("cpu"), segments_info)
    cv2_imshow(out.get_image()[:, :, ::-1])
# + id="d58fxrXgE1Xk"
| cnn_tl_detectron2_assist.ipynb |
/ -*- coding: utf-8 -*-
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ ---
/ + cell_id="00000-94476547-9d9d-42f8-840b-dd8c06269325" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=14988 execution_start=1643550891836 source_hash="f432d584" tags=[]
!pip install geopandas
!pip install pygeos
!pip install gpdvega
# Dependencies
import geopandas as gpd
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import altair as alt
import pygeos
import gpdvega
/ + cell_id="00001-3d7172ca-9cef-48c4-83af-014b22a9ea5e" deepnote_cell_type="code" deepnote_output_heights=[51.1875] deepnote_to_be_reexecuted=false execution_millis=1678 execution_start=1643550906836 source_hash="3e013236" tags=[]
counties = gpd.read_file('/work/ca-county-boundaries/CA_Counties/CA_Counties_TIGER2016.shp')
counties['county'] = pd.to_numeric(counties['COUNTYFP'])
ca_full = pd.read_csv('/work/cleaned-csvs/ca_counties_full_dataset.csv')
us_counties = gpd.read_file('/work/cb_2018_us_county_20m/cb_2018_us_county_20m.shp')
us_counties['county'] = pd.to_numeric(us_counties['COUNTYFP'])
us_counties['state'] = pd.to_numeric(us_counties['STATEFP'])
us_counties = us_counties[us_counties['state']<65]
us_df = pd.read_csv('/work/cleaned-csvs/national_counties_full_dataset.csv')
us_df = us_df[us_df['state']<19]
ca_19 = ca_full[ca_full['year']==2019]
ca_full['year'] = pd.to_datetime(ca_full.year, format='%Y')
ca_full['year'].unique()
ca_full.head()
/ + [markdown] cell_id="49846857-08c8-4213-a6d3-457bc70f83cf" deepnote_cell_type="markdown" tags=[]
/ The following graph shows the change in inflow and outflow from 2014 through 2019 in all 58 CA counties. For any given county, we see that inflow and outflow are similar in size and highly correlated. We also see that inflow and outflow remain fairly consistent throughout the period of this study. There are two interesting systematic differences. In 2015, all counties with inflow over 20,000 show a noticeable decrease in both inflow and outflow. In 2017, all counties with inflow over 20,000 show an increase in both inflow and outflow. This systematic difference has been documented by researchers at the University of Minnesota, who caution against using this data after the IRS took over managing it from the US Census, noting, "Specifically, after 2012, out- and inmigration fall precipitously through 2014, increase dramatically through 2016, and then
/ sharply increase or decrease thereafter. The correspondence between the levels and changes
/ of out- and in-migration after (versus before) 2011 is also noteworthy." (page 4)
/ + cell_id="92c4b21a-5568-4a64-af9d-a9ec235737c0" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=356 execution_start=1643550908514 source_hash="44f4e2ff" tags=[]
outflow = alt.Chart(ca_full).mark_line().encode(
x='year:T',
y='individual_outflow',
color = 'county_name',
tooltip='county_name'
)
inflow = alt.Chart(ca_full).mark_line().encode(
x='year:T',
y='individual_inflow',
color = 'county_name',
tooltip='county_name'
)
(outflow | inflow).properties(title="County inflow and outflow over time (each line represents a county)")
/ + cell_id="f12e1b8b-ca88-4f30-85b0-6e974313d905" deepnote_cell_type="code" deepnote_output_heights=[79] deepnote_to_be_reexecuted=false execution_millis=237904065 execution_start=1643550908850 source_hash="b623e53d" tags=[]
/ + cell_id="334ad1e2-7477-4e08-bd01-330a526591f9" deepnote_cell_type="code" deepnote_output_heights=[1] deepnote_to_be_reexecuted=false execution_millis=356 execution_start=1643550908851 source_hash="3131ac8" tags=[]
outflow = alt.Chart(ca_full).mark_point().encode(
x='individual_outflow',
y='county_name',
color = 'year:N',
tooltip='county_name'
)
inflow = alt.Chart(ca_full).mark_point().encode(
x='individual_inflow',
y='county_name',
color = 'year:N',
tooltip='county_name'
)
inflow | outflow
#To do: hide axes on outflow
/ + cell_id="18d9bf2a-b5a7-4ee3-ba2b-7665bd06baba" deepnote_cell_type="code" deepnote_output_heights=[1] deepnote_to_be_reexecuted=false execution_millis=2615 execution_start=1643550909188 source_hash="62604240" tags=[]
ind_in = alt.Chart(ca_19).mark_point().encode(
x='individual_inflow',
y='county_name',
color = 'year:N',
tooltip='county_name'
)
ind_out = alt.Chart(ca_19).mark_point().encode(
x='individual_outflow',
y='county_name',
color = 'year:N',
tooltip='county_name'
)
ind_in + ind_out
/ + cell_id="5e3a5b75-ac46-47cd-9afd-692fdc29d5c8" deepnote_cell_type="code" deepnote_output_heights=[1] deepnote_to_be_reexecuted=false execution_millis=2360 execution_start=1643550909444 source_hash="31c1dc2a" tags=[]
chart = alt.Chart(ca_full).mark_circle().encode(
x='individual_inflow',
y='individual_outflow',
color=alt.Color('county_name'),
tooltip=['county_name','year']
).properties(
title={
"text": ['CA County Inflow vs. Outflow 2014 - 2019'],
"subtitle": ["Each color represents a county"],
}
)
text1 = alt.Chart({'values':[{'x': 125000, 'y': 200000}]}).mark_text(
text='LA County in 2015 ➟',
).encode(
x='x:Q', y='y:Q'
)
text2 = alt.Chart({'values':[{'x': 195000, 'y': 340000}]}).mark_text(
text='LA County in 2017 ➟',# angle=327
).encode(
x='x:Q', y='y:Q'
)
text3 = alt.Chart({'values':[{'x': 110000, 'y': 260000}]}).mark_text(
text='LA County in 2014, 2016, 2018 + 2019 ➟'
).encode(
x='x:Q', y='y:Q'
)
chart + text1 + text2 + text3
/ + cell_id="00002-38450c67-9f6c-4ad4-a073-c810e27f715e" deepnote_cell_type="code" deepnote_output_heights=[578.125] deepnote_to_be_reexecuted=false execution_millis=1843 execution_start=1643550909961 source_hash="329536f8" tags=[]
ca_all = pd.merge(counties,ca_19,how='left',on='county')
# Create column with possible different choices for inflow/outflow target
ca_all['ratio_inflow_outflow'] = ca_all['individual_inflow'] / ca_all['individual_outflow']
ca_all['dif_inflow_outflow'] = ca_all['individual_inflow'] - ca_all['individual_outflow']
ca_all['per_change_pop'] = (ca_all['individual_inflow'] - ca_all['individual_outflow']) / ca_all['total_population']
#create rank for each of these. rank = 1 means high ranking for people coming into the county, rank = 58 means highest ranking for people leaving the county
ca_all['rank_ratio_inflow_outflow'] = ca_all['ratio_inflow_outflow'].rank(ascending=False)
ca_all['rank_dif_inflow_outflow'] = ca_all['dif_inflow_outflow'].rank(ascending=False)
ca_all['rank_per_change_pop'] = ca_all['per_change_pop'].rank(ascending=False)
ca_all[['NAME','per_change_pop','rank_per_change_pop','dif_inflow_outflow','rank_dif_inflow_outflow','rank_ratio_inflow_outflow','ratio_inflow_outflow']].sort_values(by='rank_per_change_pop').head()
/ + cell_id="00003-abcf82ac-1c53-4e27-a757-3d3df3101107" deepnote_cell_type="code" deepnote_output_heights=[1, 511] deepnote_to_be_reexecuted=false execution_millis=1881 execution_start=1643550909965 source_hash="7ee5ac6c" tags=[]
col = [ 'perc_poverty','republican_pct', 'perc_unemployed','per_capita_farm_proprieter_jobs','latitude', 'perc_white', 'perc_65_over',
'housing_per_capita','perc_vacant','per_capita_num_violent_crimes',
'perc_american_indian', 'perc_enrolled_undergrad',
'perc_owner', 'avg_temp','perc_renter','perc_hispanic',
'area_land', 'area_water',
'longitude', 'perc_black','democrat_pct','median_income',
'median_rent', 'median_home_value', 'educational_attainment','perc_asian',
'av_commute_time','total_population',
'individual_inflow', 'individual_outflow','per_change_pop','ratio_inflow_outflow','dif_inflow_outflow',
]
plt.figure(figsize = (10,10))
sns.heatmap(ca_all[col].corr().sort_values(
by='individual_inflow',axis=1).sort_values(
by='individual_inflow',axis=0),xticklabels=True,
yticklabels=True, square = True,linewidths=.25).set(title='Correlation between CA Variables')
/ + cell_id="6279f949-e0cf-47de-af8c-1e7cef991894" deepnote_cell_type="code" deepnote_output_heights=[1, 611] deepnote_table_loading=false deepnote_table_state={"filters": [], "pageIndex": 0, "pageSize": 10, "sortBy": []} deepnote_to_be_reexecuted=false execution_millis=6141 execution_start=1643550911831 source_hash="b1e86600" tags=[]
col = ['individual_outflow', 'individual_inflow','per_change_pop','ratio_inflow_outflow','dif_inflow_outflow']
cor_df = ca_all.corr()#.sort_values(by='individual_inflow',axis=1,ascending=False)
plt.figure(figsize = (5,15))
sns.heatmap(cor_df[col].dropna(axis=0).sort_values(by='individual_inflow'),linewidths=.15,annot=True)
/ + [markdown] cell_id="b9bb39e0-eca4-4bb7-9ee5-7cfa3ed2f1b1" deepnote_cell_type="text-cell-h3" is_collapsed=false tags=[]
/ ### Which counties had the biggest increase or decrease in population?
/ + [markdown] cell_id="0e9ec8cb-7d51-4552-8440-ade2d45cb29b" deepnote_cell_type="text-cell-p" is_collapsed=false tags=[]
/ There are many different ways to compare counties and their relative inflow and outflow. We could take the ratio of inflow to outflow, the net inflow (inflow minus outflow), or the percent change in population (inflow - outflow)/population. Each method will yield a different ranking for cities with the highest inflow. This is an investigation of the variation and similarity between these methods. If we want larger counties to hold a larger weight in our model than smaller counties, using raw numbers is most logical (inflow, outflow, or inflow - outflow). If instead, we want all counties to hold equal weight in the model, normalized versions of these metrics are more logical (percent change in population or ratio of inflow to outflow). Here is a look at the similarities and differences between the methods.
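/ As a toy, pure-Python illustration (hypothetical numbers, not drawn from our data), the raw and normalized metrics can rank the same pair of counties in opposite orders:

```python
# Hypothetical counties: one large, one small (illustrative numbers only).
counties = {
    "Big":   {"inflow": 100000, "outflow": 90000, "population": 5000000},
    "Small": {"inflow": 2000,   "outflow": 1000,  "population": 20000},
}

for c in counties.values():
    c["net"] = c["inflow"] - c["outflow"]          # raw net inflow
    c["ratio"] = c["inflow"] / c["outflow"]        # normalized: ratio of inflow to outflow
    c["pct_change"] = c["net"] / c["population"]   # normalized: percent change in population

by_net = sorted(counties, key=lambda k: counties[k]["net"], reverse=True)
by_pct = sorted(counties, key=lambda k: counties[k]["pct_change"], reverse=True)
print(by_net)  # ['Big', 'Small'] -- raw numbers favor the large county
print(by_pct)  # ['Small', 'Big'] -- normalization favors the small county
```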
/ + cell_id="00002-e20dab0f-0544-4570-ba39-7fb2dd191130" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=3877 execution_start=1643550917925 source_hash="70c072b" tags=[]
inflow = alt.Chart(ca_all).mark_geoshape().encode(
tooltip='NAME',
color='individual_inflow'
).properties(title='2019 Individual Inflow (number of people who move to a county)')
outflow = alt.Chart(ca_all).mark_geoshape().encode(
tooltip='NAME',
color='individual_outflow'
).properties(title='2019 Individual Outflow (number of people who leave a county)')
population = alt.Chart(ca_all).mark_geoshape().encode(
tooltip='NAME',
color='total_population'
).properties(title='2019 Total Population')
difference = alt.Chart(ca_all).mark_geoshape().encode(
tooltip=['NAME','dif_inflow_outflow'],
color = alt.Color('dif_inflow_outflow',scale=alt.Scale(scheme='blueorange', domain=[-9000, 9000]))
).properties(title='2019 Difference between Inflow and Outflow')
(inflow | outflow)
/ + cell_id="5305f2f7-88ca-4ede-bc4d-29cde4e8b569" deepnote_cell_type="code" deepnote_output_heights=[611, 329.96875, 21.1875] deepnote_to_be_reexecuted=false execution_millis=1487 execution_start=1643550920316 source_hash="52bf7a17" tags=[]
butte = ca_all[ca_all['county_name']=='butte']
# alt.Chart(butte).mark_geoshape(stroke='grey',fill=None).encode(
# text='county_name'
# )
butte = alt.Chart(butte).mark_text().encode(
longitude='longitude:Q',
latitude='latitude:Q',
text=alt.Text('county_name:N'))
/ + cell_id="b146d287-a4cf-4f26-b7cc-d856426980fe" deepnote_cell_type="code" deepnote_output_heights=[null, 341] deepnote_to_be_reexecuted=false execution_millis=1988 execution_start=1643550920317 source_hash="fc77670d" tags=[]
difference + butte
/ + cell_id="4fa660c3-69ac-48ef-a5b7-b12e5c1c42eb" deepnote_cell_type="code" deepnote_output_heights=[null, 40.390625] deepnote_to_be_reexecuted=false execution_millis=2101 execution_start=1643550922120 source_hash="d0158901" tags=[]
ratio_inflow_outflow = alt.Chart(ca_all).mark_geoshape().encode(
tooltip=['NAME','ratio_inflow_outflow'],
color = alt.Color('ratio_inflow_outflow',scale=alt.Scale(scheme='blueorange', domain=[.5, 1.5]))
).properties(title='2019 Ratio of Inflow to Outflow')
per_change_pop = alt.Chart(ca_all).mark_geoshape().encode(
tooltip=['NAME','per_change_pop'],
color = alt.Color('per_change_pop',scale=alt.Scale(scheme='blueorange', domain=[-0.02, 0.02]),legend=alt.Legend(format=".0%"))
#color='per_change_pop'
).properties(title='2019 Percent Change in Population from Migration')
per_change_pop
/ + cell_id="e788057a-1733-45f7-9eba-f4f5b3fa09b7" deepnote_cell_type="code" deepnote_output_heights=[null, 2] deepnote_to_be_reexecuted=false execution_millis=3578 execution_start=1643550923932 source_hash="dc7cd52d" tags=[]
ratio_inflow_outflow
/ + [markdown] cell_id="80bdfb5b-617e-4b27-87ef-9b2940501cec" deepnote_cell_type="markdown" tags=[]
/ Here we notice that Butte county stands out with the lowest ratio of inflow to outflow (0.44). This means that 44 people move into the county for every 100 who leave. Between 2018 and 2019, approximately 3.7% of the population left the county, making it the county with the sharpest decrease in population, far outpacing the next closest county, San Francisco, which had a 1.5% decrease. This is largely due to the 2018 wildfire that hit Paradise, a town within the county. It may also account for the large increase in population in some surrounding counties, including Yuba (0.8% increase) and Placer (1% increase).
/ + cell_id="bfb74a43-57c4-48b5-b880-c75d22c6c635" deepnote_cell_type="code" deepnote_output_heights=[117.21875] deepnote_to_be_reexecuted=false execution_millis=237905171 execution_start=1643550925969 source_hash="47bc38e2" tags=[]
most_leaving = ca_all.sort_values(by='ratio_inflow_outflow', axis=0, ascending=True)['NAME'][:5]
most_leaving_ratios = ca_all.sort_values(by='ratio_inflow_outflow', axis=0, ascending=True)['ratio_inflow_outflow'][:5]
print("These are the 5 counties with the lowest ratio of inflow to outflow (most leaving relative to arriving): \n{}".format(most_leaving.values))
print("These are those counties' corresponding ratios of inflow to outflow: \n{}".format(most_leaving_ratios.values))
/ + cell_id="00004-24e9673f-f19a-4c03-b9c2-519d66c3c3eb" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=2230 execution_start=1643550925970 source_hash="eeb0ef9d" tags=[]
ranking_ratio_inflow_outflow = alt.Chart(ca_all).mark_geoshape().encode(
tooltip=['NAME','rank_ratio_inflow_outflow','ratio_inflow_outflow'],
color='rank_ratio_inflow_outflow'
).properties(title='2019 Ranking of Ratio of Inflow to Outflow')
ranking_dif_inflow_outflow = alt.Chart(ca_all).mark_geoshape().encode(
tooltip=['NAME','rank_dif_inflow_outflow','dif_inflow_outflow'],
color='rank_dif_inflow_outflow'
).properties(title='2019 Ranking of Difference between Inflow and Outflow')
ranking_per_change_pop = alt.Chart(ca_all).mark_geoshape().encode(
tooltip=['NAME','rank_per_change_pop','per_change_pop'],
color='rank_per_change_pop'
).properties(title='2019 Ranking of Percent Change in Population')
(ranking_ratio_inflow_outflow | ranking_dif_inflow_outflow) & (ranking_per_change_pop)
/ + cell_id="7c65e8f7-9885-4f11-a353-bfd09a6135f8" deepnote_cell_type="code" deepnote_output_heights=[1, 371] deepnote_to_be_reexecuted=false execution_millis=440 execution_start=1643550927805 source_hash="231ba366" tags=[]
# Compare the correlation between the rankings produced by the various metrics.
# The ratio of inflow to outflow ranks counties much like percent change in population;
# both are very different from the raw difference (inflow - outflow).
df = ca_all[['rank_ratio_inflow_outflow','rank_per_change_pop','rank_dif_inflow_outflow']].corr(method='kendall')
sns.heatmap(df,label=True, vmin=0, vmax=1,annot=True)
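To see what Kendall's tau is measuring in the heatmap above — agreement between orderings rather than linear correlation — here is a plain-Python sketch on illustrative rankings (the rank values are invented for the example):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Plain Kendall tau: (concordant - discordant) / total pairs (no ties)."""
    pairs = list(combinations(range(len(a)), 2))
    c = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) > 0)
    d = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) < 0)
    return (c - d) / len(pairs)

ratio_rank = [1, 2, 3, 4, 5]   # hypothetical ranking by inflow/outflow ratio
pct_rank   = [1, 2, 3, 5, 4]   # nearly the same ordering
dif_rank   = [5, 4, 3, 2, 1]   # the reverse ordering

print(kendall_tau(ratio_rank, pct_rank))  # 0.8: orderings nearly agree
print(kendall_tau(ratio_rank, dif_rank))  # -1.0: orderings reversed
```

`DataFrame.corr(method='kendall')` computes the same pairwise statistic (with tie handling) for every pair of columns.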
/ + cell_id="f1749134-47e9-4046-9627-1ac148bebf28" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=2214 execution_start=1643550927874 source_hash="284d4b6f" tags=[]
per_change_v_dif = alt.Chart(ca_all).mark_point().encode(
x = 'rank_dif_inflow_outflow',
y = 'rank_per_change_pop',
tooltip = ['NAME','rank_per_change_pop','rank_dif_inflow_outflow']
)
per_change_v_ratio = alt.Chart(ca_all).mark_point().encode(
x = 'rank_ratio_inflow_outflow',
y = 'rank_per_change_pop',
tooltip = ['NAME','rank_ratio_inflow_outflow','rank_per_change_pop']
)
ratio_v_dif = alt.Chart(ca_all).mark_point().encode(
x = 'rank_ratio_inflow_outflow',
y = 'rank_dif_inflow_outflow',
tooltip = ['NAME','rank_ratio_inflow_outflow','rank_dif_inflow_outflow']
).properties(title='Rank of the ratio of inflow to outflow vs. rank of the difference between inflow and outflow')
(ratio_v_dif | per_change_v_ratio) | (per_change_v_dif)
/ + cell_id="74347f4c-b933-4c43-886c-a261793b134a" deepnote_cell_type="code" deepnote_output_heights=[1, 344] deepnote_to_be_reexecuted=false execution_millis=184 execution_start=1643550929916 source_hash="1aedb32d" tags=[]
# Compare the correlation between the raw (unranked) metrics.
# Again, the ratio of inflow to outflow tracks percent change in population;
# both are very different from the raw difference (inflow - outflow).
sns.heatmap(ca_all[['ratio_inflow_outflow','per_change_pop','dif_inflow_outflow']].corr(),label=True, vmin=0, vmax=1,annot=True)
/ + cell_id="e5bba4b3-8614-46ab-b38b-ec796d1c8681" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=1791 execution_start=1643550930096 source_hash="9b7ac491" tags=[]
per_change_v_dif = alt.Chart(ca_all).mark_point().encode(
x = 'per_change_pop',
y = 'dif_inflow_outflow',
tooltip = ['NAME','per_change_pop','dif_inflow_outflow']
)
per_change_v_ratio = alt.Chart(ca_all).mark_point().encode(
x = 'ratio_inflow_outflow',
y = 'per_change_pop',
tooltip = ['NAME','ratio_inflow_outflow','per_change_pop']
)
ratio_v_dif = alt.Chart(ca_all).mark_point().encode(
x = 'ratio_inflow_outflow',
y = 'dif_inflow_outflow',
tooltip = ['NAME','ratio_inflow_outflow','dif_inflow_outflow']
).properties(title='Ratio of inflow to outflow vs. Difference between inflow and outflow')
ratio_v_dif | per_change_v_ratio | per_change_v_dif
/ + [markdown] cell_id="00005-166e744e-122f-4eac-b093-e6a742fb9633" deepnote_cell_type="text-cell-h1" is_collapsed=false tags=[]
/ # All of US
/ + cell_id="00006-96949154-afa5-42f3-85ee-48f19cc4435f" deepnote_cell_type="code" deepnote_output_heights=[1, 369] deepnote_to_be_reexecuted=false execution_millis=1130 execution_start=1643550932183 source_hash="793b3039" tags=[]
col = ['total_population', 'median_income',
'median_rent', 'median_home_value', 'educational_attainment',
'av_commute_time', 'perc_poverty', 'perc_white', 'perc_black',
'perc_american_indian', 'perc_asian', 'perc_hispanic', 'perc_65_over',
'perc_enrolled_undergrad', 'perc_unemployed', 'housing_per_capita',
'perc_owner', 'perc_renter', 'perc_vacant',
'individual_inflow', 'individual_outflow', 'area_land', 'area_water',
'longitude', 'latitude']
sns.heatmap(us_df[col].corr(),xticklabels=True, yticklabels=True)
/ + cell_id="e855c226-fe78-4f6c-b270-3832446561b8" deepnote_cell_type="code" deepnote_output_heights=[1] deepnote_to_be_reexecuted=false execution_millis=15 execution_start=1643550933312 source_hash="3f35ee8a" tags=[]
df_19 = us_df[us_df['year']==2019]
df_19['individual_outflow'].std()
df_19['individual_outflow'].mean()
/ + cell_id="00007-62f9b3e2-d066-43c6-8664-cfafc3e413ec" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=303 execution_start=1643550933345 source_hash="8b4a2237" tags=[]
alt.Chart(df_19).mark_point().encode(
x ='individual_inflow',
y ='individual_outflow',
tooltip = 'name',
color='state:N'
).properties(title = '2019 inflow and outflow')
/ + cell_id="00008-b5fde48b-6183-47f2-9926-388ccc5144fc" deepnote_cell_type="code" deepnote_output_heights=[1] deepnote_to_be_reexecuted=false execution_millis=270 execution_start=1643550933649 source_hash="efcdad4f" tags=[]
alt.Chart(df_19).mark_point().encode(
x ='median_home_value',
y ='median_rent',
tooltip = 'name',
color='state:N'
).properties(title = '2019 rent and home value')
/ + cell_id="00008-eda20bfe-8d09-40c7-b72d-14012c3e00b3" deepnote_cell_type="code" deepnote_output_heights=[1] deepnote_to_be_reexecuted=false execution_millis=237904902 execution_start=1643550933893 source_hash="15be13c5" tags=[]
df_19.columns
/ + cell_id="00007-8fce08f9-3309-4bc7-9f77-9708da36615f" deepnote_cell_type="code" deepnote_output_heights=[1] deepnote_to_be_reexecuted=false execution_millis=237904896 execution_start=1643550933895 source_hash="a89a7188" tags=[]
us_df.columns
/ + cell_id="00011-b2c6b141-8791-4a66-9682-35a0e164df7d" deepnote_cell_type="code" deepnote_table_loading=false deepnote_table_state={"filters": [], "pageIndex": 8, "pageSize": 10, "sortBy": []} deepnote_to_be_reexecuted=false execution_millis=237904923 execution_start=1643550933923 source_hash="8e3e914d" tags=[]
us_all = pd.merge(us_counties,df_19,how='left',on=['state','county'])
/ + cell_id="00012-07dc45c9-2b78-4bb7-8fa2-50c17004b0d1" deepnote_cell_type="code" deepnote_output_heights=[611] deepnote_to_be_reexecuted=false execution_millis=16 execution_start=1643550933924 source_hash="a9dd212a" tags=[]
# Drop Hawaii (FIPS 15) and Alaska (FIPS 2) so the geoshape map shows the contiguous US
us_all = us_all[us_all['state'] != 15]
us_all = us_all[us_all['state'] != 2]
/ + cell_id="00012-09e3089a-60f1-4dc4-b00f-ec1896dd491b" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=6060 execution_start=1643550933940 source_hash="79d1fe8b" tags=[]
alt.Chart(us_all).mark_geoshape().encode(color='perc_black',tooltip='name').properties(title='Percent Black in 2019')
/ + cell_id="00014-071780b8-1d42-4249-8256-ef3904955d73" deepnote_cell_type="code" deepnote_output_heights=[null, 1] deepnote_to_be_reexecuted=false execution_millis=8008 execution_start=1643550939877 source_hash="44ad52f4" tags=[]
alt.Chart(us_all).mark_geoshape().encode(color='median_rent',tooltip=['name','median_rent'])
| source/exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={} colab_type="code" id="vtNtfcHHoHNP"
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# + [markdown] colab_type="text" id="jZwnHZ70oUIM"
# # CropNet: Cassava Disease Detection
# + [markdown] colab_type="text" id="6sg9wHP9oR3q"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/cropnet_cassava"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/cropnet_cassava.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/cropnet_cassava.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cropnet_cassava.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="grEgSWu2iTxm"
# This notebook shows how to use the CropNet [cassava disease classifier](https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2) model from **TensorFlow Hub**. The model classifies images of cassava leaves into one of 6 classes: *bacterial blight, brown streak disease, green mite, mosaic disease, healthy, or unknown*.
#
# This colab demonstrates how to:
# * Load the https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2 model from **TensorFlow Hub**
# * Load the [cassava](https://www.tensorflow.org/datasets/catalog/cassava) dataset from **TensorFlow Datasets (TFDS)**
# * Classify images of cassava leaves into 4 distinct cassava disease categories or as healthy or unknown.
# * Evaluate the *accuracy* of the classifier and look at how *robust* the model is when applied to out of domain images.
# + [markdown] colab_type="text" id="bKn4Fiq2OD7u"
# ## Imports and setup
# + cellView="both" colab={} colab_type="code" id="FIP4rkjp45MG"
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
# + cellView="form" colab={} colab_type="code" id="mIqmq_qmWw78"
#@title Helper function for displaying examples
def plot(examples, predictions=None):
# Get the images, labels, and optionally predictions
images = examples['image']
labels = examples['label']
batch_size = len(images)
if predictions is None:
predictions = batch_size * [None]
# Configure the layout of the grid
  x = int(np.ceil(np.sqrt(batch_size)))  # add_subplot needs integer grid dims
  y = int(np.ceil(batch_size / x))
fig = plt.figure(figsize=(x * 6, y * 7))
for i, (image, label, prediction) in enumerate(zip(images, labels, predictions)):
# Render the image
ax = fig.add_subplot(x, y, i+1)
ax.imshow(image, aspect='auto')
ax.grid(False)
ax.set_xticks([])
ax.set_yticks([])
# Display the label and optionally prediction
x_label = 'Label: ' + name_map[class_names[label]]
if prediction is not None:
x_label = 'Prediction: ' + name_map[class_names[prediction]] + '\n' + x_label
ax.xaxis.label.set_color('green' if label == prediction else 'red')
ax.set_xlabel(x_label)
plt.show()
# + [markdown] colab_type="text" id="kwrg9yIlaUSb"
# ## Dataset
#
# Let's load the *cassava* dataset from TFDS
# + colab={} colab_type="code" id="0rTcnxoSkp31"
dataset, info = tfds.load('cassava', with_info=True)
# + [markdown] colab_type="text" id="GpC71TFDhJFO"
# Let's take a look at the dataset info to learn more about it: the description, the citation, and how many examples are available
# + colab={} colab_type="code" id="btJBMovmbYtR"
info
# + [markdown] colab_type="text" id="QT3XWAtR6BRy"
# The *cassava* dataset has images of cassava leaves with 4 distinct diseases as well as healthy cassava leaves. The model can predict all of these classes, plus a sixth class for "unknown" when it is not confident in its prediction.
# + colab={} colab_type="code" id="9NT9q8yyXZfX"
# Extend the cassava dataset classes with 'unknown'
class_names = info.features['label'].names + ['unknown']
# Map the class names to human readable names
name_map = dict(
cmd='Mosaic Disease',
cbb='Bacterial Blight',
cgm='Green Mite',
cbsd='Brown Streak Disease',
healthy='Healthy',
unknown='Unknown')
print(len(class_names), 'classes:')
print(class_names)
print([name_map[name] for name in class_names])
# + [markdown] colab_type="text" id="I6y_MGxgiW09"
# Before we can feed the data to the model, we need to do a bit of preprocessing. The model expects 224 x 224 images with RGB channel values in [0, 1]. Let's normalize and resize the images.
# + colab={} colab_type="code" id="UxtxvqRjh7Nm"
def preprocess_fn(data):
image = data['image']
# Normalize [0, 255] to [0, 1]
image = tf.cast(image, tf.float32)
image = image / 255.
# Resize the images to 224 x 224
image = tf.image.resize(image, (224, 224))
data['image'] = image
return data
# + [markdown] colab_type="text" id="qz27YrZahdvn"
# Let's take a look at a few examples from the dataset
# + colab={} colab_type="code" id="j6LkAxv3f-aJ"
batch = dataset['validation'].map(preprocess_fn).batch(25).as_numpy_iterator()
examples = next(batch)
plot(examples)
# + [markdown] colab_type="text" id="eHlEAhL3hq2R"
# ## Model
#
# Let's load the classifier from TF-Hub, get some predictions, and see how the model does on a few examples
# + colab={} colab_type="code" id="b6eIWkTjIQhS"
classifier = hub.KerasLayer('https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2')
probabilities = classifier(examples['image'])
predictions = tf.argmax(probabilities, axis=-1)
# + colab={} colab_type="code" id="MTQA1YAltfRZ"
plot(examples, predictions)
# + [markdown] colab_type="text" id="MuFE8A5aZv9z"
# ## Evaluation & robustness
#
# Let's measure the *accuracy* of our classifier on a split of the dataset. We can also look at the *robustness* of the model by evaluating its performance on a non-cassava dataset. For images from other plant datasets, like iNaturalist or beans, the model should almost always return *unknown*.
# + cellView="form" colab={} colab_type="code" id="0ERcNxs0kHd3"
#@title Parameters {run: "auto"}
DATASET = 'cassava' #@param {type:"string"} ['cassava', 'beans', 'i_naturalist2017']
DATASET_SPLIT = 'test' #@param {type:"string"} ['train', 'test', 'validation']
BATCH_SIZE = 32 #@param {type:"integer"}
MAX_EXAMPLES = 1000 #@param {type:"integer"}
# + colab={} colab_type="code" id="Mt0-IVmZplbb"
def label_to_unknown_fn(data):
data['label'] = 5 # Override label to unknown.
return data
# + colab={} colab_type="code" id="cQYvY3IvY2Nx"
# Preprocess the examples and map the image label to unknown for non-cassava datasets.
ds = tfds.load(DATASET, split=DATASET_SPLIT).map(preprocess_fn).take(MAX_EXAMPLES)
dataset_description = DATASET
if DATASET != 'cassava':
ds = ds.map(label_to_unknown_fn)
dataset_description += ' (labels mapped to unknown)'
ds = ds.batch(BATCH_SIZE)
# Calculate the accuracy of the model
metric = tf.keras.metrics.Accuracy()
for examples in ds:
probabilities = classifier(examples['image'])
predictions = tf.math.argmax(probabilities, axis=-1)
labels = examples['label']
metric.update_state(labels, predictions)
print('Accuracy on %s: %.2f' % (dataset_description, metric.result().numpy()))
# + [markdown] colab_type="text" id="rvS18sBExpdL"
# ## Learn more
#
# * Learn more about the model on TensorFlow Hub: https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2
# * Learn how to build a custom image classifier running on a mobile phone with [ML Kit](https://developers.google.com/ml-kit/custom-models#tfhub) with the [TensorFlow Lite version of this model](https://tfhub.dev/google/lite-model/cropnet/classifier/cassava_disease_V1/1).
| examples/colab/cropnet_cassava.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from urllib.request import urlopen
from bs4 import BeautifulSoup
base_url = "https://emojidictionary.emojifoundation.com"
cat_urls = ["https://emojidictionary.emojifoundation.com/people"]
def search_category_page(cat_url: str):
    cat_html = urlopen(cat_url)
    cat_soup = BeautifulSoup(cat_html, "html.parser")
    cat_emojis = cat_soup.find_all("a", class_="media-left")
    return [base_url + link.get("href")[1:] for link in cat_emojis]
for cat_url in cat_urls:
    emoji_urls = search_category_page(cat_url)
    print("\n".join(emoji_urls))
| JupyterNotebooks/EmojiDictionaryScrape.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Note
# This file is a compilation of all possible POS tags
# +
# # Rename the files to gold and silver
# import argparse
# import os
# path = './data/train_20k/'
# count = 0
# for file in os.listdir(path):
# count= count + 1
# dst ="Silver_" + str(count) + ".csv"
# src =file
# os.rename(path+src,path+dst)
# -
ma_set = set()
word = None
with open('../data/multilingual_word_embeddings/pos_embedding_1_hot.475', 'r', encoding='utf-8') as f:
# skip first line
for i, line in enumerate(f):
if i == 0:
continue
word, vec = line.split('@', 1)
ma_set.add(word)
# ma_set
import re
def clean_ma(ma):
    ma = re.sub(r"([\(\[]).*?([\)\]])", r"\g<1>\g<2>", ma).replace('[] ','').strip(' []').replace(' ac','').replace(' ps','').replace('sgpl','sg').replace('sgdu','sg')
ma = ma.replace('i.','inst.').replace('.','').replace(' ','')
return ma
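To make the normalization concrete, here is `clean_ma` reproduced with a few illustrative morphological tags (the inputs are examples, not necessarily tags from the 475-tag set):

```python
import re

def clean_ma(ma):
    # Empty out bracketed annotations, drop voice markers, collapse number variants.
    ma = (re.sub(r"([\(\[]).*?([\)\]])", r"\g<1>\g<2>", ma)
          .replace('[] ', '').strip(' []')
          .replace(' ac', '').replace(' ps', '')
          .replace('sgpl', 'sg').replace('sgdu', 'sg'))
    # Normalize the instrumental abbreviation, then drop dots and spaces.
    ma = ma.replace('i.', 'inst.').replace('.', '').replace(' ', '')
    return ma

print(clean_ma('nom. sg. m'))       # nomsgm
print(clean_ma('i. sg. f'))         # instsgf
print(clean_ma('g. sg. [v na] m'))  # gsgm
```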
def IND(ma):
indeclinable = ['ind','prep','interj','prep','conj','part','abs','ca abs']
for i in indeclinable:
if i in ma:
ma = 'indecl'
return ma
s = set()
for ma in ma_set:
s.add(clean_ma(ma))
len(s)
import pandas as pd
df = pd.read_csv('../../../Documents/DCST/data/4.csv',sep=',')
# df.iloc[:,1]
# temp =[]
# for w in df.iloc[:,1]:
# temp.append(transliterate(w, sanscript.WX, sanscript.SLP1))
# temp
list(df.iloc[:,1])
def allow(words,temp):
ct = 0
for w,t in zip(words,temp):
if w==t:
ct+=1
if ct == len(words) -1:
return True
else:
return False
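A quick sanity check of `allow` (reproduced here with illustrative transliterations): it returns True only when all but exactly one position matches, tolerating a single transliteration mismatch. A perfect match returns False, because that case is handled separately by the `words == temp` branch in `get_gold_POS_Tag`.

```python
def allow(words, temp):
    # Count position-by-position exact matches.
    ct = 0
    for w, t in zip(words, temp):
        if w == t:
            ct += 1
    # True only when exactly one position differs.
    return ct == len(words) - 1

print(allow(['rAmaH', 'vanam', 'gacCawi'], ['rAmaH', 'vanam', 'gacCati']))  # True
print(allow(['rAmaH', 'vanam', 'gacCati'], ['rAmaH', 'vanam', 'gacCati']))  # False
```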
# +
import os
import pandas as pd
from indic_transliteration import sanscript
from indic_transliteration.sanscript import SchemeMap, SCHEMES, transliterate
data = pd.read_csv('../data/sanskrit_treebank/train/Gold_633.csv',sep=',')
def get_gold_POS_Tag(words):
kautilya_id ='NA'
path = '../../../Documents/DCST/data/4k_data_my_prepocessing/'
files = os.listdir(path)
gold_pos= ['NA']*len(words)
for file in files:
df = pd.read_csv(path+file,sep='\t',header = None)
temp =[]
if len(words) != len(df):
continue
for w in df.iloc[:,1]:
temp.append(transliterate(w, sanscript.WX, sanscript.SLP1))
words = [x.lower() for x in words]
temp = [x.lower() for x in temp]
if words == temp:
kautilya_id = file
for u,g in enumerate(df.iloc[:,2]):
if g == g:
gold_pos[u] = g
break
elif allow(words,temp):
kautilya_id = file
for u,g in enumerate(df.iloc[:,2]):
if g == g:
gold_pos[u] = g
else:
continue
return gold_pos,kautilya_id
get_gold_POS_Tag(list(df.iloc[:,1]))
# -
transliterate('nandigrAmam', sanscript.WX, sanscript.SLP1)
# +
import argparse
import os
path = '../data/sanskrit_treebank/'
languages_for_low_resource = ['san']
languages = sorted(list(set(languages_for_low_resource)))
#
splits = ['train','dev','test']
lng_to_files = dict((language, {}) for language in languages)
for language, d in lng_to_files.items():
for split in splits:
d[split] = []
lng_to_files[language] = d
sub_folders = os.listdir(path)
for sub_folder in sub_folders:
folder = os.path.join(path, sub_folder)
files = sorted(os.listdir(folder))
for file in files:
full_path = os.path.join(folder, file)
lng_to_files[language][sub_folder].append(full_path)
###################################
# # # # Add silver in train:
# counter = 0
# amount = 2000
# for f in os.listdir('../data/train_20k/'):
# full_path = os.path.join('../data/train_20k/', f)
# lng_to_files[language]['train'].append(full_path)
# counter += 1
# # if counter >= amount:
# # break
word_set = set()
for language, split_dict in lng_to_files.items():
posi =set()
for split, files in split_dict.items():
sentences = []
num_sentences = 0
for file in sorted(files):
# if split =='train':
# print(file)
flag = 0
data = pd.read_csv(file,sep=',')
gold_pos,kautilya_id = get_gold_POS_Tag(list(data.iloc[:,1]))
if kautilya_id == 'NA':
continue
num_sentences += 1
k=0
with open(file, 'r') as file:
for line in file:
new_line = []
line = line.strip()
# print(line)
if flag == 0:
flag = 1
continue
tokens = line.split(',')
id = str(int(tokens[0]) + 1)
word = tokens[1]
word_set.add(word)
############################
# To add rule features id of rule is given
my_data_id = file.name.replace('../data/sanskrit_treebank/','').replace('.csv','')\
.replace('../data/train_20k/','').replace('train/','').replace('dev/','').replace('test/','')
lemma = tokens[2]
pos = tokens[3]
# tag.add(pos)
if tokens[6]:
arc_tag = tokens[5]
head = str(int(tokens[6]) + 1)
else:
arc_tag = 'root'
head = '0'
new_line = [id, word ,pos,get_modified_coarse(pos),gold_pos[k],my_data_id,head, arc_tag,kautilya_id]
k+=1
posi.add(pos)
# print(new_line)
sentences.append(new_line)
sentences.append([])
print(num_sentences)
# ud_pos_ner_dp_
print('Language: %s Split: %s Num. Sentences: %s ' % (language, split, num_sentences))
if not os.path.exists('data'):
os.makedirs('data')
# ud_pos_ner_dp_
write_data_path = '../../../DCST_scratch/data/data_with_gold_pos_' + split + '_' + language
print('creating %s' % write_data_path)
with open(write_data_path, 'w') as f:
for line in sentences:
f.write('\t'.join(line) + '\n')
# -
def get_case_from_244(ma):
indeclinable = ['ind','prep','interj','prep','conj','part','indecl']
case_list = ['nom','voc','acc','inst','dat','abl','loc','i','g']
gender_list = ['n','f','m','*']
person_list = ['1','2','3']
no_list = ['du','sg','pl']
ma=ma.replace('sgpl','sg').replace('sgdu','sg')
# Remove active passive
case=''
if ma == 'comp':
case ='comp'
for tag in indeclinable:
if tag in ma:
case= "IND"
# Get case
if case =='':
for tag in case_list:
if tag in ma:
if tag == 'i':
if tag+'sg' in ma or tag+'du' in ma or tag+'pl' in ma:
case = 'i'
elif tag == 'g':
if tag+'sg' in ma or tag+'du' in ma or tag+'pl' in ma:
case = 'g'
else:
case = tag
if case == '':
if 'adv' in ma:
case = 'adv'
if case =='':
for tag in no_list:
if tag in ma:
case = 'FV'
if case=='':
for tag in person_list:
if tag in ma:
case = 'FV'
if case=='':
case='IV'
return case
s = []
for m in ma_set:
s.append([m,get_case_from_244(m)])
import json
indeclinable = ['ind','prep','interj','prep','conj','part','abs','ca abs']
case_list = ['nom','voc','acc','i','inst','dat','abl','g','loc']
gender_list = ['n','f','m','*']
person_list = ['1','2','3']
no_list = ['du','sg','pl']
pops = [' ac',' ps']
def get_modified_coarse(ma):
ma = ma.replace('sgpl','sg').replace('sgdu','sg')
with open('../../L2S/files/coarse_to_ma_dict.json', 'r') as fh:
coarse_dict = json.load(fh)
for key in coarse_dict.keys():
if ma in coarse_dict[key]:
return key
s=[]
for m in ma_set:
s.append([m,get_modified_coarse(m)])
sorted(s)
print(get_modified_coarse('acc. pl. m'))
# +
import re
def get_case(ma):
indeclinable = ['ind','prep','interj','prep','conj','part']
case_list = ['nom','voc','acc','i','inst','dat','abl','g','loc']
gender_list = ['n','f','m','*']
person_list = ['1','2','3']
no_list = ['du','sg','pl']
pops = [' ac',' ps']
ma=ma.replace('sgpl','sg').replace('sgdu','sg')
    temp = re.sub(r"([\(\[]).*?([\)\]])", r"\g<1>\g<2>", ma).replace('[] ','').strip(' []')
temp = temp.split('.')
if temp[-1] == '':
temp.pop(-1)
# Remove active passive
case=''
no=''
person=''
gender=''
tense=''
coarse=''
for a,b in enumerate(temp):
if b in pops:
temp.pop(a)
# Get gender
for a,b in enumerate(temp):
if b.strip() in gender_list:
gender = b.strip()
temp.pop(a)
# Get case
for a,b in enumerate(temp):
if b.strip() in case_list:
case = b.strip()
temp.pop(a)
if case!= '':
coarse ='Noun'
# Get person
for a,b in enumerate(temp):
if b.strip() in person_list:
person = b.strip()
temp.pop(a)
# Get no
for a,b in enumerate(temp):
if b.strip() in no_list:
no = b.strip()
temp.pop(a)
# Get Tense
for b in temp:
tense=tense+ ' '+b.strip()
tense=tense.strip()
# print(tense)
if tense == 'adv':
coarse = 'adv'
for ind in indeclinable:
if tense == ind:
coarse = 'Ind'
if tense == 'abs' or tense == 'ca abs':
coarse = 'IV'
if tense!='' and coarse=='':
if person !='' or no!='':
coarse= 'FV'
else:
coarse = 'IV'
if case == 'i':
return 'inst'
if case !='':
return case
else:
return coarse
# -
def variation_III(ma):
##############################################
# Description:
# (1) Active passive is removed
# (2) Tense [*] is removed
ma=ma.replace('sgpl','sg').replace('sgdu','sg')
    temp = re.sub(r"([\(\[]).*?([\)\]])", r"\g<1>\g<2>", ma).replace('[] ','').strip(' []')
temp = temp.split('.')
if temp[-1] == '':
temp.pop(-1)
# Remove active passive
case=''
no=''
person=''
gender=''
tense=''
for a,b in enumerate(temp):
if b in pops:
temp.pop(a)
# Get gender
for a,b in enumerate(temp):
if b.strip() in gender_list:
gender = b.strip()
temp.pop(a)
# Get case
for a,b in enumerate(temp):
if b.strip() in case_list:
case = b.strip()
temp.pop(a)
# if case == 'i':
# case = 'inst'
# Get person
for a,b in enumerate(temp):
if b.strip() in person_list:
person = b.strip()
temp.pop(a)
# Get no
for a,b in enumerate(temp):
if b.strip() in no_list:
no = b.strip()
temp.pop(a)
# Get Tense
for b in temp:
tense=tense+ ' '+b.strip()
tense=tense.strip()
var=''
if case:
t = 'case=%s '%case
var=var+t
if gender:
t = 'gender=%s '%gender
var=var+t
if tense:
t = 'tense=%s '%tense
var=var+t
if no:
t = 'no=%s '%no
var=var+t
if person:
t = 'person=%s '%person
var=var+t
return var.strip()
variation_III('pp. . sg. ')
import re
def variation_IV(ma):
ma=ma.replace('sgpl','sg').replace('sgdu','sg')
    temp = re.sub(r"([\(\[]).*?([\)\]])", r"\g<1>\g<2>", ma).replace('[] ','').strip(' []')
temp = temp.split('.')
if temp[-1] == '':
temp.pop(-1)
# Remove active passive
case=''
no=''
person=''
gender=''
tense=''
coarse=''
    # Popping while enumerating skips elements; filter instead
    temp = [t for t in temp if t not in pops]
    # Get gender
    for a, b in enumerate(temp):
        if b.strip() in gender_list:
            gender = b.strip()
            temp.pop(a)
            break  # stop after the pop: continuing would skip elements
    # Get case
    for a, b in enumerate(temp):
        if b.strip() in case_list:
            case = b.strip()
            temp.pop(a)
            break
    if case != '':
        coarse = 'Noun'
    # if case == 'i':
    #     case = 'inst'
    # Get person
    for a, b in enumerate(temp):
        if b.strip() in person_list:
            person = b.strip()
            temp.pop(a)
            break
    # Get no
    for a, b in enumerate(temp):
        if b.strip() in no_list:
            no = b.strip()
            temp.pop(a)
            break
# Get Tense
for b in temp:
tense=tense+ ' '+b.strip()
tense=tense.strip()
if tense == 'adv':
coarse = 'adv'
for ind in indeclinable:
if tense == ind:
coarse = 'Ind'
if tense!='' and coarse=='':
if person !='' and no!='':
coarse= 'FV'
else:
coarse = 'IV'
var=''
if case:
t = 'case=%s '%case
var=var+t
if gender:
t = 'gender=%s '%gender
var=var+t
if tense:
t = 'tense=%s '%tense
var=var+t
if no:
t = 'no=%s '%no
var=var+t
if person:
t = 'person=%s '%person
var=var+t
coarse = get_modified_coarse(ma)
if coarse:
t = 'coarse=%s '%coarse
var=var+t
    return var.strip()
l = []
# for ma in ma_set:
# l.append(ma+' = '+variation_IV(ma))
variation_IV('pp. . sg. ')
# onehot_encoded[0]
count =0
with open("pos_embedding_1_hot.475", "w") as f:
f.write(str(len(tag))+' '+str(475)+'\n')
for word in tag:
word=word+'@'
f.write(word)
for v in onehot_encoded[count]:
f.write(' '+str(v))
f.write('\n')
count = count + 1
print(count)
| notebooks/Prepare_gold_data_POS_tags.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('./archive/metadata_clean.csv')
df.head()
org_df = pd.read_csv('./archive/movies_metadata.csv')
df['overview'], df['id'] = org_df['overview'], org_df['id']
df
df['overview'] = df['overview'].fillna('')
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words='english')
tfidf_matrix = tfidf.fit_transform(df['overview'])
print(tfidf_matrix.shape)
type(tfidf_matrix)
# +
# import statsmodels
# -
from sklearn.metrics.pairwise import linear_kernel
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
indices = pd.Series(df.index, index=df['title']).drop_duplicates()
def content_recommender(title, cosine_sim, df=df, indices=indices):
idx = indices[title]
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key= lambda x: x[1], reverse=True)
sim_scores = sim_scores[1: 11]
movie_indices = [i[0] for i in sim_scores]
return df['title'].iloc[movie_indices]
content_recommender('The Lion King', cosine_sim)
df_credit = pd.read_csv('./archive/credits.csv')
df_keyword = pd.read_csv('./archive/keywords.csv')
df_credit.head()
df_keyword.head()
# +
def clean_ids(x):
try:
return int(x)
except:
return np.nan
df['id'] = df['id'].apply(clean_ids)
# -
df.dropna(inplace=True)
df['id'] = df['id'].astype('int')
df_credit['id'] = df_credit['id'].astype('int')
df_keyword['id'] = df_keyword['id'].astype('int')
df = df.merge(df_credit, on='id')
df = df.merge(df_keyword, on='id')
df.head()
from ast import literal_eval
# +
features = ['cast', 'crew', 'keywords', 'genres']
for f in features:
df[f] = df[f].apply(literal_eval)
df.head()
# -
df.loc[0]['crew'][0]
def get_director(x):
for crew_member in x:
if crew_member['job'] == 'Director':
return crew_member['name']
return np.nan
df['director'] = df['crew'].apply(get_director)
df['director'].head()
# +
def generate_list(x):
    if isinstance(x, list):
        names = [ele['name'] for ele in x]
        # Keep at most the top three entries
        if len(names) > 3:
            return names[:3]
        else:
            return names
    return []
df['cast'] = df['cast'].apply(generate_list)
df['keywords'] = df['keywords'].apply(generate_list)
df['genres'] = df['genres'].apply(lambda x: x[:3])
# -
df[['title', 'cast', 'director', 'keywords', 'genres']].head(3)
# +
def sanitize(x):
if isinstance(x, list):
return [str.lower(i.replace(" ", "")) for i in x]
elif isinstance(x, str):
return str.lower(x.replace(" ", ""))
else:
return ""
for feature in ['cast', 'director', 'genres', 'keywords']:
df[feature] = df[feature].apply(sanitize)
# -
def create_soup(x):
return ' '.join(x['keywords']) + ' ' + ' '.join(x['cast']) + ' ' + x['director'] + ' ' + ' '.join(x['genres'])
df['soup'] = df.apply(create_soup, axis=1)
df.iloc[0]['soup']
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer(stop_words='english')
count_matrix = count_vectorizer.fit_transform(df['soup'])
from sklearn.metrics.pairwise import cosine_similarity
cosine_sim2 = cosine_similarity(count_matrix, count_matrix)
df.shape
cosine_sim2.size
df = df.reset_index(drop=True)  # reassign so the title-to-index mapping below is aligned
indices2 = pd.Series(df.index, index=df['title'])
content_recommender('The Lion King', cosine_sim2, df, indices2)
| Content Based Recommender.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle
import glob
import matplotlib.pyplot as plt
from scipy import optimize
import numpy as np
import pandas as pd
def load_data(dset, group=True):
if dset == 'texas':
path = "../../data/texas/texas_20m*"
elif dset == 'purchase':
path = "../../data/purchase/purchase_20m*"
    elif dset == 'cifar':
        path = "../../data/cifar/cifar_m*.p"
    else:
        raise ValueError(f"Unknown dataset: {dset}")
idx_tups = []
infos = []
for file in glob.glob(path):
f = pickle.load(open(file, 'rb'))
        if dset == 'cifar':  # CIFAR filenames encode one fewer field
var = file.split("_")[-4:]
if var[-4] == 'mb':
var.insert(0,'dp')
else:
var.insert(0, 'is')
var[-4] = 256
else:
var = file.split("_")[-5:]
if var[-5] == '20mb' or var[-5] == 'mb':
var[-5] = 'dp'
else:
var[-5] = 'is'
var[-4] = int(var[-4])
var[-1] = int(var[-1].split(".")[0])
var[-3] = int(var[-3])
var[-2] = float(var[-2]) if var[-2] != 'False' else False
        # idx_tups follow the format (method, width, epsilon, throw-out threshold, batch size)
for fd in f:
idx_tups.append(var)
infos.append(fd)
inf_scalars = []
for inf, idx in zip(infos, idx_tups):
for i , (yt, yf, acc) in enumerate(zip(inf['yeom_tpr'], inf['yeom_fpr'], inf['acc'])):
inf_scalars.append((i, acc, yt - yf, *idx))
df = pd.DataFrame(inf_scalars)
df.columns = ['epoch', 'acc', 'yeom', 'method', 'width', 'epsilon', 'throw out', 'batch_size']
if group:
grouped = df.groupby(['epoch', 'method', 'width', 'epsilon', 'throw out', 'batch_size']
).agg({'acc' : ['mean', 'std',], 'yeom': ['mean', 'std']}).reset_index()
grouped.columns = ['epoch', 'method', 'width', 'epsilon', 'throw out', 'batch_size',
'acc','acc_std', 'yeom', 'yeom_std']
return grouped
else:
return df
cifar = load_data('cifar')
texas = load_data('texas')
purchase = load_data('purchase')
cdp, cis = cifar[(cifar['method'] == 'dp')], cifar[(cifar['method'] == 'is')]
tdp, tis = texas[(texas['method'] == 'dp')], texas[(texas['method'] == 'is')]
pdp, pis = purchase[(purchase['method'] == 'dp')], purchase[(purchase['method'] == 'is')]
# +
def is_pareto(costs, return_mask = True):
"""
Find the pareto-efficient points
:param costs: An (n_points, n_costs) array
:param return_mask: True to return a mask
:return: An array of indices of pareto-efficient points.
If return_mask is True, this will be an (n_points, ) boolean array
Otherwise it will be a (n_efficient_points, ) integer array of indices.
"""
is_efficient = np.arange(costs.shape[0])
n_points = costs.shape[0]
next_point_index = 0 # Next index in the is_efficient array to search for
while next_point_index<len(costs):
nondominated_point_mask = np.any(costs<costs[next_point_index], axis=1)
nondominated_point_mask[next_point_index] = True
is_efficient = is_efficient[nondominated_point_mask] # Remove dominated points
costs = costs[nondominated_point_mask]
next_point_index = np.sum(nondominated_point_mask[:next_point_index])+1
if return_mask:
is_efficient_mask = np.zeros(n_points, dtype = bool)
is_efficient_mask[is_efficient] = True
return is_efficient_mask
else:
return is_efficient
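# A quick self-contained check of `is_pareto` on a toy cost matrix (the function is
# repeated verbatim below so this cell runs on its own): the point (3, 3) is dominated
# by both (1, 2) and (2, 1), so it drops out of the mask.

```python
import numpy as np

def is_pareto(costs, return_mask=True):
    # Same routine as above: iteratively discard points that have no
    # strictly smaller coordinate than the current pivot point.
    is_efficient = np.arange(costs.shape[0])
    n_points = costs.shape[0]
    next_point_index = 0
    while next_point_index < len(costs):
        nondominated_point_mask = np.any(costs < costs[next_point_index], axis=1)
        nondominated_point_mask[next_point_index] = True
        is_efficient = is_efficient[nondominated_point_mask]
        costs = costs[nondominated_point_mask]
        next_point_index = np.sum(nondominated_point_mask[:next_point_index]) + 1
    if return_mask:
        mask = np.zeros(n_points, dtype=bool)
        mask[is_efficient] = True
        return mask
    return is_efficient

toy_costs = np.array([[1.0, 2.0],
                      [2.0, 1.0],
                      [3.0, 3.0],   # dominated by (1, 2) and (2, 1)
                      [0.5, 4.0]])
print(is_pareto(toy_costs))  # [ True  True False  True]
```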
def plot_acc_yeom_pareto(ip, dp, axis, fill=False):
dp = dp.sort_values('acc')
ip = ip.sort_values('acc')
dp_costs = ((-1, 1, 1) * (dp[['acc', 'yeom', 'yeom_std']])).to_numpy()
is_costs = ((-1, 1, 1) * (ip[['acc', 'yeom', 'yeom_std']])).to_numpy()
dp_mask = is_pareto(dp_costs[:, :2])
is_mask = is_pareto(is_costs[:, :2])
ip_idxes = ip.groupby(['method', 'width', 'epsilon', 'throw out', 'batch_size']).agg({'acc': 'idxmax'}).reset_index()['acc']
dp_idxes = dp.groupby(['method', 'width', 'epsilon', 'throw out', 'batch_size']).agg({'acc': 'idxmax'}).reset_index()['acc']
axis.plot(0 - dp_costs[dp_mask, 0], dp_costs[dp_mask,1], '-x', c='C0', label='Gradient Clipping')
axis.plot(0 - is_costs[is_mask, 0], is_costs[is_mask,1], '-x', c='C1', label='Immediate Sensitivity')
#axis.errorbar(dp['acc'][dp_idxes], dp['yeom'][dp_idxes], fmt='o',c='C0',) # yerr=dp['yeom_std'][dp_idxes], xerr=dp['acc_std'][dp_idxes], )
#axis.errorbar(ip['acc'][ip_idxes], ip['yeom'][ip_idxes], fmt='o',c='C1',) #yerr=ip['yeom_std'][ip_idxes], xerr=ip['acc_std'][ip_idxes], )
if fill:
axis.fill_between(0 - dp_costs[dp_mask, 0],
dp_costs[dp_mask,1] + dp_costs[dp_mask,2],
dp_costs[dp_mask,1] - dp_costs[dp_mask,2],alpha=.3)
axis.fill_between(0 - is_costs[is_mask, 0],
is_costs[is_mask,1] + is_costs[is_mask,2],
is_costs[is_mask,1] - is_costs[is_mask,2],alpha=.3)
else:
axis.scatter(dp['acc'], dp['yeom'], alpha=.1)
axis.scatter(ip['acc'], ip['yeom'], alpha=.1)
axis.set_xlabel('Accuracy')
# +
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True)
fig.set_size_inches(12, 4)
ax1.set_xlabel('Advantage')
plot_acc_yeom_pareto(cis, cdp, ax1)
ax1.set_title('CIFAR-10')
plot_acc_yeom_pareto(tis, tdp, ax2)
ax2.set_title('Texas-100')
plot_acc_yeom_pareto(pis, pdp, ax3)
ax3.set_title('Purchase-100X')
ax1.set_ylim(0, .25)
ax2.set_ylim(0, .25)
ax3.set_ylim(0, .25)
ax1.set_xlim(.4, .63)
ax2.set_xlim(.4, .58)
ax3.set_xlim(.4, .7)
ax1.set_ylabel('Advantage')
ax3.legend()
plt.savefig('/home/ubuntu/6058f04dd79997b3e3ffcbad/figures/paretos.png', dpi=400)
# +
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True)
fig.set_size_inches(12, 4)
ax1.set_xlabel('Advantage')
plot_acc_yeom_pareto(cis, cdp, ax1, fill=True)
ax1.set_title('CIFAR-10')
tex = tis[tis['width'] == 256]
ted = tdp[tdp['width'] == 256]
plot_acc_yeom_pareto(tex, ted, ax2,fill=True)
ax2.set_title('Texas-100')
plot_acc_yeom_pareto(pis, pdp, ax3, fill=True)
ax3.set_title('Purchase-100X')
ax1.set_ylim(0, .25)
ax2.set_ylim(0, .25)
ax3.set_ylim(0, .25)
ax1.set_xlim(.4, .63)
ax2.set_xlim(.4, .58)
ax3.set_xlim(.4, .7)
ax1.set_ylabel('Advantage')
ax3.legend()
plt.savefig('/home/ubuntu/6058f04dd79997b3e3ffcbad/figures/var_paretos.png', dpi=400)
# +
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
fig.set_size_inches(8, 4)
tex = tis[tis['width'] == 128]
ted = tdp[tdp['width'] == 256]
plot_acc_yeom_pareto(tex, ted, ax1,fill=False)
ax1.set_title('Texas-100: Width=128')
tex = tis[tis['width'] == 256]
ted = tdp[tdp['width'] == 256]
plot_acc_yeom_pareto(tex, ted, ax2,fill=False)
ax2.set_title('Texas-100: Width=256')
ax1.set_ylabel('Advantage')
ax2.legend()
ax1.set_xlim(.3, .55)
ax2.set_xlim(.3, .55)
ax1.set_ylim(0, .25)
ax2.set_ylim(0, .25)
plt.savefig('/home/ubuntu/6058f04dd79997b3e3ffcbad/figures/tex_paretos.png', dpi=400)
# +
d = texas[texas['width'] == 256]
d[(d['yeom'] < .2) & (d['method'] == 'is')].sort_values('acc', ascending=False).head(1)
# +
# Scratch notes — recorded accuracies:
# .48, .52, .53
# .51, .53, .54
# .48, .51, .53
# .50, .53, .54
# -
| experiments/immediate_sensitivity/pareto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### New to Plotly?
# Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
# <br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
# <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
# #### Basic Population Pyramid Chart
# If you're starting with binned data, use a `go.Bar` trace.
# +
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
women_bins = np.array([-600, -623, -653, -650, -670, -578, -541, -411, -322, -230])
men_bins = np.array([600, 623, 653, 650, 670, 578, 541, 360, 312, 170])
y = list(range(0, 100, 10))
layout = go.Layout(yaxis=go.layout.YAxis(title='Age'),
xaxis=go.layout.XAxis(
range=[-1200, 1200],
tickvals=[-1000, -700, -300, 0, 300, 700, 1000],
ticktext=[1000, 700, 300, 0, 300, 700, 1000],
title='Number'),
barmode='overlay',
bargap=0.1)
data = [go.Bar(y=y,
x=men_bins,
orientation='h',
name='Men',
hoverinfo='x',
marker=dict(color='powderblue')
),
go.Bar(y=y,
x=women_bins,
orientation='h',
name='Women',
text=-1 * women_bins.astype('int'),
hoverinfo='text',
marker=dict(color='seagreen')
)]
py.iplot(dict(data=data, layout=layout), filename='EXAMPLES/bar_pyramid')
# -
# #### Stacked Population Pyramid
# +
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
women_bins = np.array([-600, -623, -653, -650, -670, -578, -541, -411, -322, -230])
men_bins = np.array([600, 623, 653, 650, 670, 578, 541, 360, 312, 170])
women_with_dogs_bins = np.array([-0, -3, -308, -281, -245, -231, -212, -132, -74, -76])
men_with_dogs_bins = np.array([0, 1, 300, 273, 256, 211, 201, 170, 145, 43])
y = list(range(0, 100, 10))
layout = go.Layout(yaxis=go.layout.YAxis(title='Age'),
xaxis=go.layout.XAxis(
range=[-1200, 1200],
tickvals=[-1000, -700, -300, 0, 300, 700, 1000],
ticktext=[1000, 700, 300, 0, 300, 700, 1000],
title='Number'),
barmode='overlay',
bargap=0.1)
data = [go.Bar(y=y,
x=men_bins,
orientation='h',
name='Men',
hoverinfo='x',
marker=dict(color='powderblue')
),
go.Bar(y=y,
x=women_bins,
orientation='h',
name='Women',
text=-1 * women_bins.astype('int'),
hoverinfo='text',
marker=dict(color='seagreen')
),
go.Bar(y=y,
x=men_with_dogs_bins,
orientation='h',
hoverinfo='x',
showlegend=False,
opacity=0.5,
marker=dict(color='teal')
),
go.Bar(y=y,
x=women_with_dogs_bins,
orientation='h',
text=-1 * women_bins.astype('int'),
hoverinfo='text',
showlegend=False,
opacity=0.5,
marker=dict(color='darkgreen')
)]
py.iplot(dict(data=data, layout=layout), filename='EXAMPLES/stacked_bar_pyramid')
# -
# #### Population Pyramid with Binning
# If you want to quickly create a Population Pyramid from raw data, try `go.Histogram`.
# +
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
layout = go.Layout(barmode='overlay',
yaxis=go.layout.YAxis(range=[0, 90], title='Age'),
xaxis=go.layout.XAxis(
tickvals=[-150, -100, -50, 0, 50, 100, 150],
ticktext=[150, 100, 50, 0, 50, 100, 150],
title='Number'))
data = [go.Histogram(
y=np.random.exponential(50, 1000),
orientation='h',
name='Men',
marker=dict(color='plum'),
hoverinfo='skip'
),
go.Histogram(
y=np.random.exponential(55, 1000),
orientation='h',
name='Women',
marker=dict(color='purple'),
hoverinfo='skip',
x=-1 * np.ones(1000),
histfunc="sum"
)
]
py.iplot(dict(data=data, layout=layout), filename='EXAMPLES/histogram_pyramid')
# -
# ### More Bar and Histogram Examples
# See more examples of [horizontal bar charts](https://plot.ly/python/horizontal-bar-charts/), [bar charts](https://plot.ly/python/bar-charts/) and [histograms](https://plot.ly/python/histograms/).
# ### Reference
# See https://plot.ly/python/reference/#bar and https://plot.ly/python/reference/#histogram for more information and chart attribute options!
# +
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
# ! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'pyramid-charts.ipynb', 'python/population-pyramid-charts/', 'Python Population Pyramid Charts | Plotly',
'How to make Population Pyramid Charts in Python with Plotly.',
title = 'Population Pyramid Charts | Plotly',
name = 'Population Pyramid Charts',
thumbnail='thumbnail/pyramid.jpg', language='python',
has_thumbnail='true', display_as='basic', order=5.01,
ipynb= '~notebook_demo/221')
# -
| _posts/python-v3/basic/pyramid/pyramid-charts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ZCQ4MIS2mv4N"
# ---
# layout: post
# title: "Python features data analysts see occasionally but never stopped to wonder about"
# author: "<NAME>"
# categories: Data분석
# tags: [Python, tip, underbar, Asterisk, decorator, magickey, jupyter, f string]
# image: 09_Python.png
# ---
# + [markdown] id="66ofBhh4mv4Q"
# ## **Learning objectives**
# Introduce and build an understanding of a few Python features that data analysts encounter only occasionally, or may never have noticed at all.
# -
# - **Contents**
# 1. f string
# 2. decorator
# 3. underbar/under score
# 4. Asterisk
# 5. jupyter magic key
# + [markdown] id="oGJTcUcZdOnt"
# ---
#
# ### **F string**
#
# - When doing analysis, you constantly reach for `"".format` — to print things for inspection, to automate SQL queries, or to build datasets. In R you would use `paste` or `sprintf`; Python implements this in several ways that are (to me) a bit freer and more convenient than R.
# <br>
# <br>
# <br>
# - Let's look at a simple `print` that reports the progress of a for loop, plus SQL-style examples.
#
#
# -
# ---
#
# 1. The simplest form: string + string concatenation.
# - Because `+` on strings works differently from `+` on ints and floats, numbers must first be wrapped in `str()`.
import os
import sys
for i in range(101) :
if i % 10 == 0 :
print(str(i) + "번째 for문")
# 2. The f-string, which may look somewhat unfamiliar.
# - Efficient for simple tasks, but once things get long or complex it is not especially readable or concise.
for i in range(101) :
if i % 10 == 0 :
print(f"{i}번째 for문")
# 3. Probably the most common form: `string.format()`.
# - It looks simple, but it can be applied in a surprising variety of ways.
for i in range(101) :
if i % 10 == 0 :
print("{}번째 for문".format(i))
# - To format a run of values or a list in order, the approach below handles it cleanly.
print("첫번째 값 : {} \n두번째 값 : {} \n세번째 값 : {}".format(0, 1, 2))
print("첫번째 값 : {} \n두번째 값 : {} \n세번째 값 : {}".format(*[i for i in range(3)]))
# - When some value must be reused repeatedly, map keys to values as below to cut down the repetition; to use a dictionary, prefix it with two asterisks.
print("첫번째 값 : {first_value} \n두번째 값 : {second_value} \n첫번째 값 AGAIN: {first_value} \n두번째 값 AGAIN : {second_value}".format(first_value = 1, second_value = 2))
print("첫번째 값 : {first_value} \n두번째 값 : {second_value} \n첫번째 값 AGAIN: {first_value} \n두번째 값 AGAIN : {second_value}".format(**{"first_value" : 1, "second_value" : 2}))
# - f"" 같은 경우는 간단하게 표현할 때 조금 더 직관적이고 간단해보인다.
# - "".format() 같은 경우는 조금 복잡한 string을 만들 때(ex. sql 등)을 만들 때 활용하면 조금 더 효과적이다.
# + [markdown] id="oGJTcUcZdOnt"
# <br>
# <br>
# <br>
#
# ---
#
# ### **Decorator**
#
# - The @ sign above a function definition is called a decorator. Frankly, data analysts see decorators very rarely, but if you use TensorFlow you run into @tf.function all the time — so let's find out what the @ actually means.
# <br>
# -
# ---
#
# - Fundamentally, a decorator wraps a function with shared behavior, instead of pasting the same feature into every function that needs it.
# - We'll take the usual "how long did this code take" snippet, run it as a decorator, and measure the time.
# - First, a function that sums the integers from A to B and prints the result.
def sigma_fromA_toB(A, B) :
print("result : {}".format(sum(range(A, B + 1))))
sigma_fromA_toB(1,2)
# - Normally, to time this function you take `time` readings before and after the call and subtract.
# <br>
# 1. The first way, usually for quick tests: define the time just before and after the function and take the difference as the execution time.
# - You only add the lines when you want to test, but once there is a lot to test you must copy-paste them everywhere.
# 2. The second way: build the timing print into the function itself.
# - Again, you only add it to the functions that need it, but with many functions this too becomes very inconvenient.
import time
stime = time.time()
sigma_fromA_toB(100000,1000000)
etime = time.time()
print("duringtime : {:.4f}".format(etime - stime))
def sigma_fromA_toB_toTime(A, B) :
stime = time.time()
print("result : {}".format(sum(range(A, B + 1))))
etime = time.time()
print("duringtime : {:.4f}".format(etime - stime))
sigma_fromA_toB_toTime(100000,1000000)
# - Below we build a decorator-style function, measure_time, that runs the wrapped func and prints its elapsed time, then drape it over sigma_fromA_toB.
# <br>
# <br>
# - As you can see below, the result comes out correctly.
# - We never had to define time above and below the function — simply attaching @measure_time makes the feature work.
# - From now on, even when you see @tf.function in code, think of it as a decorator function that TensorFlow has defined carefully so things run well, and use it accordingly.
def measure_time(func) :
def measure_time(*args, **kwargs) :
stime = time.time()
func(*args, **kwargs)
etime = time.time()
print("duringtime : {:.4f}".format(etime - stime))
return measure_time
@measure_time
def sigma_fromA_toB(A, B) :
print("result : {}".format(sum(range(A, B + 1))))
sigma_fromA_toB(100000,1000000)
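# - One caveat not covered above (a hedged addition of mine): the inner wrapper replaces
#   the original function, so `sigma_fromA_toB.__name__` would report the wrapper's name.
#   The standard-library `functools.wraps` fixes this; a minimal sketch:

```python
import functools
import time

def measure_time(func):
    @functools.wraps(func)  # preserve func.__name__, __doc__, etc.
    def wrapper(*args, **kwargs):
        stime = time.time()
        result = func(*args, **kwargs)
        etime = time.time()
        print("duringtime : {:.4f}".format(etime - stime))
        return result  # also forward the wrapped function's return value
    return wrapper

@measure_time
def sigma_fromA_toB(A, B):
    return sum(range(A, B + 1))

print(sigma_fromA_toB.__name__)  # sigma_fromA_toB, not 'wrapper'
```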
# + [markdown] id="oGJTcUcZdOnt"
# <br>
# <br>
# <br>
#
# ---
#
# ### **Under bar/ Under score**
#
# - "_" 기호가 파이썬에 어디에서 나왔는지 궁금하신 분들도 있겠지만 생각보다 유용하고 자주 쓰인다는 것을 확인해보겠습니다.
# <br>
# -
# ---
#
# ### **A single underscore, case 1**
# - The most common use is in a for loop, when the loop variable simply isn't needed.
# - Usually you use the index or the value inside the loop, but sometimes you only need the repetition itself.
for _ in range(10) :
sigma_fromA_toB(100000,1000000)
for _ in range(10) :
sigma_fromA_toB(100000,1000000)
print(f"_ size is {sys.getsizeof(_)}")
# - A personal opinion: using "_" does not keep the value out of memory, so I would not expect any speed or performance gain — the benefit is semantic. By the same logic, writing an unused variable name should not degrade performance either.
# ### **A single underscore, case 2**
# - The underscore automatically stores the most recently displayed value (result).
# 1. Once a is defined and displayed once, the latest value is saved in "_".
a = 1 + 1
a
_
b = 2 + 2
b
_
c = 3 + 3
_
# <br>
# <br>
#
# ### **Meaning by underscore position and count**
# - The position and number of "_" characters in a name carry agreed-upon meanings — mostly about access control when importing user-defined classes and functions. Since I rarely import functions directly, this never fully clicked for me; see the source blog below for the details.
# <br>
#
# - One leading underscore \_variable : internal-use code, i.e., a function used only inside the main module
# - One trailing underscore variable_ : appended when a name would clash with an existing one (in effect, "_" is about the only special character you can tack on)
# - Two leading underscores \_\_variable : keeps a variable from being overridden, i.e., keeps it unique (I'm not sure when you'd use it)
# - Two underscores on each side \_\_variable\_\_ : so-called magic methods. Creating this form yourself is discouraged — unless it's an established convention, don't define your own. You usually see it in if __name__ == __main__ and __init__.
#
#
# Source: https://eine.tistory.com/entry/파이썬에서-언더바언더스코어-의-의미와-역할 [Einstraße's SW blog]
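# - A small illustrative example of the double-leading-underscore case (a sketch of
#   mine, not from the source blog): the "override protection" is really name
#   mangling — Python rewrites `__secret` inside a class to `_ClassName__secret`.

```python
class Parent:
    def __init__(self):
        self.__secret = 1  # stored on the instance as _Parent__secret

p = Parent()
print(hasattr(p, '__secret'))  # False: the plain name was mangled away
print(p._Parent__secret)       # 1: the mangled name is still reachable
```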
# + [markdown] id="oGJTcUcZdOnt"
# <br>
# <br>
# <br>
#
# ---
#
# ### **Asterisk (\*)**
#
# - You mostly see it for multiplication or exponentiation, but functions written as func(\*args, \*\*kwargs) appear often enough. Let's look at what this means and why it is used.
# <br>
# -
# <br>
#
# #### 1. A single \* before a list: think of it as "unpacking" the list — though it cannot be used on its own.
tmp1 = [1,2,3]
tmp2 = [4,5,6]
[tmp1]
[*tmp1]
[tmp1, tmp2]
[*tmp1, *tmp2]
# *tmp1  # SyntaxError: a starred expression cannot stand alone
# <br>
#
# #### 2. A single \* before a dict: to unpack a dict the way we unpack a list, you must use two \*'s.
tmp1 = {"a" : 1, "b" : 2, "c" : 3}
tmp2 = {"d" : 4, "e" : 5, "f" : 6}
{**tmp1}
[tmp1]
[*tmp1]
# [**tmp1]  # SyntaxError: ** unpacking is not allowed inside a list display
{**tmp1, **tmp2}
[*tmp1, *tmp2]
# [**tmp1, **tmp2]  # SyntaxError: use {**tmp1, **tmp2} instead
# <br>
#
# #### 3. Applying the above to functions
def test_func(a = None, b = None, c = None) :
print(f"a = {a}, b = {b}, c = {c}")
test_func(1,2,3)
# - Suppose you suddenly need to apply test_func iterably over the list [[1, 2, 3], [4,5,6], ["ㄱ", "ㄴ", "ㄷ"]] — how could you run it?
tmp = [[1, 2, 3], [4,5,6], ["ㄱ", "ㄴ", "ㄷ"]]
for t in tmp :
test_func(t[0], t[1], t[2])
# - You could approach it that way, but applying the asterisk as in the examples above makes the code much simpler.
for t in tmp :
test_func(*t)
# - \*[] gives concise code, but values are filled in positionally, so once there are many arguments assigned in arbitrary order you end up re-sorting them. In machine learning / deep learning there are many hyperparameters — batch_size, epoch, learning_rate and so on — and keeping their order straight is hard. In that case a dict makes things much cleaner.
def test_func(lr = 0.001, epoch = 100, batch_size = 64) :
print(f"lr = {lr}, epoch = {epoch}, batch_size = {batch_size}")
tmp1 = {"lr" : 0.000001, "epoch" : 200}
tmp2 = {"epoch" : 200, "batch_size" : 32}
tmp3 = {"epoch" : 200, "lr" : 0.000001, "batch_size" : 128}
tmp = [tmp1, tmp2, tmp3]
for t in tmp :
test_func(**t)
# - In the end, in func(\*args, \*\*kwargs), list-shaped parameters are applied via \*args and dict-shaped parameters via \*\*kwargs. "kwargs" means keyword arguments: they carry keys, which is what makes them applicable by name.
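# - As a final check of that naming, a tiny function (a hypothetical `collect`, not
#   from the post) shows exactly where positional and keyword arguments end up:

```python
def collect(*args, **kwargs):
    # Positional arguments arrive as a tuple, keyword arguments as a dict.
    return args, kwargs

pos, kw = collect(1, 2, lr=0.001, epoch=100)
print(pos)  # (1, 2)
print(kw)   # {'lr': 0.001, 'epoch': 100}
```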
# + [markdown] id="oGJTcUcZdOnt"
# <br>
# <br>
# <br>
#
# ---
#
# ### **Jupyter Magic key**
#
# - When analyzing with Python you often work in a Jupyter environment, and the Jupyter project provides not just package management but many extra conveniences as "magic keys". The %matplotlib inline you type when you start coding is part of this — magic keys begin with %.
# - Before starting, let's check what is available with %lsmagic, which lists the magic functions.
# <br>
# -
# - Magic keys fall into line magics and cell magics. As the names suggest, a line magic uses a single % and applies only to the code on its own line, while a cell magic applies to the code of the entire cell.
# %lsmagic
# - The execution-time question from the decorator section above can also be solved with a magic key.
# %time time.sleep(5)
# %time
time.sleep(3)
time.sleep(2)
# %%time
time.sleep(2)
time.sleep(3)
# - As shown above, both runs sleep for 5 seconds in total, yet the results differ. The line magic %time measures only the code on its line, so the first call reports about 5 seconds, while in the second cell the %time line has nothing to execute and effectively measures the minimum possible time.
# - The cell magic %%time in the third cell measures the whole cell, so it correctly reports about 5 seconds.
# - Be careful: a magic key only prints its result rather than returning it. To use, say, the paths from %ls as a variable, use glob or the value-returning objects in sys.
# - There are many features, as listed above, but I mostly use the matplotlib- and time-related magics, plus chdir and ls when working in Colab. People often use %matplotlib without knowing where it comes from, so I hope this was a chance to think once more about the roots of %.
# + [markdown] id="sFxcbBrNmv4f"
# ---
#
# code : https://github.com/Chanjun-kim/Chanjun-kim.github.io/blob/main/_ipynb/2021-08-08-PythonTip.ipynb
#
# Thank you.
| _ipynb/2021-08-08-PythonTip.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import datasets
from raft_baselines.classifiers import GPT3Classifier
train = datasets.load_dataset(
"ought/raft", "neurips_impact_statement_risks", split="train"
)
classifier = GPT3Classifier(
train, config="neurips_impact_statement_risks", do_semantic_selection=True
)
print(classifier.classify({"Paper title": "GNN research", "Impact statement": "test2"}))
| src/raft_baselines/scripts/cohere_raft.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Advanced Spatial Analysis: Spatial Accessibility 2
#
# ## Overview
#
# In this lecture, we extend the 2SFCA method covered in the previous lecture by incorporating **travel time and distance decay functions**. This extension of the original 2SFCA is called the **Enhanced 2SFCA (E2SFCA)** method, defined as follows:
#
# ### First step:
#
# $$\huge R_j = \frac{S_j}{\sum_{k\in {\left\{\color{blue}{t_{kj}} \le \color{blue}{t_0} \right\}}}^{}{P_k}\color{blue}{W_k}}$$
# where<br>
# $R_j$: the supply-to-demand ratio of location $j$. <br>
# $S_j$: the degree of supply (e.g., number of doctors) at location $j$. <br>
# $P_k$: the degree of demand (e.g., population) at location $k$. <br>
# $\color{blue}{t_{kj}}$: the travel <font color='blue'>time</font> between locations $k$ and $j$. <br>
# $\color{blue}{t_0}$: the threshold travel <font color='blue'>time</font> of the analysis. <br>
# $\color{blue}{W_k}$: Weight based on a distance decay function
#
# ### Second step:
# $$\huge A_i = \sum_{j\in {\left\{\color{blue}{t_{ij}} \le \color{blue}{t_0} \right\}}} R_j\color{blue}{W_j}$$
# where<br>
# $A_i$: the accessibility measures at location $i$. <br>
# $R_j$: the supply-to-demand ratio of location $j$. <br>
# $\color{blue}{W_j}$: Weight based on a distance decay function<br>
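# The weights $W$ in both steps come from a distance decay function of travel time.
# As a sketch (the Gaussian form is one common choice, and the bandwidth value below
# is an illustrative assumption, not fixed by the method), each travel time within
# the threshold can be weighted like this:

```python
import math

def gaussian_weight(t, t0=15, bandwidth=180):
    """Gaussian distance decay: weight 1 at t = 0, shrinking as travel time t
    (minutes) grows, and 0 beyond the threshold t0. bandwidth is illustrative."""
    if t > t0:
        return 0.0
    return math.exp(-t ** 2 / bandwidth)

# The weight decreases monotonically within the catchment and vanishes outside it.
for t in [0, 5, 10, 15, 20]:
    print(t, round(gaussian_weight(t), 3))
```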
import geopandas as gpd
import pandas as pd
import osmnx as ox
import time
from tqdm import tqdm, trange
from shapely.geometry import Point, MultiPoint
import networkx as nx
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from shapely.ops import unary_union  # cascaded_union is deprecated in favor of unary_union
import utils # Local file
import warnings
warnings.filterwarnings("ignore")
# ## Let's see the result first
#
# In the maps below, the E2SFCA result is measured with a threshold travel time of **15 minutes**. <br>
# The original 2SFCA result is measured with a threshold travel **distance of 20 km** (= 50 mph (80 km/h) * 0.25 hr (15 minutes)).
E_step2 = gpd.read_file('./data/result_E2SFCA.shp')
step2 = gpd.read_file('./data/result_2SFCA.shp')
# +
# Plotting accessibility measurement result.
fig, ax = plt.subplots(1, 2, figsize=(15, 10))
# Enhanced 2SFCA method
E_step2.plot('access', ax=ax[0], figsize=(10,10), legend=True, cmap='Blues', scheme='FisherJenks')
E_step2.loc[E_step2['access'] == 0].plot(ax=ax[0], color='grey', zorder=1)
E_step2.boundary.plot(ax=ax[0], linestyle='dotted', lw=0.5, color='black', zorder=1)
# Original 2SFCA method
step2.plot('access', ax=ax[1], figsize=(10,10), legend=True, cmap='Blues', scheme='FisherJenks')
step2.loc[step2['access'] == 0].plot(ax=ax[1], color='grey', zorder=1)
step2.boundary.plot(ax=ax[1], linestyle='dotted', lw=0.5, color='black', zorder=1)
# -
# ## Import Data - same data as the previous lecture
# Supply: hospitals in the city of Chicago
hospitals = gpd.read_file('./data/Chicago_Hospital_Info.shp')
hospitals.head(1)
# Demand: population per census tract
tracts = gpd.read_file('./data/Chicago_Tract.shp')
tracts.head(1)
# +
fig, ax = plt.subplots(figsize=(10, 10))
tracts.plot('TotalPop', ax=ax, scheme='FisherJenks', cmap='Blues')
hospitals.plot(markersize='Total_Bed', ax=ax, color='black')
# -
# Mobility: Chicago Road Network
G = ox.io.load_graphml('./data/chicago_road.graphml')
ox.plot_graph(G)
# +
# This function finds the nearest OSM node for each row of a given GeoDataFrame.
# If the geometry is a Point, it is used without modification;
# if it is a Polygon or MultiPolygon, its centroid is used to find the nearest element.
def find_nearest_osm(network, gdf):
for idx, row in tqdm(gdf.iterrows(), total=gdf.shape[0]):
if row.geometry.geom_type == 'Point':
nearest_osm = ox.distance.nearest_nodes(network,
X=row.geometry.x,
Y=row.geometry.y
)
elif row.geometry.geom_type == 'Polygon' or row.geometry.geom_type == 'MultiPolygon':
nearest_osm = ox.distance.nearest_nodes(network,
X=row.geometry.centroid.x,
Y=row.geometry.centroid.y
)
else:
print(row.geometry.geom_type)
continue
gdf.at[idx, 'nearest_osm'] = nearest_osm
return gdf
supply = find_nearest_osm(G, hospitals)
demand = find_nearest_osm(G, tracts)
# -
# ## Advancement 1: Calculate the estimated travel time for each edge
#
# To calculate the catchment area based on threshold travel time, we need to know how long it takes to traverse each network edge. <br>
# The OSM network has two attributes that help estimate this: `length` and `maxspeed`. Dividing `length` by `maxspeed` gives the minimum time needed to traverse the edge.
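# As a back-of-the-envelope sketch of that division (the helper below is my own
# illustration of the unit handling, not an OSMnx API: OSM `length` is in meters,
# while the speed limits in this network are in mph):

```python
def edge_travel_time_minutes(length_m, maxspeed_mph):
    """Minimum time (minutes) to traverse an edge of length_m meters
    at maxspeed_mph miles per hour."""
    speed_m_per_min = maxspeed_mph * 1609.34 / 60  # mph -> meters per minute
    return length_m / speed_m_per_min

# e.g., a 1 km edge with a 30 mph limit takes roughly 1.24 minutes
print(round(edge_travel_time_minutes(1000, 30), 2))
```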
# Extract the nodes and edges of the network dataset for the future analysis.
nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True, node_geometry=True)
edges.head()
# You can iterate through the edges of the graph (`G`) with the `G.edges()` method. With `data=True`, it yields each edge's attributes as a dictionary.
for u, v, data in G.edges(data=True):
print(type(data), data.keys())
# Check the data in the `maxspeed` attribute. It comes as either a **string or a list**, but we need a **numerical value** to do the calculation.
for u, v, data in G.edges(data=True):
if 'maxspeed' in data.keys():
print(data['maxspeed'])
str_test = '55 mph'
str_test.split(' ')
list_test = ['35 mph', '30 mph']
list_test[0].split(' ')
# By splitting either list or string, we can obtain the numerical value of max speed, as shown below.
for u, v, data in G.edges(data=True):
if 'maxspeed' in data.keys():
if type(data['maxspeed']) == list:
temp_speed = data['maxspeed'][0] # extract only the first entry if there are many
else:
temp_speed = data['maxspeed']
temp_speed = temp_speed.split(' ')[0] # Extract only the number
data['maxspeed'] = temp_speed # Assign back to the original entry
# Examine the replaced values in the `maxspeed` column. You will notice that some rows have a `NaN` value.
nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True, node_geometry=True)
edges.head()
# If the `maxspeed` column is empty, we can assign maximum travel speed based on their road type. The type is stored in `highway` column. This <a href=https://wiki.openstreetmap.org/wiki/Key:highway> website </a> shows the kinds of attributes and their meanings.
def assign_max_speed_with_highway_type(row_):
"""
Assign the maximum speed of an edge based on its attribute 'highway'
# https://wiki.openstreetmap.org/wiki/Key:highway
Args:
row_: (dict) a row of OSMnx network data
Returns:
temp_speed_: (int) the maximum speed of an edge
"""
max_speed_per_type = {'motorway': 50,
'motorway_link': 30,
'trunk': 50,
'trunk_link': 30,
'primary': 40,
'primary_link': 30,
'secondary': 40,
'secondary_link': 30,
'tertiary': 40,
'tertiary_link': 20,
'residential': 30,
'living_street': 20,
'unclassified': 20
}
# if the variable is a list, grab just the first one.
if type(row_['highway']) == list:
road_type = row_['highway'][0]
else:
road_type = row_['highway']
# If the maximum speed of the road_type is predefined.
if road_type in max_speed_per_type.keys():
temp_speed_ = max_speed_per_type[road_type]
else: # If not defined, just use 20 mph.
temp_speed_ = 20
return temp_speed_
for u, v, data in G.edges(data=True):
if 'maxspeed' in data.keys():
if type(data['maxspeed']) == list:
temp_speed = data['maxspeed'][0] # extract only numbers
else:
temp_speed = data['maxspeed']
temp_speed = temp_speed.split(' ')[0]
else:
temp_speed = assign_max_speed_with_highway_type(data)
data['maxspeed'] = temp_speed
# Check the `maxspeed` column one more time. You will see all the rows are populated with numerical values.
# Extract the nodes and edges of the network dataset for the future analysis.
nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True, node_geometry=True)
edges.head()
# Since we have `maxspeed` and `length` ready for every edge, we can now calculate the estimated travel time per edge by adding new attributes, as shown below.
for u, v, data in G.edges(data=True):
data['maxspeed_meters'] = int(data['maxspeed']) * 26.8223 # MPH * 1.6 * 1000 / 60; meter per minute
data['time'] = float(data['length'] / data['maxspeed_meters']) # Unit: minutes
nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True, node_geometry=True)
edges.head()
# In summary, the following code consolidates the steps explained above.
#
# ```python
# def network_settings(network):
# for u, v, data in network.edges(data=True):
# if 'maxspeed' in data.keys():
# if type(data['maxspeed']) == list:
# temp_speed = data['maxspeed'][0] # extract only numbers
# else:
# temp_speed = data['maxspeed']
#
# temp_speed = temp_speed.split(' ')[0]
#
# else:
# temp_speed = assign_max_speed_with_highway_type(data)
#
# data['maxspeed'] = temp_speed
# data['maxspeed_meters'] = int(data['maxspeed']) * 26.8223 # MPH * 1.6 * 1000 / 60; meter per minute
# data['time'] = float(data['length'] / data['maxspeed_meters'])
#
# # create point geometries for the entire graph
# for node, data in network.nodes(data=True):
# data['geometry'] = Point(data['x'], data['y'])
#
# return network
#
#
# def assign_max_speed_with_highway_type(row_):
# max_speed_per_type = {'motorway': 50,
# 'motorway_link': 30,
# 'trunk': 50,
# 'trunk_link': 30,
# 'primary': 40,
# 'primary_link': 30,
# 'secondary': 40,
# 'secondary_link': 30,
# 'tertiary': 40,
# 'tertiary_link': 20,
# 'residential': 30,
# 'living_street': 20,
# 'unclassified': 20
# }
#
# # if the variable is a list, obtain just the first one.
# if type(row_['highway']) == list:
# road_type = row_['highway'][0]
# else:
# road_type = row_['highway']
#
# # If the maximum speed of the road_type is predefined.
# if road_type in max_speed_per_type.keys():
# temp_speed_ = max_speed_per_type[road_type]
# else: # If not defined, just use 20 mph.
# temp_speed_ = 20
#
# return temp_speed_
#
# ```
# +
def network_settings(network):
for u, v, data in network.edges(data=True):
if 'maxspeed' in data.keys():
if type(data['maxspeed']) == list:
temp_speed = data['maxspeed'][0] # extract only numbers
else:
temp_speed = data['maxspeed']
temp_speed = temp_speed.split(' ')[0]
else:
temp_speed = assign_max_speed_with_highway_type(data)
data['maxspeed'] = temp_speed
data['maxspeed_meters'] = int(data['maxspeed']) * 26.8223 # MPH * 1.6 * 1000 / 60; meter per minute
data['time'] = float(data['length'] / data['maxspeed_meters'])
# create point geometries for the entire graph
for node, data in network.nodes(data=True):
data['geometry'] = Point(data['x'], data['y'])
return network
def assign_max_speed_with_highway_type(row_):
max_speed_per_type = {'motorway': 50,
'motorway_link': 30,
'trunk': 50,
'trunk_link': 30,
'primary': 40,
'primary_link': 30,
'secondary': 40,
'secondary_link': 30,
'tertiary': 40,
'tertiary_link': 20,
'residential': 30,
'living_street': 20,
'unclassified': 20
}
# if the variable is a list, obtain just the first one.
if type(row_['highway']) == list:
road_type = row_['highway'][0]
else:
road_type = row_['highway']
# If the maximum speed of the road_type is predefined.
if road_type in max_speed_per_type.keys():
temp_speed_ = max_speed_per_type[road_type]
else: # If not defined, just use 20 mph.
temp_speed_ = 20
return temp_speed_
# +
# Mobility: Chicago Road Network
G = ox.io.load_graphml('./data/chicago_road.graphml')
G = network_settings(G)
# Extract the nodes and edges of the network dataset for the future analysis.
nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True, node_geometry=True)
edges.head()
# -
# ---
# ### *Exercise*
#
# Now we will investigate how the catchment area differs if we utilize threshold travel distance or travel time. <br> Assuming the overall travel speed in the study area is 50 mph, we will compare the catchment areas drawn by 20 km (15 minutes driving distance with 50 mph) and 15 minutes. Change the value of `supply_idx` from 0 to 33, and investigate how the catchment looks different at different supply locations.
#
# ```python
# # In summary, the following is the necessary code to create a catchment area from a given location.
# threshold_dist = 20000 # 15 minute driving distance with 50mph.
# threshold_time = 15
#
# supply_idx = 0 # Range can be 0 - 33
#
# # 1. Calculate accessible nodes in the network dataset from a given location
# temp_nodes_time = nx.single_source_dijkstra_path_length(G, supply.loc[supply_idx, 'nearest_osm'], threshold_time, weight='time')
# temp_nodes_dist = nx.single_source_dijkstra_path_length(G, supply.loc[supply_idx, 'nearest_osm'], threshold_dist, weight='length')
#
# # 2. Extract the locations (or coordinates) of accessible nodes based on the OSMID.
# access_nodes_time = nodes.loc[nodes.index.isin(temp_nodes_time.keys()), 'geometry']
# access_nodes_dist = nodes.loc[nodes.index.isin(temp_nodes_dist.keys()), 'geometry']
#
# # 3. Create a convex hull with the locations of the nodes.
# access_nodes_time = gpd.GeoSeries(access_nodes_time.unary_union.convex_hull, crs="EPSG:4326")
# access_nodes_dist = gpd.GeoSeries(access_nodes_dist.unary_union.convex_hull, crs="EPSG:4326")
#
# # Result.
# demand_time = demand.loc[demand['geometry'].centroid.within(access_nodes_time[0])]
# demand_dist = demand.loc[demand['geometry'].centroid.within(access_nodes_dist[0])]
#
# print(f"threshold by time: {demand_time.shape[0]}")
# print(f"threshold by distance: {demand_dist.shape[0]}")
#
# # Plot graphs
# fig, ax = plt.subplots(figsize=(10, 10))
#
# tracts.plot('TotalPop', ax=ax, scheme='FisherJenks', cmap='Blues')
#
# access_nodes_time.boundary.plot(ax=ax, color='red', linewidth=4)
# access_nodes_dist.boundary.plot(ax=ax, color='blue', linewidth=4)
#
# edges.plot(ax=ax, color='black', lw=0.5)
#
# supply_loc = supply.loc[supply.index==supply_idx]
# supply_loc.plot(markersize='Total_Bed', ax=ax, color='black')
#
# ```
#
# ---
# +
# In summary, the following is the necessary code to create a catchment area from a given location.
threshold_dist = 20000 # 15 minute driving distance with 50mph.
threshold_time = 15
supply_idx = 0 # Range can be 0 - 33
# 1. Calculate accessible nodes in the network dataset from a given location
temp_nodes_time = nx.single_source_dijkstra_path_length(G, supply.loc[supply_idx, 'nearest_osm'], threshold_time, weight='time')
temp_nodes_dist = nx.single_source_dijkstra_path_length(G, supply.loc[supply_idx, 'nearest_osm'], threshold_dist, weight='length')
# 2. Extract the locations (or coordinates) of accessible nodes based on the OSMID.
access_nodes_time = nodes.loc[nodes.index.isin(temp_nodes_time.keys()), 'geometry']
access_nodes_dist = nodes.loc[nodes.index.isin(temp_nodes_dist.keys()), 'geometry']
# 3. Create a convex hull with the locations of the nodes.
access_nodes_time = gpd.GeoSeries(access_nodes_time.unary_union.convex_hull, crs="EPSG:4326")
access_nodes_dist = gpd.GeoSeries(access_nodes_dist.unary_union.convex_hull, crs="EPSG:4326")
# Result.
demand_time = demand.loc[demand['geometry'].centroid.within(access_nodes_time[0])]
demand_dist = demand.loc[demand['geometry'].centroid.within(access_nodes_dist[0])]
print(f"threshold by time: {demand_time.shape[0]}")
print(f"threshold by distance: {demand_dist.shape[0]}")
# Plot graphs
fig, ax = plt.subplots(figsize=(10, 10))
tracts.plot('TotalPop', ax=ax, scheme='FisherJenks', cmap='Blues')
access_nodes_time.boundary.plot(ax=ax, color='red', linewidth=4)
access_nodes_dist.boundary.plot(ax=ax, color='blue', linewidth=4)
edges.plot(ax=ax, color='black', lw=0.5)
supply_loc = supply.loc[supply.index==supply_idx]
supply_loc.plot(markersize='Total_Bed', ax=ax, color='yellow', zorder=2)
# -
# Here, the main difference is created from <a href=https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.shortest_paths.weighted.single_source_dijkstra_path_length.html>`nx.single_source_dijkstra_path_length()`</a>, which is the function calculating the nodes that are accessible within a certain threshold.
nx.single_source_dijkstra_path_length(G=G,
source=supply.loc[0, 'nearest_osm'],
cutoff=20000,
weight='length'
)
nx.single_source_dijkstra_path_length(G=G,
source=supply.loc[0, 'nearest_osm'],
cutoff=15,
weight='time'
)
nx.single_source_dijkstra_path_length(G=G,
source=supply.loc[0, 'nearest_osm'],
cutoff=15,
# weight='time'
)
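To isolate the cutoff semantics from the Chicago data, here is a minimal toy graph (purely illustrative, with made-up node names and travel times):

```python
import networkx as nx

# Toy chain A-B-C-D where each edge takes 2 minutes to traverse.
H = nx.Graph()
H.add_edge('A', 'B', time=2)
H.add_edge('B', 'C', time=2)
H.add_edge('C', 'D', time=2)

# With cutoff=5 and weight='time', D (total cost 6) is excluded.
reachable = nx.single_source_dijkstra_path_length(H, 'A', cutoff=5, weight='time')
print(sorted(reachable.items()))  # A at 0, B at 2, C at 4
```

Note that omitting `weight` (as in the last call above on the real network) makes every edge cost 1, so the cutoff then counts hops rather than minutes.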
# ## Advancement 2: Apply distance decay functions for catchment areas
#
# Here, we will also start with the result first.
def calculate_catchment_area(network, nearest_osm, minutes, distance_unit='time'):
polygons = gpd.GeoDataFrame(crs="EPSG:4326")
# Create convex hull for each travel time (minutes), respectively.
for minute in minutes:
access_nodes = nx.single_source_dijkstra_path_length(network, nearest_osm, minute, weight=distance_unit)
convex_hull = gpd.GeoSeries(nx.get_node_attributes(network.subgraph(access_nodes), 'geometry')).unary_union.convex_hull
polygon = gpd.GeoDataFrame({'minutes': [minute], 'geometry': [convex_hull]}, crs="EPSG:4326")
polygon = polygon.set_index('minutes')
polygons = polygons.append(polygon)
    # Calculate the differences between the convex hulls created in the previous step.
polygons_ = polygons.copy(deep=True)
for idx, minute in enumerate(minutes):
if idx != 0:
current_polygon = polygons.loc[[minute]]
previous_polygons = polygons.loc[[minutes[idx-1]]]
diff_polygon = gpd.overlay(current_polygon, previous_polygons, how="difference")
if diff_polygon.shape[0] != 0:
polygons_.at[minute, 'geometry'] = diff_polygon['geometry'].values[0]
return polygons_.copy(deep=True)
# Demonstration of a catchment area drawn from supply location 0 for 5-, 10-, and 15-minute threshold travel times.
# +
supply_idx = 0 # Range can be 0 - 33
# Calculate catchment areas
areas = calculate_catchment_area(G, supply.loc[supply_idx, 'nearest_osm'], [5, 10, 15], distance_unit='time')
areas['val'] = areas.index.astype(str)
# Plot graphs
fig, ax = plt.subplots(figsize=(10, 10))
tracts.plot('TotalPop', ax=ax, scheme='FisherJenks', cmap='Blues')
areas.plot('val', categorical=True, alpha=0.7, ax=ax)
areas.boundary.plot(ax=ax, color='black')
edges.plot(ax=ax, color='black', lw=0.5)
supply_loc = supply.loc[supply.index==supply_idx]
supply_loc.plot(markersize='Total_Bed', ax=ax, color='black')
# -
areas = calculate_catchment_area(G, supply.loc[0, 'nearest_osm'], [5, 10, 15], distance_unit='time')
areas
areas.loc[5, 'geometry']
areas.loc[10, 'geometry']
areas.loc[15, 'geometry']
# The steps are as follows to create multiple polygons with a hole inside.
# 1. Create polygons based on each step of threshold travel time (e.g., 5, 10, 15 minutes)
# 2. Calculate the difference between a polygon with a bigger threshold travel time and the one with a smaller threshold travel time (e.g., 15 minute polygon - 10 minute polygon).
# +
minutes = [5, 10, 15]
polygons = gpd.GeoDataFrame(crs="EPSG:4326")
# Create convex hull for each travel time (minutes), respectively.
for minute in minutes:
# Get the accessible nodes within a certain threshold travel time from the network
access_nodes = nx.single_source_dijkstra_path_length(G,
supply.loc[supply_idx, 'nearest_osm'],
minute,
weight='time'
)
# Create the convex hull of accessible nodes
convex_hull = gpd.GeoSeries(nx.get_node_attributes(G.subgraph(access_nodes), 'geometry')).unary_union.convex_hull
# `convex_hull` is a Shapely Polygon, so need to convert it to GeoDataFrame `polygon`
polygon = gpd.GeoDataFrame({'minutes': [minute], 'geometry': [convex_hull]}, crs="EPSG:4326")
# Append a GeoDataFrame to another GeoDataFrame
polygons = polygons.append(polygon)
polygons = polygons.set_index('minutes')
polygons
# -
# The result will have multiple polygons, but they don't have a hole in them.
polygons.loc[5, 'geometry']
polygons.loc[10, 'geometry']
polygons.loc[15, 'geometry']
# We can take advantage of <a href=https://geopandas.org/en/stable/docs/reference/api/geopandas.overlay.html>`gpd.overlay()`</a> to calculate the difference between two polygons.
gpd.overlay(polygons.loc[[15]], polygons.loc[[10]], how="difference")
# gpd.overlay(polygons.loc[[15]], polygons.loc[[10]], how="difference").plot()
# The key here is that the polygon in the second argument is subtracted from the one in the first argument; reversing the order leaves nothing, since the 10-minute hull lies inside the 15-minute hull.
gpd.overlay(polygons.loc[[10]], polygons.loc[[15]], how="difference")
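The same ordering rule can be seen with plain Shapely geometries; here is a toy example with two concentric buffers standing in for the hulls (illustrative only):

```python
from shapely.geometry import Point

big = Point(0, 0).buffer(2)    # stands in for the 15-minute hull
small = Point(0, 0).buffer(1)  # stands in for the 10-minute hull, fully inside `big`

ring = big.difference(small)   # big minus small -> an annulus with a hole
empty = small.difference(big)  # small minus big -> nothing left
print(ring.is_empty, empty.is_empty)
```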
# If you write a loop to automate this process, it looks like the following.
# +
# Calculate the differences between the convex hulls created in the previous step.
minutes = [5, 10, 15]
polygons_ = polygons.copy(deep=True)
for idx, minute in enumerate(minutes):
print(f'The index of {minute} is {idx}')
current_idx = idx
previous_idx = idx-1
print(f'In the loop, the current index is {current_idx}, and previous index is {previous_idx}')
current_threshold = minutes[current_idx]
previous_threshold = minutes[previous_idx]
print(f'In the loop, the current threshold time is {current_threshold}, and previous threshold time is {previous_threshold}')
print('#-----------#')
# if idx != 0:
# current_polygon = polygons.loc[[minute]]
# previous_polygons = polygons.loc[[minutes[idx-1]]]
# diff_polygon = gpd.overlay(current_polygon, previous_polygons, how="difference")
# if diff_polygon.shape[0] != 0:
# polygons_.at[minute, 'geometry'] = diff_polygon['geometry'].values[0]
# -
# Again, the following summarizes steps that create multiple catchment areas from a single origin.
def calculate_catchment_area(network, nearest_osm, minutes, distance_unit='time'):
polygons = gpd.GeoDataFrame(crs="EPSG:4326")
# Create convex hull for each travel time (minutes), respectively.
for minute in minutes:
access_nodes = nx.single_source_dijkstra_path_length(network, nearest_osm, minute, weight=distance_unit)
convex_hull = gpd.GeoSeries(nx.get_node_attributes(network.subgraph(access_nodes), 'geometry')).unary_union.convex_hull
polygon = gpd.GeoDataFrame({'minutes': [minute], 'geometry': [convex_hull]}, crs="EPSG:4326")
polygon = polygon.set_index('minutes')
polygons = polygons.append(polygon)
    # Calculate the differences between the convex hulls created in the previous step.
polygons_ = polygons.copy(deep=True)
for idx, minute in enumerate(minutes):
if idx != 0:
current_polygon = polygons.loc[[minute]]
previous_polygons = polygons.loc[[minutes[idx-1]]]
diff_polygon = gpd.overlay(current_polygon, previous_polygons, how="difference")
if diff_polygon.shape[0] != 0:
polygons_.at[minute, 'geometry'] = diff_polygon['geometry'].values[0]
return polygons_.copy(deep=True)
# ## Implementation of the advancements to the accessibility measurements
# The original 2SFCA method calculates supply-to-demand ratio (Step 1) as shown below.
# +
# Calculate supply-to-demand ratio of supply location 0
i = 0
dist = 20000
supply_ = supply.copy(deep=True)
supply_['ratio'] = 0
# Create a catchment area from a given location
temp_nodes = nx.single_source_dijkstra_path_length(G, supply.loc[i, 'nearest_osm'], dist, weight='length')
access_nodes = nodes.loc[nodes.index.isin(temp_nodes.keys()), 'geometry']
access_nodes = gpd.GeoSeries(access_nodes.unary_union.convex_hull, crs="EPSG:4326")
# Calculate the population within the catchment area
temp_demand = demand.loc[demand['geometry'].centroid.within(access_nodes[0]), 'TotalPop'].sum()
# Calculate the number of hospital beds in each hospital
temp_supply = supply.loc[i, 'Total_Bed']
# Calculate the number of hospital beds available for 100,000 people
supply_.at[i, 'ratio'] = temp_supply / temp_demand * 100000
supply_.at[i, 'ratio']
# -
# The Enhanced 2SFCA method calculates supply-to-demand ratio (Step 1) as shown below.
# +
minutes = [5, 10, 15]
weights = {5: 1, 10: 0.68, 15: 0.22}
i = 0
supply_ = supply.copy(deep=True)
supply_['ratio'] = 0
# Create multiple catchment areas from a given location
ctmt_area = calculate_catchment_area(G, supply.loc[i, 'nearest_osm'], minutes)
# Calculate the population within each catchment area
ctmt_area_pops = 0
for c_idx, c_row in ctmt_area.iterrows():
temp_pop = demand.loc[demand['geometry'].centroid.within(c_row['geometry']), 'TotalPop'].sum()
print(f'Catchment area within {c_idx} minutes has {temp_pop} people and its weight is {weights[c_idx]}')
ctmt_area_pops += temp_pop * weights[c_idx]
print(f'Accumulated pop is {ctmt_area_pops}')
# Calculate the number of hospital beds in each hospital
temp_supply = supply.loc[i, 'Total_Bed']
# Calculate the number of hospital beds available for 100,000 people
supply_.at[i, 'ratio'] = temp_supply / ctmt_area_pops * 100000
supply_.at[i, 'ratio']
# -
ctmt_area.loc[5, 'geometry']
demand.loc[demand['geometry'].centroid.within(ctmt_area.loc[5, 'geometry'])].plot()
ctmt_area.loc[10, 'geometry']
demand.loc[demand['geometry'].centroid.within(ctmt_area.loc[10, 'geometry'])].plot()
ctmt_area.loc[15, 'geometry']
demand.loc[demand['geometry'].centroid.within(ctmt_area.loc[15, 'geometry'])].plot()
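Numerically, the distance-decay weighting boils down to a weighted sum; here is a standalone sketch with hypothetical ring populations (made-up numbers, not the Chicago tracts):

```python
# Hypothetical populations falling inside each travel-time ring.
ring_pops = {5: 10_000, 10: 20_000, 15: 30_000}
weights = {5: 1, 10: 0.68, 15: 0.22}

# Farther rings contribute a smaller share of their population.
weighted_pop = sum(pop * weights[m] for m, pop in ring_pops.items())
print(weighted_pop)  # 10000*1 + 20000*0.68 + 30000*0.22, i.e. about 30200
```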
# In summary, we can define the functions for Enhanced 2SFCA method as shown below.
# +
def step1_E2SFCA(supply, supply_attr, demand, demand_attr, mobility, thresholds, weights):
"""
Input:
- supply (GeoDataFrame): stores locations and attributes of supply
- supply_attr (str): the column of `supply` to be used for the analysis
- demand (GeoDataFrame): stores locations and attributes of demand
- demand_attr (str): the column of `demand` to be used for the analysis
- mobility (NetworkX MultiDiGraph): Network Dataset obtained from OSMnx
- thresholds (list): the list of threshold travel times e.g., [5, 10, 15]
    - weights (dict): keys are threshold travel times, values are the weights applied to each threshold,
                      e.g., {5: 1, 10: 0.68, 15: 0.22}
Output:
- supply_ (GeoDataFrame):
a copy of supply and it stores supply-to-demand ratio of each supply at `ratio` column
"""
# Your code here (Change the name of the variable according to the inputs)
supply_ = supply.copy(deep=True)
supply_['ratio'] = 0
for i in trange(supply.shape[0]):
# Create multiple catchment areas from a given location
ctmt_area = calculate_catchment_area(mobility, supply.loc[i, 'nearest_osm'], thresholds)
# Calculate the population within each catchment area
ctmt_area_pops = 0
for c_idx, c_row in ctmt_area.iterrows():
temp_pop = demand.loc[demand['geometry'].centroid.within(c_row['geometry']), demand_attr].sum()
ctmt_area_pops += temp_pop*weights[c_idx]
# Calculate the number of hospital beds in each hospital
temp_supply = supply.loc[i, supply_attr]
# Calculate the number of hospital beds available for 100,000 people
supply_.at[i, 'ratio'] = temp_supply / ctmt_area_pops * 100000
return supply_
def step2_E2SFCA(result_step1, demand, mobility, thresholds, weights):
"""
Input:
- result_step1 (GeoDataFrame): stores locations and 'ratio' attribute that resulted in step1
- demand (GeoDataFrame): stores locations and attributes of demand
- mobility (NetworkX MultiDiGraph): Network Dataset obtained from OSMnx
- thresholds (list): the list of threshold travel times e.g., [5, 10, 15]
    - weights (dict): keys are threshold travel times, values are the weights applied to each threshold,
                      e.g., {5: 1, 10: 0.68, 15: 0.22}
Output:
- demand_ (GeoDataFrame):
a copy of demand and it stores the final accessibility measures of each demand location at `ratio` column
"""
# Your code here (Change the name of the variable according to the inputs)
demand_ = demand.copy(deep=True)
demand_['access'] = 0
for j in trange(demand.shape[0]):
ctmt_area = calculate_catchment_area(mobility, demand.loc[j, 'nearest_osm'], thresholds)
ctmt_area_ratio = 0
for c_idx, c_row in ctmt_area.iterrows():
temp_ratio = result_step1.loc[result_step1['geometry'].centroid.within(c_row['geometry']), 'ratio'].sum()
ctmt_area_ratio += temp_ratio * weights[c_idx]
demand_.at[j, 'access'] = ctmt_area_ratio
return demand_
# +
minutes = [5, 10, 15]
weights = {5: 1, 10: 0.68, 15: 0.22}
E_step1 = step1_E2SFCA(supply, 'Total_Bed', demand, 'TotalPop', G, minutes, weights)
E_step2 = step2_E2SFCA(E_step1, demand, G, minutes, weights)
# +
# Plotting accessibility measurement result.
fig, ax = plt.subplots(figsize=(10,10))
hospitals.plot(markersize='Total_Bed', ax=ax, color='black', zorder=2)
E_step2.plot('access', ax=ax, figsize=(10,10), legend=True, cmap='Blues', scheme='FisherJenks')
E_step2.loc[E_step2['access'] == 0].plot(ax=ax, color='grey', zorder=1)
E_step2.boundary.plot(ax=ax, linestyle='dotted', lw=0.5, color='black', zorder=1)
| Week11/Advanced_Spatial_Analysis_Spatial_Accessibility_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing Required Python Packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns',None)
#pd.set_option('display.max_rows',None)
# Reading the final dataset for Feature Selection
bank_df = pd.read_csv('Final_df.csv')
# Getting the info
bank_df.info()
# ### Performing Train, Test Split of the dataset.
# Segregating the Feature Space and the Response Variable
y = bank_df['y']
X = bank_df.drop(columns='y')
# Importing the train test split module from Sklearn
from sklearn.model_selection import train_test_split
# Performing the train-test split using sklearn. The split is stratified using the class labels
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=42,test_size =.2,stratify=y)
X_train.info()
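As a small synthetic check (toy data, independent of the bank dataset) of what `stratify=y` guarantees — both splits keep the original class proportions:

```python
import numpy as np
from sklearn.model_selection import train_test_split

y_toy = np.array([0] * 80 + [1] * 20)     # an 80/20 class imbalance
X_toy = np.arange(100).reshape(-1, 1)

Xtr, Xte, ytr, yte = train_test_split(X_toy, y_toy, test_size=0.2,
                                      random_state=42, stratify=y_toy)
print(ytr.mean(), yte.mean())  # both splits keep ~20% positives
```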
# ## Feature Selection using Mutual Information.
# Loading the numerical columns dataset which contains all the numerical columns & the response variable
num_df = pd.read_csv('Num_df.csv')
# Dropping the Response variable
num_cols = (num_df.drop(columns='y')).columns
# Displaying the names of the numerical columns
num_cols
# Extracting the Dummy or categorical Columns
cat_cols = list(set(X_train.columns)- set(num_cols))
len(cat_cols)
# Importing MinMaxScaler from sklearn
from sklearn.preprocessing import MinMaxScaler
# Normalizing the numerical columns of X_train before computing mutual information, since the estimator uses a KNN algorithm that is sensitive to feature scale.
scale = MinMaxScaler()
X_train_norm = scale.fit_transform(X_train[num_cols])
# Converting the numpy array to a dataframe. V-IMP: make sure to put back the original row labels, otherwise a new
# RangeIndex beginning from 0 is assigned to the new df: Num_df_scaled
Num_df_scaled = pd.DataFrame(X_train_norm,columns=num_cols,index=X_train.index)
Num_df_scaled.info()
X_train[cat_cols].info()
# Concatenating Scaled Numerical columns with the categorical columns of X_train
X_train_scaled = pd.concat([Num_df_scaled,X_train[cat_cols]],axis=1)
X_train_scaled.info()
# Importing the Mutual Information classifier from sklearn
from sklearn.feature_selection import mutual_info_classif
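Before scoring the bank features, a quick synthetic sanity check (illustrative data only) of what `mutual_info_classif` measures — a feature carrying the label scores far higher than pure noise:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.RandomState(42)
y_toy = rng.randint(0, 2, size=500)
X_toy = np.column_stack([
    y_toy + 0.1 * rng.randn(500),  # informative: nearly a copy of the label
    rng.randn(500),                # pure noise, unrelated to the label
])

mi_toy = mutual_info_classif(X_toy, y_toy, random_state=42)
print(mi_toy[0] > mi_toy[1])  # the informative feature wins
```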
# Computing Mutual info. between feature space and Response variable.
MI_X_train_y_train = mutual_info_classif(X_train_scaled,y_train,random_state=42)
# converting Mutual Info. values to corresponding Series.
MI = pd.Series(MI_X_train_y_train,index=X_train_scaled.columns)
# Plotting the Mutual information corresponding to various features on the horizontal barplot.
fig = plt.figure(figsize=(20,20))
MI.plot.barh()
plt.show()
# Displaying the number of features having MI with y > .001
sum((MI>.001))
# Selecting only features having MI >= .001.
X_train_red = X_train[(MI.loc[MI >= .001]).index]
# getting the info of the reduced dataset
X_train_red.info()
# Removing the corresponding features having MI less than .001 from the test set
X_test_red = X_test[(MI.loc[MI>=.001]).index]
X_test_red.info()
# ## Standardizing the numerical columns of the Feature space
num_cols
len(num_cols)
# Getting the numerical columns from the reduced training set
num_cols_r = list(set(num_cols).intersection(set(X_train_red.columns)))
num_cols_r
len(num_cols_r)
# ### From the above output we can clearly see that all the numerical columns are present in the reduced training set.
# Getting the Categorical columns of the reduced training set
cat_cols_r = list(set(X_train_red.columns) - set(num_cols_r))
len(cat_cols_r)
# Importing the Standard scaler from Sklearn
from sklearn.preprocessing import StandardScaler
# Instantiating the Standard Scaler object & fit_transforming the Training set.
st_scaler = StandardScaler()
arr = st_scaler.fit_transform(X_train_red[num_cols_r])
# Converting the array to the corresponding dataframe
Num_df_scaled = pd.DataFrame(arr,columns=num_cols_r,index=X_train_red.index)
Num_df_scaled.info()
# Concatenating the Numerical columns with Categorical columns of Training Data
X_train_red_st = pd.concat([Num_df_scaled,X_train_red[cat_cols_r]],axis=1)
X_train_red_st.info()
# ### Standardizing the numerical columns of the test data
# Transforming the Numerical columns of the test set with the same standard scaler object as was used to fit Training set
arr1 = st_scaler.transform(X_test_red[num_cols_r])
# Converting the array to the corresponding dataframe
Num_df_scaled = pd.DataFrame(arr1,columns=num_cols_r,index=X_test_red.index)
Num_df_scaled.info()
# Concatenating the Numerical columns with Categorical columns of Test Data
X_test_red_st = pd.concat([Num_df_scaled,X_test_red[cat_cols_r]],axis=1)
X_test_red_st.info()
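The essential point in the cell above is that the test set is transformed with statistics learned from the training set only; a minimal standalone sketch (toy numbers):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train_col = np.array([[1.0], [2.0], [3.0]])  # training mean is 2.0
test_col = np.array([[2.0]])

sc = StandardScaler().fit(train_col)         # statistics come from train only
print(sc.transform(test_col)[0, 0])          # 2.0 maps to 0.0 under the train mean
```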
# Saving the Reduced Training set as a CSV File
X_train_red_st.to_csv('X_train_final.csv',index=False)
# Saving the Reduced Test set as a CSV File
X_test_red_st.to_csv('X_test_final.csv',index=False)
# Saving the training labels set as a CSV File
y_train.to_csv('y_train_final.csv',index=False)
# Saving the test labels set as a CSV File
y_test.to_csv('y_test_final.csv',index=False)
# ## Creating Full-Feature Standardized X_train and X_test
# Loading the numerical columns dataset
num_df = pd.read_csv('Num_df.csv')
# Dropping the Response variable & extracting columns
num_cols = (num_df.drop(columns='y')).columns
num_cols
len(num_cols)
# Extracting the Dummy Columns
cat_cols = list(set(X_train.columns)- set(num_cols))
len(cat_cols)
# Making use of Standard scaler to fit transform Numerical columns of full X_train
st_scaler = StandardScaler()
arr = st_scaler.fit_transform(X_train[num_cols])
# Converting transformed array arr to Dataframe
df_scaled = pd.DataFrame(arr,columns=num_cols,index=X_train.index)
df_scaled.info()
# Combining the scaled numerical columns of the X_train with Categorical columns
X_train_full = pd.concat([df_scaled,X_train[cat_cols]],axis=1)
# Standardizing the numerical columns of X_test
arr_test = st_scaler.transform(X_test[num_cols])
# Converting transformed array arr_test to Dataframe
df_scaled_test = pd.DataFrame(arr_test,columns=num_cols,index=X_test.index)
# Combining the scaled numerical columns of the X_test with Categorical columns
X_test_full = pd.concat([df_scaled_test,X_test[cat_cols]],axis=1)
# Saving the Full Feature Training Feature set as a CSV File
X_train_full.to_csv('X_train_full_final.csv',index=False)
# Saving the Full Feature Test Feature set as a CSV File
X_test_full.to_csv('X_test_full_final.csv',index=False)
| 4_Feature_Selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: manning
# language: python
# name: manning
# ---
# ### Objective
# - Use the model that was fit to obtain the accuracy score, check for overfitting/underfitting and compare it to a dummy baseline model.
# - Use the fitted model to obtain the confusion matrix and the classification report and understand the various metrics that can be derived from it.
#
#
import pandas as pd
import seaborn as sns
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import precision_recall_fscore_support
from sklearn.preprocessing import binarize
import matplotlib.pyplot as plt
# ##### 1. Load the dataset that was cleaned (from the data directory) and see if it requires any more cleaning after reading it (hint: Check the first column). Feed the train data into a Logistic Regression model with an arbitrary random state.
# * Feel free to play around with the parameters of the LogisticRegression class.
# Read cleaned data training, test, labels
X_train = pd.read_pickle("../data/X_train.pkl")
X_test = pd.read_pickle("../data/X_test.pkl")
y_train = pd.read_pickle("../data/y_train.pkl")
y_test = pd.read_pickle("../data/y_test.pkl")
X_train.columns
X_train['MinTemp'].describe()
# Not seeing any irregularities in the first column
# **Logistic Regression**
# Create Logistic Regression - Rain Prediction
log_regression = LogisticRegression(solver='liblinear', random_state=0)
# Train the model
log_regression.fit(X_train, y_train)
# ##### Obtain the accuracy of both the train set and the test set.
# * Compare the two accuracies and observe if there are any signs of underfitting/overfitting.
# * Create two dummy baseline models, i.e., models in which the labels are all ‘Yes’ or all ‘No’.
# * Obtain the accuracy of these two models and compare it to the Rain Prediction model.
#
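# The dummy baselines described above can also be built with scikit-learn's `DummyClassifier` instead of constructing the label arrays by hand. A minimal, self-contained sketch — the synthetic `X`/`y` below stand in for this notebook's data and are not part of it:

```python
# Sketch: all-no / all-yes baselines via DummyClassifier.
import numpy as np
from sklearn.dummy import DummyClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (rng.random(100) < 0.3).astype(int)  # synthetic labels, ~30% "yes"

for label in (0, 1):
    # strategy='constant' always predicts the given label
    dummy = DummyClassifier(strategy='constant', constant=label)
    dummy.fit(X, y)
    print(f"All-{label} baseline accuracy: {dummy.score(X, y):.2f}")
```

The baseline accuracy is simply the proportion of that label in the data, which is the number any real model needs to beat.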
# Predict using test data
y_prediction_test = log_regression.predict(X_test)
test_accuracy_score = accuracy_score(y_test, y_prediction_test)
print(f"Accuracy score for test data is: {test_accuracy_score*100:.2f}%")
# Predict using train data
y_prediction_train = log_regression.predict(X_train)
train_accuracy_score = accuracy_score(y_train, y_prediction_train)
print(f"Accuracy score for train data is: {train_accuracy_score*100:.2f}%")
# Accuracy scores for the test data and train data are very similar, so there is no indication of overfitting.
# Creating two dummy baseline models by making the y variable all yes(1) or all no(0)
# All No(0) baseline
baseline_no = np.repeat(0, len(y_train))
# Score the baseline against the actual labels, not the model's predictions
all_no_accuracy_score = accuracy_score(y_train, baseline_no)
print(f"Accuracy score for a baseline where the y variable is all no (0): {all_no_accuracy_score*100:.2f}%")
# All Yes(1) baseline
baseline_yes = np.repeat(1, len(y_train))
all_yes_accuracy_score = accuracy_score(y_train, baseline_yes)
print(f"Accuracy score for a baseline where the y variable is all yes (1): {all_yes_accuracy_score*100:.2f}%")
# ##### 2. Obtain the confusion matrix and the classification report for the model.
# * Understand the difference between the four splits of the data in the confusion matrix, and research about the different metrics that can be obtained using this matrix - precision, recall, f1-score, specificity, true positive rate and false positive rate.
# * Likewise, understand the values of the metrics obtained from the classification report.
#
confusion_m = confusion_matrix(y_test,y_prediction_test)
confusion_m
# Obtain TN, FP, FN, TP
TN, FP, FN, TP = confusion_matrix(y_test,y_prediction_test).ravel()
# Print the four pieces of the confusion matrix
print(f"True Positive (TP):{TP:,}")
print(f"True Negative (TN):{TN:,}")
print(f"False Positive (FP):{FP:,}")
print(f"False Negative (FN):{FN:,}")
# Create a DataFrame for the confusion-matrix heatmap
cm_dict={'Actual Positive':[TP,FN],'Actual Negative':[FP,TN]}
df_confusion_m = pd.DataFrame(data=cm_dict, columns=["Actual Positive", "Actual Negative"], index=["Predicted Positive","Predicted Negative"])
# Display confusion matrix heatmap
fig, ax = plt.subplots(figsize=(12,10))
sns.heatmap(
df_confusion_m,
annot=True,
fmt='d',
cmap=sns.diverging_palette(1, 255, n=50),
ax=ax
)
ax.set_xticklabels(
ax.get_xticklabels(),
rotation=45,
horizontalalignment='right'
);
# Show classification report with precision, recall, f1-score, support
print(classification_report(y_test,y_prediction_test))
# Accuracy
accuracy = (TP+TN)/(TP+TN+FP+FN)
accuracy
# Display precision, recall, fscore from function
precision_recall_fscore_support(y_test,y_prediction_test, average='binary')
# Calculate manually
# Precision calculation TP/(TP+FP)
precision = TP/(TP+FP)
precision
# Calculate manually
# Recall (true positive rate) calculation TP/(TP+FN)
recall = TP/(TP+FN)
recall
# Calculate manually
# false positive rate calculation FP/(FP+TN)
false_PR = FP/(FP+TN)
false_PR
# Calculate manually
# F1 calculation 2*TP/(2*TP+FP+FN)
f1 = 2*TP/(2*TP+FP+FN)
f1
# Calculate manually
# Specificity calculation TN/(TN+FP)
specificity = TN / (TN + FP)
specificity
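# The manual calculations above can be collected into a single helper; a sketch assuming the TN/FP/FN/TP order returned by `confusion_matrix(...).ravel()` (the function name `cm_metrics` is illustrative, not from any library):

```python
# Sketch: derive the common classification metrics from the four
# confusion-matrix counts (TN, FP, FN, TP).
def cm_metrics(TN, FP, FN, TP):
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)            # true positive rate
    specificity = TN / (TN + FP)
    fpr = FP / (FP + TN)               # false positive rate = 1 - specificity
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (TP + TN) / (TP + TN + FP + FN)
    return {'accuracy': accuracy, 'precision': precision, 'recall': recall,
            'specificity': specificity, 'fpr': fpr, 'f1': f1}

print(cm_metrics(TN=50, FP=10, FN=5, TP=35))
```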
# ##### 3. By default, the split between “Yes” and “No” occurs at a probability of 0.5. Try changing this threshold to a value at which the Type 2 error decreases (which would increase the Type 1 error as a trade-off).
# * Try to understand why one would need to alter these threshold values, and its implications on the model.
#
# Probability of no rain(0) and rain(1) for the first 10 records
# With a threshold of 0.5, any rain probability greater than 0.5 indicates yes, it will rain
log_regression.predict_proba(X_test)[0:10]
# * Type 1 error = False Positives - Predicted Rain, Actual no rain
# * Type 2 Error = False Negatives - Predicted No Rain, Actual rain
# Generate thresholds 0.1 to 0.9 and calculate confusion matrix
for i in range(1,10):
    # Predict original probability of rain (column 1)
    y_rain_prediction = log_regression.predict_proba(X_test)[:,1]
    # Reshape to pass to binarize function
    y_rain_prediction = y_rain_prediction.reshape(-1,1)
    new_threshold = i/10
    print(f"\nThreshold:{new_threshold}")
    y_new_rain_prediction = binarize(y_rain_prediction, threshold=new_threshold)
    new_confusion_m = confusion_matrix(y_test, y_new_rain_prediction)
    # Obtain TN, FP, FN, TP
    TN, FP, FN, TP = new_confusion_m.ravel()
    print(f"False Positives (Type I errors):{FP:,}")
    print(f"False Negatives (Type II errors):{FN:,}")
    accuracy = (TP+TN)/(TP+TN+FP+FN)
    precision = TP/(TP+FP)
    recall = TP/(TP+FN)
    print(f"Accuracy:{accuracy*100:.2f}%")
    print(f"Precision:{precision*100:.2f}%")
    print(f"Recall:{recall*100:.2f}%")
| part_two/Part 2 - Milestone 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
df=pd.read_csv('../data/SavingAccount.csv')
# Convert opening_date from object to an actual timestamp
df['opening_date'] = pd.to_datetime(df['opening_date'])
type(df['opening_date'].iloc[0])
df['year'] = df['opening_date'].dt.year
df['Month'] = df['opening_date'].dt.month
df['Day'] = df['opening_date'].dt.day
# # Dataset
df.head()
tmp=df['region'].value_counts().to_dict()
plt.title('Saving Account by Region')
sns.countplot(x='region',data=df)
plt.title('Saving Account by Region with %')
plt.pie(x=list(tmp.values()),labels=list(tmp.keys()),data=df,shadow=True,startangle=-40,autopct='%1.1f%%',explode=(0.1,0.1,0,0,0))
centre_circle = plt.Circle((0,0),0.80,color='black', fc='white',linewidth=1.25)
plt.gca().add_artist(centre_circle)
plt.title('Accounts by Gender ratio in each Region')
sns.countplot(x='region',hue='gender',data=df)
plt.title('Region wise Account type')
sns.countplot(x='region',hue='acc_type',data=df)
plt.title('Region wise Qualification')
sns.countplot(x='region',hue='qualification',data=df)
plt.title('Customer having Bike')
sns.countplot(x='region',hue='having_bike',data=df)
plt.title('Customer having Car')
sns.countplot(x='region',hue='having_car',data=df)
plt.title('Customer having Home')
sns.countplot(x='region',hue='having_home',data=df)
x=df['gender'].value_counts().to_dict()
plt.title('Saving Account by Gender')
plt.pie(x=list(x.values()),labels=list(x.keys()),data=df,shadow=True,startangle=-40,autopct='%1.1f%%',explode=(0.1,0))
centre_circle = plt.Circle((0,0),0.80,color='black', fc='white',linewidth=1.25)
plt.gca().add_artist(centre_circle)
plt.title('Saving Account Type')
sns.countplot(x='acc_type',data=df)
tm=df['qualification'].value_counts().to_dict()
plt.title('Customer by Qualification')
plt.pie(x=list(tm.values()),labels=list(tm.keys()),data=df,shadow=True,startangle=-40,autopct='%1.1f%%',explode=(0.1,0.1,0.1,0,0,0))
centre_circle = plt.Circle((0,0),0.80,color='black', fc='white',linewidth=1.25)
plt.gca().add_artist(centre_circle)
plt.title('Customer Account opening by Year')
sns.countplot(x='year',data=df)
plt.figure(figsize=(10,5))
plt.title('Customer Account opening by Year in each Region')
sns.countplot(x='year',hue='region',data=df)
plt.title('Monthly Account Opening')
sns.countplot(x='Month',data=df)
plt.figure(figsize=(12,5))
plt.title('Monthly Account Opening By Region')
sns.countplot(x='Month',hue='region',data=df)
x=df['Day'].value_counts().to_dict()
plt.title('Daily Account Opening')
sns.countplot(x='Day',data=df)
| Notebooks/SavingAccount.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loan Repayment Prediction
# by <NAME>
#
# In this decision tree and random forest project, I will explore publicly available data from [LendingClub.com](https://www.lendingclub.com). Lending Club connects people who need money (borrowers) with people who have money (investors).
#
# I will use lending data from 2007-2010 and try to classify and predict whether or not the borrower paid back their loan in full. The link to download the data is [here](https://www.lendingclub.com/info/download-data.action). I have the cleaned version of the data from the above link in a csv format which I will be using for this project.
#
# Here are what the columns in the dataset represent:
# * credit.policy: 1 if the customer meets the credit underwriting criteria of LendingClub.com, and 0 otherwise.
# * purpose: The purpose of the loan (takes values "credit_card", "debt_consolidation", "educational", "major_purchase", "small_business", and "all_other").
# * int.rate: The interest rate of the loan, as a proportion (a rate of 11% would be stored as 0.11). Borrowers judged by LendingClub.com to be more risky are assigned higher interest rates.
# * installment: The monthly installments owed by the borrower if the loan is funded.
# * log.annual.inc: The natural log of the self-reported annual income of the borrower.
# * dti: The debt-to-income ratio of the borrower (amount of debt divided by annual income).
# * fico: The FICO credit score of the borrower.
# * days.with.cr.line: The number of days the borrower has had a credit line.
# * revol.bal: The borrower's revolving balance (amount unpaid at the end of the credit card billing cycle).
# * revol.util: The borrower's revolving line utilization rate (the amount of the credit line used relative to total credit available).
# * inq.last.6mths: The borrower's number of inquiries by creditors in the last 6 months.
# * delinq.2yrs: The number of times the borrower had been 30+ days past due on a payment in the past 2 years.
# * pub.rec: The borrower's number of derogatory public records (bankruptcy filings, tax liens, or judgments).
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
sns.set_style('whitegrid')
# +
# Extracting the data from csv file
loans = pd.read_csv('loan_data.csv')
loans.head()
# +
# Info of the loans dataframe
loans.info()
# +
# Statistical description of the loans dataframe
loans.describe()
# -
# **Exploratory Data Analysis**
# +
# Histogram of two FICO distributions, one for each credit.policy outcome
plt.figure(figsize=(11,7))
loans[loans['credit.policy']==1]['fico'].hist(alpha=0.5, color='blue', bins=30, label='Credit.Policy=1')
loans[loans['credit.policy']==0]['fico'].hist(alpha=0.5, color='red', bins=30, label='Credit.Policy=0')
plt.legend()
plt.xlabel('FICO')
# +
# Histogram of two FICO distributions, one for each not.fully.paid outcome
plt.figure(figsize=(11,7))
loans[loans['not.fully.paid']==1]['fico'].hist(alpha=0.5, color='blue', bins=30, label='not.fully.paid=1')
loans[loans['not.fully.paid']==0]['fico'].hist(alpha=0.5, color='red', bins=30, label='not.fully.paid=0')
plt.legend()
plt.xlabel('FICO')
# -
# Countplot for the counts of loans by purpose, with hue by not.fully.paid
plt.figure(figsize=(11,7))
sns.countplot(data=loans, hue='not.fully.paid', x='purpose', palette='Set1')
# +
# Trend between FICO score and interest rate
sns.jointplot(data=loans, x='fico', y='int.rate', color='blue')
# +
# Comparing the trend between not.fully.paid outcomes and credit.policy
plt.figure(figsize=(11,7))
sns.lmplot(data=loans, x='fico', y='int.rate', hue='credit.policy', col='not.fully.paid', palette='Set1')
# -
# **Dealing with the categorical columns of the data**
# +
# Creating dummy variables for the 'purpose' column of the dataframe
categorical_ft = ['purpose']
final_data = pd.get_dummies(loans, columns=categorical_ft, drop_first=True)
final_data.head()
# -
# **Decision Tree Model**
# +
# Splitting the data into training and testing data
from sklearn.model_selection import train_test_split
X = final_data.drop('not.fully.paid', axis=1)
y = final_data['not.fully.paid']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
# +
# Training the decision tree model
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
# +
# Predicting from the model
pred_dt = dt.predict(X_test)
# -
# **Random Forest Model**
# +
# Training the random forest model
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=600)
rf.fit(X_train, y_train)
# +
# Predicting from the model
pred_rf = rf.predict(X_test)
# -
# **Evaluating the Metrics and Comparing Decision Tree Model with Random Forest Model**
# +
# Metrics for decision tree
from sklearn import metrics
print('Confusion Matrix for Decision Tree:', '\n', metrics.confusion_matrix(y_test, pred_dt), '\n')
print('Classification Report for Decision Tree:', '\n', metrics.classification_report(y_test, pred_dt))
# +
# Metrics for random forest
print('Confusion Matrix for Random Forest:', '\n', metrics.confusion_matrix(y_test, pred_rf), '\n')
print('Classification Report for Random Forest:', '\n', metrics.classification_report(y_test, pred_rf))
# -
# To conclude, the Random Forest model appears to perform better than the Decision Tree model overall, though the comparison ultimately depends on which metric one is trying to optimize for.
| Loan-Repayment-Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# name: python2
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/google/timesketch/blob/master/notebooks/MUS2019_CTF.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="2y1Dij2Z7C4n" colab_type="text"
# # Magnet User Summit CTF 2019
#
# + [markdown] id="V3DWqc_275Jm" colab_type="text"
# The folks at [Magnet Forensics](https://www.magnetforensics.com/) had a [conference](https://magnetusersummit.com/) recently, and as part of it they put together a digital forensics-themed Capture the Flag competition. I wasn't able to attend, but thankfully they [released the CTF online](https://www.hecfblog.com/2019/04/daily-blog-657-mus2019-dfir-ctf-open-to.html) a few days after the live competition ended.
#
# It looked like a lot of fun and I wanted to take a crack at it using the open source tools we use/build here at Google.
#
# + [markdown] id="1_pWpmTdnYJM" colab_type="text"
# ## Forensics Preprocessing
# + [markdown] id="2Hph_KETIvOT" colab_type="text"
# I'm going to focus on how to find the answers to the CTF questions after all the processing has been done. I'll quickly summarize the processing steps I did to get to the state when I pick up my walkthrough.
#
# I started off by processing the provided E01 image with a basic log2timeline command; nothing special added:
#
#
# ```
# log2timeline.py MUS2019-CTF.plaso MUS-CTF-19-DESKTOP-001.E01
# ```
#
# Once that finished, I went to [Timesketch](https://github.com/google/timesketch), made a new sketch, and uploaded the MUS2019-CTF.plaso file I just made. The *.plaso* file is a database containing the results of my log2timeline run; Timesketch can read it and provide a nice, collaborative interface for reviewing and exploring that data.
#
# Most of what I'm going to show you is done in Colab by accessing the Timesketch API in Python. You can do most of the steps in the Timesketch web interface directly, but I wanted to demonstrate how you can use Python, Colab, Timesketch, and Plaso together to work a case.
#
# + [markdown] id="w-UPKnDLnbBF" colab_type="text"
# ## Timesketch & Colab Setup
#
# + [markdown] id="0dmFr3P_9Ao7" colab_type="text"
# First, if you want to run this notebook and play along, click the 'Connect' button at the top right of the page. The [Timesketch GitHub](https://github.com/google/timesketch) has Colab ([Timesketch and Colab](https://colab.research.google.com/github/google/timesketch/blob/master/notebooks/colab-timesketch-demo.ipynb)) that walks through how to install, connect, and explore a Sketch using Colab. Please check it out if you want a more thorough explanation of the setup; I'm just going to show the commands you need to run to get it working:
# + id="h35lMbAxIeYE" colab_type="code" colab={}
# Install the TimeSketch API client if you don't have it
# !pip install timesketch-api-client
# Import some things we'll need
from timesketch_api_client import client
import pandas as pd
pd.options.display.max_colwidth = 60
# + [markdown] id="ph1jGHR5JjZo" colab_type="text"
# ### Connect to Timesketch
# + [markdown] id="i5JCNPkL87Cq" colab_type="text"
# By default, this will connect to the public demo Timesketch server, which [<NAME>](https://twitter.com/HECFBlog) has graciously allowed to host a copy of the Plaso timeline of the MUS2019-CTF. Thanks Dave!
# + id="dqwwIBOpJfZi" colab_type="code" cellView="form" colab={}
#@title Client Information { run: "auto"}
SERVER = 'https://demo.timesketch.org' #@param {type: "string"}
USER = 'demo' #@param {type: "string"}
PASSWORD = '<PASSWORD>' #@param {type: "string"}
ts_client = client.TimesketchApi(SERVER, USER, PASSWORD)
# + [markdown] id="OZ4CahZZPjht" colab_type="text"
# Now that we've connected to the Timesketch server, we need to select the Sketch that has the CTF timeline.
#
# First we'll list the available sketches, then print their names:
# + id="vsEZI45porba" colab_type="code" colab={}
sketches = ts_client.list_sketches()
for i, sketch in enumerate(sketches):
    print('[{0:d}] {1:s}'.format(i, sketch.name))
# + [markdown] id="E6RX0jqzQrRq" colab_type="text"
# Then we'll select the MUS2019-CTF sketch (shown as sketch 0 above; you can change the number below to select a different sketch):
# + id="hvCl6L7ZQrsZ" colab_type="code" colab={}
ctf = sketches[0]
# + [markdown] id="4P9vvN2KWRjX" colab_type="text"
# Lastly, I'll briefly explain a few parameters of the **explore** function, which we'll use heavily when answering questions.
#
# <sketch_name>.explore() is how we send queries to Timesketch and get results back. **query_string**, **return_fields**, and **as_pandas** are the main parameters I'll be using:
# - query_string: This is the same as the query you'd enter if you were using the Timesketch web interface.
# - return_fields: Here we specify what fields we want back from Timesketch. This is where we can get really specific using Colab and only get the things we're interested in (which varies depending on what data types we're expecting back).
# - as_pandas: This is just a boolean value that tells Timesketch to return a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html), rather than a dictionary. We'll have this set to True in all our queries, since DataFrames are awesome!
#
# Okay, enough setup. Let's get to answering questions!
# + [markdown] id="tzvfE55GLKhB" colab_type="text"
# # Questions
#
# + [markdown] id="Pfy7e0iO9MJK" colab_type="text"
# 
#
# I grouped the questions from the 'Basic - Desktop' section into three categories: NTFS, TeamViewer, and Registry.
# + [markdown] id="SBtoNqZcp_Oi" colab_type="text"
# ## NTFS Questions
# This first set of questions relate to aspects of NTFS: MFT entries, sequence numbers, USN entries, and VSNs.
# + [markdown] id="t4G6-lVUqN_V" colab_type="text"
# As a little refresher, the 64-bit **file reference address** (or number) is made up of the **MFT entry** (48 bits) and **sequence** (16 bits) numbers. We often see this represented as something like 1234-2, with 1234 being the MFT entry number and 2 being the sequence number. Plaso calls the MFT entry number the **inode**, since that's the more generic term that applies across file systems.
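# The split described above is simple bit arithmetic; a minimal sketch (`split_file_reference` is a hypothetical helper for illustration, not part of Plaso or Timesketch):

```python
# Sketch: split a 64-bit NTFS file reference into its MFT entry
# number (low 48 bits) and sequence number (high 16 bits).
def split_file_reference(ref):
    entry = ref & 0xFFFFFFFFFFFF   # low 48 bits
    seq = ref >> 48                # high 16 bits
    return entry, seq

# Rebuild the reference for entry 1234, sequence 2, then split it again
ref = (2 << 48) | 1234
print(split_file_reference(ref))  # (1234, 2)
```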
# + [markdown] id="aVcSxh925V2T" colab_type="text"
# ### Q: What is the name of the file associated with MFT entry number 102698?
# + [markdown] id="m6apzEsfAGeo" colab_type="text"
# Since Plaso parses out the MFT entry (or as it calls it, inode) into its own field, let's do a query for all records with that value:
# + id="0n6k7v-ioy4_" colab_type="code" colab={}
ts_results = ctf.explore(
'inode:102698',
return_fields='datetime,timestamp_desc,data_type,inode,filename',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','inode','filename']]
# + [markdown] id="zuBoWImjAjs-" colab_type="text"
# Multiple results, as is expected since Plaso creates multiple records for different types of timestamps, but they all point to the same filename: **/Users/Administrator/Downloads/TeamViewer_Setup.exe**
# + [markdown] id="RPH7R0tKtLXn" colab_type="text"
# ### Q: What is the file name that represented MFT entry 60725 with a sequence number of 10?
# + [markdown] id="lHvfLumPlC5J" colab_type="text"
# The quick way to answer this is to just search for the MFT entry number (60725) and look for references to sequence number 10 in the message field:
# + id="idzQZADYj8LF" colab_type="code" colab={}
ts_results = ctf.explore(
'60725',
return_fields='datetime,timestamp_desc,data_type,filename,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','filename','message']]
# + [markdown] id="lEAo1lLcUeBb" colab_type="text"
# That's a bunch of rows, so let's filter it down by searching for messages that contain '60725-10':
# + id="iPCYySiEVB6P" colab_type="code" colab={}
ts_results[ts_results.message.str.contains('60725-10')]
# + [markdown] id="ShIpZoBRV-xi" colab_type="text"
# That filename is really long and cut off; let's just select that field, then deduplicate using set():
# + id="7xRO5-FFV_KN" colab_type="code" colab={}
set(ts_results[ts_results.message.str.contains('60725-10')].filename)
# + [markdown] id="VX8kLLl4ltAI" colab_type="text"
# Another way to solve this is to query for the file reference number directly. That's not as easy as it sounds, since Plaso stores it in the hex form ([I'm working on fixing that](https://github.com/log2timeline/plaso/issues/2453)). We can work with that though!
#
# Let's do the same query as above, but add the file_reference field:
# + id="qzemkUIAl-dX" colab_type="code" colab={}
ts_results = ctf.explore(
'60725',
return_fields='datetime,timestamp_desc,data_type,file_reference,filename,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','file_reference','filename','message']]
# + [markdown] id="ujdlJk_a9Hxo" colab_type="text"
# The *file_reference* value is not the format we want, since it's hard to tell what the sequence number is. We can convert it to a more useful form though:
# + id="qM4QlMgqmQRk" colab_type="code" colab={}
# Drop any rows with NaN, since they aren't what we're looking for and will
# break the below function.
ts_results = ts_results.dropna()
pd.options.display.max_colwidth = 110
# Replace the file_reference hex value with the human-readable MFT-Seq version.
# This is basically what Plaso does to display the result in the 'message'
# string we searched for.
ts_results['file_reference'] = ts_results['file_reference'].map(
lambda x: '{0:d}-{1:d}'.format(int(x) & 0xffffffffffff, int(x) >> 48))
ts_results[['datetime','timestamp_desc','data_type','file_reference','filename']]
# + [markdown] id="5zq9iqSJ9bA6" colab_type="text"
# There. Now we have the file_reference number in an easier-to-read format, and the history of all filenames that MFT entry 60725 has had! It's easy to look for the entry with a sequence number of 10 and get our answer.
# + [markdown] id="hoj0JdKQrDru" colab_type="text"
# ### Q: Which file name represents the USN record where the USN number is 546416480?
# + [markdown] id="tDnlbF-8rfIF" colab_type="text"
# Like other questions, the quick, generic way to answer is to just search for the unique detail; in this case, search in Timesketch for '546416480'. I'll show the more targeted way below, but it's pretty simple:
# + id="lPObip1NrOtn" colab_type="code" colab={}
ts_results = ctf.explore(
'update_sequence_number:546416480',
return_fields='datetime,timestamp_desc,data_type,update_sequence_number,filename',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','update_sequence_number','filename']]
# + [markdown] id="TfRrngTvysQi" colab_type="text"
# ### Q: What is the MFT sequence number associated with the file "\Users\Administrator\Desktop\FTK_Imager_Lite_3.1.1\FTK Imager.exe"?
# + [markdown] id="Xq0HG3CV2s1s" colab_type="text"
# We'll handle this question like other ones involving the file reference address, except in this case we first need to find the MFT entry number (or inode) from the file name. Searching for the whole file path in Timesketch is problematic (slashes among other things), so let's search for the file name and then verify the path is right:
# + id="lvxp9ltq0plT" colab_type="code" colab={}
ts_results = ctf.explore(
'FTK Imager.exe',
return_fields='datetime,timestamp_desc,data_type,inode,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','inode','message']]
# + [markdown] id="MZ14rd0f3rCC" colab_type="text"
# In the second row of the results, we can find the correct path we're looking for in the message and see that the corresponding inode is 99916. We could do another search, similar to how we answered other questions... or we could just look down a few rows for a USN entry that shows: "FTK Imager.exe File reference: 99916-**4**". There's the answer!
# + [markdown] id="VjF1NkzwsROK" colab_type="text"
# ### Q: What is the Volume Serial Number of the Desktop's OS volume?
# + [markdown] id="3d-el0ezsoQi" colab_type="text"
# I know the VSN can be found in multiple places, but the first one I thought of was as part of a Prefetch file, so let's do it that way.
#
# I'll search for all 'volume creation' Prefetch records, since I don't really care about which particular one, beyond that it's from the OS drive.
# + id="1gt8_VMxsQ77" colab_type="code" colab={}
ts_results = ctf.explore(
'data_type:"windows:volume:creation"',
return_fields='datetime,timestamp_desc,data_type,device_path,hostname,serial_number,message',
as_pandas=True)
pd.options.display.max_colwidth = 70
ts_results[['datetime','timestamp_desc','data_type','device_path','hostname','serial_number','message']]
# + [markdown] id="KUtZWLKmuWet" colab_type="text"
# You can see the VSN in a readable format at the end of the device_path or in the message string. I'm only seeing one value here, so we don't need to determine which drive was the OS one. If we did, I'd look for some system processes that need to run from the OS drive to get the right VSN.
#
# That's good enough for the question, but let's also convert the serial_number field from an integer to the hex format the answer wants, just to be sure:
# + id="CVcrtsTfvF6T" colab_type="code" colab={}
'{0:08X}'.format(3438183451)
# + [markdown] id="4Ig0sZEC8fYa" colab_type="text"
# ## TeamViewer Questions
# The next group of questions involved [TeamViewer](https://www.teamviewer.com/en-us/), a common remote desktop program.
# + [markdown] id="qsH4fUlqIUho" colab_type="text"
# ### Q: Which user installed Team Viewer?
# + [markdown] id="Hl_qGSxrLzN6" colab_type="text"
# We can start searching very broadly, then focus in on anything that stands out. Let's just search everything we have for "TeamViewer":
# + id="Hs2NyOf0IUS2" colab_type="code" colab={}
ts_results = ctf.explore(
'TeamViewer',
return_fields='datetime,timestamp_desc,data_type,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','message']]
# + [markdown] id="ACBv7solSKa9" colab_type="text"
# That returned a lot of results (600+). We could page through them all, but why not see if there are any interesting clusters first? That sounds like a job for a visualization!
#
# You can do this multiple ways; I'll do it in Python in a second, but the explanation is a bit complicated. The easier way is to do the search in Timesketch, then go to Charts > Histogram:
#
# 
# + [markdown] id="7zRjoel2wRdA" colab_type="text"
# And here's how you'd do something similar in Python:
# + id="fxJN_YxSScJu" colab_type="code" colab={}
ts_results = ts_results.set_index('datetime')
ts_results['2018':].message.resample('D').count().plot()
# + [markdown] id="Khe0KmZbyWl0" colab_type="text"
# Okay, so from the graphs it looks like we have a good cluster at the end of February; let's look closer. I'll slice the results to only show after 2019-02-20:
# + id="y-hGtMtYyrt6" colab_type="code" colab={}
ts_results = ctf.explore(
'TeamViewer',
return_fields='datetime,timestamp_desc,data_type,filename,message',
as_pandas=True)
ts_results = ts_results.set_index('datetime')
ts_results['2019-02-20':][['timestamp_desc','data_type','filename','message']]
# + [markdown] id="-3krCPn50csr" colab_type="text"
# So from this, in a short interval starting 2019-02-25T20:39, we can see:
# * a Google search for "teamviewer"
# * a visit in Chrome to teamviewer.com,
# * then teamviewer.com/en-us/teamviewer-automatic-download/,
# * and lastly a bunch of TeamViewer related files being created.
#
# The web browser and files created were done under the Administrator account (per the path filename), so that's our answer.
# + [markdown] id="5EzZwgne_V44" colab_type="text"
# ### Q: How Many Times
# At least how many times did the teamviewer_desktop.exe run?
# + [markdown] id="--29x7kjADg3" colab_type="text"
# Prefetch is a great artifact for "how many times did something run"-type questions, so let's look for Prefetch execution entries for the program in question:
# + id="v2ZjHcsv_Voj" colab_type="code" colab={}
ts_results = ctf.explore(
'data_type:"windows:prefetch:execution" AND teamviewer_desktop.exe',
return_fields='datetime,timestamp_desc,data_type,executable,run_count,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','executable','run_count','message']]
# + [markdown] id="QwCaUO1TBLis" colab_type="text"
# ### Q: Execute Where
# After looking at the TEAMVIEWER_DESKTOP.EXE prefetch file, which path was the executable in at the time of execution?
#
#
# + [markdown] id="8gu7QiD-Bkou" colab_type="text"
# We did all the work for this question with the previous query (the answer is in the message string), but we can explicitly query for the path:
# + id="Y1jC_-hMBLUG" colab_type="code" colab={}
ts_results = ctf.explore(
'data_type:"windows:prefetch:execution" AND teamviewer_desktop.exe',
return_fields='datetime,timestamp_desc,data_type,executable,run_count,path',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','executable','run_count','path']]
# + [markdown] id="4Z5KJzwABK7V" colab_type="text"
# ## Registry Questions
# This last set of questions can be answered using the Windows Registry (and one from event logs).
# + [markdown] id="oYn41EUYFf5O" colab_type="text"
# Lots of registry questions depend on the Current Control Set, so let's verify what it is:
# + id="qbT0cf4dB_aJ" colab_type="code" colab={}
# Escaping fun: We need to escape the slashes in the key_path once for Timesketch and once for Python, so we'll have triple slashes (\\\)
ts_results = ctf.explore(
'data_type:"windows:registry:key_value" AND key_path:"HKEY_LOCAL_MACHINE\\\System\\\Select"',
return_fields='datetime,timestamp_desc,data_type,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','message']]
# + [markdown] id="FoqCvGtlFnZ1" colab_type="text"
# From the message, the Current control set is 1.
# + [markdown] id="3QuYQux5FtlO" colab_type="text"
# ### Q: What was the timezone offset at the time of imaging? and What is the timezone of the Desktop
# + [markdown] id="sX5xYElTHxOh" colab_type="text"
# I'm combining these, since the answer is in the same query:
# + id="f8CE4NYEFzWu" colab_type="code" colab={}
ts_results = ctf.explore(
'data_type:"windows:registry:key_value" AND key_path:"HKEY_LOCAL_MACHINE\\\System\\\ControlSet001\\\Control\\\TimeZoneInformation"',
return_fields='datetime,timestamp_desc,data_type,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','message']]
# + [markdown] id="HJDjeFc0bzu4" colab_type="text"
# The message is really long; let's pull it out:
# + id="s8UMrq0tb3u6" colab_type="code" colab={}
set(ts_results.message)
# + [markdown] id="NAloEuihHke2" colab_type="text"
# The name of the Timezone is in the message string, as is the ActiveTimeBias, which we can use to get the UTC offset:
# + id="5R57xnzQGzEB" colab_type="code" colab={}
# The ActiveTimeBias is stored in minutes as the value added to local time to reach UTC, so divide by -60 to get the UTC offset:
420 / -60
# + [markdown] id="yU8rM-2fQC-g" colab_type="text"
# ### Q: When was the Windows OS installed?
# + [markdown] id="yGnx4554QICl" colab_type="text"
# Plaso actually parses this out as its own data_type, so querying for it is easy:
# + id="_DTNDAiAMSgz" colab_type="code" colab={}
ts_results = ctf.explore(
'data_type:"windows:registry:installation"',
return_fields='datetime,timestamp_desc,data_type,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','message']]
# + [markdown] id="h6OG8tRbRf6b" colab_type="text"
# ### Q: What is the IP address of the Desktop?
# + [markdown] id="kNmlBEFAR1lI" colab_type="text"
# We already confirmed the Control Set is 001, so let's query for the registry key under that control set that holds the Interface information:
# + id="angYvM_3RfA1" colab_type="code" colab={}
ts_results = ctf.explore(
'key_path:"System\\\ControlSet001\\\Services\\\Tcpip\\\Parameters\\\Interfaces"',
return_fields='datetime,timestamp_desc,data_type,message',
as_pandas=True)
ts_results[['datetime','timestamp_desc','data_type','message']]
# + [markdown] id="LckTVA9eSDCR" colab_type="text"
# There are a few entries, but only the last one has what we want. Reading through it (or using Ctrl+F) we can find the 'IPAddress' is 192.168.3.11.
# + id="55kVv0nOcVKf" colab_type="code" colab={}
set(ts_results.message)
# + [markdown] id="VXtkOaKTU5YP" colab_type="text"
# ### Q: Which User Shutdown Windows on February 25th 2019?
# + [markdown] id="UbuECyqADgrp" colab_type="text"
# Event logs are a good place to look for this answer, since a shutdown generates a 1074 event in the System event log. From the question, we have a fairly narrow timeframe, so let's slice the results down to that after we do our query:
# + id="6dq7IoM_U7Qo" colab_type="code" colab={}
ts_results = ctf.explore(
'data_type:"windows:evtx:record" AND filename:"System.evtx" AND 1074',
return_fields='datetime,timestamp_desc,data_type,username,message',
as_pandas=True)
ts_results = ts_results.set_index('datetime')
ts_results['2019-02-25':'2019-02-26'][['timestamp_desc','data_type','username','message']]
# + [markdown] id="AlWim_NuGMeV" colab_type="text"
# # Wrap Up
# + [markdown] id="BbHOAd3X9Sq-" colab_type="text"
# That's it! Thanks for reading and I hope you found this useful. This walkthrough covered most of the questions from the 'Basic - Desktop' category; I may do other sections as well if there is time/interest. If you found this useful, check out Kristinn's demonstration of [Timesketch and Colab](https://colab.research.google.com/github/google/timesketch/blob/master/notebooks/colab-timesketch-demo.ipynb).
#
# You can get the free, open source tools I used to solve the CTF:
# * Plaso / Log2Timeline: https://github.com/log2timeline/plaso
# * Timesketch: https://github.com/google/timesketch
# * Colab(oratory): https://colab.sandbox.google.com/notebooks/welcome.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="4oxk273fWvwK"
# # ```superautodiff``` Documentation
# #### CS207 Fall '19 Final Project
# #### Group 1: _Team Gillet_
# #### <NAME>, <NAME>, <NAME>, <NAME>
#
# + [markdown] colab_type="text" id="r9-e0WQ0Xn1B"
# ---
# + [markdown] colab_type="text" id="I855YlHkXpGz"
# # Introduction
#
# <br>
#
# Derivatives play a critical role in the natural and applied sciences, with optimization being one of the core applications involving derivatives. Traditionally, derivatives have been approached either symbolically or through numerical analysis (_e.g._ finite differences). Although numerical approaches to solving derivatives are simple to compute, they are prone to stability issues and round-off errors. Meanwhile, although symbolic derivatives enable the evaluation of derivatives to machine precision, the process is limited by its computational intensity. Recently, the size and complexity of the functions involving derivatives have grown; these demands necessitate an alternative to symbolic and numerical methods that is able to compute derivatives with higher accuracy at a lower cost. Automatic Differentiation (AD) addresses these issues by executing a sequence of elementary arithmetic operations to compute accurate derivatives.
#
# <br>
#
# Our team aims to develop a Python package, ```superautodiff```, that implements both forward-mode and reverse-mode AD (the latter being our project extension). This document will review some of the mathematical foundations behind our approach and provide relevant information on the documentation and usage of ```superautodiff```. Finally, the documentation will discuss some of the underlying implementation details along with plans for how the project might develop in the future.
# + [markdown] colab_type="text" id="4tnOdVJ8XpNy"
# ---
# + [markdown] colab_type="text" id="jqgLFkC1XpJ8"
# # Background
# <br>
#
# ## Mathematical Foundations
#
# AD relies heavily on the chain rule and several other key mathematical concepts in order to compute derivatives. We now consider some background mathematical foundations that form the theoretical basis of our approach to AD.
#
# <br>
#
# **Differential calculus**
#
# Differential calculus is concerned with the evaluation and study of gradients and/or rates of change. Numerically, we can formally define the derivative of a function $f$ evaluated at $a$ as:
#
# $$f'(a)=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}.$$
#
#
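# To make the round-off caveat from the Introduction concrete, here is a minimal finite-difference sketch (the function name and example below are ours, not part of any package):

```python
def forward_difference(f, a, h=1e-6):
    """Approximate f'(a) by the forward-difference quotient above.

    Truncation error is O(h), but round-off error grows as h shrinks,
    which is the stability trade-off of purely numerical derivatives.
    """
    return (f(a + h) - f(a)) / h

# f(x) = x**2 has exact derivative f'(3) = 6; the approximation is close but inexact
approx = forward_difference(lambda x: x ** 2, 3.0)
```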
# **Elementary functions and their derivatives**
#
# Here are some examples of elementary functions used by AD and their corresponding derivatives:
#
# <br>
#
# **<center> Table 1. Elementary functions and their derivatives </center>**
# <br>
#
# | $$f(x)$$ | $$f'(x)$$ |
# | ------------- |:-------------:|
# | $$c$$ | $0$ |
# | $$x$$ | $1$ |
# | $$x^n$$ | $$nx^{n-1}$$ |
# | $$\frac{1}{x}$$ | $$-\frac{1}{x^2}$$ |
# | $$e^x$$ | $$e^x$$ |
# | $$log_ax$$ | $$\frac{1}{x \ln a}$$ |
# | $$\ln x$$ | $$\frac{1}{x}$$ |
# | $$\sin(x)$$ | $$\cos(x)$$ |
# | $$\cos(x)$$ | $$-\sin(x)$$ |
# | $$\tan(x)$$ | $$\frac{1}{\cos^2x}$$ |<br>
#
#
# <br>
#
# **Chain rule for composite functions**
#
# The chain rule is a formula for computing the derivatives of composite functions. For instance, if a variable $z$ depends on $y$, which itself depends on $x$, the chain rule expresses the derivative of $z$ with respect to $x$ as:
#
# <br>
#
# $${\frac {dz}{dx}}={\frac {dz}{dy}}\cdot {\frac {dy}{dx}}$$
#
# <br>
#
# **<center> The chain rule </center>**
#
# <br>
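# As a quick numeric sketch of the rule (the symbols and function names here are our own illustration), take $z=\sin(y)$ with $y=x^2$, so $\frac{dz}{dx}=\cos(x^2)\cdot 2x$:

```python
import math

def dz_dx(x):
    """Chain rule by hand for z = sin(y), y = x**2."""
    y = x * x            # inner function y(x)
    dy_dx = 2.0 * x      # dy/dx
    dz_dy = math.cos(y)  # dz/dy evaluated at y(x)
    return dz_dy * dy_dx # dz/dx = dz/dy * dy/dx

value = dz_dx(1.5)  # cos(2.25) * 3
```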
#
# **Forward and reverse mode**
#
# For functions where we have intermediate components in our derivatives, we can keep track of the derivatives of each component using either of the following two modes: the forward mode and the reverse mode.
# - The forward mode starts with the input and computes the derivative with respect to the input using the chain rule at each subcomponent. The process involves storing the intermediate values of the derivatives of variables with respect to the input in order to evaluate the overall derivative: <br> <br>
# $$\frac{dw_i}{dx} = \frac{dw_i}{dw_{i-1}}\frac{dw_{i-1}}{dx}$$<br>
# **<center> Forward mode </center>**
#
# <br>
#
# - The reverse mode, meanwhile, involves both a forward pass that evaluates the values of the functions along with a backward pass that stores the derivatives of the output with respect to the different variables: <br> <br> $$\frac{dy}{dw_i} = \frac{dy}{dw_{i+1}}\frac{dw_{i+1}}{dw_i}$$ <br> **<center> Reverse mode </center>**
#
# <br>
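# The forward-mode recurrence above can be sketched with a minimal (value, derivative) pair; this is a teaching illustration only, not the ```superautodiff``` API:

```python
class Dual:
    """Minimal forward-mode carrier: a value and its derivative w.r.t. the input."""

    def __init__(self, val, der=1.0):
        self.val, self.der = val, der

    def __add__(self, other):
        o = other if isinstance(other, Dual) else Dual(other, 0.0)
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

x = Dual(3.0)          # seed dx/dx = 1
f = x * x + 2 * x + 3  # f(x) = x^2 + 2x + 3, so f(3) = 18 and f'(3) = 8
```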
#
# **Computational graph representation**
#
# The sequence of elementary operations accumulated in the forward mode can be visually represented through a computational graph. For instance, the computational graph of the function $f(x)=x-\exp\{-2\sin^2(4x)\}^{[1]}$ is illustrated in Figure 1; Figure 2 presents a more complex computational graph.
#
# <br>
#
# The graph breaks down the given function into a sequence of elementary operations that are visually charted out through the computational graph. The graph operates similarly to a flowchart and illustrates how each elementary operation modifies our initial parameter inputs in order to recover the function.
#
#
# <br>
#
# <img src="fig/graph_1.png" style="height:300px;">
#
# **<center> Figure 1. A computational graph for $f(x)=x-\exp\{-2\sin^2(4x)\}^{[1]}$</center>**
#
# <br>
#
# <img src="fig/graph_2.png" style="height:450px;">
#
# **<center> Figure 2. A more complex computational graph</center>**
#
# <br>
#
# [1] <NAME>, lecture 10, CS207 Fall '19
#
# <br>
#
# ## What our package is doing
# Essentially, our package utilizes the aforementioned mathematical concepts to implement forward-mode AD. A primary function in our package, ```autodiff()```, takes in mathematical functions and the corresponding points at which to evaluate them, and obtains an evaluation trace (similar to the graph structure above). This trace is then used to differentiate the given function, with the chain rule applied to evaluate the derivative values and current values at each component of the trace.
#
# Under the hood, we might think of the function's calculations as equivalent to populating Table 2. This is basically the core of forward-mode AD; the functionality and operation of our package are discussed in greater detail in the subsequent sections.
#
# <br>
#
# **<center>Table 2. An evaluation table for a forward-mode neural network</center>**
#
# | Trace | Elementary Function | Current Value | Elementary Function Derivative | $\nabla_{x}$ Value | $\nabla_{y}$ Value |
# | :---: | :-----------------: | :-----------: | :----------------------------: | :-----------------: | :-----------------: |
# | $x_{1}$ | $x$ | $x$ | $\dot{ x}_{1}$ | $1$ | $0$ |
# | $x_{2}$ | $y$ | $y$ | $\dot{x}_{2}$ | $0$ | $1$ |
# | $x_{3}$ | $w_{21}x_1$ | $w_{21}x$ | $w_{21}\dot{x}_{1}$ | $w_{21}$ | $0$ |
# | $x_{4}$ | $w_{12}x_2$ | $w_{12}y$ | $w_{12}\dot{x}_{2}$ | $0$ | $w_{12}$ |
# | $x_{5}$ | $w_{11}x_1$ | $w_{11}x$ | $w_{11}\dot{x}_{1}$ | $w_{11}$ | $0$ |
# | $x_{6}$ | $w_{22}x_2$ | $w_{22}y$ | $w_{22}\dot{x}_{2}$ | $0$ | $w_{22}$ |
# | $x_{7}$ | $x_4 + x_5$ | $w_{11}x + w_{12}y$ | $$\dot{x}_{4} + \dot{x}_{5}$$ | $w_{11}$ | $w_{12}$ |
# | $x_{8}$ | $x_3 + x_6$ | $w_{21}x + w_{22}y$ | $$\dot{x}_{3} + \dot{x}_{6}$$ | $w_{21}$ | $w_{22}$ |
# | $x_{9}$ | $z(x_7)$ | $z(w_{11}x + w_{12}y)$ | $$\dot{x}_{7}z'(x_7)$$ | $w_{11}z'(w_{11}x + w_{12}y)$ | $w_{12}z'(w_{11}x + w_{12}y)$ |
# | $x_{10}$ | $z(x_8)$ | $z(w_{21}x + w_{22}y)$ | $$\dot{x}_{8}z'(x_8)$$ | $w_{21}z'(w_{21}x + w_{22}y)$ | $w_{22}z'(w_{21}x + w_{22}y)$ |
# | $x_{11}$ | $w_{out,1}x_9$ | $$w_{out,1}z(w_{11}x + w_{12}y) $$ | $$w_{out,1}\dot{x}_9$$ | $w_{out,1}w_{11}z'(w_{11}x + w_{12}y)$ | $w_{out,1}w_{12}z'(w_{11}x + w_{12}y)$ |
# | $x_{12}$ | $w_{out,2}x_{10}$ | $$w_{out,2}z(w_{21}x + w_{22}y) $$ | $$w_{out,2}\dot{x}_{10}$$ | $w_{out,2}w_{21}z'(w_{21}x + w_{22}y)$ | $w_{out,2}w_{22}z'(w_{21}x + w_{22}y)$ |
# | $x_{13}$ | $x_{11} + x_{12}$ | $$w_{out,1}z(w_{11}x + w_{12}y) + w_{out,2}z(w_{21}x + w_{22}y) $$ | $$\dot{x}_{11} + \dot{x}_{12}$$ | $$w_{out,1}w_{11}z'(w_{11}x + w_{12}y) + w_{out,2}w_{21}z'(w_{21}x + w_{22}y)$$ | $$w_{out,1}w_{12}z'(w_{11}x + w_{12}y) + w_{out,2}w_{22}z'(w_{21}x + w_{22}y)$$ |
#
#
#
#
# + [markdown] colab_type="text" id="_r1OdOYlfrGh"
# ---
# + [markdown] colab_type="text" id="KCWRWPWCrHHM"
# # How to use ```superautodiff```
# ## User interaction with the package
# ### Installation
# Our package will be distributed through PyPI (detailed in the subsequent section). Users will first install the package by running the following (this will work only if the user has our ```requirements.txt``` file in their working directory):
#
# ```pip install superautodiff -r requirements.txt```
#
# Users who believe they already have the required Python dependencies and do not wish to reinstall them should run the following command instead:
#
# ```pip install superautodiff```
#
# An alternative command that similarly installs our package is:
#
# ```pip install -i https://test.pypi.org/simple/ superautodiff==1.0.5```
#
# Users will then need to import ```superautodiff``` (as shown in the Importing section) in order to access the package's functionality. Most importantly, users will use ```autodiff``` to instantiate AD objects; they then build their functions from these objects at the points of interest in order to perform AD. For the other modules, users will need to import them from our package.
#
# Alternatively, if users experience issues with the ```pip``` installation or prefer to install the package manually, they can download or clone the repository onto their local machine. Users then need only ensure that the modules are in their working directory in order to import them into their Python environment.
#
# The approach of downloading from our package repository is less convenient and is not recommended for basic users; however, the approach should be considered an important alternative for developers and those experiencing issues with ```pip install```.
#
# ### Virtual Environments
# Although this is not required, users can choose to create a virtual environment in which to install ```superautodiff``` and interact with the package's functionality. We will walk through how virtual environments can be created using ```conda``` and ```virtualenv```.
#
# ##### On ```virtualenv```
# Users should run the following command to set up ```virtualenv```:
#
# ```sudo easy_install virtualenv```
#
# Subsequently, users can create a virtual environment ```env``` through the following command:
#
# ```virtualenv env```
#
# Next, users will need to activate the virtual environment with the following command:
#
# ```source env/bin/activate```
#
# Now that the user has created and activated a virtual environment ```env```, the user should be able to either manually install ```superautodiff``` through a download or through a ```pip install```.
#
# Finally, once the user is done working in the virtual environment, the user can deactivate the virtual environment by running the following command in the terminal in the package directory:
#
# ```deactivate ```
#
# ##### On ```conda```
# To set up a ```conda``` environment ```env_name``` with Python at version 3.7, users should run the following command:
#
# ```conda create --name env_name python=3.7```
#
# Next, users will need to activate the virtual environment with the following command:
#
# ```conda activate env_name```
#
# Alternatively, if the shell is not configured to use ```conda activate```, users can either run
# ```conda init``` before activating the virtual environment or run the following command:
#
# ```source activate env_name```
#
# As before, now that the user has created and activated a virtual environment ```env```, the user should be able to either manually install ```superautodiff``` through a download or through a ```pip install```.
#
# Finally, once the user is done working in the virtual environment, the user can deactivate the virtual environment by running the following command in the terminal in the package directory:
#
# ```conda deactivate ```
#
# ### Importing
# After installing the package, users need to subsequently import the various modules into their Python environment. For simplicity's sake, users can just import ```superautodiff``` using the following import command to retrieve all the modules in the package:
#
# ```python
# import superautodiff
# ```
#
# Alternatively, it is recommended that users run the following import alias command for concision and consistency:
# ```python
# import superautodiff as sad
# ```
# + [markdown] colab_type="text" id="P6AempE-gyjo"
# ## Instantiating AD objects
#
# ```superautodiff``` is a Python package whose core module is ```autodiff```. Within ```autodiff``` is the ```AutoDiff``` class, whose objects accept an input $x \in \mathbb{R}$ (stored as the ```val``` attribute) and initialize the derivative (```der``` attribute) at $1$. The ```AutoDiff``` object then supports basic arithmetic operations (_e.g._ addition, multiplication) with integers, floats, and other ```AutoDiff``` objects. These operations are implemented commutatively through dunder methods as appropriate. With an ```AutoDiff``` object, the user can evaluate the derivatives of a vector of functions at a specified vector of points.
# <br><br>
#
#
# -
# ## Usage
#
# We illustrate several use cases of our package's core functionality and show how it can be used to evaluate derivatives of functions at a given point.
#
# In summary, the approach illustrated below involves importing the module and instantiating an ```AutoDiff``` object; users then apply mathematical operations as they see fit to map the ```AutoDiff``` object to their target mathematical function. The usage of vectorized ```AutoDiffVector``` objects is similar.
#
# ### Scalar case
# +
# Command to import autodiff module
# %cd ../../cs207-FinalProject
import superautodiff as sad
import pandas as pd
import numpy as np
import math
# Initialize variable inputs and instantiate AutoDiff object
f1 = sad.AutoDiff("x", 3.0)
# Examine initial values
print("Value of f1: {};\nValue of first derivative of f1: {}".format(f1.val, f1.der['x']))
# -
# For target function f_a(x) = x**2 + 2x + 3
# We expect the value to be 18 and the value of the derivative to be 8
f1_a = f1 ** 2 + 2 * f1 + 3
print("Value of f1_a: {};\nValue of first derivative of f1_a: {}".format(f1_a.val, f1_a.der['x']))
# For target function f_b(x) = cos(πx) + 5x + 4
# We expect the value to be 18 and the value of the derivative to be 5 (approximately)
f1_b = sad.cos(f1 * math.pi) + 5 * f1 + 4
print("Value of f1_b: {};\nValue of first derivative of f1_b: {}".format(f1_b.val, f1_b.der['x']))
# For target function f_c(x) = exp(3x) + 2ln(x) - 12x
# We expect the value to be 8069.2 and the value of the derivative to be 24297.9 (approximately)
f1_c = sad.exp(f1 * 3) + 2 * sad.log(f1) - 12 * f1
print("Value of f1_c: {};\nValue of first derivative of f1_c: {}".format(f1_c.val, f1_c.der['x']))
# ### Vector case
#
# We have two approaches to generating our ```AutoDiffVector``` objects that we will illustrate:
# +
# First approach using AutoDiff objects
# Initialize variable inputs and instantiate AutoDiff objects
fv_a = sad.AutoDiff("a", 3.5)
fv_b = sad.AutoDiff("b", -5)
fv_c = sad.AutoDiff("c", 7)
# Create an array of AutoDiff objects
f_vect = [fv_a, fv_b, fv_c]
# Use the array to instantiate AutoDiffVector Objects
fv_1 = sad.AutoDiffVector(f_vect)
# Examine initial values
print("AutoDiffVector object: ", fv_1)
print("Value of fv_a: {};\nValue of first derivative of fv_a: {}".format(fv_1.objects['a'].val, fv_1.objects['a'].der))
print("Value of fv_b: {};\nValue of first derivative of fv_b: {}".format(fv_1.objects['b'].val, fv_1.objects['b'].der))
print("Value of fv_c: {};\nValue of first derivative of fv_c: {}".format(fv_1.objects['c'].val, fv_1.objects['c'].der))
# +
# Second approach using arrays of variable names and values
# Initialize variable and value arrays
variables = ['d', 'e', 'f']
values = [4,-1, 1.1]
# Use vectorize to generate ADV objects
fv_2 = sad.vectorize(variables, values)
# Examine initial values
print("AutoDiffVector object: ", fv_2)
print("Value of fv_d: {};\nValue of first derivative of fv_d: {}".format(fv_2.objects['d'].val, fv_2.objects['d'].der))
print("Value of fv_e: {};\nValue of first derivative of fv_e: {}".format(fv_2.objects['e'].val, fv_2.objects['e'].der))
print("Value of fv_f: {};\nValue of first derivative of fv_f: {}".format(fv_2.objects['f'].val, fv_2.objects['f'].der))
# +
# We can perform mathematical operations on our AutoDiffVector object
fv_3 = fv_1 + 3
fv_4 = sad.tan(fv_1) * 4
# Addition
print("\nfv_3 Case:")
print("Value of fv_a: {};\nValue of first derivative of fv_a: {}".format(fv_3.objects['a'].val, fv_3.objects['a'].der))
print("Value of fv_b: {};\nValue of first derivative of fv_b: {}".format(fv_3.objects['b'].val, fv_3.objects['b'].der))
print("Value of fv_c: {};\nValue of first derivative of fv_c: {}".format(fv_3.objects['c'].val, fv_3.objects['c'].der))
# Trigonometric operation and multiplication
print("fv_4 Case:")
print("Value of fv_a: {};\nValue of first derivative of fv_a: {}".format(fv_4.objects['a'].val, fv_4.objects['a'].der))
print("Value of fv_b: {};\nValue of first derivative of fv_b: {}".format(fv_4.objects['b'].val, fv_4.objects['b'].der))
print("Value of fv_c: {};\nValue of first derivative of fv_c: {}".format(fv_4.objects['c'].val, fv_4.objects['c'].der))
# -
# ### Jacobian matrix generation
# Additionally, our package supports the generation of Jacobian matrices via our ```jacobian``` function. The function takes in an array of functions (defined through ```AutoDiff``` objects) and an array of variables by which to differentiate those functions. The function then prints out a ```NumPy``` array corresponding to the Jacobian matrix.
# +
# Initialize autodiff objects
g = sad.AutoDiff('g', 3.4)
h = sad.AutoDiff('h', -4)
i = sad.AutoDiff('i', 7)
# Create functions
f1 = sad.tan(g) + 4
f2 = h**2 + 2*h - 12
f3 = sad.log(i) - sad.exp(i * 2)
# List of functions and variables
variables = ['g', 'h', 'i', 'j']
functions = [f1, f2, f3]
# Obtain Jacobian
sad.jacobian(variables, functions)
# + [markdown] colab_type="text" id="LELN7aJrhBSM"
# ---
#
# + [markdown] colab_type="text" id="7DS9-gwUXpVe"
# # Software Organization
#
# ## Directory structure
#
#
#
# cs207-FinalProject/
# .coverage
# .coverage.xml
# .travis.yml
# LICENSE
# README.md
# requirements.txt
# setup.py
# setup.cfg
# build/
# docs/
# milestone_1.ipynb
# milestone_2.ipynb
# documentation.ipynb
# documentation.md
# fig/
# graph_1.png
# graph_2.png
# dist/
# htmlcov/
# superautodiff/
# .coverage
# __init__.py
# autodiff.py
# autodiffreverse.py
# functions.py
# superautodiff.egg-info/
# test-reports/
# tests/
# __init__.py
# .coverage
# .coverage.XML
# tests_autodiff.py
# tests_autodiffreverse.py
# tests_autodiffvector.py
#
# <br>
#
# ## Modules
#
# ```superautodiff``` contains three modules corresponding to our package's main competencies. The modules are summarized here and explained in detail in the subsequent sections.
# - ```autodiff.py```: This module contains the core functionality of the package: a forward-mode AD library that is able to work with a vector of input variables for a vector of functions.
# - ```autodiffreverse.py```: This module contains our reverse-mode AD implementation.
# - ```functions.py```: This module contains the bulk of the mathematical operations used by our module along with our Jacobian function.
#
# <br>
#
# ## Docs
#
# The docs folder contains our documentation as a Python Notebook and as a Markdown file. We also have the notebooks for our first and second milestones.
#
# <br>
#
# ## Testing
# Testing is largely relevant to developers looking to edit and/or build upon our package; general users need not read this section. Our test suites (```test_autodiff.py```, ```test_autodiffreverse.py```, and ```test_autodiffvector.py```) are stored in our ```tests/``` folder, and each script governs the testing of a different aspect of our package. Our testing is monitored through both Travis CI and CodeCov: our GitHub repository is fully integrated with both services, with badges on our ```README.md``` reflecting the build status on Travis CI and the code coverage status on CodeCov.
#
# ```superautodiff``` also supports ```pytest```. To run our tests, users will need to have ```pytest``` installed on their environment and navigate to the repository. Subsequently, users should run the following code:
#
# ```python -m pytest```
#
# or:
#
# ``` pytest ```
#
# This will run all our tests and provide summary statistics on the outcome of said tests.
#
# <br>
#
# ## Package Distribution
# Our package is distributed using PyPI. We use _setuptools_ and _wheel_ to generate our distribution archives and we use _twine_ to upload our package to PyPI.
#
# The reason for this choice of tools is that they are simple, easy-to-use, and reliable. Our package does not have many complicated dependencies; we, therefore, want to employ simple packaging and distribution tools to ensure that our package is easily distributed to users with minimal hassle.
#
# As mentioned above, users will simply have to call ```pip install superautodiff``` in order to install our package. The installation instructions and troubleshooting will be available on our GitHub repository.
#
# The ```dist```, ```superautodiff.egg-info```, and ```build``` folders are used for our package distribution.
# + [markdown] colab_type="text" id="gNRkcZgmsaWt"
# ---
# + [markdown] colab_type="text" id="A3qiF7M_COe4"
# # Implementation
#
# Thus far, ```superautodiff``` has a working forward mode implementation and we have partly implemented multivariable automatic differentiation.
#
# ## Data structures
# In our present implementation, the primary data structures are ```Counter``` objects, which store the variable names, values, and derivative values in our ```AutoDiff``` class objects. This design choice prevents collisions from repeated variables as we implement multivariable automatic differentiation in subsequent milestones. ```Counter``` objects let us store our data in key-value pairs, which makes it easy to evaluate the derivative with respect to a particular variable.
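# A small illustration of why ```Counter``` objects are convenient here (the variable names and values below are hypothetical, not package internals):

```python
from collections import Counter

# Derivative contributions keyed by variable name; adding Counters
# merges them per key, so a repeated variable accumulates its
# contributions instead of colliding.
da = Counter({'x': 2.0, 'y': 1.0})  # contributions from one term
db = Counter({'x': 0.5})            # another term touching only x
total = da + db                     # per-key sum: x -> 2.5, y -> 1.0
```

# One caveat worth noting: ```Counter``` addition discards non-positive counts, so an implementation that must combine negative derivative values needs to merge keys explicitly rather than rely on ```+```.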
#
# Our ```AutoDiffVector``` objects use the same underlying data structures but include an additional dictionary with the variable name as the key and ```AutoDiff``` objects as the values corresponding to each key. This helps to ensure that the ```AutoDiff``` objects are properly stored within the vectors and can be reliably called in order to return the specific ```AutoDiff``` objects.
#
# Our ```jacobian``` function uses ```NumPy``` arrays to generate Jacobian matrices largely because the arrays enable us to easily visualize and print out matrices; further, this enables our Jacobian matrix outputs to be usable in other matrix operations.
#
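# As a sketch of how such a matrix could be assembled from per-function derivative dicts (```jacobian_from_ders``` is a hypothetical helper for illustration, not the package's ```jacobian``` function):

```python
import numpy as np

def jacobian_from_ders(variables, ders):
    """Build one Jacobian row per function from its derivative dict,
    filling 0.0 for variables a function does not depend on."""
    return np.array([[d.get(v, 0.0) for v in variables] for d in ders])

# Two functions over variables x and y; the second depends only on y
J = jacobian_from_ders(['x', 'y'], [{'x': 2.0, 'y': -1.0}, {'y': 3.0}])
```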
# ## Dependencies
# Our package relies on the following external packages:
# - ```NumPy```: We use this to specify relevant mathematical operations within our package.
#
# - ```collections```: We use this to store our data in ```Counter``` objects.
#
# - ```math```: We use this for additional mathematical operations.
#
# ## Dunder methods
# The following dunder methods have been overloaded in our implementation in order for our ```AutoDiff``` objects to be easily used in mathematical operations and the construction of mathematical functions:
# - ```__add__```: Modified to update the counter objects accordingly when addition is performed; modified to return ```AutoDiff``` objects.
#
# - ```__radd__```: Modified to update the counter objects accordingly when addition is performed; modified to return ```AutoDiff``` objects.
#
# - ```__sub__```: Modified to update the counter objects accordingly when subtraction is performed; modified to return ```AutoDiff``` objects.
#
# - ```__rsub__```: Modified to update the counter objects accordingly when subtraction is performed; modified to return ```AutoDiff``` objects.
#
# - ```__mul__```: Modified to update the counter objects accordingly when multiplication is performed; modified to return ```AutoDiff``` objects.
#
# - ```__rmul__```: Modified to update the counter objects accordingly when multiplication is performed; modified to return ```AutoDiff``` objects.
#
# - ```__neg__```: Modified such that all counter elements are made negative; modified to return ```AutoDiff``` objects.
#
# - ```__truediv__```: Modified such that all counter elements are divided accordingly; modified to return ```AutoDiff``` objects.
#
# - ```__rtruediv__```: Modified such that all counter elements are divided accordingly; modified to return ```AutoDiff``` objects.
#
# - ```__pow__```: Modified such that the counter elements are appropriately exponentiated; modified to return ```AutoDiff``` objects.
#
# ## Mathematical operations
# Our package implements the following mathematical operations using ```NumPy``` and ```math``` such that they can be used on ```AutoDiff``` objects with ease. All of the following functions can take in scalar values, vectors (Python lists), and ```AutoDiff``` objects. This is useful to users that seek to perform mathematical calculations and/or build up complicated mathematical functions using ```AutoDiff``` objects for derivative evaluation.
#
# ### Trigonometric functions
# - ```sin(x)```
#
# - ```cos(x)```
#
# - ```tan(x)```
#
# - ```sec(x)```
#
# - ```csc(x)```
#
# - ```cot(x)```
#
# - ```arcsin(x)```
#
# - ```arccos(x)```
#
# - ```arctan(x)```
#
# - ```arcsec(x)```
#
# - ```arccsc(x)```
#
# - ```arccot(x)```
#
# - ```sinh(x)```
#
# - ```cosh(x)```
#
# - ```tanh(x)```
#
#
# ### Logarithms and exponentials
#
# - ```log(x)``` of user specified base
#
# - ```exp(x)```
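# The general pattern behind these functions is chain-rule propagation; a sketch of the rule for ```sin``` (our own standalone function, not the package internals) looks like:

```python
import math

def sin_rule(val, der):
    """Given a value v and a derivative dict v', return (sin v, cos(v) * v'):
    each per-variable derivative is scaled by the local derivative cos(v)."""
    return math.sin(val), {k: math.cos(val) * d for k, d in der.items()}

v, d = sin_rule(0.0, {'x': 1.0})  # d/dx sin(x) at x = 0 is cos(0) = 1
```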
# + [markdown] colab_type="text" id="12G47dsVtQVB"
# ## Implementation illustrations
# -
# ### Initialization and instantiation of objects
# Unlike the earlier section where we illustrate the usage of our package, here, we focus on the underlying methods used; the content is somewhat repetitive, but is retained for completeness.
#
# Once our module is imported, we can create ```AutoDiff``` objects that store a variable name and the value at which to evaluate that variable. The object is mutable and can undergo mathematical operations in order to build complex mathematical functions; given the evaluation point, the object stores the variable names, the values of the variables, and the values of the first derivatives of the variables.
#
#
# Our package defines a class ```AutoDiff``` that takes a variable ```x``` as input. An ```AutoDiff``` object has two important attributes:
# - ```val``` - a scalar that contains the value of the function
# - ```der``` - a dictionary that stores the derivatives. For example:
#
# ```{"a":1, "b":1}```
# +
# Import module
import superautodiff as sad
# Initialize variable inputs and instantiate AutoDiff object
value_to_evaluate = 5.0
variable_name = "x_1"
f1 = sad.AutoDiff(variable_name, value_to_evaluate)
# Illustrate how values and derivative values are stored
print("Value of f1: {};\nValue of first derivative of f1: {}".format(f1.val, f1.der))
# -
# ### Basic operations using dunder methods
# The overloaded dunder methods enable the use of basic mathematical operations with ```AutoDiff``` objects. We do not check for the accuracy of our calculations here since that is already covered above in our Usage section; instead, we merely illustrate how the functions are used and the outputs they return in order to showcase our implementation.
# +
# Addition example
f1_a = f1 + f1
# Subtraction example
f1_b = 3*f1 - f1
# Multiplication example
f1_c = f1 * 3
# Exponent example
f1_d = f1 ** 2
# Division example
f1_e = f1/4
print("Value of f1_a: {};\nValue of first derivative of f1_a: {}\n".format(f1_a.val, f1_a.der))
print("Value of f1_b: {};\nValue of first derivative of f1_b: {}\n".format(f1_b.val, f1_b.der))
print("Value of f1_c: {};\nValue of first derivative of f1_c: {}\n".format(f1_c.val, f1_c.der))
print("Value of f1_d: {};\nValue of first derivative of f1_d: {}\n".format(f1_d.val, f1_d.der))
print("Value of f1_e: {};\nValue of first derivative of f1_e: {}\n".format(f1_e.val, f1_e.der))
# -
# ### Trigonometric and logarithmic operations
# Similarly, our ```AutoDiff``` objects can be passed through our trigonometric and logarithmic functions. As before, we do not check the accuracy of the values as this has already been done above.
# +
# Sine example
f1_f = sad.sin(f1)
# Cosine example
f1_g = sad.cos(f1*2)
# Tangent example
f1_h = sad.tan(f1/2)
# Exp example
f1_i = sad.exp(f1*3)
# Natural logarithm example
f1_j = sad.log(f1+5)
print("Value of f1_f: {};\nValue of first derivative of f1_f: {}\n".format(f1_f.val, f1_f.der))
print("Value of f1_g: {};\nValue of first derivative of f1_g: {}\n".format(f1_g.val, f1_g.der))
print("Value of f1_h: {};\nValue of first derivative of f1_h: {}\n".format(f1_h.val, f1_h.der))
print("Value of f1_i: {};\nValue of first derivative of f1_i: {}\n".format(f1_i.val, f1_i.der))
print("Value of f1_j: {};\nValue of first derivative of f1_j: {}\n".format(f1_j.val, f1_j.der))
# -
# ### Vector operations
# Vector operations are identical to those in the ```AutoDiff``` case. The ```AutoDiffVector``` objects essentially operate as dictionaries containing a set of ```AutoDiff``` objects that can undergo mathematical operations by relying on the ```AutoDiff``` methods that underlie the ```AutoDiffVector``` class attributes and methods.
# ---
# # Package Extension: Reverse Mode
#
# As an extension to our package, we have implemented reverse mode AD after having received approval for our extension. This section of our documentation details our reverse mode implementation and how it can be used.
#
# ## Reverse mode
# The reverse mode is a method of performing AD that rests on an important mathematical property: any differentiable algorithm can be translated into a sequence of assignments of basic mathematical operations.
#
# This property motivates the first part of the reverse mode - namely, the forward pass, which regenerates the function we would like to evaluate from its variable inputs. At this juncture, we store the value of the partial derivatives of each of the elementary functions.
#
# Subsequently, we compute all the derivatives in reverse order using the partial derivatives we have already obtained from our forward pass. This, then, enables us to evaluate the derivative of a function through the reverse mode.
#
#
#
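# The two passes can be illustrated with a minimal, self-contained sketch (hypothetical toy code, not our package's actual classes): each node records its parents and the local partial derivatives during the forward pass, and a reverse sweep then accumulates adjoints via the chain rule.

```python
class Node:
    """Toy reverse-mode node: stores a value, its parents, and an adjoint."""
    def __init__(self, val, parents=()):
        self.val = val
        self.parents = parents  # list of (parent_node, local_partial_derivative)
        self.adjoint = 0.0

    def __add__(self, other):
        return Node(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.val * other.val, [(self, other.val), (other, self.val)])

def backward(node, adjoint=1.0):
    """Reverse pass: push adjoints to parents along every path (chain rule)."""
    node.adjoint += adjoint
    for parent, local in node.parents:
        backward(parent, adjoint * local)

x1, x2 = Node(4.0), Node(7.0)
f = x1 * x2 + x1          # f = x1*x2 + x1 = 32.0
backward(f)
print(f.val, x1.adjoint, x2.adjoint)  # 32.0, df/dx1 = x2 + 1 = 8.0, df/dx2 = x1 = 4.0
```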
# ## ```AutoDiffReverse```
# We have created a new class of objects called ```AutoDiffReverse``` that operate similarly to regular ```AutoDiff``` objects, except that ```AutoDiffReverse``` objects rely on the reverse mode to evaluate derivatives rather than the forward mode.
#
# Much like the ```AutoDiff``` objects, our ```AutoDiffReverse``` objects have the following three attributes:
# - ```var```: name of variable
# - ```val```: value of the variable
# - ```der```: value of the derivative of the variable
#
# In the case of ```AutoDiffReverse```, the first input is the value at which to evaluate the derivative whilst the second input is the variable name (the reverse of the ```AutoDiff``` argument order). This is because variable names are optional here, since ```AutoDiffReverse``` objects are used in intermediary steps of our evaluation table.
#
# ## Dependencies
# Our reverse mode implementation relies on ```NumPy``` for its mathematical functions used in ```AutoDiffReverse```'s mathematical operations.
#
# Additionally, ```AutoDiffReverse``` relies on ```Pandas```, as the evaluation table generated during the forward pass is created using a ```Pandas``` dataframe.
#
# ## Data structures
# As mentioned earlier, a key data structure in our implementation is the ```Pandas``` dataframe used to store the evaluation table. We chose ```Pandas``` because it allows for the simple and quick generation of tables with which we can present our evaluation table cleanly. This improves interpretability and makes it easy for users to extract specific values from the table.
#
#
# ## Mathematical operations
# Not unlike our ```AutoDiff``` implementation, ```AutoDiffReverse``` includes the following mathematical operations using ```NumPy``` and ```math```. All of the following functions can take in scalar values, vectors (Python lists), and ```AutoDiffReverse``` objects.
#
# ### Trigonometric functions
# - ```sin(x)```
#
# - ```cos(x)```
#
# - ```tan(x)```
#
# - ```sec(x)```
#
# - ```csc(x)```
#
# - ```cot(x)```
#
# - ```arcsin(x)```
#
# - ```arccos(x)```
#
# - ```arctan(x)```
#
# - ```arcsec(x)```
#
# - ```arccsc(x)```
#
# - ```arccot(x)```
#
# - ```sinh(x)```
#
# - ```cosh(x)```
#
# - ```tanh(x)```
#
#
# ### Logarithms and exponentials
#
# - ```log(x)``` of user specified base
#
# - ```exp(x)```
#
#
# ## Usage
#
# We will now illustrate a typical use case of our reverse mode implementation in order to evaluate the derivatives of a given function at a given point and to generate the corresponding evaluation table.
# +
from superautodiff import AutoDiffReverse
from superautodiff.autodiffreverse import *
# Instantiate AutoDiffReverse objects
x1 = sad.AutoDiffReverse(4,"x1")
x2 = sad.AutoDiffReverse(7,"x2")
x3 = sad.AutoDiffReverse(3,"x3")
# +
# We create a function using the AutoDiffReverse objects
f = x1 - 3*x2 + x3*x2
# Already we can obtain the value of f
print("Value of f: ", f.val)
# -
# Assign and display the evaluation table for function f and examine
forward_table = f.pass_table()
display(forward_table)
# +
# Input the forward table and array of variable names into reversepass to obtain derivatives
der = reversepass(forward_table,['x1', 'x2', 'x3'])
print("Value of derivatives: ",der)
# Clear the table
f.clear_table()
# -
# ## Evaluation table
# As illustrated above, the evaluation table is returned by ```pass_table()``` (stored as ```forward_table``` above) and can be accessed at any juncture in the reverse mode automatic differentiation process.
#
# The evaluation table consists of five columns; the first column (Node) contains the elements that are used to calculate the composite variables that are stored in the second (d1) and fourth columns (d2). Meanwhile, the third (d1value) and fifth (d2value) columns contain the derivative of each node with respect to d1 and d2.
#
# The table, therefore, contains the value of composite and intermediary variables that are used in the reverse mode AD process.
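# As a concrete illustration of this layout, the table for the function f = x1 - 3*x2 + x3*x2 from the usage example above might look like the following, built directly with ```Pandas```. This is a hypothetical mock-up following the column description; the actual table produced by ```pass_table()``` may differ in detail.

```python
import pandas as pd

# Intermediate nodes for f = x1 - 3*x2 + x3*x2 at x1=4, x2=7, x3=3:
#   v1 = x1 - 3*x2, v2 = x3*x2, v3 = v1 + v2
table = pd.DataFrame({
    "Node":    ["v1", "v2", "v3"],
    "d1":      ["x1", "x3", "v1"],
    "d1value": [1.0,  7.0,  1.0],   # d(v1)/d(x1)=1, d(v2)/d(x3)=x2=7, d(v3)/d(v1)=1
    "d2":      ["x2", "x2", "v2"],
    "d2value": [-3.0, 3.0,  1.0],   # d(v1)/d(x2)=-3, d(v2)/d(x2)=x3=3, d(v3)/d(v2)=1
})
print(table)
```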
# ---
# # Future Work and Possible Extensions
#
# Although ```superautodiff``` is to be considered as a finished product that users should be able to satisfactorily install and use, our team has several ideas for possible extensions and developments that we hope to implement in the future.
# ### Further vectorization
# Presently, although our ```AutoDiff``` objects are vectorizable as ```AutoDiffVector``` objects, the functionality of these objects is still very much limited. Our ```AutoDiffVector``` objects are able to work with scalar numerics and single ```AutoDiff``` objects, but not yet with vectors of these objects.
#
# We hope to implement further vectorization in order to allow for more flexible and complex vector operations for our ```AutoDiffVector``` objects. This implementation will involve working with ```NumPy``` arrays and potentially ```Numba``` if it is the case that our vector functions are found to be slow.
#
# We foresee that this will involve an overhaul of the ```AutoDiffVector``` class with more overrides so as to ensure that the vectors are fully functional and that we have some means of tracking the variables being passed in (which will be valuable as the vectorized operations might get quite messy when large numbers of variables are involved).
#
# Additionally, we expect that we will have to write several functions that help to simplify matrix operations; some functions are detailed as follows:
# - ```sad.dot(v_1, v_2)```: Takes in two ```AutoDiffVector``` objects and returns the dot product of the two.
# - ```sad.cross(v_1, v_2)```: Takes in two ```AutoDiffVector``` objects and returns the cross product of the two.
# - ```sad.determinant(v_1)```: Takes in a square ```AutoDiffVector``` object and returns the matrix determinant.
# - ```sad.eye(n)```: Takes in a scalar integer ```n``` and generates an $n \times n$ identity matrix that can operate with ```AutoDiffVector``` objects.
# - ```sad.trace(v_1)```: Takes in a square ```AutoDiffVector``` object and returns the trace of the matrix.
# - ```sad.reshape(v_1, dim)```: This will either be implemented as a function or an attribute; if it were implemented as a function, the function would take in a matrix and a tuple containing the matrix dimensions and reshape the given matrix to fit the specified dimensions similarly to how ```NumPy```'s reshape operates.
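# For instance, a future ```sad.dot``` could propagate derivatives with the product rule: for vector functions $u(x)$ and $v(x)$ of one scalar variable, $d(u \cdot v)/dx = u' \cdot v + u \cdot v'$. The sketch below is hypothetical (the function name and signature are assumptions, not our current API) and works on plain ```NumPy``` arrays of values and derivatives:

```python
import numpy as np

def dot_with_derivative(u_val, u_der, v_val, v_der):
    """Dot product of two vector functions of one scalar variable,
    returning both the value and its derivative via the product rule."""
    val = np.dot(u_val, v_val)
    der = np.dot(u_der, v_val) + np.dot(u_val, v_der)
    return val, der

# u = (x, x**2), v = (3, x) at x = 2: u.v = 3x + x**3 = 14, d/dx = 3 + 3x**2 = 15
x = 2.0
val, der = dot_with_derivative(np.array([x, x**2]), np.array([1.0, 2 * x]),
                               np.array([3.0, x]), np.array([0.0, 1.0]))
print(val, der)  # 14.0 15.0
```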
#
# This implementation will require quite a bit of work but will be very useful, as it will enable users to perform a much broader set of operations with greater convenience. Revamping our vectorization should also make these operations far faster than in our current implementation, whose heavy reliance on inefficient loops is presently somewhat of a shortcoming.
# ### Support for higher-order derivatives
# Another possible extension we hope to implement in the future is to build up our package's functionality in order to support operations involving higher-order derivatives.
#
# In theory, this approach would not be very difficult because both the forward mode and the reverse mode that we have implemented will have the core competency required to evaluate derivatives at higher-orders as well as cross-partial derivatives. We essentially need to extend our existing capabilities to differentiate variables multiple times and support cross-differentiation. The difficulty, we foresee, will most likely come from how we will need to overhaul our base classes in order to support this and how we can ensure that this additional functionality is user-friendly and intuitive; i.e. that we don't want our package to become cluttered, messy, and hard to use.
#
# Provisionally, we think that this will involve taking in the order of derivatives required as an input and developing a method of categorizing our derivatives and tracking all the derivatives and their orders. We will also need to refine our Counter output and variable naming convention to ensure that the outputs remain clear and informative for users.
#
# We think that expanding our functionality for higher-order derivatives is an important future feature that we definitely would like to have, as we believe it is important and useful for scientific computing. We can subsequently extend this slightly further and include functionality for generating Hessian and bordered Hessian matrices, which are extremely useful for optimization problems.
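# One provisional route, sketched below with a hypothetical toy dual-number class (not our package's actual API), is nesting first-order differentiation: differentiating a derivative-producing function yields the second derivative.

```python
class Dual:
    """Toy forward-mode dual number that supports nesting for higher orders."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the derivative with 1; this works even when x is itself a Dual,
    # which is exactly what makes nesting (higher orders) possible.
    return f(Dual(x, 1.0)).der

f = lambda x: x * x * x                               # f(x) = x^3
first = derivative(f, 2.0)                            # f'(2)  = 3*2**2 = 12.0
second = derivative(lambda t: derivative(f, t), 2.0)  # f''(2) = 6*2    = 12.0
print(first, second)  # 12.0 12.0
```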
# ### Visual interface
#
# Presently, our reverse mode implementation implicitly constructs an evaluation table of values and a computational graph as part of its back-end operations that users do not really see (even though they can examine a simple evaluation table that we draw with ```Pandas```).
#
# Our team hopes to develop an extension that will enable our package to output clearer, more informative evaluation tables that abide by design principles so that they are informative and interpretable. The hope is that users will not only be able to use our package to evaluate derivatives, but also learn about the underlying processes (such as each step of the forward pass). Users should also be able to easily observe the composite derivative values should they wish to obtain them.
#
# Additionally, we hope to use tools such as ```Graphviz``` to generate the computational graph from our existing implementation and/or use the d3 JavaScript library in order to create scalable vector graphics that can be used to visualize the computational graph. This can be particularly useful for users who are keen on learning and understanding the automatic differentiation process.
# ---
# # References
# - <NAME>. Lecture 10: Automatic Differentiation: The Forward Mode. Cambridge, MA; CS207 Fall '19
# - Hoffman, <NAME>. A Hitchhiker’s Guide to Automatic Differentiation. Numerical Algorithms, 72, 24 October 2015, 775-811, Springer Link, DOI 10.1007/s11075-015-0067-6.
# - <NAME>. A simple explanation of reverse-mode automatic differentiation. 24 March 2009. https://justindomke.wordpress.com/2009/03/24/a-simple-explanation-of-reverse-mode-automatic-differentiation/
#
# We would also like to thank our head instructor, Professor <NAME>, and our Teaching Fellow, <NAME>, for their input, contribution, and support.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py3-gpu-jupyter-ds
# language: python
# name: py3-gpu-jupyter-ds
# ---
# + [markdown] id="hRTa3Ee15WsJ"
# # Transfer learning and fine-tuning base EfficientNet_B0 model
# + id="gjWM9Lkqm0C1"
# # !pip install tf-nightly
# + id="TqOt6Sv7AsMi"
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# + [markdown] id="v77rlkCKW0IJ"
# ## Data preprocessing
# + [markdown] id="0GoKGm1duzgk"
# ### Data download
# + [markdown] id="vHP9qMJxt2oz"
# In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a `tf.data.Dataset` for training and validation using the `tf.keras.preprocessing.image_dataset_from_directory` utility. You can learn more about loading images in this [tutorial](https://www.tensorflow.org/tutorials/load_data/images).
# + id="ro4oYaEmxe4r"
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (224, 224)
train_dataset = image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
# + id="cAvtLwi7_J__"
validation_dataset = image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
# +
# # ?image_dataset_from_directory
# -
validation_dir
# !ls /root/.keras/datasets/cats_and_dogs_filtered/validation
# + [markdown] id="yO1Q2JaW5sIy"
# Show the first nine images and labels from the training set:
# + id="K5BeQyKThC_Y"
class_names = train_dataset.class_names
plt.figure(figsize=(10, 10))
# Take one batch (BATCH_SIZE images) and the ground-truth labels from train_dataset
for images, labels in train_dataset.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
# + [markdown] id="EZqCX_mpV3Mx"
# As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using ```tf.data.experimental.cardinality```, then move 20% of them to a test set.
# + id="uFFIYrTFV9RO"
val_batches = tf.data.experimental.cardinality(validation_dataset)
# The validation set contains 32 batches (32 images each); take 1/5 of them (6 batches) as the test set and keep the remaining 26 batches for validation
test_dataset = validation_dataset.take(val_batches//5)
validation_dataset = validation_dataset.skip(val_batches//5)
# + id="Q9pFlFWgBKgH"
print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset))
print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset))
# + [markdown] id="MakSrdd--RKg"
# ### Configure the dataset for performance
# + [markdown] id="22XWC7yjkZu4"
# Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the [data performance](https://www.tensorflow.org/guide/data_performance) guide.
# + id="p3UUPdm86LNC"
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
# -
len(train_dataset),len(validation_dataset),len(test_dataset)
# + [markdown] id="MYfcVwYLiR98"
# ### Use data augmentation
# + [markdown] id="bDWc5Oad1daX"
# When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduce [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit). You can learn more about data augmentation in this [tutorial](https://www.tensorflow.org/tutorials/images/data_augmentation).
# -
data_augmentation = tf.keras.Sequential([
    # random horizontal flip
    tf.keras.layers.experimental.preprocessing.RandomFlip(mode='horizontal'),
    # random rotation
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.4),
    # random zoom (height by up to 50%, width by up to 20%)
    tf.keras.layers.experimental.preprocessing.RandomZoom(.5, .2),
    # shift the image vertically and horizontally by up to 20%
    tf.keras.layers.experimental.preprocessing.RandomTranslation(height_factor=0.2, width_factor=0.2),
    # independently adjust the contrast of each image's channels
    tf.keras.layers.experimental.preprocessing.RandomContrast(factor=0.1),
])
# + [markdown] id="s9SlcbhrarOO"
# Note: These layers are active only during training, when you call `model.fit`. They are inactive when the model is used in inference mode, e.g. in `model.evaluate` or `model.predict`.
# + [markdown] id="9mD3rE2Lm7-d"
# Let's repeatedly apply these layers to the same image and see the result.
# -
image, label = next(iter(validation_dataset))
len(image)
# + id="aQullOUHkm67"
plt.figure(figsize=(3, 3))
first_image = image[0]
plt.subplot(1,1,1)
plt.imshow(first_image.numpy().astype("uint8"))
plt.title("original")
plt.axis('off')
plt.show()
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
plt.imshow(augmented_image[0] / 255)
plt.title("Data Augmentation")
plt.axis('off')
# + [markdown] id="bAywKtuVn8uK"
# ### Rescale pixel values
# + [markdown] id="xnr81qRMzcs5"
# Note: Alternatively, you could rescale pixel values from `[0,255]` to `[-1, 1]` using a [Rescaling](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Rescaling) layer.
# + id="R2NyJn4KQMux"
rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset= -1)
# -
# After passing through rescale, the values lie between -1 and 1
a=rescale(images[0])
print(np.min(a),np.max(a))
# The original images have pixel values between 0 and 255
np.min(images[0]),np.max(images[0])
# + [markdown] id="OkH-kazQecHB"
# ## Create the base model from the pre-trained convnets
# + id="19IQ2gqneqmS"
# Create the base model from the pre-trained model EfficientNetB0
# Set the input shape to (224, 224, 3)
IMG_SHAPE = IMG_SIZE + (3, )
base_model = tf.keras.applications.EfficientNetB0(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet',
drop_connect_rate=0.4)
# + [markdown] id="AqcsxoJIEVXZ"
# This feature extractor converts each `224x224x3` image into a `7x7x1280` block of features. Let's see what it does to an example batch of images:
# + id="Y-2LJL0EEUcx"
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
# + [markdown] id="rlx56nQtfe8Y"
# ## Feature extraction
# In this step, you will freeze the convolutional base created from the previous step and to use as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.
# + [markdown] id="CnMLieHBCwil"
# ### Freeze the convolutional base
# + [markdown] id="7fL6upiN3ekS"
# It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. EfficientNetB0 has many layers, so setting the entire model's `trainable` flag to False will freeze all of them.
# + id="OTCJH4bphOeo"
base_model.trainable = False
# + [markdown] id="jsNHwpm7BeVM"
# ### Important note about BatchNormalization layers
#
# Many models contain `tf.keras.layers.BatchNormalization` layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
#
# When you set `layer.trainable = False`, the `BatchNormalization` layer will run in inference mode, and will not update its mean and variance statistics.
#
# When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing `training = False` when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
#
# For details, see the [Transfer learning guide](https://www.tensorflow.org/guide/keras/transfer_learning).
# + id="KpbzSmPkDa-N"
# Let's take a look at the base model architecture
base_model.summary()
# -
# Check the number of layers and the name of the first layer
len(base_model.layers),base_model.layers[0].name
# + [markdown] id="wdMRM8YModbk"
# ### Add a classification head
# + [markdown] id="QBc31c4tMOdH"
# To generate predictions from the block of features, average over the `7x7` spatial locations, using a `tf.keras.layers.GlobalAveragePooling2D` layer to convert the features to a single 1280-element vector per image.
# + id="dLnpMF5KOALm"
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
# -
# There are 1280 elements in total, used to decide whether the image is a cat or a dog
feature_batch_average[0]
# + [markdown] id="O1p0OJBR6dOT"
# Apply a `tf.keras.layers.Dense` layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a `logit`, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
# + id="Wv4afXKj6cVa"
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
# -
prediction_batch[0]
# + [markdown] id="HXvz-ZkTa9b3"
# Build a model by chaining together the data augmentation, rescaling, base_model and feature extractor layers using the [Keras Functional API](https://www.tensorflow.org/guide/keras/functional). As previously mentioned, use training=False as our model contains a BatchNormalization layer.
# + id="DgzQX6Veb2WT"
# Chain the layers together using the functional API
inputs = tf.keras.Input(shape=(IMG_SIZE[0], IMG_SIZE[1], 3))
x = data_augmentation(inputs)
# Rescaling is disabled here; enabling it degraded accuracy (EfficientNet expects raw pixel inputs)
# x = rescale(x)
# training=False runs the base model in inference mode; training=True would run it in training mode
x = base_model(x, training=False)
x = global_average_layer(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.BatchNormalization()(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs)
# -
model.summary()
# + [markdown] id="g0ylJXE_kRLi"
# ### Compile the model
#
# Compile the model before training it. Since there are two classes, use a binary cross-entropy loss with `from_logits=True` since the model provides a linear output.
# + id="RpR8HdyMhukJ"
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
# + id="I8ARiyMFsgbH"
model.summary()
# + [markdown] id="lxOcmVr0ydFZ"
# The millions of parameters in EfficientNetB0 are frozen, but there are trainable parameters in the top layers: the Dense layer's weights and biases, plus the BatchNormalization layer's scale and offset, each stored as `tf.Variable` objects.
# + id="krvBumovycVA"
len(model.trainable_variables)
# -
model.trainable_variables
# + [markdown] id="RxvgOYTDSWTx"
# ### Train the model
#
# After training for 10 epochs, you should see ~94% accuracy on the validation set.
#
# + id="Om4O3EESkab1"
initial_epochs = 10
loss0, accuracy0 = model.evaluate(validation_dataset)
# + id="8cYT1c48CuSd"
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
# + id="JsaRFlZ9B6WK"
history = model.fit(train_dataset,
epochs=initial_epochs,
validation_data=validation_dataset)
# + [markdown] id="Hd94CKImf8vi"
# ### Learning curves
#
# Let's take a look at the learning curves of the training and validation accuracy/loss when using the EfficientNetB0 base model as a fixed feature extractor.
# + id="53OTCh3jnbwV"
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
# + [markdown] id="CqwV-CRdS6Nv"
# ## Fine tuning
# + [markdown] id="CPXnzUK0QonF"
# ### Un-freeze the top layers of the model
#
# + [markdown] id="rfxv_ifotQak"
# All you need to do is unfreeze the `base_model` and set the bottom layers to be un-trainable. Then, you should recompile the model (necessary for these changes to take effect), and resume training.
# + id="4nzcagVitLQm"
base_model.trainable = True
# + id="-4HgVAacRs5v"
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
# This architecture contains shortcut connections within each block, so cutting in the
# middle of a block can seriously hurt final performance.
# Layer 119 is the start of block5a_expand_conv
fine_tune_at = 119
# Freeze the first 118 layers; layers from 119 onward remain trainable, which
# increases the number of fitted layers and should yield better accuracy
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
# + [markdown] id="4Uk1dgsxT0IS"
# ### Compile the model
# -
# Using Adam as the optimizer, the final test result is (recommended):
# 6/6 [==============================] - 0s 26ms/step - loss: 0.0465 - accuracy: 0.9896
# Test accuracy : 0.9895833134651184
#
# Using RMSProp as the optimizer, the test result is:
# 6/6 [==============================] - 0s 26ms/step - loss: 0.0479 - accuracy: 0.9792
# Test accuracy : 0.979166
# + id="NtUnaz0WUDva"
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer = tf.keras.optimizers.Adam(learning_rate=base_learning_rate/10),
metrics=['accuracy'])
# + id="WwBWy7J2kZvA"
model.summary()
# + id="bNXelbMQtonr"
# Number of trainable variables
len(model.trainable_variables)
# + [markdown] id="4G5O4jd6TuAG"
# ### Continue training the model
# + id="ECQLkAsFTlun"
# initial_epoch resumes training from the last epoch of the previous run (epoch 10), continuing for the additional fine-tuning epochs
fine_tune_epochs = 7
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_dataset,
epochs=total_epochs,
initial_epoch=history.epoch[-1],
validation_data=validation_dataset)
# + id="PpA8PlpQKygw"
# Append this run's history to the earlier curves so the full training history is plotted
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
# + id="chW103JUItdk"
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
# plt.ylim([0.8, 1])
plt.ylim([min(plt.ylim()),1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
# + [markdown] id="R6cWgjgfrsn5"
# ### Evaluation and prediction
# -
# ### Final results of transfer learning with the EfficientNetB0 base model
# + id="2KyNhagHwfar"
# Final results of transfer learning with the EfficientNetB0 base model
loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
# + [markdown] id="8UjS5ukZfOcR"
# And now you are all set to use this model to predict if your pet is a cat or dog.
# -
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
len(image_batch)
# + id="RUNoQNgtfNgt"
#Retrieve a batch of images from the test set
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()
# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
print('Predictions:\n', predictions.numpy())
print('Labels:\n', label_batch)
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].astype("uint8"))
plt.title(class_names[predictions[i]])
plt.axis("off")
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: BOBSIM
# language: python
# name: bobsim
# ---
# +
from data_pipeline.open_data_raw_material_price.core import OpenDataRawMaterialPrice
from data_pipeline.open_data_marine_weather.core import OpenDataMarineWeather
from data_pipeline.open_data_terrestrial_weather.core import OpenDataTerrestrialWeather
import pandas as pd
import functools
from sklearn.preprocessing import QuantileTransformer, PowerTransformer ,MinMaxScaler, StandardScaler, RobustScaler, MaxAbsScaler, Normalizer
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import TimeSeriesSplit
# transformer
def log_transform(df):
return np.log1p(df)
def sqrt_transform(df):
return np.sqrt(df)
log = log_transform
sqrt = sqrt_transform
min_max = MinMaxScaler()
standard = StandardScaler()
robust = RobustScaler()
max_abs = MaxAbsScaler()
normal = Normalizer()
quantile = QuantileTransformer()
box_cox = PowerTransformer(method='box-cox')
yeo_johnson = PowerTransformer(method='yeo-johnson')
# box-cox requires strictly positive inputs, so it is excluded from the search below
t_names = ['log', 'sqrt', 'min_max', 'standard', 'robust', 'max_abs', 'quantile', 'yeo_johnson', 'None']
transformers = [log, sqrt, min_max, standard, robust, max_abs, quantile, yeo_johnson, 'None']
# +
def sum_corr(df):
# default: method=pearson, min_periods=1
# method{‘pearson’, ‘kendall’, ‘spearman’}
corr = df.corr()
return abs(corr['당일조사가격'].drop('당일조사가격')).sum()
def analyze_skew(df):
    # NOTE: get_skews is assumed to be defined elsewhere in the project
    return get_skews(df)
def analyze_coef(df):
    # NOTE: sum_coef is assumed to be defined elsewhere in the project
    return sum_coef(df)
def transform(transformer, df):
if isinstance(transformer, TransformerMixin):
return pd.DataFrame(transformer.fit_transform(df), columns=df.columns)
elif transformer == 'None':
return df
else:
return transformer(df)
def build_dataset(date="201908"):
t = OpenDataTerrestrialWeather(
date=date
)
t_df = t.clean(t.filter(t.input_df))
m = OpenDataMarineWeather(
date=date
)
m_df = m.clean(m.filter(m.input_df))
p = OpenDataRawMaterialPrice(
date=date
)
p_df = p.clean(p.filter(p.input_df))
print(p_df)
w_df = pd.merge(
t_df, m_df,
how='inner', on="일시"
)
origin_df = pd.merge(
p_df, w_df, how="inner", left_on="조사일자", right_on="일시"
).drop("일시", axis=1).astype(dtype={"조사일자": "datetime64"})
return origin_df
def split_xy(df):
X = df.drop("당일조사가격" ,axis=1)
y = df['당일조사가격'].rename('price')
return X, y
def corr_xy(x, y):
corr = pd.concat([x,y] ,axis=1).corr()
return abs(corr['price']).drop('price').sum()
def search_transformers(column, X: pd.DataFrame, y: pd.Series):
"""
iterate transformer for X and compare with y (corr_xy)
"""
x = X[column]
l_tx = list(map(functools.partial(transform, df=pd.DataFrame(x)), transformers))
l_coef = list(map(functools.partial(corr_xy, y=y), l_tx))
# find max coef and index
max_coef = max(l_coef)
max_index = l_coef.index(max_coef)
transformed_column = l_tx[max_index]
# proper_transformer = t_names[max_index]
return transformed_column
def iterate_x(y: pd.Series, X: pd.DataFrame):
    # transform each column of X with its best transformer, then re-attach y
    transformed = [search_transformers(col, X=X, y=y) for col in X.columns]
    return pd.concat(transformed + [y], axis=1)
def grid_search(X: pd.DataFrame, y: pd.Series):
"""
return: result grid, pd DataFrame
"""
l_ty = list(map(functools.partial(transform, df=pd.DataFrame(y)), transformers))
# iterate y
result = list(map(functools.partial(iterate_x, X=X), l_ty))
#print(result)
return result
def customized_rmse(y, y_pred):
error = y - y_pred
def penalize(x):
if x > 0:
# if y > y_pred, penalize 10%
return x * 1.1
else:
return x
X = np.vectorize(penalize)(error)
return np.sqrt(np.square(X).mean())
def set_train_test(df:pd.DataFrame):
"""
TODO: search grid to find proper train test volume
:param df: dataset
:return: train Xy, test Xy
"""
predict_days = 7
# TODO: it should be processed in data_pipeline
reversed_time = df["조사일자"].drop_duplicates().sort_values(ascending=False).tolist()
standard_date = reversed_time[predict_days]
train = df[df.조사일자.dt.date < standard_date]
test = df[df.조사일자.dt.date >= standard_date]
return train, test
def f(df:pd.DataFrame):
    # split a grid-searched frame into features X and target y
    X = df.drop(columns=["price", "조사일자"])
    y = df['price']
    return X, y
def inverser_transform(step, x, transformer):
    """
    Invert the transform named by `step`: function-based transforms (log,
    sqrt) are inverted analytically, 'None' is a no-op, and sklearn
    transformers are inverted via their own inverse_transform.
    """
    if step == 'log':
        return np.expm1(x)
    elif step == 'sqrt':
        return x ** 2
    elif step == 'None':
        return x
    else:
        return transformer.inverse_transform(x)
def grid_search_matrix(df:pd.DataFrame, c, t_name, transformer):
    """
    param df: frame already grid-searched over candidate transformers
    return: score (customized RMSE on the inverse-transformed scale)
    """
print(pd.concat([c,df], axis=1))
train, test = set_train_test(pd.concat([c,df], axis=1))
train_X, train_y = f(train)
test_X, test_y = f(test)
enet = ElasticNet()
tscv = TimeSeriesSplit(n_splits=2)
parametersGrid = {"max_iter": [1, 5, 10],
"alpha": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100],
"l1_ratio": np.arange(0.0, 1.0, 0.1)}
grid = GridSearchCV(enet, parametersGrid, scoring=make_scorer(customized_rmse, greater_is_better=False), cv=tscv.split(train_X) )
grid.fit(train_X,train_y)
pred_y = grid.predict(test_X)
r_test = inverser_transform(t_name, test_y.to_numpy().reshape(-1,1), transformer)
r_pred = inverser_transform(t_name, pred_y.reshape(-1,1), transformer)
return customized_rmse(r_test, r_pred)
def get_final_df(df):
sum_df = pd.DataFrame(np.array(df.values.tolist())[:, :, 1], df.index, df.columns).astype("float").sum(axis=1).rename("sum")
transformer_df = pd.DataFrame(np.array(df.values.tolist())[:, :, 0], df.index, df.columns)
return pd.concat([sum_df, transformer_df], axis=1)
# main: pipeline
def pipeline(date="201908"):
origin_df = build_dataset(date=date)
numeric_df = origin_df.select_dtypes(exclude=['object', 'datetime64[ns]'])
X, y = split_xy(numeric_df)
return grid_search(X, y), origin_df['조사일자']
# rmse = grid_search_matrix()
# return get_final_df(result_df),sum_corr(numeric_df), rmse
# -
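As a standalone check of the asymmetric loss above: `customized_rmse` inflates positive errors (actual above forecast) by 10% before squaring, so under-forecasts are penalized more than over-forecasts. A minimal sketch with toy numbers, restating the function so the cell runs on its own:

```python
import numpy as np

def customized_rmse(y, y_pred):
    # positive error (y > y_pred) is penalized by 10% before squaring
    error = y - y_pred
    penalized = np.where(error > 0, error * 1.1, error)
    return np.sqrt(np.square(penalized).mean())

y = np.array([100.0, 100.0])
print(customized_rmse(y, np.array([110.0, 110.0])))  # over-forecast: 10.0
print(customized_rmse(y, np.array([90.0, 90.0])))    # under-forecast: 11.0
```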
result, date_series = pipeline("201908")
result[1]
# score every transformer variant of the dataset
scores = [
    grid_search_matrix(X, c=date_series, t_name=t_name, transformer=transformer)
    for X, t_name, transformer in zip(result, t_names, transformers)
]
scores
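The column-wise transformer search above picks, per feature, the transform with the largest absolute correlation against the target. The idea can be sketched on synthetic data (a toy illustration, not the notebook's dataset):

```python
import numpy as np

# the target is exactly the log of the feature, so the log transform should win
x = np.linspace(1.0, 100.0, 200)
y = np.log1p(x)

candidates = {"None": x, "log": np.log1p(x), "sqrt": np.sqrt(x)}
corrs = {name: abs(np.corrcoef(v, y)[0, 1]) for name, v in candidates.items()}
best = max(corrs, key=corrs.get)
print(best)  # log
```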
| src/main/python/super_correlation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import x3d
x3dDoc = x3d.X3D()
scene = x3d.Scene()
x3dDoc.scene = scene
x3dDoc.toXML()
scene.children
Shape = x3d.Shape()
Shape
scene.children = [Shape]
scene.children
scene.toXML()
x3dDoc.toXML()
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Machine Learning and Equity Index Returns
# -------
# #### Short Summary of the Results
#
# Author: <NAME>
#
# Date: 26.06.2019
# ## Motivation
# - Predicting monthly S&P 500 index returns using historical macroeconomic data and technical indicators. The S&P 500 is a broad proxy for the US equity market.
# - An important task from both academic and practical perspectives
# - A task with a high noise-to-signal ratio
# - Very few models are able to outperform the simple historical mean return
# - High-dimensional setting with lack of data (number of predictors is comparable to the number of observations)
#
# ## Models
# I use models that perform well in the setting with many predictors and lack of observations:
# - Ordinary least squares (OLS) with prior dimensionality reduction (see [Neely et al. (2014)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1787554)).
# - Linear models with L1 and L2 regularisation terms (Ridge, Lasso, and Elastic Net). The models were introduced in Hoerl and Kennard (Technometrics, 1970), Tibshirani (Journal of the Royal Statistical Society, 1996) and
# Zou and Hastie (Journal of the Royal Statistical Society, 2005).
# - It is important to notice that the Elastic Net contains both L1 and L2 regularization terms; Lasso and Ridge are special cases of an Elastic Net with one of the regularization terms equal to zero. I use the Elastic Net and let the cross-validation method choose the optimal hyperparameters and, thereby, the optimal model.
# - Bagging and boosted tree-based methods ([Breiman (2001)](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726))
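To illustrate the point about the Elastic Net nesting Lasso and Ridge: with `l1_ratio=1` it reduces to the Lasso, with `l1_ratio=0` to Ridge, and intermediate values mix both penalties. A sketch on synthetic sparse data (scikit-learn, not the paper's dataset):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
beta = np.zeros(20)
beta[:3] = [1.0, -2.0, 0.5]           # only 3 of 20 predictors matter
y = X @ beta + rng.normal(scale=0.1, size=100)

enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
# the L1 part zeroes out most irrelevant coefficients,
# the L2 part shrinks the surviving ones toward zero
print(np.round(enet.coef_[:3], 2))
```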
#
# ## Data
# Start with the data from [Neely et al. (2014)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1787554):
#
# - 14 Macro Variables + 14 Technical Indicators
# - 1950 – 2011 = 732 months
#
# From the paper:
#
# >Macro Variables:
# >1. Dividend-price ratio (log), DP: log of a twelve-month moving sum of dividends paid on the S&P 500
# index minus the log of stock prices (S&P 500 index).
# >2. Dividend yield (log), DY: log of a twelve-month moving sum of dividends minus the log of lagged stock
# prices.
# >3. Earnings-price ratio (log), EP: log of a twelve-month moving sum of earnings on the S&P 500 index
# minus the log of stock prices.
# >4. Dividend-payout ratio (log), DE: log of a twelve-month moving sum of dividends minus the log of a
# twelve-month moving sum of earnings.
# >5. Equity risk premium volatility, RVOL: based on the moving standard deviation estimator
# >6. Book-to-market ratio, BM: book-to-market value ratio for the Dow Jones Industrial Average.
# >7. Net equity expansion, NTIS: ratio of a twelve-month moving sum of net equity issues by NYSE-listed
# stocks to the total end-of-year market capitalization of NYSE stocks.
# >8. Treasury bill rate, TBL: interest rate on a three-month Treasury bill (secondary market).
# >9. Long-term yield, LTY: long-term government bond yield.
# >10. Long-term return, LTR: return on long-term government bonds.
# >11. Term spread, TMS: long-term yield minus the Treasury bill rate.
# >12. Default yield spread, DFY: difference between Moody’s BAA- and AAA-rated corporate bond yields.
# >13. Default return spread, DFR: long-term corporate bond return minus the long-term government bond
# return.
# >14. Inflation, INFL: calculated from the Consumer Price Index (CPI) for all urban consumers; we use xi,t−1
# in (1) for inflation to account for the delay in CPI releases.
# In this setting, OLS is expected to provide a poor, noisy estimate, which motivates choosing methods that perform well in high-dimensional problems.
#
# >To compare technical indicators with the macroeconomic variables, we employ 14 technical indicators based
# on three popular technical strategies. The first is a moving-average (MA) rule. The second technical strategy is based on momentum. Technical analysts frequently employ volume data in conjunction with past prices to identify market trends.
# In light of this, the final technical strategy that we consider incorporates “on-balance” volume.
#
# See [Neely et al. (2014)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1787554) for more details.
#
#
# ## Exploratory Data Analysis
# In this [Jupyter notebook](https://github.com/paveles/Equity_Premium_ML/blob/master/notebooks/01-pl-first-data-analysis.ipynb) I do exploratory data analysis and find that in a simplified setting considered models are able to outperform historical mean in predicting S&P 500 index returns.
#
# ## Cross-Validation Method
# In this project I develop and implement a novel cross-validation method - one-month forward expanding window nested cross-validation. This cross-validation method chooses the best hyperparameters by comparing the performance of underlying models in the one-month forward predictive setting. Each month those hyperparameters are chosen that ensure the best performance for the historical validation sample. The figure below explains this method.
# ##### Figure: One-Month Forward Expanding Window Nested Cross-Validation Explained
# 
#
#
#
# In the first step of the outer loop, 180 months (1951-1965) are used as a starting sample. To make a forecast for the test month 181, the model with different hyperparameter sets is trained on 179 months and validated on the month 180. The predictions of the different hyperparameter sets are compared for that month, and the set that results in the lowest squared error is chosen. The forecast error for the test month 181 is then calculated.
#
# In the next step of the outer loop, the training and validation window is extended by one month to 181 months. In the first iteration of the inner loop, the model with the same sets of hyperparameters is again trained on 179 months and one-month forward forecast errors for the month 180 (the first validation month) are calculated. In the next iteration of the inner loop, the model is trained on the sample of 180 months and the forecast errors for the month 181 (the second validation month) are calculated. The best set of hyperparameters to forecast returns in the test month 182 is the set that delivers the lowest mean squared error on the validation months, $MSE_{validate}$. In this step of the outer loop, month 181 has been added to the training window and the validation sample consists of months 180 and 181.
#
# This procedure continues until the last period in the data. I calculate mean squared error over all test months of the whole sample period, $MSE_{test}$, for considered models. The best model delivers the lowest $MSE_{test}$. The described cross-validation procedure is a natural historical simulation of a model's performance.
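The outer loop of this procedure can be sketched as a one-step-ahead expanding-window split (a simplified, 0-indexed sketch; the inner validation loop over hyperparameters is omitted):

```python
def expanding_window_splits(n_obs, start=180):
    # yield (train indices, test index): train on everything before month t,
    # forecast month t, then grow the window by one month
    for t in range(start, n_obs):
        yield list(range(t)), t

splits = list(expanding_window_splits(184, start=180))
print(len(splits[0][0]), splits[0][1])  # 180 180
print(splits[-1][1])                    # 183
```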
#
# ## Results
# In the table below I compare performance over different models. $MSE_{validate}$ is the average validation MSE over all sample months. $MSE_{test}$ is the MSE on the test sample. $MSPE^{adj}$ is an adjusted t-statistic from [Clark and West (2007)](https://www.sciencedirect.com/science/article/pii/S0304407606000960) for the statistical test on whether a model under consideration outperforms the simple average mean in predicting index returns. The p-values can be obtained from the respective [tables](https://www.itl.nist.gov/div898/handbook/eda/section3/eda3672.htm). $R^2_{OOS}$ is out-of-sample R-squared introduced by [<NAME> (2008)](rfs.oxfordjournals.org/cgi/doi/10.1093/rfs/hhm055). It measures the reduction in mean squared error on the test sample relative to the historical moving average. Positive $R^2_{OOS}$ means that the respective model outperforms the historical moving average. See the respective papers for more details.
# ##### Table: Predictive performance of different models (one-month forward expanding window nested cross-validation)
# | Name | $MSE_{test}$ | $MSPE^{adj}$ | $MSE_{validate}$ | $R^2_{OOS}$ |
# |-------------------|--------------|-------------------------------|------------------|------------|
# | Moving Mean | 20.22 | 0.14 | 15.94 | 0.0000 |
# | OLS | 22.53 | 1.41 | 14.02 | -0.1145 |
# | PCA + OLS | 20.62 | 2.24 | 18.15 | -0.0198 |
# | **Elastic Net** | **19.89** | **2.91** | **17.61** | **0.0163** |
# | Random Forest | 20.74 | 1.62 | 18.50 | -0.0257 |
# | Adaptive Boosting | 22.08 | 1.10 | 20.38 | -0.0921 |
# | Gradient Boosting | 22.07 | 0.77 | 19.66 | -0.0914 |
# | XGBoost | 23.08 | 0.79 | 20.44 | -0.1413 |
#
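The $R^2_{OOS}$ column follows directly from the MSE columns via $R^2_{OOS} = 1 - MSE_{test}/MSE_{benchmark}$, with the moving mean as benchmark; the reported values can be reproduced from the table up to rounding of the MSEs:

```python
def r2_oos(mse_model, mse_benchmark):
    # Campbell-Thompson out-of-sample R-squared
    return 1.0 - mse_model / mse_benchmark

mse_mean = 20.22
print(round(r2_oos(19.89, mse_mean), 4))  # Elastic Net: ~0.0163
print(round(r2_oos(22.53, mse_mean), 4))  # OLS: ~ -0.1142 (table: -0.1145)
```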
# Results reveal that Elastic Net is the only model that delivers a positive $R^2_{OOS}$ and an $MSE_{test}$ below that of the historical moving mean.
# Its $MSPE^{adj}$ statistic indicates a significant reduction in the mean squared forecasting error at all common significance levels.
#
# In the table below I compare the effect of different cross-validation methods on the accuracy of the Elastic Net model.
#
# ##### Table: Comparing Cross-Validation Methods
# | Name | $MSE_{test}$ | $MSPE^{adj}$ | $MSE_{validate}$ | $R^2_{OOS}$ |
# |---------------------|--------------|------------------------------|------------------|-------------|
# | Enet + No CV | 20.11 | 1.28 | 15.73 | 0.0054 |
# | Enet + 5-fold CV | 20.04 | 2.20 | 15.38 | 0.0091 |
# | Enet + 10-fold CV | 19.97 | 2.73 | 15.27 | 0.0121 |
# | Enet + Expanding CV | 19.89 | 2.91 | 17.61 | 0.0163 |
#
# The results reveal that one-month forward expanding window nested cross-validation delivers the most accurate forecasts.
#
# Is it possible to turn these more accurate forecasts into a profitable trading strategy? The figure below answers this question. In panel A, I plot one-month forward forecasts of the Elastic Net model against realized S&P 500 index returns. As [Campbell and Thompson (2008)](rfs.oxfordjournals.org/cgi/doi/10.1093/rfs/hhm055) note, even a small improvement in forecasting accuracy can have sizable economic effects.
#
# I use these return forecasts to construct a simple strategy. The strategy holds the S&P 500 index when the forecasted return is above 0; otherwise, the strategy's position in the index is zero. The corresponding position dynamics are depicted in panel B of the figure. Finally, panel C shows the value of one dollar invested in 1966 in the S&P 500 index and in the market timing strategy based on the Elastic Net model. By the end of the period, the strategy delivers around twice the wealth of the pure S&P 500 index: one dollar invested in the strategy grows to about 10 dollars, versus around 5 dollars for the S&P 500.
#
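The timing rule can be sketched in a few lines: hold the index when the forecast is positive, stay in cash otherwise, and compound the resulting returns (toy forecast and return series, purely illustrative):

```python
import numpy as np

forecasts = np.array([0.5, -0.2, 1.1, -0.4])   # hypothetical monthly forecasts
returns = np.array([0.02, -0.03, 0.01, 0.04])  # hypothetical realized returns

position = (forecasts > 0).astype(float)        # 1 = long the index, 0 = cash
wealth = np.cumprod(1.0 + position * returns)   # value of 1 dollar invested
print(wealth)  # sidesteps the -3% month; ends near 1.0302
```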
# ##### Figure: Performance of Elastic Net Strategy with One-Month Ahead Expanding Window Nested Cross-Validation:
# 
#
#
#
# ## Ideas and Possible Extensions
#
# - Optimize the new cross-validation algorithm. Currently it runs every cross-validation independently for each period, repeating estimations; reusing results from previous periods could accelerate the calculations by orders of magnitude.
# - Add interaction terms and lags of predictors to improve performance of the linear models.
# - Neural networks could be tested (such as LSTM). But these models usually require larger amounts of data for precise estimates.
# - Extend results to:
# - Other indexes/asset classes
# - Weekly/daily data frequency
# - More various predictors
# - Potential to contribute to the scikit-learn package by adding expanding and rolling window nested cross-validation methods
# - Show other strategy performance measures, such as maximum drawdown, Sharpe ratio, etc.
#
| reports/Results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.5.0-dev
# language: julia
# name: julia-0.5
# ---
# # Querying hierarchical data
#
# In this notebook, we explore how to query hierarchical databases.
# ## The database
#
# We start with loading a sample hierarchical database. Our sample database is derived from the dataset of all employees of the city of Chicago ([source](https://data.cityofchicago.org/Administration-Finance/Current-Employee-Names-Salaries-and-Position-Title/xzkq-xp2w)).
# +
ENV["LINES"] = 15
include("../citydb_json.jl")
citydb
# -
# In the hierarchical data model, data is organized in a tree-like structure. In this database, the data is stored as a JSON document organized in a 2-level hierarchy:
#
# * the top level object contains field `"departments"` with an array of department objects;
# * each department object has fields `"name"`, the name of the department, and `"employees"`, an array of employees;
# * each employee object has fields `"name"`, `"surname"`, `"position"`, `"salary"` describing the employee.
#
# $$
# \text{departments} \quad
# \begin{cases}
# \text{name} \\
# \text{employees} \quad
# \begin{cases}
# \text{name} \\
# \text{surname} \\
# \text{position} \\
# \text{salary}
# \end{cases}
# \end{cases}
# $$
#
# Here is a fragment of the dataset:
#
# ```json
# {
# "departments": [
# {
# "name": "WATER MGMNT",
# "employees": [
# {
# "name": "ALVA",
# "surname": "A",
# "position": "WATER RATE TAKER",
# "salary": 87228
# },
# ...
# ]
# },
# ...
# ]
# }
# ```
# ## Combinators
#
# We may want to ask some questions about the data. For example,
#
# * *What are the departments in the city of Chicago?*
# * *How many employees in each department?*
# * *What is the top salary among all the employees?*
# * and so on...
#
# Even though the raw dataset does not immediately contain any answers to these questions, it has enough information that the answers can be inferred from the data if we are willing to write some code (we use the [Julia](http://julialang.org/) programming language).
#
# Take a relatively complicated problem:
#
# > *For each department, find the number of employees with the salary higher than $100k.*
#
# It can be solved as follows:
# +
Depts_With_Num_Well_Paid_Empls(data) =
map(d -> Dict(
"name" => d["name"],
"N100k" => length(filter(e -> e["salary"] > 100000, d["employees"]))),
data["departments"])
Depts_With_Num_Well_Paid_Empls(citydb)
# -
# Is it a good solution? Possibly. It is certainly compact, thanks to our use of `map` and `filter` to traverse the structure of the database. On the other hand, to write or understand code like this, one needs a solid grasp of non-trivial CS concepts such as higher-order and anonymous functions. One needs to be a professional programmer.
#
# Is there a way to write this query without the use of `map` and `filter` (or, equivalently, nested loops)? Indeed there is, but to show how, we need to introduce some new primitives and operations. We start with the notion of *JSON combinators*.
#
# A JSON combinator is simply a function that maps JSON input to JSON output. Two trivial examples of JSON combinators are:
#
# * `Const(val)`, which maps each input value to constant value `val`.
# * `This()`, which copies the input to the output without changes.
# +
Const(val) = x -> val
C = Const(42)
C(true), C(42), C([1,2,3])
# -
# In this example, `Const(42)` creates a new constant combinator. It is then applied to various input JSON values, always producing the same output.
# +
This() = x -> x
I = This()
I(true), I(42), I([1,2,3])
# -
# Similarly, `This()` creates a new identity combinator. We test it with different input values to assure ourselves that it does not change the input.
#
# Notice the pattern:
#
# * First, we create a combinator *(construct a query)* using combinator constructors.
# * Then, we apply the combinator *(execute the query)* against the data.
#
# In short, by designing a collection of useful combinators, we are creating a query language (embedded in the host language, but this is immaterial).
# ## Traversing the hierarchy
#
# Now let us define a more interesting combinator. `Field(name)` extracts a field value from a JSON object.
Field(name) = x -> x[name]
Salary = Field("salary")
Salary(Dict("name" => "RAHM", "surname" => "E", "position" => "MAYOR", "salary" => 216210))
# Here, to demonstrate field extractors, we defined `Salary`, a combinator that extracts value of field `"salary"` from the input JSON object.
#
# To build interesting queries, we need a way to construct complex combinators from primitives. Let us define composition `(F >> G)` of combinators `F` and `G` that ties `F` and `G` by sending the output of `F` to the input of `G`.
#
# Our first, naive attempt to implement composition is as follows:
# +
import Base: >>
(F >> G) = x -> G(F(x))
# -
# We can traverse the structure of hierarchical data by chaining field extractors with the composition operator.
#
# $$
# \textbf{departments}\gg \quad
# \begin{cases}
# \gg\textbf{name} \\
# \text{employees} \quad
# \begin{cases}
# \text{name} \\
# \text{surname} \\
# \text{position} \\
# \text{salary}
# \end{cases}
# \end{cases}
# $$
#
# For example, let us *find the names of all departments.* We can do it by composing extractors for fields `"departments"` and `"name"`.
# +
Departments = Field("departments")
Name = Field("name")
Dept_Names = Departments >> Name
Dept_Names(citydb)
# -
# What is going on? We expected to get a list of department names, but instead we got an error.
#
# Here is a problem. With the current definition of the ``>>`` operator, expression
# ```julia
# (Departments >> Name)(citydb)
# ```
# is translated to
# ```julia
# citydb["departments"]["name"]
# ```
# But this fails because `citydb["departments"]` is an array and thus doesn't have a field called `"name"`.
#
# Let us demonstrate the behavior of `>>` on the *duplicating* combinator. Combinator `Dup` duplicates its input, that is, for any input value `x`, it produces an array `[x, x]`. See what happens when we compose `Dup` with itself, once or several times:
# +
Dup = x -> Any[x, x]
Dup(0), (Dup >> Dup)(0), (Dup >> Dup >> Dup)(0)
# -
# We need composition `(F >> G)` to be smarter. When `F` produces an array, the composition should apply `G` to *each* element of the array. In addition, if `G` also produces array values, `(F >> G)` concatenates all outputs to produce a single array value.
# +
(F >> G) = x -> _flat(_map(G, F(x)))
_flat(z) =
isa(z, Array) ? foldr(vcat, [], z) : z
_map(G, y) =
isa(y, Array) ? map(_expand, map(G, y)) : G(y)
_expand(z_i) =
isa(z_i, Array) ? z_i : z_i != nothing ? [z_i] : []
# -
# Let us test the updated `>>` operator with `Dup` again. We see that the output arrays are now flattened:
Dup(0), (Dup >> Dup)(0), (Dup >> Dup >> Dup)(0)
# We can get back to our original example, *finding the names of all departments*. Now we get the result we expected.
Dept_Names = Departments >> Name
Dept_Names(citydb)
# Similarly, we can list values of any attribute in the hierarchy tree. For example, let us *find the names of all employees*.
#
# $$
# \textbf{departments}\gg \quad
# \begin{cases}
# \text{name} \\
# \gg\textbf{employees}\gg \quad
# \begin{cases}
# \gg\textbf{name} \\
# \text{surname} \\
# \text{position} \\
# \text{salary}
# \end{cases}
# \end{cases}
# $$
#
# We can do it by composing field extractors on the path from the root of the hierarchy to the `"name"` attribute:
# +
Employees = Field("employees")
Empl_Names = Departments >> Employees >> Name
Empl_Names(citydb)
# -
# ## Summarizing data
#
# Field extractors and composition give us a way to traverse the hierarchy tree. We still need a way to summarize data.
#
# Consider a query: *find the number of departments*. To write it down, we need a combinator that can count the number of elements in an array.
#
# Here is our first attempt to implement it:
Count() = x -> length(x)
# We compose it with a combinator that generates an array of departments to *calculate the number of departments:*
Num_Depts = Departments >> Count()
Num_Depts(citydb)
# But that's not what we expected! Here is the problem: the composition operator does not let `Count()` see the whole array. Instead, `Departments >> Count()` submits each array element to `Count()` one by one and then concatenates the outputs of `Count()`. `Count()`, when its input is a JSON object, returns the number of fields in the object (in this case, 2 fields for all department objects).
#
# The right way to implement `Count()` is to add an array-producing combinator as a parameter:
Count(F) = x -> length(F(x))
Num_Depts = Count(Departments)
Num_Depts(citydb)
# How to use composition with `Count()` correctly? Here is an example: *show the number of employees for each department*. Consider this: the *number of employees* is a (derived) property of *each department*, which suggests composing two combinators: one generating department objects and the other calculating the number of employees for a given department. We get:
Num_Empls_Per_Dept = Departments >> Count(Employees)
Num_Empls_Per_Dept(citydb)
# On the other hand, if we'd like to *calculate the total number of employees*, the parameter of `Count()` should be the combinator that generates all the employees:
Num_Empls = Count(Departments >> Employees)
Num_Empls(citydb)
# We could add other summarizing or *aggregate* combinators. For example, let us define a combinator that finds the maximum value in an array.
Max(F) = x -> maximum(F(x))
# Aggregate combinators can be combined to answer complex questions. For example, let us *find the maximum number of employees per department*. We already have a combinator that generates the number of employees for each department; all that is left is to apply `Max()`.
Max_Empls_Per_Dept = Max(Num_Empls_Per_Dept) # Max(Departments >> Count(Employees))
Max_Empls_Per_Dept(citydb)
# ## Constructing objects
#
# We learned how to traverse and summarize data. Let us show how to create new structured data.
#
# Combinator `Select()` constructs JSON objects. It is parameterized with a list of field names and constructors.
Select(fields...) =
x -> Dict(map(f -> f.first => f.second(x), fields))
# For each input, `Select()` constructs a new JSON object with field values generated by field constructors applied to the input.
#
# Here is a simple example of `Select()` summarizing the input array:
S = Select("len" => Count(This()), "max" => Max(This()))
S([10, 20, 30])
# Similarly, we can summarize any hierarchical dataset. Let us modify the query that *finds the number of employees for each department*. Instead of a raw list of numbers, we will generate a table with the name of the department and its size (the number of employees):
# +
Depts_With_Size =
Departments >> Select(
"name" => Name,
"size" => Count(Employees))
Depts_With_Size(citydb)
# -
# This query could easily be expanded to add more information about the department. For that, we only need to add extra field definitions to the `Select()` clause. Notably, change in one field constructor cannot in any way affect the values of the other fields.
#
# Let us additionally determine *the top salary for each department*:
# +
Depts_With_Size_And_Max_Salary =
Departments >> Select(
"name" => Name,
"size" => Count(Employees),
"max_salary" => Max(Employees >> Salary))
Depts_With_Size_And_Max_Salary(citydb)
# -
# ## Filtering data
#
# Remember the problem we stated in the beginning: *find the number of employees with the salary higher than $100k*. We have almost all pieces we need to construct a solution of this problem. One piece that appears to be missing is a way to refine data. We need a combinator that, given a set of values and a predicate, produces the values that satisfy the predicate condition.
#
# Here is how we can implement it:
Sieve(P) = x -> P(x) ? x : nothing
# Combinator `Sieve(P)` is parameterized with a predicate combinator `P`. A predicate is a combinator that, for any input, returns `true` or `false`. For example, a predicate combinator `(F < G)` with two parameters `F` and `G` returns, for any input `x`, the result of comparison `F(x) < G(x)`.
#
# Let us implement common predicate (and also some arithmetic) combinators:
# +
import Base: >, >=, <, <=, ==, !=, !, &, |, +, -, /, %
(>)(F::Function, G::Function) = x -> F(x) > G(x)
(>)(F::Function, n::Number) = F > Const(n)
(>=)(F::Function, G::Function) = x -> F(x) >= G(x)
(>=)(F::Function, n::Number) = F >= Const(n)
(<)(F::Function, G::Function) = x -> F(x) < G(x)
(<)(F::Function, n::Number) = F < Const(n)
(<=)(F::Function, G::Function) = x -> F(x) <= G(x)
(<=)(F::Function, n::Number) = F <= Const(n)
(==)(F::Function, G::Function) = x -> F(x) == G(x)
(==)(F::Function, n::Number) = F == Const(n)
(!=)(F::Function, G::Function) = x -> F(x) != G(x)
(!=)(F::Function, n::Number) = F != Const(n)
(!)(F::Function) = x -> !F(x)
(&)(F::Function, G::Function) = x -> F(x) && G(x)
(|)(F::Function, G::Function) = x -> F(x) || G(x)
(+)(F::Function, G::Function) = x -> F(x) + G(x)
(+)(F::Function, n::Number) = F + Const(n)
(-)(F::Function, G::Function) = x -> F(x) - G(x)
(-)(F::Function, n::Number) = F - Const(n)
(/)(F::Function, G::Function) = x -> F(x) / G(x)
(/)(F::Function, n::Number) = F / Const(n)
(%)(F::Function, G::Function) = x -> F(x) % G(x)
(%)(F::Function, n::Number) = F % Const(n)
# -
# `Sieve(P)` tests its input on the predicate condition `P`. If the input satisfies the condition, it is returned without changes. Otherwise, `nothing` is returned.
#
# Here is a trivial example to demonstrate how `Sieve()` works:
Take_Odd = Sieve(This() % 2 == 1)
Take_Odd(5), Take_Odd(10)
# When the composition operator accumulates values for array output, it drops `nothing` values. Thus, in a composition `(F >> Sieve(P))` with an array-generating combinator `F`, `Sieve(P)` filters the elements of the array.
#
# Let us use this feature to *list the departments with more than 1000 employees*. We already defined a combinator producing departments with the number of employees, we just need to filter its output:
# +
Size = Field("size")
Large_Depts = Depts_With_Size >> Sieve(Size > 1000)
Large_Depts(citydb)
# -
# Similarly, we can *list positions of employees with salary higher than 200k*:
# +
Position = Field("position")
Very_Well_Paid_Posns =
Departments >> Employees >> Sieve(Salary > 200000) >> Position
Very_Well_Paid_Posns(citydb)
# -
# With `Sieve()` defined, we are finally able to answer the original question using combinators:
#
# > *For each department, find the number of employees with salary higher than 100k.*
# +
Better_Depts_With_Num_Well_Paid_Empls =
Departments >> Select(
"name" => Name,
"N100k" => Count(Employees >> Sieve(Salary > 100000)))
Better_Depts_With_Num_Well_Paid_Empls(citydb)
# -
# Compare it with the original solution. The new one reads much better!
Depts_With_Num_Well_Paid_Empls(data) =
map(d -> Dict(
"name" => d["name"],
"N100k" => length(filter(e -> e["salary"] > 100000, d["employees"]))),
data["departments"])
# ## Parameters
#
# We achieved our goal of sketching (a prototype of) a query language for hierarchical databases. Let us explore how it could be developed further. One possible way to improve it is by adding query parameters.
#
# Consider a problem: *find the number of employees whose annual salary is at least 200k*. We have all the tools to solve it:
# +
Num_Very_Well_Paid_Empls =
Count(Departments >> Employees >> Sieve(Salary >= 200000))
Num_Very_Well_Paid_Empls(citydb)
# -
# Now, imagine that we'd like to *find the number of employees with salary in a certain range*, but we don't know the range at the time we construct the query. Instead, we want to specify the range when we *execute* the query.
#
# Let us introduce a *query context*, a collection of parameters and their values. We'd like the query context to travel with the input, where each combinator could access it if necessary. Thus, we have an updated definition of a JSON combinator: a function that maps JSON input and query context to JSON output.
#
# We need to update existing combinators to make them context-aware:
# +
Const(val) = (x, ctx...) -> val
This() = (x, ctx...) -> x
(F >> G) = (x, ctx...) -> _flat(_map(G, F(x, ctx...), ctx...))
_map(G, y, ctx...) =
isa(y, Array) ? map(_expand, map(yi -> G(yi, ctx...), y)) : G(y, ctx...)
Field(name) = (x, ctx...) -> x[name]
Select(fields...) =
(x, ctx...) -> Dict(map(f -> f.first => f.second(x, ctx...), fields))
Count(F) = (x, ctx...) -> length(F(x, ctx...))
Max(F) = (x, ctx...) -> maximum(F(x, ctx...))
Sieve(P) = (x, ctx...) -> P(x, ctx...) ? x : nothing
(>)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) > G(x, ctx...)
(>=)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) >= G(x, ctx...)
(<)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) < G(x, ctx...)
(<=)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) <= G(x, ctx...)
(==)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) == G(x, ctx...)
(!=)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) != G(x, ctx...)
(!)(F::Function) = (x, ctx...) -> !F(x, ctx...)
(&)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) && G(x, ctx...)
(|)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) || G(x, ctx...)
(+)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) + G(x, ctx...)
(-)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) - G(x, ctx...)
(/)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) / G(x, ctx...)
(%)(F::Function, G::Function) = (x, ctx...) -> F(x, ctx...) % G(x, ctx...)
# -
# Next, let us add combinator `Var(name)` that extracts the value of a parameter from the query context.
Var(name) = (x, ctx...) -> Dict(ctx)[name]
# Now we can make parameterized queries. *Find the number of employees with salary in a certain range:*
# +
Min_Salary = Var("min_salary")
Max_Salary = Var("max_salary")
Departments = Field("departments")
Employees = Field("employees")
Salary = Field("salary")
Num_Empls_By_Salary =
Count(
Departments >>
Employees >>
Sieve((Salary >= Min_Salary) & (Salary < Max_Salary)))
Num_Empls_By_Salary(citydb, "min_salary" => 100000, "max_salary" => 200000)
# -
# Use of context is not limited to query parameters. We can also update the context dynamically.
#
# Consider a problem: *find the employee with the highest salary*.
#
# It can be solved in two queries. First, *find the highest salary:*
Max_Salary = Max(Departments >> Employees >> Salary)
Max_Salary(citydb)
# Second, *find the employee with the given salary:*
# +
The_Salary = Var("salary")
Empl_With_Salary = Departments >> Employees >> Sieve(Salary == The_Salary)
Empl_With_Salary(citydb, "salary" => 260004)
# -
# We need to automate this sequence of operations. Specifically, we take a value calculated by one combinator, assign it to some context parameter, and then evaluate the other combinator in the updated context. That's what the `Given()` combinator does:
Given(F, vars...) =
(x, ctx...) ->
let ctx = (ctx..., map(v -> v.first => v.second(x, ctx...), vars)...)
F(x, ctx...)
end
# Combining `Max_Salary` and `Empl_With_Salary` using `Given`, we get:
# +
Empl_With_Max_Salary = # Given(Empl_With_Salary, "salary" => Max_Salary)
Given(
Departments >> Employees >> Sieve(Salary == The_Salary),
"salary" => Max(Departments >> Employees >> Salary))
Empl_With_Max_Salary(citydb)
# -
# This is not just a convenience feature. Indeed, let us change this query to *find the highest paid employee for each department*. To implement it, we need to pull `Departments` out of the `Given()` clause:
# +
Empls_With_Max_Salary_By_Dept =
Departments >> Given(
Employees >> Sieve(Salary == The_Salary),
"salary" => Max(Employees >> Salary))
Empls_With_Max_Salary_By_Dept(citydb)
# -
# ## Limitations and conclusion
#
# Consider a problem: *find the top salary for each department*. This is an easy one:
Max_Salary_By_Dept = Departments >> Max(Employees >> Salary)
Max_Salary_By_Dept(citydb)
# Now change it to: *find the top salary for each position.* We can't solve it with our current set of combinators. Why?
#
# Look at the database hierarchy diagram:
#
# $$
# \text{departments} \quad
# \begin{cases}
# \text{name} \\
# \text{employees} \quad
# \begin{cases}
# \text{name} \\
# \text{surname} \\
# \text{position} \\
# \text{salary}
# \end{cases}
# \end{cases}
# $$
#
# The structure of the first query (*top salary for each department*) fits the structure of the database:
#
# $$
# \textbf{departments}\gg \quad
# \begin{cases}
# \text{name} \\
# \gg\textbf{employees}\gg \quad
# \begin{cases}
# \text{name} \\
# \text{surname} \\
# \text{position} \\
# \gg\textbf{salary}
# \end{cases}
# \end{cases}
# $$
#
# The structure of the second query (*top salary for each position*) violates this structure:
#
# $$
# \text{departments} \quad
# \begin{cases}
# \text{name} \\
# \textbf{employees}\lessgtr \quad
# \begin{cases}
# \text{name} \\
# \text{surname} \\
# \ll\textbf{position} \\
# \gg\textbf{salary}
# \end{cases}
# \end{cases}
# $$
#
# This is not the only limitation. Let us not forget that real databases are *decidedly* non-hierarchical. For example, this is the database schema (designed by <NAME>) of our flagship product [RexStudy](http://www.rexdb.org/). No hierarchy in sight!
#
# 
#
# As a conclusion, combinators are awesome for querying data as long as:
#
# 1. The data is hierarchical.
# 2. The structure of the query respects the structure of the data.
#
# Otherwise, we are out of luck...
#
# *... Or are we?*
| jl/demo/querying-hierarchical-data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="M7ajrVEAo-p0"
# # Circle Packing using Circlify
# > Data Visualization
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [level-3, chapter-3, matplotlib, data-visualization]
# - image: images/surname_song.png
# + [markdown] id="_Tytl4ZXoYiS"
# As mentioned in the [instructions](https://pinkychow1010.github.io/digital-chinese-history-blog/about/), all materials can be `open in Colab` as Jupyter notebooks. In this way, users can run the code in the cloud. It is highly recommended to follow the [tutorials](https://pinkychow1010.github.io/digital-chinese-history-blog/) in the right order.
#
# ***
#
# This notebook aims to introduce users to some practical skills for generating a circle packing chart.
#
# <br>
#
# **Prerequisites:**
#
# Not applicable.
#
# <br>
# + colab={"base_uri": "https://localhost:8080/"} id="N6iHWGN8bzrP" outputId="8aa14814-050a-4abf-be21-6183af13eaa3"
# ! pip install circlify
# + id="LCmSQHgNb2iL"
from pprint import pprint as pp
import circlify as circ
# + id="t_wbVzXD5JC8"
surname = """
王 李 張 趙 劉 陳 楊 吳 黃 朱 孫 郭 胡 呂 高 宋 徐 程 林 鄭 范 何 韓 曹 馬 許 田 馮 杜 周 曾 汪 蘇 董 方 蔡 梁 石 謝 賈 薛 彭 崔 唐 潘 鄧 任 史 錢 侯 魏 羅 葉 沈 孟 姚 傅 丁 章 蕭 蔣 盧 陸 袁 江 晁 譚 邵 歐陽 孔 俞 尹 廖 閻 洪 夏 雷 葛 文 柳 陶 毛 丘 龔 康 蒲 邢 郝 龐 安 裴 折 施 游 金 鄒 湯 虞 嚴 鍾
"""
import re
surname = list(re.sub(r"\s+", "", surname.strip()))
# + colab={"base_uri": "https://localhost:8080/"} id="1KKBJ4M65pE7" outputId="c2f06874-299e-4fa2-e88a-596bd5593f88"
surname[:5]
# + id="j8LqnrwC5_Lc"
surname.reverse()
# + colab={"base_uri": "https://localhost:8080/"} id="WZM7-O_i6MlV" outputId="75143b9b-99f4-4be0-cb7f-7d15315f1df6"
len(surname)
# + id="MeCAz47w51A0"
import pandas as pd
import numpy as np
df = pd.DataFrame({
'surname': surname,
'weight': 5*np.arange(1,102)
})
# + id="24uenV1M6bgH"
# import the circlify library
import circlify
# compute circle positions:
circles = circlify.circlify(
df['weight'].tolist(),
target_enclosure=circlify.Circle(x=0, y=0, r=1),
show_enclosure=False
)
# + colab={"base_uri": "https://localhost:8080/"} id="NsRU95PvCKw2" outputId="2e81cf6c-0069-4d36-ee75-b39ccabb65b6"
len(df.weight)
# + id="dFufXPQnCiW4"
import math
import numpy as np
x = np.array([cir.x for cir in circles])
y = np.array([cir.y for cir in circles])
r = np.array([cir.r for cir in circles])
bubble_df = pd.DataFrame({
'x': x,
'y': y,
'r': r,
'l': df.sort_values('weight').surname.values,
's': (math.pi)*(r**2)
})
# + id="iHeTmtTA6_kt"
# !wget -O TaipeiSansTCBeta-Regular.ttf https://drive.google.com/uc?id=1eGAsTN1HBpJAkeVM57_C7ccp7hbgSz3_&export=download
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.font_manager import fontManager
fontManager.addfont('TaipeiSansTCBeta-Regular.ttf')
mpl.rc('font', family='Taipei Sans TC Beta')
# + colab={"base_uri": "https://localhost:8080/", "height": 601} id="_i-1GliR6os8" outputId="0766975e-85a2-4733-88f7-38557236892d"
# import libraries
import circlify
import numpy as np
import random
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.colors import ListedColormap
palette = ["#CD5C5C","#F08080","#E9967A","#FFA07A","#C04000","#FF5F1F","#B22222","#660000","#C21E56"]
# Create just a figure and only one subplot
fig, ax = plt.subplots(figsize=(10,10))
# Title
ax.set_title('宋朝人口姓氏期望分佈', fontsize=26)
# Remove axes
ax.axis('off')
# Find axis boundaries
lim = max(
max(
abs(circle.x) + circle.r,
abs(circle.y) + circle.r,
)
for circle in circles
)
plt.xlim(-lim, lim)
plt.ylim(-lim, lim)
# list of labels
labels = df.sort_values('weight').surname.values
# print circles
for circle, label in zip(circles, labels):
x, y, r = circle
ax.add_patch(plt.Circle((x, y), r, alpha=0.6, linewidth=1, facecolor=random.choice(palette), edgecolor="white"))
plt.annotate(
label,
(x,y) ,
va='center',
ha='center',
fontsize=300*r,
color="black"
)
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="EbuQH0YK9xnF" outputId="a6d688f5-7f53-40d6-da94-3a89d1642c40"
bubble_df["rank"] = bubble_df.sort_values(by="r", ascending=False).index
bubble_df.head()
# + id="SiIw-29M_QW4"
font_size = 250*bubble_df.r.values
# + colab={"base_uri": "https://localhost:8080/", "height": 717} id="RgplRkNXCbpb" outputId="01e7a4dd-7b51-43be-e6e5-3976005318b6"
import plotly.express as px
fig = px.scatter(bubble_df, x="x", y="y", custom_data=["l","rank"], color="x", width=800, height=700,
size="s", hover_name="l", size_max=45, text="l")
fig.update(layout_coloraxis_showscale=False)
fig.update_traces(
hovertemplate="<br>".join([
"Surname: %{customdata[0]}",
"Ranking: %{customdata[1]}"
])
)
fig.update_layout(showlegend=False)
fig.update_xaxes(visible=False)
fig.update_yaxes(visible=False)
fig.update_yaxes(
scaleanchor = "x",
scaleratio = 0.95,
)
fig.update_layout({
'plot_bgcolor': 'rgba(0, 0, 0, 0)',
'paper_bgcolor': 'rgba(0, 0, 0, 0)',
})
fig.update_layout(
title={
'text': "<b>宋朝人口姓氏期望分佈</b>",
'y':0.97,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
font=dict(
family="Courier New, monospace",
size=18,
color="black"
)
)
fig.update_traces(textfont_size=font_size)
fig.show()
# + [markdown] id="maKLA_hRpba4"
# <br>
# <br>
#
# ***
#
# ## **Additional information**
#
# This notebook is provided for educational purposes; feel free to report any issues on GitHub.
#
# <br>
#
# **Author:** [<NAME>](https://www.linkedin.com/in/ka-hei-chow-231345188/)
#
# **License:** The code in this notebook is licensed under the [Creative Commons by Attribution 4.0 license](https://creativecommons.org/licenses/by/4.0/).
#
# **Last modified:** December 2021
#
# <br>
#
# ***
#
# <br>
#
# ## **References:**
#
# https://github.com/elmotec/circlify
| _notebooks/2020-02-02-Circlify.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv_playground
# language: python
# name: venv_playground
# ---
# +
import numpy as np
import pandas as pd
from IPython.display import display
import seaborn as sns
import matplotlib.pyplot as plt
# -
train = pd.read_parquet("data/tidy/train.parquet")
# # **Dataset validation**
# ---
#
# - Do the minimum and maximum values of the variables make sense?
# - Which columns have missing data?
# - Are there duplicated samples?
#
# + jupyter={"outputs_hidden": true} tags=[]
train.describe().loc[["min", "max"]].T
# +
cond = train.isna().sum() > 0
train.columns[cond]
# -
train.duplicated().any()
# ---
# - What are the scores of the people who missed the exams?
#
# - TP_PRESENCA_CN
# - TP_PRESENCA_CH
# - TP_PRESENCA_LC
# - TP_PRESENCA_MT
# +
tp_mapping = {"TP_PRESENCA_CN": "NU_NOTA_CN",
"TP_PRESENCA_CH": "NU_NOTA_CH",
"TP_PRESENCA_LC": "NU_NOTA_LC",
"TP_PRESENCA_MT": "NU_NOTA_MT"}
# 0 means absent and 2 means eliminated
for presence, nota in tp_mapping.items():
aux = (train.query(f"{presence} == 0 | {presence} == 2")
.loc[:,nota]
.isna()
.all())
print(f"{presence}: {aux}")
# -
# ---
#
# - What are the scores of the people who missed one of the days?
#
# - TP_PRESENCA_CN
# - TP_PRESENCA_CH
# - TP_PRESENCA_LC
# - TP_PRESENCA_MT
#
# The ENEM takes place over two days. On the first day, the exams of Languages, Codes and their Technologies (```TP_PRESENCA_LC```), Human Sciences and their Technologies (```TP_PRESENCA_CH```) and the Essay are applied. On the second day, the exams of Natural Sciences and their Technologies (```TP_PRESENCA_CN```) and Mathematics and their Technologies (```TP_PRESENCA_MT```) are applied. That is, if a candidate missed one of the days, all the scores for that day must be missing.
# + tags=[]
day_mapping = {"DAY_ONE":(("TP_PRESENCA_LC", "TP_PRESENCA_CH"), ("NU_NOTA_LC", "NU_NOTA_CH", "NU_NOTA_REDACAO")),
"DAY_TWO":(("TP_PRESENCA_CN", "TP_PRESENCA_MT"), ("NU_NOTA_CN", "NU_NOTA_MT"))}
for day, val in day_mapping.items():
(presence1, presence2), (notas) = val
aux = (train.query(f"({presence1} == 0 | {presence1} == 2) & ({presence2} == 2 | {presence2} == 0)")
.loc[:,notas]
.isna()
.all())
print(f"{day}:")
print(f"{aux}\n")
# + [markdown] tags=[]
# # **Example**
# ---
# - What is the distribution of the scores by sex?
#
# - TP_SEXO
# - NU_NOTA_MT
#
# -
train.filter(regex="NU_NOTA").columns
# +
notas_columns = ['NU_NOTA_CN', 'NU_NOTA_CH', 'NU_NOTA_LC', 'NU_NOTA_MT', 'NU_NOTA_REDACAO']
temp = (train.reset_index()
.melt(id_vars="TP_SEXO", value_vars=notas_columns, var_name="PROVAS", value_name="NOTA"))
temp.head()
# + jupyter={"outputs_hidden": true} tags=[]
g = sns.FacetGrid(temp, col="PROVAS", hue="TP_SEXO", col_wrap=3, height=4, sharex=False, sharey=False)
g.map_dataframe(sns.histplot, x="NOTA",stat="density")
g.add_legend()
plt.show()
# -
# - Compute one measure of central tendency and one measure of dispersion (compute them for each group).
# + jupyter={"outputs_hidden": true} tags=[]
def iqr(x):
return x.quantile(0.75) - x.quantile(0.25)
stats = ["mean", "std", "median", iqr, "skew"]
(train.groupby("TP_SEXO", as_index=False)
.agg({"NU_NOTA_CN":stats,
"NU_NOTA_CH":stats,
"NU_NOTA_LC":stats,
"NU_NOTA_MT":stats,
"NU_NOTA_REDACAO":stats})
.T)
# -
# - Find the outliers (use one of the methods seen in class).
data_structure = (train.loc[:, notas_columns]
.apply(lambda x: (x.mean(), x.std())))
data_structure
# + tags=[]
def z_score(x, data_structure):
u = data_structure.loc[0, x.name]
sigma = data_structure.loc[1, x.name]
return (x - u)/sigma
z_scores = train.loc[:, notas_columns].apply(lambda x: z_score(x,data_structure))
z_scores.columns = [column + "_ZSCORE" for column in z_scores.columns]
z_scores
# +
var = "NU_NOTA_MT_ZSCORE"
amostras_a = pd.concat([train, z_scores], axis =1).query(f"{var} > 3 or {var} < -3").loc[:, var.split("_ZSCORE")[0]]
amostras_b = pd.concat([train, z_scores], axis =1).query(f"{var} <= 3 and {var} >= -3").loc[:, var.split("_ZSCORE")[0]]
g = plt.hist([amostras_a, amostras_b], bins=100)
plt.show()
# -
# # **Questions**
# ---
#
# - What is the distribution of the scores by color/race?
#
# - TP_COR_RACA
#
# - Compute one measure of central tendency and one measure of dispersion (for each group).
#
# - Find the outliers (use one of the methods seen in class).
# ---
#
# - What is the distribution of the scores by marital status?
#
# - TP_ESTADO_CIVIL
#
# - Compute one measure of central tendency and one measure of dispersion (for each group).
#
# - Find the outliers (use one of the methods seen in class).
# ---
# - Is there a difference in the scores of people who need special resources?
#
# - IN_SEM_RECURSO
# ---
#
# - Do students who took the exam in an individual room perform better than the rest?
#
# - IN_SALA_INDIVIDUAL
# ---
# - Is there a difference between the scores of young and elderly people?
#
# - IN_IDOSO
# ---
# - Is there a difference between the scores of those who chose English or Spanish?
#
# - TP_LINGUA
#
# ---
#
# - Does having a disability impact the score?
#
# - IN_BAIXA_VISAO
# - IN_CEGUEIRA
# - IN_SURDEZ
# - IN_DEFICIENCIA_AUDITIVA
# - IN_SURDO_CEGUEIRA
# - IN_DEFICIENCIA_FISICA
# - IN_DEFICIENCIA_MENTAL
# - IN_DEFICIT_ATENCAO
# - IN_DISLEXIA
# - IN_DISCALCULIA
# - IN_AUTISMO
# - IN_VISAO_MONOCULAR
# - IN_OUTRA_DEF
# ---
# - How are the following variables related?
#
# - IN_CEGUEIRA
# - IN_BRAILLE
# - IN_LIBRAS
#
# ---
# - Analyze the questionnaire items. Do you believe any of them can help you understand the exam score results?
#
# ```python
# ['Q001', 'Q002', 'Q003', 'Q004', 'Q005', 'Q006', 'Q007', 'Q008', 'Q009',
# 'Q010', 'Q011', 'Q012', 'Q013', 'Q014', 'Q015', 'Q016', 'Q017', 'Q018',
# 'Q019', 'Q020', 'Q021', 'Q022', 'Q023', 'Q024', 'Q025']
# ```
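A minimal sketch of the group-wise pattern the questions above call for, using a hypothetical toy frame in place of `train` (the column names mirror the real ones); with the real data, group by `TP_COR_RACA`, `TP_ESTADO_CIVIL`, etc. in the same way.

```python
import pandas as pd

def iqr(x):
    return x.quantile(0.75) - x.quantile(0.25)

# Tiny synthetic stand-in for `train`; replace with the real DataFrame.
toy = pd.DataFrame({
    "TP_COR_RACA": [1, 1, 2, 2, 3, 3],
    "NU_NOTA_MT": [500.0, 650.0, 480.0, 700.0, 550.0, 610.0],
})

# One measure of central tendency (median) and one of dispersion (IQR) per group.
summary = toy.groupby("TP_COR_RACA")["NU_NOTA_MT"].agg(["median", iqr])
print(summary)
```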
| semana_4/eda_exercicios.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Jobby-John/KNN-AND-NAIVE-BAYES/blob/main/Week_9_Ip_on_k_Nearest_Neighbours(KNN).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="1N94e3Rv_TAP"
# **Week 9 IP on k-Nearest Neighbours (KNN)**
# + [markdown] id="NQbBuP5t_iv0"
# **Defining the question**
# + [markdown] id="dVEVqqTc_nd5"
# a) Specifying the analytic question
# + [markdown] id="hkBQtfwYAEwc"
# Build a model using k-Nearest Neighbours (KNN) to predict the survivors of the Titanic shipwreck.
#
# Adjust the train-test split size and the number of neighbours in order to optimize the model.
#
# + [markdown] id="KBTTAp51AIFq"
# **b) Defining the Metric of Success**
# + [markdown] id="_1HahPi6AMNn"
# If the model has an accuracy higher than 80%, it will be considered successful.
# + [markdown] id="50LD7Y5s_3kD"
# **c)Understanding the context**
# + [markdown] id="idWoRe08AURu"
# The RMS Titanic, a luxury steamship, sank in the early hours of April 15, 1912, off the coast of Newfoundland in the North Atlantic after sideswiping an iceberg during its maiden voyage. Of the 2,240 passengers and crew on board, more than 1,500 lost their lives in the disaster.
#
# Building a model that can predict survivors is crucial in order to prevent future tragedies.
#
# + [markdown] id="ROb82Z8zASFT"
# **d)Recording the Experimental Design**
# + [markdown] id="nOW1zobL_ujr"
# - importing libraries
#
# - loading data
#
# - data cleaning
#
# - exploratory data analysis
#
# - modeling
#
# - implementing the model
#
# - conclusion
#
# - recommendations
#
# + [markdown] id="MD_3FZrdAdhQ"
# **2.Reading the Data**
# + id="ES8UlX7JAgd9"
#importing the libraries
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# + colab={"base_uri": "https://localhost:8080/", "height": 580} id="c4Zh6iB7Cg6H" outputId="c6aeeb8f-845a-43c8-dd78-f4431a18832d"
#Loading the data
data=pd.read_csv('train.csv')
data
# + [markdown] id="eSbhIjkbCw3h"
# **3. Checking the Data**
# + colab={"base_uri": "https://localhost:8080/"} id="tFcfZtBqCyqg" outputId="4a113802-ce2e-4fb7-eb09-12c33262f2d3"
data.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 276} id="cHEOZyHEC1Qe" outputId="89864c4c-a125-48ac-b896-fbb09f774797"
data.head(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 241} id="t6aopnbFC6yy" outputId="d0869047-ef91-4ed0-f19f-a710694205a4"
#previewing the bottom of our data
data.tail(5)
# + colab={"base_uri": "https://localhost:8080/"} id="qTeMNht5DDK9" outputId="e3c15956-571d-42cd-d4c7-20ce1d4ef8db"
#check the datatypes of the variables
data.dtypes
# + [markdown] id="SPaYPKywDMi8"
# **5. Tidying the Dataset**
# + id="de1ZEPcXDN4l"
#The first step we will do is to drop the columns that we do not need
#These columns include 'PassengerId', 'Name', 'Ticket', 'Fare', 'Cabin'
data=data.drop(columns=['PassengerId', 'Name', 'Ticket', 'Fare', 'Cabin'],axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="iK6wxskfEWpi" outputId="8dc762f8-2c72-4e2c-a95a-923f2931f3ae"
data.head(5)
# + id="0ZxAFpNoEduE"
#Before we go ahead and clean the data we need to first encode the columns embarked and sex
data['Embarked']=data['Embarked'].astype('category')
data['Sex']=data['Sex'].astype('category')
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="HYVcrQ2PE7Lj" outputId="9c6c5459-d532-4dfb-960e-0b5d822d9818"
#Let us preview our dataset
data.head(5)
# + colab={"base_uri": "https://localhost:8080/"} id="c_7mbZWsFDSr" outputId="8e82cfbc-4ef3-45d7-a825-833da63cd02f"
#Checking the existence of null values
data.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="YVjUcZaCFPDb" outputId="3b5b4cdd-1b76-4f16-f143-11c6c34c0355"
#Let us find the mean age of the passengers and use it to fill the null values in the age column
data_mean=data['Age'].mean()
data_mean
# + id="SWdJn7roFgEy"
data['Age'].fillna(29, inplace=True)  # 29 is the rounded-down mean age computed above
# + colab={"base_uri": "https://localhost:8080/"} id="T-j_aCTyFpuY" outputId="fbfc29c9-a580-4e32-8f79-b291eec5b248"
#Checking for duplicates in the dataset
data.duplicated().sum()
#There are 306 duplicates in our dataset
# + id="w-8MdE8rGC2s"
# dropping duplicate values
data.drop_duplicates(keep=False,inplace=True)
# + colab={"base_uri": "https://localhost:8080/"} id="RV4lQoxJGHZc" outputId="7bc13818-0644-4cf3-a17c-7095319a1f63"
#Let us check for any duplicates after dropping them
data.duplicated().sum()
#We have no duplicates now
# + colab={"base_uri": "https://localhost:8080/"} id="nspDARyuGJUm" outputId="444c8088-578e-4de2-c94e-752f1e646506"
#Let us check for unique values in our dataset
data.nunique()
#Our data seems to have many unique values
# + id="ENOGCgZ_GNjS"
#checking the numerical and categorical variables
numerical = data._get_numeric_data().columns
categorical = set(data.columns) - set(numerical)
# + colab={"base_uri": "https://localhost:8080/"} id="ciDAxGskGREp" outputId="f9ece861-b961-4af1-e2c1-95bf7825b87d"
numerical
# + colab={"base_uri": "https://localhost:8080/"} id="PqOfLfYbGRHb" outputId="e46908a2-8935-4f5f-93a1-cde7800f855b"
categorical
# + id="XCsmmY-jG6BH"
#Let us encode the dataset
data['Embarked']=data['Embarked'].cat.codes
data['Sex']=data['Sex'].cat.codes
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="_ov3Y5IIG76x" outputId="d372ec83-f824-4811-81a3-e2bc83882666"
data.head(4)
# + [markdown] id="ZIW3GI8qGWda"
# UNIVARIATE AND BIVARIATE ANALYSIS
# + colab={"base_uri": "https://localhost:8080/"} id="vatesM8eGRKa" outputId="2f0253a5-cd0b-4981-b81b-1d482e523156"
data['Survived'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="ebnfdHkEHIg5" outputId="1f95478a-3a25-4de0-b9b5-11250da696d7"
f, ax = plt.subplots(figsize=(6, 8))
ax = sns.countplot(x="Survived", data=data, palette="Set1")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="ywXsPAqgHQuE" outputId="2ff07c6a-a549-48bf-eee1-7ff568309057"
#PLOTTING HORIZONTALLY
f, ax = plt.subplots(figsize=(8, 4))
ax = sns.countplot(y="Survived", data=data, palette="Set1")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 377} id="HiO1_PCoHWGE" outputId="378522c8-642e-4b3d-9a77-4f7036faaad5"
#Pie chart on survival rates
#
data['Survived'].value_counts().head(5).plot.pie(figsize=(8,6),autopct='%1.1f%%')
plt.title('Survived',size=18)
plt.show()
# + [markdown] id="fYRcqSy_HySA"
# The number of unique values in the Survived variable is 2.
#
# The two unique values are No and Yes.
#
# The No class has 243 entries, and
#
# the Yes class has 241 entries.
#
# + [markdown] id="DcdPjJSdIBg7"
# Checking for outliers
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="xd8D65BSIGY7" outputId="2728ad39-81a8-4938-9c0e-07cd423a1edf"
num_of_rows = 3
num_of_cols = 2
fig, ax = plt.subplots(num_of_rows, num_of_cols, figsize=(15,15))
print(numerical)
i=0;j=0;k=0;
while i<num_of_rows:
while j<num_of_cols:
sns.boxplot(data[numerical[k]], ax=ax[i, j])
k+=1;j+=1
j=0;i+=1
plt.savefig('before_removing_outliers_from_numerical_columns.png')
plt.show()
# + id="uZK1wKPCIcEZ"
# Note: despite its name, this caps (winsorizes) values at the 1.5*IQR
# fences rather than dropping the outlying rows.
def removeOutliers(numerical):
    for i in range(len(numerical)):
        q1 = data[numerical[i]].quantile(0.25)
        q3 = data[numerical[i]].quantile(0.75)
        IQR = q3-q1
        minimum = q1 - 1.5 * IQR
        maximum = q3 + 1.5 * IQR
        data.loc[(data[numerical[i]] <= minimum), numerical[i]] = minimum
        data.loc[(data[numerical[i]] >= maximum), numerical[i]] = maximum
removeOutliers(numerical)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="qPbPTOSAId2E" outputId="25478280-0044-47ee-b3ba-65ad06d6cd08"
num_of_rows = 3
num_of_cols = 2
fig, ax = plt.subplots(num_of_rows, num_of_cols, figsize=(15,15))
print(numerical)
i=0;j=0;k=0;
while i<num_of_rows:
while j<num_of_cols:
sns.boxplot(data[numerical[k]], ax=ax[i, j])
k+=1;j+=1
j=0;i+=1
plt.savefig('after_removing_outliers_from_numerical_columns.png')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="PmwBYQWyKKPE" outputId="973a5163-0672-49a3-f28a-47c4a7e4c3a8"
#Let us create the histograms
data.hist(figsize=(16,20),bins=50,xlabelsize=8,ylabelsize=8)
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="KMfdmtUVKbRe" outputId="9fea09e5-71a6-4c7b-81fe-f7c66eb8b8fa"
#Checking the ages of the people that survived vs. those that did not
f,ax=plt.subplots(1,2,figsize=(20,5))
sns.distplot(data[data['Survived']==1].Age,ax=ax[0])
ax[0].set_title('People that Surived ')
sns.distplot(data[data['Survived']==0].Age,ax=ax[1])
ax[1].set_title('People that did not Survive')
# + [markdown] id="qXtzx8FQK6G0"
# Most of the children survived compared to the elderly.
# The majority of the people that died were between 20 and 40 years old.
# + colab={"base_uri": "https://localhost:8080/", "height": 396} id="YwRNzslbLBVD" outputId="0f5f5cf5-cde5-42af-f6fa-c7a0ca16f3a6"
f,ax=plt.subplots(1,2,figsize=(20,5))
sns.distplot(data[data['Survived']==1].Sex,ax=ax[0])
ax[0].set_title('People that Surived based on gender ')
sns.distplot(data[data['Survived']==0].Sex,ax=ax[1])
ax[1].set_title('People that did not Survive based on gender')
# + colab={"base_uri": "https://localhost:8080/"} id="Nb1oIC60LPnY" outputId="78a9276a-945d-4f03-dcfc-ddc6830a9583"
#Between males and females who survived
data.groupby(["Survived","Sex"])["Sex"].count()
#More males lost their lives
# + colab={"base_uri": "https://localhost:8080/", "height": 489} id="e1y3MealLZHc" outputId="7070b2e2-ee92-43a4-f1db-c8bc39355272"
#Correlation
print(data.corr())
plt.figure(figsize=(15,5))
sns.heatmap(data.corr(),annot=True)
#Our variables are not strongly correlated
#We can work with them
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="gpSj3kqjLtjM" outputId="4811a075-77f5-41cf-c8b2-1381f34d8f99"
data.head(5)
# + [markdown] id="orekqLNwMn0i"
# Modelling
# K-Nearest Neighbour (KNN)
# + id="0X1w8fj4LgYC"
#Create your dependent and independent variables
y = data["Survived"].values  # 1-D target avoids sklearn's column-vector warning
X = data[['Pclass','Sex','Age','SibSp','Parch','Embarked']].values
# + id="K7Ay8MOaMsq6"
#Split the data into test set and training set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20,random_state =50)
# + id="khP0_IacM0dk"
#Feature Scale the data
scaler = StandardScaler()
# scaler.fit(X_train)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="-CiKfb23M7IL" outputId="2b0b22be-5ec5-4818-cb30-9c566d51851c"
#Create the model
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=11)
classifier.fit(X_train, y_train)
# + id="HWIoX-doNCtz"
#Predict the model that we have created
#We use the test to predict
y_pred = classifier.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="RrDgUEJLNG4c" outputId="12e0e1af-bcdd-4a7f-d613-dc7575e4b9b1"
#Evaluate the model
from sklearn.metrics import classification_report, confusion_matrix,accuracy_score
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# + [markdown] id="_OInkEUmNOJM"
# An 80-20 split gives us an 81% accuracy
# + [markdown] id="lFtjGxFpNZ6l"
# SECOND MODEL ON 70-30 SPLIT
# + id="xhDEO1X5NeNb"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,random_state =50)
# + id="XQgq4GMUNjGj"
#Feature Scale the data
scaler = StandardScaler()
# scaler.fit(X_train)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="88x3vXsjNmw2" outputId="e265c92d-2b2f-4f59-e9e6-3860e1fdf110"
#Create the model
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=11)
classifier.fit(X_train, y_train)
# + id="h3gXNebTNxKg"
#Predict the model that we have created
#We use the test to predict
y_pred = classifier.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="_hrOTMvpN2Td" outputId="cc564ea3-8df4-4ad1-fb53-1a69fe2f5bae"
#Evaluate the model
from sklearn.metrics import classification_report, confusion_matrix,accuracy_score
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# + [markdown] id="_vd6_fg8N1US"
# 70-30 split gives us a 77% accuracy
# + [markdown] id="UXj8Pk_HOF9D"
# THIRD MODEL ON 60-40 SPLIT
# + id="C_cHKHoCOK11"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40,random_state =50)
# + id="tTVLAP_NOO21"
#Feature Scale the data
scaler = StandardScaler()
# scaler.fit(X_train)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="Q4SgXD9_OO5c" outputId="22220da6-0b0d-49da-d8b3-03c53ebcaf71"
#Create the model
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=11)
classifier.fit(X_train, y_train)
# + id="nWNhZlTsOUFT"
#Predict the model that we have created
#We use the test to predict
y_pred = classifier.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="pxEPhGe1OWxw" outputId="86e25984-7c21-45d1-e3b7-a494eb6d14f2"
#Evaluate the model
from sklearn.metrics import classification_report, confusion_matrix,accuracy_score
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# + [markdown] id="TpujHleIOlYv"
# 60-40 split gives us a 75% accuracy
# + [markdown] id="CtqD_Vb7OtSM"
# **Conclusion**
# + [markdown] id="D4SRE-x-OvEv"
# The best model for predicting the survivors of the Titanic shipwreck using KNN has 11 neighbours and an 80-20 train-test split.
#
# This gives an accuracy of 81%, which is the highest.
#
# The model is successful because it passes the success criterion of at least 80% accuracy.
# + [markdown] id="OYOhQNOnOycY"
# **Recommendation**
# + [markdown] id="vJIQFEi_O1x0"
# Check for overfitting
#
# Consider using methods other than KNN and compare metrics, aiming for an accuracy of over 90%
#
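# The overfitting check suggested above can be sketched with cross-validation. A hedged sketch: synthetic data stands in for the Titanic features (swap in the real `X`, `y` arrays from the notebook), and logistic regression is just one example of an alternative model to compare against.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic stand-in for the Titanic features
X_demo, y_demo = make_classification(n_samples=600, n_features=8, random_state=0)

models = {
    "knn_k11": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=11)),
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
cv_means = {}
for name, model in models.items():
    scores = cross_val_score(model, X_demo, y_demo, cv=5)  # 5-fold CV accuracy
    cv_means[name] = scores.mean()
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A large gap between training accuracy and the cross-validated mean would indicate the overfitting the recommendation warns about.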
# + [markdown] id="Am5BxLYmO5JM"
# **Follow up questions**
# + [markdown] id="afKk3AJ6O7h9"
#
#
# Did we have the right data? YES!
#
# Do we need more data? IF POSSIBLE, YES!
#
# Was the model successful? YES!
#
#
| Week_9_Ip_on_k_Nearest_Neighbours(KNN).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Comparison with EventDisplay
# **Purpose of this notebook:**
#
# Compare IRF and Sensitivity as computed by pyirf and EventDisplay on the same DL2 results
#
# **Notes:**
#
# The following results correspond to:
#
# - Paranal site
# - Zd 20 deg, Az 180 deg
# - 50 h observation time
#
# **Resources:**
#
# _EventDisplay_ DL2 data, https://forge.in2p3.fr/projects/cta_analysis-and-simulations/wiki/Eventdisplay_Prod3b_DL2_Lists
#
#
# Download and unpack the data using
#
# ```bash
# $ curl -fL -o data.zip https://nextcloud.e5.physik.tu-dortmund.de/index.php/s/Cstsf8MWZjnz92L/download
# $ unzip data.zip
# $ mv eventdisplay_dl2 data
# ```
# ## Table of contents
#
# * [Optimized cuts](#Optimized-cuts)
# - [Direction cut](#Direction-cut)
# * [Differential sensitivity from cuts optimization](#Differential-sensitivity-from-cuts-optimization)
# * [IRFs](#IRFs)
# - [Effective area](#Effective-area)
# - [Point Spread Function](#Point-Spread-Function)
# + [Angular resolution](#Angular-resolution)
# - [Energy dispersion](#Energy-dispersion)
# + [Energy resolution](#Energy-resolution)
# - [Background rate](#Background-rate)
# + [markdown] nbsphinx="hidden"
# ## Imports
# +
import os
import numpy as np
import uproot
from astropy.io import fits
import astropy.units as u
import matplotlib.pyplot as plt
from astropy.table import QTable
from matplotlib.ticker import ScalarFormatter
# %matplotlib inline
# -
plt.rcParams['figure.figsize'] = (9, 6)
# + [markdown] nbsphinx="hidden"
# ## Input data
# + [markdown] nbsphinx="hidden"
# ### _EventDisplay_
# + [markdown] nbsphinx="hidden"
# The input data provided by _EventDisplay_ is stored in _ROOT_ format, so _uproot_ is used to transform it into _numpy_ objects.
# + tags=["parameters"]
# Path of EventDisplay IRF data in the user's local setup
# Please, empty the `indir` variable before pushing to the repo
indir = "../../data/"
irf_file_event_display = "DESY.d20180113.V3.ID0NIM2LST4MST4SST4SCMST4.prod3b-paranal20degs05b-NN.S.3HB9-FD.180000s.root"
irf_eventdisplay = uproot.open(os.path.join(indir, irf_file_event_display))
# + [markdown] nbsphinx="hidden"
# ## _pyirf_
# -
# The following is the current IRF + sensitivity output FITS format provided by this software.
#
# Run `python examples/calculate_eventdisplay_irfs.py` after downloading the data
pyirf_file = '../../pyirf_eventdisplay.fits.gz'
# ## Optimized cuts
# [back to top](#Table-of-contents)
# ### Direction cut
# [back to top](#Table-of-contents)
# +
from astropy.table import QTable
rad_max = QTable.read(pyirf_file, hdu='RAD_MAX')[0]
theta_cut_ed = irf_eventdisplay['ThetaCut;1']
plt.errorbar(
10**theta_cut_ed.edges[:-1],
theta_cut_ed.values,
xerr=np.diff(10**theta_cut_ed.edges),
ls='',
label='EventDisplay',
)
plt.errorbar(
0.5 * (rad_max['ENERG_LO'] + rad_max['ENERG_HI'])[1:-1].to_value(u.TeV),
rad_max['RAD_MAX'].T[1:-1, 0].to_value(u.deg),
xerr=0.5 * (rad_max['ENERG_HI'] - rad_max['ENERG_LO'])[1:-1].to_value(u.TeV),
ls='',
label='pyirf',
)
plt.legend()
plt.ylabel('θ-cut / deg')
plt.xlabel(r'$E_\mathrm{reco} / \mathrm{TeV}$')
plt.xscale('log')
None # to remove clutter by mpl objects
# +
from astropy.table import QTable
gh_cut = QTable.read(pyirf_file, hdu='GH_CUTS')[1:-1]
plt.errorbar(
0.5 * (gh_cut['low'] + gh_cut['high']).to_value(u.TeV),
gh_cut['cut'],
xerr=0.5 * (gh_cut['high'] - gh_cut['low']).to_value(u.TeV),
ls='',
label='pyirf',
)
plt.legend()
plt.ylabel('G/H-cut')
plt.xlabel(r'$E_\mathrm{reco} / \mathrm{TeV}$')
plt.xscale('log')
None # to remove clutter by mpl objects
# -
# ## Differential sensitivity from cuts optimization
# [back to top](#Table-of-contents)
# +
# [1:-1] removes under/overflow bins
sensitivity = QTable.read(pyirf_file, hdu='SENSITIVITY')[1:-1]
# make it print nice
sensitivity['reco_energy_low'].info.format = '.3g'
sensitivity['reco_energy_high'].info.format = '.3g'
sensitivity['reco_energy_center'].info.format = '.3g'
sensitivity['n_signal'].info.format = '.1f'
sensitivity['n_signal_weighted'].info.format = '.1f'
sensitivity['n_background_weighted'].info.format = '.1f'
sensitivity['n_background'].info.format = '.1f'
sensitivity['relative_sensitivity'].info.format = '.2g'
sensitivity['flux_sensitivity'].info.format = '.3g'
sensitivity
# +
# Get data from event display file
h = irf_eventdisplay["DiffSens"]

bins = 10**h.edges
x = 0.5 * (bins[:-1] + bins[1:])
width = np.diff(bins)
y = h.values
yerr = np.sqrt(h.variances)

fig, (ax_sens, ax_ratio) = plt.subplots(
    2, 1,
    figsize=(12, 8),
    gridspec_kw={'height_ratios': [4, 1]},
    sharex=True,
)
ax_sens.errorbar(
x,
y,
xerr=width/2,
yerr=yerr,
label="EventDisplay",
ls=''
)
unit = u.Unit('erg cm-2 s-1')
e = sensitivity['reco_energy_center']
s = (e**2 * sensitivity['flux_sensitivity'])
w = (sensitivity['reco_energy_high'] - sensitivity['reco_energy_low'])
ax_sens.errorbar(
e.to_value(u.TeV),
s.to_value(unit),
xerr=w.to_value(u.TeV) / 2,
ls='',
label='pyirf'
)
ax_ratio.errorbar(
e.to_value(u.TeV), s.to_value(unit) / y,
xerr=w.to_value(u.TeV)/2,
ls=''
)
ax_ratio.set_yscale('log')
ax_ratio.set_xlabel("Reconstructed energy / TeV")
ax_ratio.set_ylabel('pyirf / eventdisplay')
ax_ratio.grid()
ax_ratio.yaxis.set_major_formatter(ScalarFormatter())
ax_ratio.set_ylim(0.2, 5.0)
ax_ratio.set_yticks([0.2, 0.5, 1, 2, 5])
ax_ratio.set_yticks([], minor=True)
# Style settings
ax_sens.set_title('Minimal Flux Needed for 5σ Detection in 50 hours')
ax_sens.set_xscale("log")
ax_sens.set_yscale("log")
ax_sens.set_ylabel(rf"$(E^2 \cdot \mathrm{{Flux Sensitivity}}) /$ ({unit.to_string('latex')})")
ax_sens.grid(which="both")
ax_sens.legend()
fig.tight_layout(h_pad=0)
None # to remove clutter by mpl objects
# -
# ## IRFs
# [back to top](#Table-of-contents)
# ### Effective area
# [back to top](#Table-of-contents)
# +
# Data from EventDisplay
h = irf_eventdisplay["EffectiveAreaEtrue"]
x = 0.5 * (10**h.edges[:-1] + 10**h.edges[1:])
xerr = 0.5 * np.diff(10**h.edges)
y = h.values
yerr = np.sqrt(h.variances)
plt.errorbar(x, y, xerr=xerr, yerr=yerr, ls='', label="EventDisplay")
for name in ('', '_NO_CUTS', '_ONLY_GH', '_ONLY_THETA'):
area = QTable.read(pyirf_file, hdu='EFFECTIVE_AREA' + name)[0]
plt.errorbar(
0.5 * (area['ENERG_LO'] + area['ENERG_HI']).to_value(u.TeV)[1:-1],
area['EFFAREA'].to_value(u.m**2).T[1:-1, 0],
        xerr=0.5 * (area['ENERG_HI'] - area['ENERG_LO']).to_value(u.TeV)[1:-1],
ls='',
label='pyirf ' + name,
)
# Style settings
plt.xscale("log")
plt.yscale("log")
plt.xlabel("True energy / TeV")
plt.ylabel("Effective collection area / m²")
plt.grid(which="both")
plt.legend()
None # to remove clutter by mpl objects
# -
# ### Point Spread Function
# [back to top](#Table-of-contents)
# +
psf_table = QTable.read(pyirf_file, hdu='PSF')[0]
# select the only fov offset bin
psf = psf_table['RPSF'].T[:, 0, :].to_value(1 / u.sr)
offset_bins = np.append(psf_table['RAD_LO'], psf_table['RAD_HI'][-1])
phi_bins = np.linspace(0, 2 * np.pi, 100)
# Let's make a nice 2d representation of the radially symmetric PSF
r, phi = np.meshgrid(offset_bins.to_value(u.deg), phi_bins)
# look at a single energy bin
# repeat values for each phi bin
center = 0.5 * (psf_table['ENERG_LO'] + psf_table['ENERG_HI'])
fig = plt.figure(figsize=(15, 5))
axs = [fig.add_subplot(1, 3, i, projection='polar') for i in range(1, 4)]
for bin_id, ax in zip([10, 20, 30], axs):
image = np.tile(psf[bin_id], (len(phi_bins) - 1, 1))
ax.set_title(f'PSF @ {center[bin_id]:.2f} TeV')
ax.pcolormesh(phi, r, image)
ax.set_ylim(0, 0.25)
ax.set_aspect(1)
fig.tight_layout()
None # to remove clutter by mpl objects
# +
# Profile
center = 0.5 * (offset_bins[1:] + offset_bins[:-1])
xerr = 0.5 * (offset_bins[1:] - offset_bins[:-1])
for bin_id in [10, 20, 30]:
plt.errorbar(
center.to_value(u.deg),
psf[bin_id],
xerr=xerr.to_value(u.deg),
ls='',
label=f'Energy Bin {bin_id}'
)
#plt.yscale('log')
plt.legend()
plt.xlim(0, 0.25)
plt.ylabel('PSF PDF / sr⁻¹')
plt.xlabel('Distance from True Source / deg')
None # to remove clutter by mpl objects
# -
# #### Angular resolution
# [back to top](#Table-of-contents)
# +
# Data from EventDisplay
h = irf_eventdisplay["AngRes"]
x = 0.5 * (10**h.edges[:-1] + 10**h.edges[1:])
xerr = 0.5 * np.diff(10**h.edges)
y = h.values
yerr = np.sqrt(h.variances)
plt.errorbar(x, y, xerr=xerr, yerr=yerr, ls='', label="EventDisplay")
# pyirf
ang_res = QTable.read(pyirf_file, hdu='ANGULAR_RESOLUTION')[1:-1]
plt.errorbar(
0.5 * (ang_res['true_energy_low'] + ang_res['true_energy_high']).to_value(u.TeV),
ang_res['angular_resolution'].to_value(u.deg),
xerr=0.5 * (ang_res['true_energy_high'] - ang_res['true_energy_low']).to_value(u.TeV),
ls='',
label='pyirf'
)
# Style settings
plt.xlim(1.e-2, 2.e2)
plt.ylim(2.e-2, 1)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("True energy / TeV")
plt.ylabel("Angular Resolution / deg")
plt.grid(which="both")
plt.legend(loc="best")
None # to remove clutter by mpl objects
# -
# ### Energy dispersion
# [back to top](#Table-of-contents)
# +
edisp = QTable.read(pyirf_file, hdu='ENERGY_DISPERSION')[0]
e_bins = edisp['ENERG_LO'][1:]
migra_bins = edisp['MIGRA_LO'][1:]
plt.title('pyirf')
plt.pcolormesh(e_bins.to_value(u.TeV), migra_bins, edisp['MATRIX'].T[1:-1, 1:-1, 0].T, cmap='inferno')
plt.xscale('log')
plt.yscale('log')
plt.colorbar(label='PDF Value')
plt.xlabel(r'$E_\mathrm{True} / \mathrm{TeV}$')
plt.ylabel(r'$E_\mathrm{Reco} / E_\mathrm{True}$')
None # to remove clutter by mpl objects
# -
# #### Energy resolution
# [back to top](#Table-of-contents)
# +
# Data from EventDisplay
h = irf_eventdisplay["ERes"]
x = 0.5 * (10**h.edges[:-1] + 10**h.edges[1:])
xerr = np.diff(10**h.edges) / 2
y = h.values
yerr = np.sqrt(h.variances)
# Data from pyirf
bias_resolution = QTable.read(pyirf_file, hdu='ENERGY_BIAS_RESOLUTION')[1:-1]
# Plot function
plt.errorbar(x, y, xerr=xerr, yerr=yerr, ls='', label="EventDisplay")
plt.errorbar(
0.5 * (bias_resolution['true_energy_low'] + bias_resolution['true_energy_high']).to_value(u.TeV),
bias_resolution['resolution'],
xerr=0.5 * (bias_resolution['true_energy_high'] - bias_resolution['true_energy_low']).to_value(u.TeV),
ls='',
label='pyirf'
)
plt.xscale('log')
# Style settings
plt.xlabel(r"$E_\mathrm{True} / \mathrm{TeV}$")
plt.ylabel("Energy resolution")
plt.grid(which="both")
plt.legend(loc="best")
None # to remove clutter by mpl objects
# -
# ### Background rate
# [back to top](#Table-of-contents)
# +
from pyirf.utils import cone_solid_angle
# Data from EventDisplay
h = irf_eventdisplay["BGRate"]
x = 0.5 * (10**h.edges[:-1] + 10**h.edges[1:])
width = np.diff(10**h.edges)
xerr = width / 2
y = h.values
yerr = np.sqrt(h.variances)
# pyirf data
bg_rate = QTable.read(pyirf_file, hdu='BACKGROUND')[0]
reco_bins = np.append(bg_rate['ENERG_LO'], bg_rate['ENERG_HI'][-1])
# first fov bin, [0, 1] deg
fov_bin = 0
rate_bin = bg_rate['BKG'].T[:, fov_bin]
# interpolate theta cut for given e reco bin
e_center_bg = 0.5 * (bg_rate['ENERG_LO'] + bg_rate['ENERG_HI'])
e_center_theta = 0.5 * (rad_max['ENERG_LO'] + rad_max['ENERG_HI'])
theta_cut = np.interp(e_center_bg, e_center_theta, rad_max['RAD_MAX'].T[:, 0])
# undo normalization
rate_bin *= cone_solid_angle(theta_cut)
rate_bin *= np.diff(reco_bins)
# Plot function
plt.errorbar(x, y, xerr=xerr, yerr=yerr, ls='', label="EventDisplay")
plt.errorbar(
0.5 * (bg_rate['ENERG_LO'] + bg_rate['ENERG_HI']).to_value(u.TeV)[1:-1],
rate_bin.to_value(1 / u.s)[1:-1],
xerr=np.diff(reco_bins).to_value(u.TeV)[1:-1] / 2,
ls='',
label='pyirf',
)
# Style settings
plt.xscale("log")
plt.xlabel(r"$E_\mathrm{Reco} / \mathrm{TeV}$")
plt.ylabel("Background rate / (s⁻¹ TeV⁻¹) ")
plt.grid(which="both")
plt.legend(loc="best")
plt.yscale('log')
None # to remove clutter by mpl objects
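# The `cone_solid_angle` normalization undone above follows from the solid angle of a cone of half-opening angle θ, Ω = 2π(1 − cos θ). A minimal numpy sketch of the formula (not the pyirf implementation itself, which works with astropy quantities):

```python
import numpy as np

def cone_solid_angle(theta_rad):
    """Solid angle in steradians subtended by a cone of half-opening angle theta (radians)."""
    return 2.0 * np.pi * (1.0 - np.cos(theta_rad))

# a theta-cut of 0.1 deg corresponds to a tiny patch of sky, roughly pi * theta^2 sr
omega = cone_solid_angle(np.deg2rad(0.1))
print(omega)
```

Multiplying the per-steradian background rate by this Ω (and by the energy bin width) recovers the absolute rate inside the θ-cut, as done in the cell above.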
| docs/notebooks/comparison_with_EventDisplay.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
# +
#
# Julia version of code adapted from
# https://github.com/rawlings-group/paresto/blob/master/examples/green_book/bvsm.m
#
# -
using CSV, DataFrames
using DifferentialEquations, DiffEqSensitivity
using Plots
using Optim
using FiniteDiff, ForwardDiff, FiniteDifferences
using Interpolations
using Distributions
using LinearAlgebra
Qf_data = CSV.read("data_sets/flow.dat", DataFrame, header = ["Col"])
strsplit(x) = [parse(Float64, y) for y in split(strip(x),r"\s+")]
Qf_data_v = strsplit.(Qf_data.Col)
tQf = append!([0.0], [x[1] for x in Qf_data_v])
Qf = append!([0.0], [x[2] for x in Qf_data_v]./0.728)
Qf_itl = ConstantInterpolation(tQf, Qf)
tQf_i = 0:1:maximum(tQf)
Qf_i = Qf_itl.(tQf_i)
plot(tQf, Qf, seriestype = :scatter)
plot!(tQf_i, Qf_i)
lc_data = CSV.read("data_sets/lc.dat", DataFrame, delim = "\t", header = ["Col"])
lc_data_v = strsplit.(lc_data.Col)
tlc = [x[1] for x in lc_data_v]
lc = [x[2] for x in lc_data_v]
plot(tlc, lc, seriestype = :scatter)
# +
function rates_red(u, p, t)
cBf = 0.00721
vR0 = 2370.0
vR = u[1]
eps2 = u[2]
nBadded = max((vR - vR0) * cBf, 1.0e-6)
Qf_itl = p[1]
nA0 = p[2]
k = p[3]
dvR = Qf_itl(t)
deps2 = Qf_itl(t) * cBf / (1.0 + k * (nA0 - nBadded + eps2)/(nBadded - 2.0 * eps2))
return [dvR, deps2]
end
# -
u0 = [2370.0, 0.0]
p = [Qf_itl, 2.35, 2.0]
tspan = (0.0, 850.0)
prob = ODEProblem(rates_red, u0, tspan, p)
sol = solve(prob, Rosenbrock23(), tstops = tQf);
function get_lcpred(t, sol)
cBf = 0.00721
vR0 = 2370.0
nBadded = (sol(t)[1] - vR0) * cBf
nD = sol(t)[2]
nC = nBadded - 2.0 * nD
lcpred = 1.0/(1.0 + 2*nD/(nC + 1.0e-6))
return lcpred
end
plot(tlc, lc, seriestype = :scatter)
plot!(sol.t, [get_lcpred(t, sol) for t in sol.t])
lc_data = DataFrame(tlc = tlc, lc = lc)
function calc_SSE(pest, data, Qf_itl, tQf)
_prob = remake(prob, p = [Qf_itl, pest[1], pest[2]])
sol = solve(_prob, Rosenbrock23(), tstops = tQf)
ypred = [get_lcpred(t, sol) for t in data.tlc]
sse = 0.0
for (i, t) in enumerate(data.tlc)
sse = sse + (data.lc[i]/ypred[i] - 1.0)^2
end
return sse
end
calc_SSE([2.35, 2.0], lc_data, Qf_itl, tQf)
res_pe = optimize(p -> calc_SSE(p, lc_data, Qf_itl, tQf), [2.35, 1.0], LBFGS())
res_pe.minimizer
H_ad = ForwardDiff.hessian(p -> calc_SSE(p, lc_data, Qf_itl, tQf), [2.35, 2.0])
calc_SSE_a(p) = calc_SSE(p, lc_data, Qf_itl, tQf)
H = FiniteDiff.finite_difference_hessian(p -> calc_SSE(p, lc_data, Qf_itl, tQf), [2.35, 2.0])
inv(H)
n = size(lc_data)[1]
p = 2
mse = calc_SSE([2.35, 2.0], lc_data, Qf_itl, tQf)/(n - p)
cov_est = 2 * mse * inv(H)
d = FDist(p, n-p)
mult_fact = p*quantile(d, 0.95)
sqrt.(mult_fact * diag(cov_est))
n
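# The confidence-interval computation above (covariance ≈ 2·MSE·H⁻¹, half-widths scaled by the F-distribution quantile) can be mirrored in Python. A hedged sketch with a made-up SSE Hessian and minimum standing in for the fitted model; the formula, not the numbers, is the point:

```python
import numpy as np
from scipy.stats import f as f_dist

# assumed toy values: p = 2 parameters fitted to n = 20 data points
n, p = 20, 2
H = np.array([[8.0, 1.0],
              [1.0, 4.0]])      # Hessian of the SSE at the optimum (made up)
sse_min = 0.5                   # SSE at the optimum (made up)

mse = sse_min / (n - p)                     # mean squared error
cov_est = 2.0 * mse * np.linalg.inv(H)      # approximate parameter covariance
mult = p * f_dist.ppf(0.95, p, n - p)       # 95% joint-confidence scaling factor
half_widths = np.sqrt(mult * np.diag(cov_est))
print(half_widths)
```

This reproduces the same chain as the Julia cell: `mse`, `cov_est`, `mult_fact`, and the square-rooted diagonal give the half-widths of the parameter confidence intervals.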
| julia_paresto/bvsm_red_parmest_jbr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from PIL import Image
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import random
piclocation = r'2019-06-21.jpg'
im = Image.open(piclocation)
pixels = list(im.getdata())
pixel_sample = random.sample(pixels, 5000)
fig = plt.figure(figsize=(12,10))
ax1 = fig.add_subplot(111, projection='3d')
for i in pixel_sample:
xs = i[0]
ys = i[1]
zs = i[2]
r, g, b = xs/255, ys/255, zs/255
c = [(r, g, b)]
ax1.scatter(xs, ys, zs, c=c)
ax1.set_xlabel('Red Value')
ax1.set_ylabel('Green Value')
ax1.set_zlabel('Blue Value')
ax2 = fig.add_subplot(321)
ax2.imshow(im, interpolation='nearest')
ax2.axis("off")
plt.show()
# -
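# Calling `scatter` once per pixel, as in the loop above, is slow for large samples. A hedged alternative sketch passes all points and their normalized colors in a single vectorized call; random pixels stand in for `pixel_sample`:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen rendering for this standalone sketch
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(5000, 3))  # stand-in for pixel_sample

fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot(111, projection="3d")
# one vectorized call: positions are the RGB values, colors the same values scaled to [0, 1]
ax.scatter(pixels[:, 0], pixels[:, 1], pixels[:, 2], c=pixels / 255.0, s=4)
ax.set_xlabel("Red Value")
ax.set_ylabel("Green Value")
ax.set_zlabel("Blue Value")
plt.close(fig)
```

With the real image, `pixels = np.array(pixel_sample)` would slot straight in.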
| RGBScatterJpyn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
class ListNode:
def __init__(self, x):
self.val = x
self.next = None
# Linked-list problems are fairly uniform. Because of the data structure's nature, nodes can only be reached through pointers, so the problems can be roughly grouped by the number of working pointers they maintain:
# - single pointer
# - two pointers
# - multiple pointers
#
# ## Single pointer
# A single pointer means only one working pointer is needed to scan the list, and that pointer usually points at the predecessor node.
# [Delete Node in a Linked List](https://leetcode.com/problems/delete-node-in-a-linked-list/). Given only a pointer to the node to be deleted, which is guaranteed not to be the tail, delete that node from the list.
#
# Idea: deleting a node normally requires a pre pointer. Since this node is guaranteed not to be the tail, we can copy the successor's value over it and skip the successor, which is an equivalent deletion.
def deleteNode(node):
node.val = node.next.val
node.next = node.next.next
# [Remove Duplicates from Sorted List](https://leetcode.com/problems/remove-duplicates-from-sorted-list/). Remove the redundant nodes from a sorted linked list.
#
# Idea: if the next node's value equals the current node's, delete it; otherwise move on.
def deleteDuplicates(head: ListNode) -> ListNode:
if not head:
return head
idx = head
while idx.next:
if idx.next.val == idx.val:
idx.next = idx.next.next
else:
idx = idx.next
return head
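# A quick sanity check of `deleteDuplicates`. The `from_list`/`to_list` helpers are ad-hoc additions for this sketch, not part of the solutions above, and the class and function are redefined so the sketch runs standalone:

```python
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None

def deleteDuplicates(head):
    # same logic as above: skip the next node while it repeats the current value
    idx = head
    while idx and idx.next:
        if idx.next.val == idx.val:
            idx.next = idx.next.next
        else:
            idx = idx.next
    return head

def from_list(values):
    dummy = ListNode(None)
    idx = dummy
    for v in values:
        idx.next = ListNode(v)
        idx = idx.next
    return dummy.next

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

result = to_list(deleteDuplicates(from_list([1, 1, 2, 3, 3, 3])))
print(result)  # -> [1, 2, 3]
```

The same pair of helpers works for spot-checking any of the functions in this notebook.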
# ## Two pointers
# Two-pointer problems are the classics of linked lists. Fast/slow pointers, for example, can find the midpoint or the $k$-th node from the end, and can also detect cycles.
# [Reverse Linked List](https://leetcode.com/problems/reverse-linked-list/). Reverse a linked list.
#
# Idea: in my view the most general way to reverse a list is the following. Suppose we have the list:
# $$
# 1\to{2}\to{3}\to{4}
# $$
# We need to transform it into the state:
# $$
# 1\rightleftharpoons{2}\gets{3}\gets{4}
# $$
# Use ```left``` and ```right``` to mark the pair of nodes being reversed each step: first save ```right```'s successor ```post```, then invert the pointer between ```left``` and ```right```, and finally advance both pointers.
def reverseList(head: ListNode) -> ListNode:
if not head or not head.next:
return head
left = head
right = left.next
while right:
        post = right.next  # 1. save the successor
        right.next = left  # 2. invert the pointer
        left = right  # advance both pointers
right = post
head.next = right
return left
# [Remove Duplicates from Sorted List II](https://leetcode.com/problems/remove-duplicates-from-sorted-list-ii/). Remove every node that has duplicates from a sorted linked list.
#
# Idea: maintain two pointers: pre points at the last non-duplicated node, last at the last duplicated node. Initially pre points at a dummy node, and each round the search for duplicates starts from ```last=pre.next```. If ```last!=pre.next```, then last has moved, i.e. duplicates exist, so delete them.
def deleteDuplicates(head: ListNode) -> ListNode:
if not head or not head.next:
return head
dummy = ListNode(None)
dummy.next = head
pre = dummy
while pre.next:
last = pre.next
while last.next and last.next.val == last.val:
last = last.next
if pre.next != last:
pre.next = last.next
else:
pre = pre.next
return dummy.next
# [Palindrome Linked List](https://leetcode.com/problems/palindrome-linked-list/). Check whether a singly linked list is a palindrome.
#
# Idea: first locate the midpoint with fast/slow pointers, then break the list, reverse one half, and compare node by node.
def isPalindrome(head: ListNode) -> bool:
if not head or not head.next:
return True
slow = fast = head
while fast and fast.next and fast.next.next:
slow = slow.next
fast = fast.next.next
head_2 = slow.next
slow.next = None
def reverse_ll(head):
if not head or not head.next:
return head
left, right = head, head.next
while right:
post = right.next
right.next = left
left = right
right = post
head.next = right
return left
head_2 = reverse_ll(head_2)
while head and head_2:
if head.val != head_2.val:
return False
head = head.next
head_2 = head_2.next
return True
# [Reverse Linked List II](https://leetcode.com/problems/reverse-linked-list-ii/). Given an interval $[m,n]$, reverse the nodes of the list within that interval.
#
# Idea: first locate the start of the segment to reverse, then perform $n-m$ pointer inversions on that segment.
def reverseBetween(head: ListNode, m: int, n: int) -> ListNode:
dummy = ListNode(None)
dummy.next = head
idx = dummy
for i in range(m-1):
idx = idx.next
pre = idx
raw_start = left = pre.next
right = left.next
    for i in range(n-m):
post = right.next
right.next = left
left = right
right = post
raw_start.next = right
pre.next = left
return dummy.next
# [Remove Nth Node From End of List](https://leetcode.com/problems/remove-nth-node-from-end-of-list/). Delete the $n$-th node from the end of the list; the given $n$ is guaranteed to be valid.
#
# Idea: classic problem. Fast/slow pointers, with the fast pointer moving $n$ steps ahead first.
def removeNthFromEnd(head: ListNode, n: int) -> ListNode:
dummy = ListNode(None)
dummy.next = head
slow = fast = dummy
for _ in range(n):
fast = fast.next
while fast and fast.next:
slow = slow.next
fast = fast.next
slow.next = slow.next.next
return dummy.next
# [Linked List Cycle](https://leetcode.com/problems/linked-list-cycle/). Detect a cycle in a singly linked list.
#
# Idea: fast/slow pointers; if there is a cycle, the two pointers must meet.
def hasCycle(head):
if not head or not head.next:
return False
slow = fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
if slow is fast:
return True
return False
# [Linked List Cycle II](https://leetcode.com/problems/linked-list-cycle-ii/). Detect a cycle in a singly linked list and also find the cycle's entry node.
#
# Idea: classic problem. Step one, detect the cycle with fast/slow pointers; if there is a cycle they must meet. Step two, find the entry: at the meeting point, the meeting node and the head are equidistant from the cycle entry, so restart one pointer at the head and advance both one step at a time until they meet again.
def detectCycle(head):
if not head or not head.next:
return None
slow = fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
if slow is fast:
slow = head
while slow is not fast:
slow = slow.next
fast = fast.next
return slow
return None
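# A quick check of the cycle-entry routine on a hand-built cyclic list. The `ListNode` class and Floyd's algorithm are redefined here only so the sketch runs standalone; the logic is the same as above:

```python
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None

def detectCycle(head):
    # Floyd's algorithm, as in the cell above
    if not head or not head.next:
        return None
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            slow = head
            while slow is not fast:
                slow = slow.next
                fast = fast.next
            return slow
    return None

# build 1 -> 2 -> 3 -> 4 -> (back to 3)
nodes = [ListNode(i) for i in range(1, 5)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[-1].next = nodes[2]

entry = detectCycle(nodes[0])
print(entry.val)  # -> 3
```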
# [Intersection of Two Linked Lists](https://leetcode.com/problems/intersection-of-two-linked-lists/). Find the intersection node of two singly linked lists.
#
# Idea: a rather elegant trick. Set up one pointer at each head and scan linearly; the key is that when a pointer finishes its list, it switches to the other list's head. This way, if an intersection exists the two pointers must meet there, and if not, they become None at the same time.
def getIntersectionNode(headA, headB):
A, B = headA, headB
while A is not B:
A = A.next if A else headB
B = B.next if B else headA
return A
# [Odd Even Linked List](https://leetcode.com/problems/odd-even-linked-list/). Given a singly linked list with indices starting at $1$, move all odd-indexed nodes in front of the even-indexed ones.
#
# Idea: $2-pass$. The $1st-pass$ splits the list into two lists by index parity; the $2nd-pass$ finds the tail of the odd list and appends the even list to it.
def oddEvenList(head: ListNode) -> ListNode:
if not head or not head.next:
return head
# 1st-pass
even_head = head.next
idx = head
while idx.next:
tmp = idx.next
idx.next = idx.next.next
idx = tmp
# 2nd-pass
idx = head
while idx.next:
idx = idx.next
idx.next = even_head
return head
# [Copy List with Random Pointer](https://leetcode.com/problems/copy-list-with-random-pointer/). Copy a complex linked list: a plain singly linked list where every node carries an extra random pointer to an arbitrary node in the list.
#
# Idea: $3-pass$. The $1st-pass$ copies each node's value, inserting each copy right after its original; the $2nd-pass$ copies the random pointers; the $3rd-pass$ splits the list into two.
# +
class Node:
    def __init__(self, val, next=None, random=None):
        self.val = val
        self.next = next
        self.random = random
def copyRandomList(head: 'Node') -> 'Node':
if not head:
return None
# 1st-pass
org_node = head
while org_node:
copy_node = Node(org_node.val)
copy_node.next = org_node.next
org_node.next = copy_node
org_node = copy_node.next
# 2nd-pass
org_node = head
while org_node:
copy_node = org_node.next
if org_node.random:
copy_node.random = org_node.random.next
org_node = copy_node.next
# 3rd-pass
copy_head = head.next
idx = head
while idx.next:
tmp = idx.next
idx.next = idx.next.next
idx = tmp
return copy_head
# -
# [Add Two Numbers](https://leetcode.com/problems/add-two-numbers/). Given two singly linked lists, each representing a number in reverse order with one digit per node, compute their sum.
#
# Idea: add node by node, producing new nodes as we go.
def addTwoNumbers(l1: ListNode, l2: ListNode) -> ListNode:
dummy = ListNode(None)
idx = dummy
carry = 0
while l1 or l2:
if l1:
carry += l1.val
l1 = l1.next
if l2:
carry += l2.val
l2 = l2.next
idx.next = ListNode(carry % 10)
idx = idx.next
carry = carry//10
if carry > 0:
node = ListNode(carry % 10)
idx.next = node
return dummy.next
# [Middle of the Linked List](https://leetcode.com/problems/middle-of-the-linked-list/). Find the middle node of a singly linked list; if there are two middle nodes, return the right one.
#
# Idea: working through a small example gives the loop's termination condition.
def middleNode(head: ListNode) -> ListNode:
slow = fast = head
while fast and fast.next:
slow = slow.next
fast = fast.next.next
return slow
# [Rotate List](https://leetcode.com/problems/rotate-list/). Rotate the list to the right by $k$ positions.
#
# Idea: rotating right by $k$ means the $k$-th node from the end becomes the new head. Find the node just before it (the new tail), close the list into a ring, then break the ring.
def rotateRight(head: ListNode, k: int) -> ListNode:
if not head or not head.next:
return head
    # 1. compute the list length n
idx = head
n = 1
while idx.next:
idx = idx.next
n += 1
    # 2. find the new tail (the node before the k-th from the end)
tail = head
for _ in range(n-(k % n)-1):
tail = tail.next
    # 3. close the ring, then break it
idx = tail
while idx.next:
idx = idx.next
idx.next = head
head = tail.next
tail.next = None
return head
# ## Multiple pointers
# [Swap Nodes in Pairs](https://leetcode.com/problems/swap-nodes-in-pairs/). Reverse the nodes of a list in pairs.
#
# Idea: this looks involved but becomes simple with a few more pointers. Maintain four: pre, 1st, 2nd and 3rd; the pair to reverse is 1st and 2nd.
def swapPairs(head: ListNode) -> ListNode:
if not head or not head.next:
return head
dummy = ListNode(None)
dummy.next = head
pre = dummy
while pre and pre.next and pre.next.next:
first, second, third = pre.next, pre.next.next, pre.next.next.next
pre.next = second
second.next = first
first.next = third
pre = pre.next.next
return dummy.next
# ## Splitting and merging
# Splitting and merging linked lists involve at least two pointers.
# [Reorder List](https://leetcode.com/problems/reorder-list/). Reorder a list into the sequence $1->n->2->n-1->...$.
#
# Idea: three steps. 1. split the list at the midpoint, which should lean right; 2. reverse the second half; 3. interleave node by node.
def reorderList(head: ListNode) -> None:
if not head or not head.next or not head.next.next:
return head
    # 1. split at the midpoint
slow = fast = head
while fast.next and fast.next.next:
slow = slow.next
fast = fast.next.next
mid = slow.next
slow.next = None
    # 2. reverse the second half
left, right = mid, mid.next
while right:
post = right.next
right.next = left
left, right = right, post
mid.next = right
    # 3. interleave
idx1, idx2 = head, left
while idx2:
post1, post2 = idx1.next, idx2.next
idx1.next = idx2
idx2.next = post1
idx1, idx2 = post1, post2
return head
# [Merge Two Sorted Lists](https://leetcode.com/problems/merge-two-sorted-lists/), asked as a live-coding question at Zuoyebang in 2019. Merge two sorted linked lists.
#
# Idea: simplest approach: build a new list.
def mergeTwoLists(l1: ListNode, l2: ListNode) -> ListNode:
dummy = ListNode(None)
idx = dummy
while l1 and l2:
if l1.val < l2.val:
idx.next = ListNode(l1.val)
l1 = l1.next
else:
idx.next = ListNode(l2.val)
l2 = l2.next
idx = idx.next
while l1:
idx.next = ListNode(l1.val)
l1 = l1.next
idx = idx.next
while l2:
idx.next = ListNode(l2.val)
idx = idx.next
l2 = l2.next
return dummy.next
# [Sort List](https://leetcode.com/problems/sort-list/). Sort a singly linked list.
#
# Idea: merge sort, in three steps. 1. return directly for an empty or single-node list; 2. split the list at its midpoint and recurse; 3. merge the two sorted halves.
def sortList(head: ListNode) -> ListNode:
    if not head or not head.next:
        return head
    # 1. split
    slow = fast = head
    while fast.next and fast.next.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None
    # 2. recurse
    left, right = sortList(head), sortList(mid)
    # 3. merge
    dummy = ListNode(None)
    tmp = dummy
    while left and right:
        if left.val < right.val:
            tmp.next = left
            left = left.next
        else:
            tmp.next = right
            right = right.next
        tmp = tmp.next
    if left:
        tmp.next = left
    if right:
        tmp.next = right
    return dummy.next
# [Partition List](https://leetcode.com/problems/partition-list/). Given a list and a threshold $x$, rearrange it so that all nodes smaller than $x$ come before all nodes greater than or equal to $x$.
#
# Idea: build two new lists directly and move the nodes into them as specified.
def partition(head: ListNode, x: int) -> ListNode:
dummy1, dummy2 = ListNode(None), ListNode(None)
idx1, idx2 = dummy1, dummy2
    idx = head  # working pointer on the original list
while idx:
if idx.val < x:
idx1.next = idx
idx = idx.next
idx1 = idx1.next
            idx1.next = None  # detach
else:
idx2.next = idx
idx = idx.next
idx2 = idx2.next
            idx2.next = None  # detach
idx1.next = dummy2.next
return dummy1.next
| Algorithm/BlankBoard/Python/LinkedList.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import os
from numpy import loadtxt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
import xgboost as xgb
from xgboost import XGBClassifier
# -
# read in data
dtrain = xgb.DMatrix(data='./data/agaricus.txt.train')
dtest = xgb.DMatrix(data='./data/agaricus.txt.test')
# specify parameters via map
param = {'max_depth': 2, 'eta': 1, 'objective': 'binary:logistic'}
num_round = 2
bst = xgb.train(param, dtrain, num_round)
# make prediction
preds = bst.predict(dtest)
bst
len(preds)
len(bst.predict(dtrain))
# load data
dataset = loadtxt('./data/pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
# split data into train and test sets
seed = 42
test_size = 0.25
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
# +
# fit model to training data
model = XGBClassifier(learning_rate=0.3)
model.fit(X_train, y_train)
print()
print(model)
print()
# make predictions for train data
x_pred = model.predict(X_train)
x_predictions = [round(value) for value in x_pred]
# evaluate predictions
x_accuracy = accuracy_score(y_train, x_predictions)
x_conf = confusion_matrix(y_train, x_predictions)
print('Training Accuracy: {:.2f}'.format(x_accuracy * 100))
print('Confusion Matrix:\n{}'.format(x_conf))
# make predictions for test data
y_pred = model.predict(X_test)
y_predictions = [round(value) for value in y_pred]
# evaluate predictions
y_accuracy = accuracy_score(y_test, y_predictions)
y_conf = confusion_matrix(y_test, y_predictions)
print('Test Accuracy: {:.2f}'.format(y_accuracy * 100))
print('Confusion Matrix:\n{}'.format(y_conf))
# -
| xgboost_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
import numpy as np
import pandas as pd
import sys
pd.set_option('display.max_colwidth', None)
import warnings
warnings.filterwarnings("ignore")
# + _uuid="c1c63d2d93747f2849b9306f7d6174aec6050e0e"
train = pd.read_csv('../input/application_train.csv')
# + _uuid="de9c2ac97775b20991ef9ea1e0a914b090caa010"
bureau = pd.read_csv('../input/bureau.csv')
# + [markdown] _uuid="e69bf9ea736f9a0aa992b2f1da5bbc08950d3cdd"
# # HOW TO INTERPRET BUREAU DATA
#
# This table describes the loan data of each unique customer with all financial institutions other than Home Credit.
# For each unique SK_ID_CURR we have multiple SK_ID_BUREAU IDs, each being a unique loan transaction availed by the same customer from another financial institution and reported to the bureau.
# + [markdown] _uuid="ec4df5029fb0b8d5c83eda395d0f81a705d3e674"
# # EXAMPLE OF BUREAU TRANSACTIONS
#
# - In the example below customer with SK_ID_CURR = 100001 had 7 credit transactions before the current application.
# + _uuid="e1b10dbbc77faf7f2c8f142c82a48157325591c5"
bureau[bureau['SK_ID_CURR'] == 100001]
# + [markdown] _uuid="bbe08509242f983c705b40e7ea098280615917a6"
# # UNDERSTANDING OF VARIABLES
# + [markdown] _uuid="aac62f58dbda79c48f499d86ed696a6bbaac0613"
# CREDIT_ACTIVE - Current status of a Loan - Closed/ Active (2 values)
#
# CREDIT_CURRENCY - Currency in which the transaction was executed - Currency1, Currency2, Currency3, Currency4
# ( 4 values)
#
# CREDIT_DAY_OVERDUE - Number of overdue days
#
# CREDIT_TYPE - Consumer Credit, Credit card, Mortgage, Car loan, Microloan, Loan for working capital replenishment,
# Loan for Business development, Real estate loan, Unknown type of loan, Another type of loan,
# Cash loan, Loan for the purchase of equipment, Mobile operator loan, Interbank credit,
# Loan for purchase of shares (15 values)
#
# DAYS_CREDIT - Number of days ELAPSED since customer applied for CB credit with respect to current application
# Interpretation - Are these loans evenly spaced time intervals? Are they concentrated within a same time frame?
#
#
# DAYS_CREDIT_ENDDATE - Number of days the customer CREDIT is valid at the time of application
# CREDIT_DAY_OVERDUE - Number of days the customer CREDIT is past the end date at the time of application
#
# AMT_CREDIT_SUM - Total available credit for a customer
# AMT_CREDIT_SUM_DEBT - Total amount yet to be repayed
# AMT_CREDIT_SUM_LIMIT - Current Credit that has been utilized
# AMT_CREDIT_SUM_OVERDUE - Current credit payment that is overdue
# CNT_CREDIT_PROLONG - How many times was the Credit date prolonged
#
# # NOTE:
# For a given loan transaction
# 'AMT_CREDIT_SUM' = 'AMT_CREDIT_SUM_DEBT' +'AMT_CREDIT_SUM_LIMIT'
#
#
#
# AMT_ANNUITY - Annuity of the Credit Bureau data
# DAYS_CREDIT_UPDATE - Number of days before current application when last CREDIT UPDATE was received
# DAYS_ENDDATE_FACT - Days since CB credit ended at the time of application
# AMT_CREDIT_MAX_OVERDUE - Maximum Credit amount overdue at the time of application
#
# + [markdown] _uuid="bf5326cce3eda032ddd3153cedd8489dc23f72b5"
# # FEATURE ENGINEERING WITH BUREAU CREDIT
# + [markdown] _uuid="2bc4aa7b6bae3b57b7a7920e459902063a477553"
# # FEATURE 1 - NUMBER OF PAST LOANS PER CUSTOMER
# + _uuid="5fa4424b0c343f46c87af48ab33364753b7604f5"
B = bureau[0:10000]
grp = B[['SK_ID_CURR', 'DAYS_CREDIT']].groupby(by = ['SK_ID_CURR'])['DAYS_CREDIT'].count().reset_index().rename(index=str, columns={'DAYS_CREDIT': 'BUREAU_LOAN_COUNT'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
print(B.shape)
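The groupby-count-then-merge pattern above can be seen in isolation on a tiny synthetic frame (column names follow the bureau table; the values are made up):

```python
import pandas as pd

# Toy stand-in for the bureau table: customer 1 has two past loans, customer 2 has one
toy = pd.DataFrame({
    'SK_ID_CURR': [1, 1, 2],
    'DAYS_CREDIT': [-100, -400, -250],
})

# Count rows (past loans) per customer, then broadcast the count back via merge
grp = (toy.groupby('SK_ID_CURR')['DAYS_CREDIT']
          .count()
          .reset_index()
          .rename(columns={'DAYS_CREDIT': 'BUREAU_LOAN_COUNT'}))
toy = toy.merge(grp, on='SK_ID_CURR', how='left')

print(toy['BUREAU_LOAN_COUNT'].tolist())  # [2, 2, 1]
```

The same effect can be had in one line with `groupby(...).transform('count')`, which skips the intermediate merge.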
# + [markdown] _uuid="ef1d2c8dc19e66a135b3b575f30acd0909693e22"
# # FEATURE 2 - NUMBER OF TYPES OF PAST LOANS PER CUSTOMER
# + _uuid="75866b1b15bd469cb4daa71f4d43c64a36999783"
B = bureau[0:10000]
grp = B[['SK_ID_CURR', 'CREDIT_TYPE']].groupby(by = ['SK_ID_CURR'])['CREDIT_TYPE'].nunique().reset_index().rename(index=str, columns={'CREDIT_TYPE': 'BUREAU_LOAN_TYPES'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
print(B.shape)
# + [markdown] _uuid="c7dacc5c1c6d9fd748d2181849c7617a9c1b7449"
# # FEATURE 3 - AVERAGE NUMBER OF PAST LOANS PER TYPE PER CUSTOMER
#
# # Is the Customer diversified in taking multiple types of Loan or Focused on a single type of loan
#
# + _uuid="0c490cc3d1610efdf294d08e50f9b268661d846e"
B = bureau[0:10000]
# Number of Loans per Customer
grp = B[['SK_ID_CURR', 'DAYS_CREDIT']].groupby(by = ['SK_ID_CURR'])['DAYS_CREDIT'].count().reset_index().rename(index=str, columns={'DAYS_CREDIT': 'BUREAU_LOAN_COUNT'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
# Number of types of Credit loans for each Customer
grp = B[['SK_ID_CURR', 'CREDIT_TYPE']].groupby(by = ['SK_ID_CURR'])['CREDIT_TYPE'].nunique().reset_index().rename(index=str, columns={'CREDIT_TYPE': 'BUREAU_LOAN_TYPES'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
# Average Number of Loans per Loan Type
B['AVERAGE_LOAN_TYPE'] = B['BUREAU_LOAN_COUNT']/B['BUREAU_LOAN_TYPES']
del B['BUREAU_LOAN_COUNT'], B['BUREAU_LOAN_TYPES']
import gc
gc.collect()
print(B.shape)
# + [markdown] _uuid="74c86dc591524fa52e587f8a466578b3e19456a8"
# # FEATURE 4 - % OF ACTIVE LOANS FROM BUREAU DATA
# + _uuid="ba87d73bfb1730c3ab043c4a43fb13b3edea01ff"
B = bureau[0:10000]
# Create a new dummy column for whether CREDIT is ACTIVE or CLOSED
B['CREDIT_ACTIVE_BINARY'] = B['CREDIT_ACTIVE']
def f(x):
if x == 'Closed':
y = 0
else:
y = 1
return y
B['CREDIT_ACTIVE_BINARY'] = B.apply(lambda x: f(x.CREDIT_ACTIVE), axis = 1)
# Calculate mean number of loans that are ACTIVE per CUSTOMER
grp = B.groupby(by = ['SK_ID_CURR'])['CREDIT_ACTIVE_BINARY'].mean().reset_index().rename(index=str, columns={'CREDIT_ACTIVE_BINARY': 'ACTIVE_LOANS_PERCENTAGE'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
del B['CREDIT_ACTIVE_BINARY']
import gc
gc.collect()
print(B.shape)
B[B['SK_ID_CURR'] == 100653]
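The key idea in Feature 4 is that the mean of a 0/1 flag within a group is exactly the fraction of rows where the flag is 1. A minimal sketch with made-up statuses (note that a vectorized comparison replaces the row-wise `apply`, which is much faster on the full table):

```python
import pandas as pd

# Hypothetical loan statuses for two customers
toy = pd.DataFrame({
    'SK_ID_CURR':    [1, 1, 1, 2],
    'CREDIT_ACTIVE': ['Active', 'Closed', 'Closed', 'Active'],
})

# 1 for anything not 'Closed', matching the feature above, but vectorized
toy['ACTIVE'] = (toy['CREDIT_ACTIVE'] != 'Closed').astype(int)

# Mean of the 0/1 flag per customer = fraction of that customer's loans still active
pct = toy.groupby('SK_ID_CURR')['ACTIVE'].mean()
print(pct.to_dict())
```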
# + [markdown] _uuid="2a8d439f731a71449ae6852e4f8fdfc9aea1f6a7"
# # FEATURE 5
#
# # AVERAGE NUMBER OF DAYS BETWEEN SUCCESSIVE PAST APPLICATIONS FOR EACH CUSTOMER
#
# # How often did the customer take credit in the past? Was it spaced out at regular time intervals - a signal of good financial planning OR were the loans concentrated around a smaller time frame - indicating potential financial trouble?
#
# + _uuid="ef03e96b1e6fd9da97e0187d280cb0356ebce3ed"
B = bureau[0:10000]
# Groupby each Customer and Sort values of DAYS_CREDIT in ascending order
grp = B[['SK_ID_CURR', 'SK_ID_BUREAU', 'DAYS_CREDIT']].groupby(by = ['SK_ID_CURR'])
grp1 = grp.apply(lambda x: x.sort_values(['DAYS_CREDIT'], ascending = False)).reset_index(drop = True)
print("Grouping and Sorting done")
# Calculate Difference between the number of Days
grp1['DAYS_CREDIT1'] = grp1['DAYS_CREDIT']*-1
grp1['DAYS_DIFF'] = grp1.groupby(by = ['SK_ID_CURR'])['DAYS_CREDIT1'].diff()
grp1['DAYS_DIFF'] = grp1['DAYS_DIFF'].fillna(0).astype('uint32')
del grp1['DAYS_CREDIT1'], grp1['DAYS_CREDIT'], grp1['SK_ID_CURR']
gc.collect()
print("Difference days calculated")
B = B.merge(grp1, on = ['SK_ID_BUREAU'], how = 'left')
print("Difference in Dates between Previous CB applications is CALCULATED ")
print(B.shape)
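The sort-then-`diff` step above is easier to follow on a toy frame: since DAYS_CREDIT is negative (days before the application), sorting most-recent-first and differencing (with a sign flip) yields the gap in days between successive applications, with 0 for each customer's first loan:

```python
import pandas as pd

# Made-up DAYS_CREDIT values for one customer: applications 30, 100, 400 days ago
toy = pd.DataFrame({
    'SK_ID_CURR':  [1, 1, 1],
    'DAYS_CREDIT': [-30, -400, -100],
})

# Sort most-recent-first, then diff within each customer; flip the sign so
# gaps come out positive, and fill the first row (no predecessor) with 0
toy = toy.sort_values('DAYS_CREDIT', ascending=False)
toy['DAYS_DIFF'] = (toy.groupby('SK_ID_CURR')['DAYS_CREDIT']
                       .diff().mul(-1).fillna(0).astype(int))
print(toy['DAYS_DIFF'].tolist())  # [0, 70, 300]
```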
# + [markdown] _uuid="57863098361700d9fcb3b678602928afc61bdc89"
# # FEATURE 6
#
# # % of LOANS PER CUSTOMER WHERE END DATE FOR CREDIT IS PAST
#
# # INTERPRETING CREDIT_DAYS_ENDDATE
#
# # NEGATIVE VALUE - Credit date was in the past at time of application( Potential Red Flag !!! )
#
# # POSITIVE VALUE - Credit date is in the future at time of application ( Potential Good Sign !!!!)
#
# # NOTE : This is not the same as % of Active loans since Active loans
# # can have Negative and Positive values for DAYS_CREDIT_ENDDATE
# + _uuid="8c3402c9fdab2910bf03a0f14e1f312331de4450"
B = bureau[0:10000]
B['CREDIT_ENDDATE_BINARY'] = B['DAYS_CREDIT_ENDDATE']
def f(x):
if x<0:
y = 0
else:
y = 1
return y
B['CREDIT_ENDDATE_BINARY'] = B.apply(lambda x: f(x.DAYS_CREDIT_ENDDATE), axis = 1)
print("New Binary Column calculated")
grp = B.groupby(by = ['SK_ID_CURR'])['CREDIT_ENDDATE_BINARY'].mean().reset_index().rename(index=str, columns={'CREDIT_ENDDATE_BINARY': 'CREDIT_ENDDATE_PERCENTAGE'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
del B['CREDIT_ENDDATE_BINARY']
gc.collect()
print(B.shape)
# + [markdown] _uuid="b701705b4341d31c1a8c9cb780b96f6891edb1c6"
# # FEATURE 7
#
# # AVERAGE NUMBER OF DAYS IN WHICH CREDIT EXPIRES IN FUTURE -INDICATION OF CUSTOMER DELINQUENCY IN FUTURE??
# + _uuid="846b8e10e4d5800b45edeac31b134f5521e9e5c9"
# Repeating Feature 6 to calculate all transactions with ENDDATE as POSITIVE VALUES
B = bureau[0:10000]
# Dummy column to calculate 1 or 0 values. 1 for Positive CREDIT_ENDDATE and 0 for Negative
B['CREDIT_ENDDATE_BINARY'] = B['DAYS_CREDIT_ENDDATE']
def f(x):
if x<0:
y = 0
else:
y = 1
return y
B['CREDIT_ENDDATE_BINARY'] = B.apply(lambda x: f(x.DAYS_CREDIT_ENDDATE), axis = 1)
print("New Binary Column calculated")
# We take only positive values of ENDDATE since we are looking at Bureau Credit VALID IN FUTURE
# as of the date of the customer's loan application with Home Credit
B1 = B[B['CREDIT_ENDDATE_BINARY'] == 1].copy()  # .copy() so the assignments below don't raise SettingWithCopyWarning
B1.shape
#Calculate Difference in successive future end dates of CREDIT
# Create Dummy Column for CREDIT_ENDDATE
B1['DAYS_CREDIT_ENDDATE1'] = B1['DAYS_CREDIT_ENDDATE']
# Groupby Each Customer ID
grp = B1[['SK_ID_CURR', 'SK_ID_BUREAU', 'DAYS_CREDIT_ENDDATE1']].groupby(by = ['SK_ID_CURR'])
# Sort the values of CREDIT_ENDDATE for each customer ID
grp1 = grp.apply(lambda x: x.sort_values(['DAYS_CREDIT_ENDDATE1'], ascending = True)).reset_index(drop = True)
del grp
gc.collect()
print("Grouping and Sorting done")
# Calculate the Difference in ENDDATES and fill missing values with zero
grp1['DAYS_ENDDATE_DIFF'] = grp1.groupby(by = ['SK_ID_CURR'])['DAYS_CREDIT_ENDDATE1'].diff()
grp1['DAYS_ENDDATE_DIFF'] = grp1['DAYS_ENDDATE_DIFF'].fillna(0).astype('uint32')
del grp1['DAYS_CREDIT_ENDDATE1'], grp1['SK_ID_CURR']
gc.collect()
print("Difference days calculated")
# Merge new feature 'DAYS_ENDDATE_DIFF' with original Data frame for BUREAU DATA
B = B.merge(grp1, on = ['SK_ID_BUREAU'], how = 'left')
del grp1
gc.collect()
# Calculate Average of DAYS_ENDDATE_DIFF
grp = B[['SK_ID_CURR', 'DAYS_ENDDATE_DIFF']].groupby(by = ['SK_ID_CURR'])['DAYS_ENDDATE_DIFF'].mean().reset_index().rename( index = str, columns = {'DAYS_ENDDATE_DIFF': 'AVG_ENDDATE_FUTURE'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
del grp
#del B['DAYS_ENDDATE_DIFF']
del B['CREDIT_ENDDATE_BINARY'], B['DAYS_CREDIT_ENDDATE']
gc.collect()
print(B.shape)
# + _uuid="4c136670248ed676d85a7b57fb5710ac0187488e"
# Verification of Feature
B[B['SK_ID_CURR'] == 100653]
# In the data frame below we have 3 non-NaN values
# Average of 3 values = (0 + 0 + 3292)/3 = 1097.33
# The NaN values are not considered since those rows DO NOT HAVE A FUTURE CREDIT END DATE
# + [markdown] _uuid="1b1a219ed085a91c3d4d31d97094b400f9b38076"
# # FEATURE 8 - DEBT OVER CREDIT RATIO
# # The Ratio of Total Debt to Total Credit for each Customer
# # A High value may be a red flag indicative of potential default
# + _uuid="224c5ae47cd15eb3c44f8a635182eb7cddab93e8"
B[~B['AMT_CREDIT_SUM_LIMIT'].isnull()][0:2]
# WE can see in the Table Below
# AMT_CREDIT_SUM = AMT_CREDIT_SUM_DEBT + AMT_CREDIT_SUM_LIMIT
# + _uuid="d4985f67f37f089b0b5ef5b39229f8ee1850a234"
B = bureau[0:10000]
B['AMT_CREDIT_SUM_DEBT'] = B['AMT_CREDIT_SUM_DEBT'].fillna(0)
B['AMT_CREDIT_SUM'] = B['AMT_CREDIT_SUM'].fillna(0)
grp1 = B[['SK_ID_CURR', 'AMT_CREDIT_SUM_DEBT']].groupby(by = ['SK_ID_CURR'])['AMT_CREDIT_SUM_DEBT'].sum().reset_index().rename( index = str, columns = { 'AMT_CREDIT_SUM_DEBT': 'TOTAL_CUSTOMER_DEBT'})
grp2 = B[['SK_ID_CURR', 'AMT_CREDIT_SUM']].groupby(by = ['SK_ID_CURR'])['AMT_CREDIT_SUM'].sum().reset_index().rename( index = str, columns = { 'AMT_CREDIT_SUM': 'TOTAL_CUSTOMER_CREDIT'})
B = B.merge(grp1, on = ['SK_ID_CURR'], how = 'left')
B = B.merge(grp2, on = ['SK_ID_CURR'], how = 'left')
del grp1, grp2
gc.collect()
B['DEBT_CREDIT_RATIO'] = B['TOTAL_CUSTOMER_DEBT']/B['TOTAL_CUSTOMER_CREDIT']
del B['TOTAL_CUSTOMER_DEBT'], B['TOTAL_CUSTOMER_CREDIT']
gc.collect()
print(B.shape)
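One caveat with the ratio above (and with Feature 9 below): the denominator can be zero when a customer's credit amounts are all missing or zero, producing inf/NaN. A small guard, sketched here on hypothetical totals rather than the notebook's frame:

```python
import numpy as np
import pandas as pd

# Hypothetical per-customer totals; the second customer has zero recorded credit
totals = pd.DataFrame({
    'TOTAL_CUSTOMER_DEBT':   [50_000.0, 0.0],
    'TOTAL_CUSTOMER_CREDIT': [200_000.0, 0.0],
})

# Plain division yields NaN (0/0) or inf (x/0); replace those with 0
ratio = totals['TOTAL_CUSTOMER_DEBT'] / totals['TOTAL_CUSTOMER_CREDIT']
ratio = ratio.replace([np.inf, -np.inf], np.nan).fillna(0)
print(ratio.tolist())  # [0.25, 0.0]
```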
# + [markdown] _uuid="9c28c097baf6007a1f0412e164dd8ab3b173936e"
# # FEATURE 9 - OVERDUE OVER DEBT RATIO
# # What fraction of total Debt is overdue per customer?
# # A high value could indicate a potential DEFAULT
# + _uuid="9c39380be96689344da9b9008193ff05343ff1dc"
B = bureau[0:10000]
B['AMT_CREDIT_SUM_DEBT'] = B['AMT_CREDIT_SUM_DEBT'].fillna(0)
B['AMT_CREDIT_SUM_OVERDUE'] = B['AMT_CREDIT_SUM_OVERDUE'].fillna(0)
grp1 = B[['SK_ID_CURR', 'AMT_CREDIT_SUM_DEBT']].groupby(by = ['SK_ID_CURR'])['AMT_CREDIT_SUM_DEBT'].sum().reset_index().rename( index = str, columns = { 'AMT_CREDIT_SUM_DEBT': 'TOTAL_CUSTOMER_DEBT'})
grp2 = B[['SK_ID_CURR', 'AMT_CREDIT_SUM_OVERDUE']].groupby(by = ['SK_ID_CURR'])['AMT_CREDIT_SUM_OVERDUE'].sum().reset_index().rename( index = str, columns = { 'AMT_CREDIT_SUM_OVERDUE': 'TOTAL_CUSTOMER_OVERDUE'})
B = B.merge(grp1, on = ['SK_ID_CURR'], how = 'left')
B = B.merge(grp2, on = ['SK_ID_CURR'], how = 'left')
del grp1, grp2
gc.collect()
B['OVERDUE_DEBT_RATIO'] = B['TOTAL_CUSTOMER_OVERDUE']/B['TOTAL_CUSTOMER_DEBT']
del B['TOTAL_CUSTOMER_OVERDUE'], B['TOTAL_CUSTOMER_DEBT']
gc.collect()
print(B.shape)
# + [markdown] _uuid="8858ab1aab13d02cd1581f844fb0e3f31c752973"
# # FEATURE 10 - AVERAGE NUMBER OF LOANS PROLONGED
# + _uuid="b8f7432e83fa1e6b6fed1474ff88049553e1dbcf"
B = bureau[0:10000]
B['CNT_CREDIT_PROLONG'] = B['CNT_CREDIT_PROLONG'].fillna(0)
grp = B[['SK_ID_CURR', 'CNT_CREDIT_PROLONG']].groupby(by = ['SK_ID_CURR'])['CNT_CREDIT_PROLONG'].mean().reset_index().rename( index = str, columns = { 'CNT_CREDIT_PROLONG': 'AVG_CREDITDAYS_PROLONGED'})
B = B.merge(grp, on = ['SK_ID_CURR'], how = 'left')
print(B.shape)
# + _uuid="12ed8bfe2dfa59e2d5db1e6ba7feaf2bd642b896"
| first_project/home-credit-bureau-data-feature-engineering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn import datasets
dir(datasets)
from sklearn.datasets import load_breast_cancer
data=load_breast_cancer()
dir(data)
print(data.DESCR)
print(data.data)
print(data.feature_names)
print(type(data.data))
ls=list(data.data)
# ls
print(data.filename)
print(data.target)
print(data.target_names)
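The cells above only inspect the loaded `Bunch`; a natural next step (my addition, not part of the original notebook) is a train/test split and a baseline classifier on the same data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=5000)  # raise max_iter so lbfgs converges on unscaled features
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(round(acc, 3))
```

Accuracy on this split is typically well above 0.9; scaling the features first (e.g. `StandardScaler` in a `Pipeline`) usually helps further.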
| ML/BC_sklearn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# var.py
import datetime
import numpy as np
import pandas_datareader as web
from scipy.stats import norm
def var_cov_var(P, c, mu, sigma):
"""
Variance-Covariance calculation of daily Value-at-Risk
using confidence level c, with mean of returns mu
and standard deviation of returns sigma, on a portfolio
of value P.
"""
alpha = norm.ppf(1-c, mu, sigma)
return P - P*(alpha + 1)
if __name__ == "__main__":
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2014, 1, 1)
citi = web.DataReader("C", 'yahoo', start, end)
citi["rets"] = citi["Adj Close"].pct_change()
P = 1e6 # 1,000,000 USD
c = 0.99 # 99% confidence interval
mu = np.mean(citi["rets"])
sigma = np.std(citi["rets"])
var = var_cov_var(P, c, mu, sigma)
print ("Value-at-Risk: $%0.2f" % var)
# -
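The Yahoo endpoint used by `pandas_datareader` above is frequently unavailable, so as a fallback the same calculation can be exercised on synthetic daily returns (the numbers below are illustrative only, not market data):

```python
import numpy as np
from scipy.stats import norm

def var_cov_var(P, c, mu, sigma):
    """Variance-covariance daily VaR at confidence c for portfolio value P."""
    alpha = norm.ppf(1 - c, mu, sigma)
    return P - P * (alpha + 1)

rng = np.random.default_rng(0)
rets = rng.normal(loc=0.0005, scale=0.02, size=1000)  # fake daily returns
var = var_cov_var(P=1e6, c=0.99, mu=rets.mean(), sigma=rets.std())
print("VaR: $%0.2f" % var)
```

With a 2% daily volatility this lands in the tens of thousands of dollars on a $1M portfolio, i.e. roughly `P * 2.33 * sigma` at the 99% level.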
| Var calculations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: distcal
# language: python
# name: distcal
# ---
import numpy as np
import pandas as pd
# Use identical random seed to Rodrigues and Pereira (2018)
np.random.seed(42)
# Read data set
dataset = pd.read_csv("data/motorcycle.csv", sep=",")
dataset.head()
# Standardization as done for the DeepJMQR approach
original_series = dataset["accel"].to_numpy()  # .as_matrix() was removed in pandas 1.0
y_mean = original_series.mean()
y_std = original_series.std()
original_series = (original_series - y_mean) / y_std
# Use identical train / test split
# +
TRAIN_PERC = 0.66
n_train = int(TRAIN_PERC*len(original_series))
ix = np.random.permutation(len(original_series))
ix_train = ix[:n_train]
ix_train = np.array(sorted(ix_train))
ix_test = ix[n_train:]
ix_test = np.array(sorted(ix_test))
X_train = ix_train[:,np.newaxis]
X_mean = X_train.mean(axis=0)
X_std = X_train.std(axis=0)
X_train = (X_train - X_mean) / X_std
X_test = ix_test[:,np.newaxis]
X_test = (X_test - X_mean) / X_std
y_train = original_series[ix_train]
y_test = original_series[ix_test]
# -
motorcycle_train = pd.DataFrame(np.concatenate([X_train, y_train[:,None]], axis=1))
motorcycle_train.columns = ["times", "accel"]
motorcycle_test = pd.DataFrame(np.concatenate([X_test, y_test[:,None]], axis=1))
motorcycle_test.columns = motorcycle_train.columns
motorcycle_train.to_csv("data/motorcycle_train.csv", index=False)
motorcycle_test.to_csv("data/motorcycle_test.csv", index=False)
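Note the pattern used above: the test inputs are standardized with the *training* mean and standard deviation, never with their own statistics, so no test information leaks into preprocessing. A compact sketch of that rule on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)
X_train = rng.normal(10, 3, size=(100, 1))
X_test = rng.normal(10, 3, size=(30, 1))

mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
X_train_s = (X_train - mu) / sd  # exactly zero mean, unit std by construction
X_test_s = (X_test - mu) / sd    # reuses TRAIN statistics; mean/std only approximately 0/1

print(abs(X_train_s.mean()), abs(X_train_s.std() - 1))
```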
| benchmarks/deepDistModels/create_motorcycle_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Working with FB Prophet
# ## [Trend Changepoints](https://facebookincubator.github.io/prophet/docs/trend_changepoints.html) example from FB page
#
# Look at the time series of daily page views for the Wikipedia page for <NAME>. The csv is available [here](https://github.com/facebookincubator/prophet/blob/master/examples/example_wp_peyton_manning.csv)
# + deletable=true editable=true
#wp_R_dataset_url = 'https://github.com/facebookincubator/prophet/blob/master/examples/example_wp_R.csv'
wp_peyton_manning_filename = '../datasets/example_wp_peyton_manning.csv'
# + deletable=true editable=true
import pandas as pd
import numpy as np
from fbprophet import Prophet
# + [markdown] deletable=true editable=true
# ### import the data and transform to log-scale
# + deletable=true editable=true
df = pd.read_csv(wp_peyton_manning_filename)
# transform to log scale
df['y']=np.log(df['y'])
df.head()
# -
# Looking at this dataset, it is clear that real time series frequently have abrupt changes in their trajectories. By default, Prophet will automatically detect these changepoints and will allow the trend to adapt appropriately. However, if you wish to have finer control over this process (e.g., Prophet missed a rate change, or is overfitting rate changes in the history), then there are several input arguments you can use.
#
# ### Automatic changepoint detection in Prophet
# <hr>
#
# Prophet detects changepoints by first specifying a large number of _potential changepoints_ at which the rate is allowed to change. It then puts a sparse prior on the magnitudes of the rate of changes. This is equivalent to L1 regularization, meaning that Prophet has a large number of _possible_ places where the rate can change, but will use as few of them as possible. Consider the Peyton Manning forecast in the Quickstart example. By default, Prophet specifies 25 potential changepoints which are uniformly placed in the first 80% of the time series.
# + deletable=true editable=true
m = Prophet()
m.fit(df);
# + deletable=true editable=true
future = m.make_future_dataframe(periods=365)
forecast = m.predict(future)
forecast.tail()
# + deletable=true editable=true
# %matplotlib inline
import matplotlib.pyplot as plt
# -
# Below, the dashed lines show where the potential changepoints were placed.
ax = m.plot(forecast)
for ts in m.changepoints.values:
plt.vlines(x=ts, ymin=5, ymax=13, linestyles='--')
# Even though we have a lot of places where the rate can possibly change because of the sparse prior, most of these changepoints are unused (i.e. 0). We can determine this by plotting the magnitude of the rate change at each changepoint:
# + deletable=true editable=true
deltas = m.params['delta'].mean(0)
fig = plt.figure(facecolor='w', figsize=(10, 6))
ax = fig.add_subplot(111)
ax.bar(range(len(deltas)), deltas, facecolor='#0072B2', edgecolor='#2072B2')
ax.grid(True, which='major', c='gray', ls='-', lw=1, alpha=0.2)
ax.set_ylabel('Rate change')
ax.set_xlabel('Potential changepoint')
fig.tight_layout()
# + [markdown] deletable=true editable=true
# The number of potential changepoints can be set using the argument `n_changepoints`, but this is better tuned by adjusting the regularization.
#
# <hr>
# ### Adjusting trend flexibility
# If the trend changes are being overfit (too much flexibility) or underfit (not enough flexibility), you can adjust the strength of the sparse prior using the input argument `changepoint_prior_scale`. By default, this parameter is set to 0.05. Increasing it will make the trend _more_ flexible:
# + deletable=true editable=true
m = Prophet(changepoint_prior_scale=0.5)
forecast = m.fit(df).predict(future)
m.plot(forecast);
# + deletable=true editable=true
deltas = m.params['delta'].mean(0)
fig = plt.figure(facecolor='w', figsize=(10, 6))
ax = fig.add_subplot(111)
ax.bar(range(len(deltas)), deltas, facecolor='#0092B2', edgecolor='#2072B2')
ax.grid(True, which='major', c='gray', ls='-', lw=1, alpha=0.2)
ax.set_ylabel('Rate change')
ax.set_xlabel('Potential changepoint')
fig.tight_layout()
# -
# Whereas decreasing it will make the trend _less_ flexible
m = Prophet(changepoint_prior_scale=0.001)
forecast = m.fit(df).predict(future)
m.plot(forecast);
deltas = m.params['delta'].mean(0)
fig = plt.figure(facecolor='w', figsize=(10, 6))
ax = fig.add_subplot(111)
ax.bar(range(len(deltas)), deltas, facecolor='#0032B2', edgecolor='#2072B2')
ax.grid(True, which='major', c='gray', ls='-', lw=1, alpha=0.2)
ax.set_ylabel('Rate change')
ax.set_xlabel('Potential changepoint')
fig.tight_layout()
# ### Specifying the locations of the changepoints
#
# If you wish, rather than using automatic changepoint detection you can manually specify the locations of potential changepoints with the `changepoints` argument.
m = Prophet(changepoints=['2014-01-01'])
forecast = m.fit(df).predict(future)
m.plot(forecast);
| notebooks/Prophet_TrendChangepoints_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="3XxjL75hCVgC"
# # Paired Style Transfer using Pix2Pix GAN
# In their work titled “[Image to Image Translation with Conditional Adversarial Networks](https://arxiv.org/abs/1611.07004)”, Isola and Zhu et. al. present a conditional GAN network which is able to learn task-specific loss functions and thus work across datasets. As the name suggests, this GAN architecture takes a specific type of image as input and transforms it into a different domain. It is called pair-wise style transfer as the training set needs to have samples from both the source and target domains.
# -
# [](https://colab.research.google.com/github/PacktPublishing/Hands-On-Generative-AI-with-Python-and-TensorFlow-2/blob/master/Chapter_7/pix2pix/pix2pix.ipynb)
# + [markdown] colab_type="text" id="RzLBxBnsCbER"
# ## Load Libraries
# + colab={} colab_type="code" id="gcMOeO0pyB2K"
from tensorflow.keras.layers import Input, Concatenate
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from matplotlib import pyplot as plt
import tensorflow as tf
import numpy as np
# + [markdown] colab_type="text" id="rleiC7NiCh3A"
# ## Load Utilities
# + colab={} colab_type="code" id="IOxxFhFMxs1X"
from gan_utils import downsample_block, upsample_block, discriminator_block
from data_utils import plot_sample_images, batch_generator, get_samples
# + [markdown] colab_type="text" id="--ZzF52jCkgj"
# ## Set Configs
# + colab={} colab_type="code" id="pyFbnax9yI8M"
params = {'legend.fontsize': 'x-large',
'figure.figsize': (8,8),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large'}
plt.rcParams.update(params)
# + colab={} colab_type="code" id="bBE6UENLyUmf"
IMG_WIDTH = 256
IMG_HEIGHT = 256
DOWNLOAD_URL = 'https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/maps.tar.gz'
# + [markdown] colab_type="text" id="NfV0CZCZCpq7"
# ## U-Net Generator
# The U-Net architecture uses skip connections to shuttle important features between the input and outputs. In case of pix2pix GAN, skip connections are added between every _ith_ down-sampling and _(n-i)th_ over-sampling layers, where n is the total number of layers in the generator. The skip connection leads to concatenation of all channels from the _ith_ and _(n-i)th_ layers.
# + colab={} colab_type="code" id="0zruEl9Hyb4L"
def build_generator(img_shape,channels=3,num_filters=64):
# Image input
input_layer = Input(shape=img_shape)
# Downsampling
down_sample_1 = downsample_block(input_layer,
num_filters,
batch_normalization=False)
# rest of the down-sampling blocks have batch_normalization=true
down_sample_2 = downsample_block(down_sample_1, num_filters*2)
down_sample_3 = downsample_block(down_sample_2, num_filters*4)
down_sample_4 = downsample_block(down_sample_3, num_filters*8)
down_sample_5 = downsample_block(down_sample_4, num_filters*8)
down_sample_6 = downsample_block(down_sample_5, num_filters*8)
down_sample_7 = downsample_block(down_sample_6, num_filters*8)
# Upsampling blocks with skip connections
upsample_1 = upsample_block(down_sample_7, down_sample_6, num_filters*8)
upsample_2 = upsample_block(upsample_1, down_sample_5, num_filters*8)
upsample_3 = upsample_block(upsample_2, down_sample_4, num_filters*8)
upsample_4 = upsample_block(upsample_3, down_sample_3, num_filters*8)
upsample_5 = upsample_block(upsample_4, down_sample_2, num_filters*2)
upsample_6 = upsample_block(upsample_5, down_sample_1, num_filters)
upsample_7 = UpSampling2D(size=2)(upsample_6)
output_img = Conv2D(channels,
kernel_size=4,
strides=1,
padding='same',
activation='tanh')(upsample_7)
return Model(input_layer, output_img)
# + [markdown] colab_type="text" id="w55ZM7L5CwKc"
# ## Patch-GAN Discriminator
# The authors of pix2pix propose a Patch-GAN setup for the discriminator which takes the required inputs and generates an output of size NxN. Each $x_{ij}$ element of the NxN output signifies whether the corresponding patch ij in the generated image is real or fake. Each output patch can be traced back to its initial input patch based on the effective receptive field of each layer.
# + colab={} colab_type="code" id="F1HfxYij4TnL"
def build_discriminator(img_shape,num_filters=64):
input_img = Input(shape=img_shape)
cond_img = Input(shape=img_shape)
# Concatenate input and conditioning image by channels
# as input for discriminator
combined_input = Concatenate(axis=-1)([input_img, cond_img])
# First discriminator block does not use batch_normalization
disc_block_1 = discriminator_block(combined_input,
num_filters,
batch_normalization=False)
disc_block_2 = discriminator_block(disc_block_1, num_filters*2)
disc_block_3 = discriminator_block(disc_block_2, num_filters*4)
disc_block_4 = discriminator_block(disc_block_3, num_filters*8)
output = Conv2D(1, kernel_size=4, strides=1, padding='same')(disc_block_4)
return Model([input_img, cond_img], output)
# + [markdown] colab_type="text" id="IRHl_qp1C4WS"
# ## Custom Training Loop
# + colab={} colab_type="code" id="k0TEcRRV5-Ab"
def train(generator,
discriminator,
gan,
patch_gan_shape,
epochs,
path='/content/maps' ,
batch_size=1,
sample_interval=50):
# Ground truth shape/ Patch-GAN outputs
real_y = np.ones((batch_size,) + patch_gan_shape)
fake_y = np.zeros((batch_size,) + patch_gan_shape)
for epoch in range(epochs):
print("Epoch={}".format(epoch))
for idx, (imgs_source, imgs_cond) in enumerate(batch_generator(path=path,
batch_size=batch_size,
img_res=[IMG_HEIGHT, IMG_WIDTH])):
# train discriminator
# generator generates outputs based on conditioned input images
fake_imgs = generator.predict([imgs_cond])
# calculate discriminator loss on real samples
disc_loss_real = discriminator.train_on_batch([imgs_source, imgs_cond],
real_y)
# calculate discriminator loss on fake samples
disc_loss_fake = discriminator.train_on_batch([fake_imgs, imgs_cond],
fake_y)
# overall discriminator loss
discriminator_loss = 0.5 * np.add(disc_loss_real, disc_loss_fake)
# train generator
gen_loss = gan.train_on_batch([imgs_source, imgs_cond],
[real_y, imgs_source])
# training updates every 50 iterations
if idx % 50 == 0:
print ("[Epoch {}/{}] [Discriminator loss: {}, accuracy: {}] [Generator loss: {}]".format(epoch,
epochs,
discriminator_loss[0],
100*discriminator_loss[1],
gen_loss[0]))
# Plot and Save progress every few iterations
if idx % sample_interval == 0:
plot_sample_images(generator=generator,
path=path,
epoch=epoch,
batch_num=idx,
output_dir='images')
# + [markdown] colab_type="text" id="VlwcwRYAC7Ql"
# ## Get Discriminator
# + colab={} colab_type="code" id="q5TAA6FE-K6a"
discriminator = build_discriminator(img_shape=(IMG_HEIGHT,IMG_WIDTH,3),
num_filters=64)
discriminator.compile(loss='mse',
optimizer=Adam(0.0002, 0.5),
metrics=['accuracy'])
# + [markdown] colab_type="text" id="2R8kvmHCC-Nw"
# ## Get Generator and GAN Model Objects
# + colab={} colab_type="code" id="j4GaR-MU-SHa"
generator = build_generator(img_shape=(IMG_HEIGHT,IMG_WIDTH,3),
channels=3,
num_filters=64)
source_img = Input(shape=(IMG_HEIGHT,IMG_WIDTH,3))
cond_img = Input(shape=(IMG_HEIGHT,IMG_WIDTH,3))
fake_img = generator(cond_img)
discriminator.trainable = False
output = discriminator([fake_img, cond_img])
gan = Model(inputs=[source_img, cond_img], outputs=[output, fake_img])
gan.compile(loss=['mse', 'mae'],
loss_weights=[1, 100],
optimizer=Adam(0.0002, 0.5))
# + [markdown] colab_type="text" id="GzEnfuwh12D2"
# ## Calculate Receptive Field for Patch
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Jzij_HAZ1x12" outputId="fa459cca-61cf-4b7d-fd39-05e911d7d303"
# Based on example : https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/39#issuecomment-368239697
def get_receptive_field(output_size, ksize, stride):
return (output_size - 1) * stride + ksize
last_layer = get_receptive_field(output_size=1, ksize=4, stride=1)
# Receptive field: 4
fourth_layer = get_receptive_field(output_size=last_layer, ksize=4, stride=1)
# Receptive field: 7
third_layer = get_receptive_field(output_size=fourth_layer, ksize=4, stride=2)
# Receptive field: 16
second_layer = get_receptive_field(output_size=third_layer, ksize=4, stride=2)
# Receptive field: 34
first_layer = get_receptive_field(output_size=second_layer, ksize=4, stride=2)
# Receptive field: 70
print(first_layer)
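The chained calls above can be collapsed into a loop over the discriminator's layer configuration, walking from the output layer back to the input (the (ksize, stride) pairs below are taken from the code above):

```python
def get_receptive_field(output_size, ksize, stride):
    # Standard receptive-field recurrence for a conv layer
    return (output_size - 1) * stride + ksize

# (ksize, stride) from the output layer back to the input layer
layers = [(4, 1), (4, 1), (4, 2), (4, 2), (4, 2)]
rf = 1
for ksize, stride in layers:
    rf = get_receptive_field(rf, ksize, stride)
print(rf)  # 70
```

This reproduces the 70x70 receptive field quoted in the comments, and makes it easy to re-derive the patch size if the discriminator depth or strides change.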
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="gYRRzaIO-3SP" outputId="df31054d-6352-4a84-a3e9-40be4a8c1f8c"
# prepare patch size for our setup
patch = int(IMG_HEIGHT / 2**4)
patch_gan_shape = (patch, patch, 1)
print("Patch Shape={}".format(patch_gan_shape))
# + [markdown] colab_type="text" id="PJ2ACwC6DH0Z"
# ## Download Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="6mazk3aJ_38O" outputId="5789b96b-3bbd-4699-9a62-ad09f08c3fc1"
tf.keras.utils.get_file('maps.tar.gz',
origin=DOWNLOAD_URL,
cache_subdir='/content',
extract=True)
# + [markdown] colab_type="text" id="3N6uaWRpDKNF"
# ## Training Begins!
# + colab={"base_uri": "https://localhost:8080/", "height": 841} colab_type="code" id="fu77NcLY_o2s" outputId="7d77d98c-fc40-4d39-ec84-d7cb69f929e2"
train(generator, discriminator, gan, patch_gan_shape, epochs=200, batch_size=1, sample_interval=200)
# + colab={} colab_type="code" id="EwLZhhOzAudk"
| Chapter_7/pix2pix/.ipynb_checkpoints/pix2pix-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root]
# language: python
# name: conda-root-py
# ---
# %matplotlib notebook
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
df = pd.merge(mouse_metadata, study_results, how = "left", on = "Mouse ID")
# Display the data table for preview
df.head()
# -
# Checking the number of mice.
mice =df["Mouse ID"].value_counts()
number_of_mice=len(mice)
number_of_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = df.loc[df.duplicated(subset=['Mouse ID', 'Timepoint',]),'Mouse ID'].unique()
duplicate_mice
# Optional: Get all the data for the duplicate mouse ID.
all_duplicate_mouse_id=pd.DataFrame(duplicate_mice)
all_duplicate_mouse_id
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = df[df['Mouse ID'].isin(duplicate_mice)==False]
clean_df
# Checking the number of mice in the clean DataFrame.
clean_mice=clean_df["Mouse ID"].value_counts()
clean_number_of_mice=len(clean_mice)
clean_number_of_mice
# +
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
regimen_mean = clean_df.groupby('Drug Regimen').mean()["Tumor Volume (mm3)"]
regimen_mean
regimen_median = clean_df.groupby('Drug Regimen').median()["Tumor Volume (mm3)"]
regimen_median
regimen_variance = clean_df.groupby('Drug Regimen').var()["Tumor Volume (mm3)"]
regimen_variance
regimen_std = clean_df.groupby('Drug Regimen').std()["Tumor Volume (mm3)"]
regimen_std
regimen_sem = clean_df.groupby('Drug Regimen').sem()["Tumor Volume (mm3)"]
regimen_sem
# +
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
summary_stats_table = pd.DataFrame({"Mean": regimen_mean, "Median":regimen_median, "Variance":regimen_variance, "Standard Deviation": regimen_std, "SEM": regimen_sem})
summary_stats_table
# -
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
single_group_by = clean_df.groupby('Drug Regimen')
summary_stats_table_2 = single_group_by.agg(['mean','median','var','std','sem'])["Tumor Volume (mm3)"]
summary_stats_table_2
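The single `agg` call above builds all five statistics in one pass; a minimal sketch on toy data (hypothetical regimen names and volumes) shows the shape of the result:

```python
import pandas as pd

# Toy data standing in for clean_df; regimen names and volumes are made up.
toy = pd.DataFrame({
    "Drug Regimen": ["A", "A", "A", "B", "B", "B"],
    "Tumor Volume (mm3)": [40.0, 42.0, 44.0, 50.0, 55.0, 60.0],
})

# One groupby + agg produces the same wide summary table as above.
summary = (toy.groupby("Drug Regimen")["Tumor Volume (mm3)"]
              .agg(["mean", "median", "var", "std", "sem"]))
```

Selecting the column before aggregating, as here, is equivalent to aggregating the whole frame and indexing afterwards.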
# +
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
total_measurements = clean_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
plot_pandas = total_measurements.plot.bar(figsize=(6,6), color='b', fontsize = 14)
plt.xlabel("Drug Regimen",fontsize = 8)
plt.ylabel("Number of Mice",fontsize = 8)
plt.title("Total Measurements",fontsize = 20)
total_measurements
# -
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
mice_list =(clean_df.groupby(["Drug Regimen"])["Mouse ID"].count()).tolist()
mice_list
# +
x_axis = np.arange(len(total_measurements))
fig1, ax1 = plt.subplots(figsize=(8, 8))
plt.bar(x_axis, mice_list, color='b', alpha=0.8, align='center')
tick_locations = [value for value in x_axis]
plt.xticks(tick_locations, ['Capomulin', 'Ceftamin', 'Infubinol', 'Ketapril', 'Naftisol', 'Placebo', 'Propriva', 'Ramicane', 'Stelasyn', 'Zoniferol'], rotation='vertical')
plt.xlim(-0.75, len(x_axis)-0.25)
plt.ylim(0, max(mice_list)+10)
plt.title("Total Measurements",fontsize = 20)
plt.xlabel("Drug Regimen",fontsize = 14)
plt.ylabel("Number of Mice",fontsize = 14)
# +
# Generate a pie plot showing the distribution of female versus male mice using pandas
groupby_gender = clean_df.groupby(["Mouse ID", "Sex"])
groupby_gender
gender_df = pd.DataFrame(groupby_gender.size())
mouse_gender = pd.DataFrame(gender_df.groupby(["Sex"]).count())
mouse_gender.columns = ["Total Count"]
mouse_gender["Percentage of Sex"] = 100 * mouse_gender["Total Count"] / mouse_gender["Total Count"].sum()
mouse_gender
# +
colors = ['green', 'blue']
explode = (0.1, 0)
plot = mouse_gender.plot.pie(y='Total Count',figsize=(6,6), colors = colors, startangle=140, explode = explode, shadow = True, autopct="%1.1f%%")
plt.title('Male vs Female Mouse Population',fontsize = 20)
plt.ylabel('Sex',fontsize = 14)
plt.axis("equal")
# +
# Generate a pie plot showing the distribution of female versus male mice using pyplot
labels = ["Female","Male"]
sizes = [49.799197,50.200803]
colors = ['green', 'blue']
explode = (0.1, 0)
fig1, ax1 = plt.subplots(figsize=(6, 6))
plt.pie(sizes, explode=explode,labels=labels, colors=colors, autopct="%1.1f%%", shadow=True, startangle=140,)
plt.title('Male vs Female Mouse Population',fontsize = 20)
plt.ylabel('Sex',fontsize = 14)
plt.axis("equal")
# +
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
capomulin_df = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin",:]
ramicane_df = clean_df.loc[clean_df["Drug Regimen"] == "Ramicane", :]
infubinol_df = clean_df.loc[clean_df["Drug Regimen"] == "Infubinol", :]
ceftamin_df = clean_df.loc[clean_df["Drug Regimen"] == "Ceftamin", :]
# Start by getting the last (greatest) timepoint for each mouse
# Capomulin
capomulin_last = capomulin_df.groupby('Mouse ID').max()['Timepoint']
capomulin_vol = pd.DataFrame(capomulin_last)
# Ramicane
ramicane_last = ramicane_df.groupby('Mouse ID').max()['Timepoint']
ramicane_vol = pd.DataFrame(ramicane_last)
# Infubinol
infubinol_last = infubinol_df.groupby('Mouse ID').max()['Timepoint']
infubinol_vol = pd.DataFrame(infubinol_last)
# Ceftamin
ceftamin_last = ceftamin_df.groupby('Mouse ID').max()['Timepoint']
ceftamin_vol = pd.DataFrame(ceftamin_last)
# -
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
capomulin_merge = pd.merge(capomulin_vol, clean_df, on=("Mouse ID","Timepoint"),how="left")
capomulin_merge.head()
ramicane_merge = pd.merge(ramicane_vol, clean_df, on=("Mouse ID","Timepoint"),how="left")
ramicane_merge.head()
infubinol_merge = pd.merge(infubinol_vol, clean_df, on=("Mouse ID","Timepoint"),how="left")
infubinol_merge.head()
ceftamin_merge = pd.merge(ceftamin_vol, clean_df, on=("Mouse ID","Timepoint"),how="left")
ceftamin_merge.head()
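The max-timepoint-then-merge pattern in the cells above can be sketched on a toy frame (made-up mouse IDs and volumes): take each mouse's greatest Timepoint, then left-merge back on (`Mouse ID`, `Timepoint`) to recover the final tumor volume.

```python
import pandas as pd

# Toy stand-in for one regimen's rows; IDs and volumes are hypothetical.
toy = pd.DataFrame({
    "Mouse ID": ["m1", "m1", "m2"],
    "Timepoint": [0, 45, 30],
    "Tumor Volume (mm3)": [45.0, 38.2, 52.5],
})

# Last (greatest) timepoint per mouse, then merge to fetch that row's volume.
last = toy.groupby("Mouse ID")["Timepoint"].max().reset_index()
final = pd.merge(last, toy, on=["Mouse ID", "Timepoint"], how="left")
```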
# +
# Put treatments into a list for a for loop (and later for plot labels)
treatment_list = [capomulin_merge, ramicane_merge, infubinol_merge, ceftamin_merge]
for drug in treatment_list:
    print(drug.head())
# +
capomulin_tumors = capomulin_merge["Tumor Volume (mm3)"]
quartiles =capomulin_tumors.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of Capomulin tumors: {lowerq}")
print(f"The upper quartile of Capomulin tumors: {upperq}")
print(f"The interquartile range of Capomulin tumors: {iqr}")
print(f"The median of Capomulin tumors: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# +
ramicane_tumors = ramicane_merge["Tumor Volume (mm3)"]
quartiles =ramicane_tumors.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of Ramicane tumors is: {lowerq}")
print(f"The upper quartile of Ramicane tumors is: {upperq}")
print(f"The interquartile range of Ramicane tumors is: {iqr}")
print(f"The median of Ramicane tumors is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# +
infubinol_tumors = infubinol_merge["Tumor Volume (mm3)"]
quartiles =infubinol_tumors.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of Infubinol tumors is: {lowerq}")
print(f"The upper quartile of Infubinol tumors is: {upperq}")
print(f"The interquartile range of Infubinol tumors is: {iqr}")
print(f"The median of Infubinol tumors is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# +
ceftamin_tumors = ceftamin_merge["Tumor Volume (mm3)"]
quartiles = ceftamin_tumors.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of Ceftamin tumors is: {lowerq}")
print(f"The upper quartile of Ceftamin tumors is: {upperq}")
print(f"The interquartile range of Ceftamin tumors is: {iqr}")
print(f"The median of Ceftamin tumors is: {quartiles[0.5]} ")
# Determine outliers using upper and lower bounds
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
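The four near-identical quartile cells above all apply the same Tukey-fence arithmetic; as a sketch, it can be factored into one standard-library helper (`method='inclusive'` matches the linear interpolation pandas' `quantile()` uses by default):

```python
import statistics

def iqr_outlier_bounds(values):
    """Tukey's rule: points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are
    potential outliers."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr
```

With such a helper, the per-regimen cells collapse into a loop over the four tumor-volume series.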
# +
data_to_plot = [capomulin_tumors, ramicane_tumors, infubinol_tumors, ceftamin_tumors]
Regimen= ['Capomulin', 'Ramicane', 'Infubinol','Ceftamin']
fig1, ax1 = plt.subplots(figsize=(6, 6))
ax1.set_title('Final Tumor Volume by Drug Regimen',fontsize =12)
ax1.set_ylabel('Final Tumor Volume (mm3)',fontsize = 10)
ax1.set_xlabel('Drug Regimen',fontsize = 10)
ax1.boxplot(data_to_plot, labels=Regimen, widths = 0.4, patch_artist=True,vert=True)
plt.ylim(10, 80)
plt.show()
# -
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
forline_df = capomulin_df.loc[capomulin_df["Mouse ID"] == "f966",:]
forline_df.head()
# +
x_axis = forline_df["Timepoint"]
tumsiz = forline_df["Tumor Volume (mm3)"]
fig1, ax1 = plt.subplots(figsize=(6, 6))
plt.title('Capomulin treatment of mouse f966',fontsize =10)
plt.plot(x_axis, tumsiz, linewidth=2, markersize=15, marker="o", color="blue")
plt.xlabel('Timepoint (Days)',fontsize =12)
plt.ylabel('Tumor Volume (mm3)',fontsize =12)
# +
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
fig1, ax1 = plt.subplots(figsize=(10, 10))
avg_capm_vol =capomulin_df.groupby(['Mouse ID']).mean()
marker_size=15
plt.scatter(avg_capm_vol['Weight (g)'],avg_capm_vol['Tumor Volume (mm3)'],s=175, color="blue")
plt.title('Mouse Weight Versus Average Tumor Volume',fontsize =12)
plt.xlabel('Weight (g)',fontsize =10)
plt.ylabel('Average Tumor Volume (mm3)',fontsize =10)
plt.show()
# +
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
corr=round(st.pearsonr(avg_capm_vol['Weight (g)'],avg_capm_vol['Tumor Volume (mm3)'])[0],2)
print(f"The correlation between mouse weight and average tumor volume is {corr}")
# +
x_values = avg_capm_vol['Weight (g)']
y_values = avg_capm_vol['Tumor Volume (mm3)']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
print("slope", slope)
print("intercept", intercept)
print("rvalue (Correlation coefficient)", rvalue)
print("pearsonr, rounded (Correlation coefficient)", corr)
print("stderr", stderr)
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
print(line_eq)
# +
fig1, ax1 = plt.subplots(figsize=(10, 10))
plt.scatter(x_values,y_values,s=175, color="blue")
plt.plot(x_values,regress_values,"r-")
plt.title('Regression Plot of Mouse Weight Versus Average Tumor Volume',fontsize =12)
plt.xlabel('Weight(g)',fontsize =10)
plt.ylabel('Average Tumor Volume (mm3)',fontsize =10)
ax1.annotate(line_eq, xy=(20, 40), xycoords='data',xytext=(0.8, 0.95), textcoords='axes fraction',horizontalalignment='right', verticalalignment='top',fontsize=30,color="red")
print(f"The r-squared is: {rvalue**2}")
plt.show()
# -
| Pymaceuticals/pymaceuticals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="9YrmFNMI2aJ6"
# # Import the dataset
# + id="CquCPLXaUeKl"
import numpy as np
import pandas as pd
import tensorflow as tf
from keras import Sequential
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Embedding, GlobalMaxPool1D, LSTM, Input
from keras.losses import BinaryCrossentropy, binary_crossentropy
from keras.metrics import AUC
from keras.optimizers import Adam
from keras import backend as K
from sklearn.model_selection import train_test_split
import gc
import pickle
# + id="GNXQhy-h_rZK" outputId="e6e86c32-6817-473a-cfb2-00fc0cb9c0f8" colab={"base_uri": "https://localhost:8080/", "height": 34}
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
tf.debugging.set_log_device_placement(True)
# + id="NnBSzE0a1z0a" outputId="210944e1-f178-4cdc-d3d9-2aed9febf7dd" colab={"base_uri": "https://localhost:8080/", "height": 241}
# !pip install kaggle
# + id="ZNrQtU_d2G6N" outputId="75348275-252c-487b-9496-ffdc9abfa424" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 89}
from google.colab import files
files.upload()
# + id="cMop1gMP2Ixd"
# ! mkdir ~/.kaggle
# ! cp kaggle.json ~/.kaggle/
# + id="jy1EQKAH2LD5"
# ! chmod 600 ~/.kaggle/kaggle.json
# + id="PDfRP4EO2NXI" outputId="9d14dbb1-abc0-47bf-c299-e0d21270f126" colab={"base_uri": "https://localhost:8080/", "height": 442}
# ! kaggle competitions download -c jigsaw-unintended-bias-in-toxicity-classification
# + id="zeqWy4aF2UTI"
# ! mkdir dataset
# + id="6CKpIzPm2WOz" outputId="423baf7e-89d7-4205-8abb-ecd89529e195" colab={"base_uri": "https://localhost:8080/", "height": 51}
# ! unzip test.csv.zip -d dataset
# + id="S-eErwYE2XOS" outputId="4aa57452-1319-405c-ae74-8d4c038489ca" colab={"base_uri": "https://localhost:8080/", "height": 51}
# ! unzip train.csv.zip -d dataset
# + [markdown] id="8xlceFhh2dG6"
# # Data Fetching
# + id="KBNsZHHBqNGd" outputId="ba079ca9-361b-488a-b5f6-11f78e80134d" colab={"base_uri": "https://localhost:8080/", "height": 34}
gc.collect()
# + id="jBz0Uq6B2gNu" outputId="9d54b5f1-58b9-421d-b79f-5075bf995f92" colab={"base_uri": "https://localhost:8080/", "height": 445}
train = pd.read_csv('dataset/train.csv',dtype={'comment_text':'string'})
train.head()
# + id="dlShY5D_8Ak5" outputId="3671e185-ed1a-4e8c-9e41-6b2ff5ae7cfd" colab={"base_uri": "https://localhost:8080/", "height": 445}
train = train.drop(columns='id')
train.head()
# + id="ciUt5ezK8G-0" outputId="d9b075f3-b85f-4a13-a1f8-bcf9f6fc5e80" colab={"base_uri": "https://localhost:8080/", "height": 782}
train.isna().sum()
# + id="xqKRf5XT8Nyj" outputId="f49d86d8-faf7-4a37-f2c0-1928aae49387" colab={"base_uri": "https://localhost:8080/", "height": 34}
train.shape
# + id="tMsE7jOW9faK" outputId="d9ad3e2c-5299-44ac-f31d-7bb083973310" colab={"base_uri": "https://localhost:8080/", "height": 445}
train = train.drop(columns=['toxicity_annotator_count', 'identity_annotator_count', 'disagree', 'likes', 'sad', 'wow', 'funny', 'rating'])
train = train.drop(columns=['article_id', 'parent_id', 'publication_id', 'created_date'])
train.head()
# + id="L4NmTK-rVqLH" outputId="fe485e90-e1e5-40ee-b2dc-4a24c028c34b" colab={"base_uri": "https://localhost:8080/", "height": 136}
identity_columns = train.iloc[:,7:31].columns
identity_columns
# + id="7mGe3LCs9-Ya" outputId="1fcfb1b5-a7c7-44c1-abae-96794a16bc78" colab={"base_uri": "https://localhost:8080/", "height": 445}
train[identity_columns] = train[identity_columns].fillna(0)
train.head()
# + id="bfBf6KW7P2s2"
output_columns = ['target', 'asian', 'atheist', 'bisexual', 'black', 'buddhist', 'christian',
'female', 'heterosexual', 'hindu', 'homosexual_gay_or_lesbian',
'intellectual_or_learning_disability', 'jewish', 'latino', 'male',
'muslim', 'other_disability', 'other_gender', 'other_race_or_ethnicity',
'other_religion', 'other_sexual_orientation', 'physical_disability',
'psychiatric_or_mental_illness', 'transgender', 'white']
# + id="qQNgIH2E-8i2" outputId="ea26baf5-64fb-422a-8066-1d2a3b167d00" colab={"base_uri": "https://localhost:8080/", "height": 224}
OUTPUT = train.loc[:,output_columns]
OUTPUT.head()
# + [markdown] id="kP3E8Hv7EQIs"
# # Data Preprocessing
# + id="xzXcpnELLSCh" outputId="20426708-c1e9-4e94-dcc3-b8814e38aa6d" colab={"base_uri": "https://localhost:8080/", "height": 428}
X = train['comment_text'].values
X
# + id="0Fl_pNcYRHlY" outputId="09502188-73d9-49ea-a4f1-1cd7b0b08606" colab={"base_uri": "https://localhost:8080/", "height": 238}
y = OUTPUT.values
y
# + id="DtKmjaUILiIM" outputId="ae5920c5-4090-419f-db58-e81e9928ea69" colab={"base_uri": "https://localhost:8080/", "height": 34}
del train
gc.collect()
# + id="KR5_fOdZRDQL"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# + id="rvaS8f4ZRdqU" outputId="238002d4-08b3-48e4-d189-a0913d6ddda4" colab={"base_uri": "https://localhost:8080/", "height": 34}
X_test.shape
# + id="qx5BUpYTRb7h" outputId="a2079050-11a9-482c-d871-2ac7f72a697f" colab={"base_uri": "https://localhost:8080/", "height": 34}
X_train.shape
# + id="fl7BTshm_ztC" outputId="3e6e4b31-fa19-4870-8318-54bde95566d2" colab={"base_uri": "https://localhost:8080/", "height": 357}
# ! wget http://nlp.stanford.edu/data/glove.840B.300d.zip
# + id="ezzAHtIh_1hw" outputId="b36ffa09-d272-4872-b75f-d9d7055e21bf" colab={"base_uri": "https://localhost:8080/", "height": 51}
# ! unzip glove.840B.300d.zip
# + id="bM0bpG1oBEWY"
tokenizer = Tokenizer()
# + id="Q4vD7iGrBGFj"
tokenizer.fit_on_texts(X_train)
# + id="Wd3IHOC_BIiS"
X_train_seq = tokenizer.texts_to_sequences(X_train)
X_test_seq = tokenizer.texts_to_sequences(X_test)
# + id="1SlztKxZBJrx" outputId="d7611d46-772d-4752-8c6f-ca7220808fe2" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(X_train_seq)
# + id="2OVGWJ7oBSlE" outputId="a7e09227-cdee-4d3c-8399-0a26839da59e" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(len(tokenizer.word_index))
# + id="TUVdSGCkBVnh"
X_train_seq = pad_sequences(X_train_seq, maxlen=250)
X_test_seq = pad_sequences(X_test_seq, maxlen=250)
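With its defaults, `pad_sequences` pre-pads with zeros and pre-truncates (keeping the last `maxlen` tokens); a pure-Python sketch of what it does to a single sequence:

```python
def pad_sequence(seq, maxlen, value=0):
    """Sketch of pad_sequences' default behaviour for one sequence:
    pre-padding with zeros, pre-truncating (keep the last maxlen tokens)."""
    if len(seq) >= maxlen:
        return list(seq[-maxlen:])
    return [value] * (maxlen - len(seq)) + list(seq)
```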
# + id="WfNuWCvcBVmc" outputId="73401fce-035a-450f-c63a-f61563301b12" colab={"base_uri": "https://localhost:8080/", "height": 34}
X_test_seq.shape
# + [markdown] id="FOrLd4_VBoud"
# # Word Embedding
# + id="7vCVs9fEBrlu" outputId="16697492-8d1f-4231-8c3b-4da1bfd2897c" colab={"base_uri": "https://localhost:8080/", "height": 34}
gc.collect()
# + id="ZE8cLpscBv8O" outputId="de99c63f-a7cb-46e4-8cca-72a01a31b97d" colab={"base_uri": "https://localhost:8080/", "height": 34}
vocab_size = len(tokenizer.word_index) + 1
vocab_size
# + id="GiF1VjsWBySa"
embeddings_index = dict()
glove = open('glove.840B.300d.txt')
# + id="4dPs54a7B0MY" outputId="16999730-f50d-402c-feae-7712b04db8fa" colab={"base_uri": "https://localhost:8080/", "height": 71}
for line in glove:
word, coefs = line.split(maxsplit=1)
coefs = np.fromstring(coefs, "f", sep=" ")
embeddings_index[word] = coefs
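Each GloVe text line is a token followed by its coordinates; `split(maxsplit=1)` peels the token off so the rest parses as numbers. A dependency-free sketch of the same parse (the example line is made up, not from the real 840B file):

```python
def parse_glove_line(line):
    """Split one GloVe text line into (word, vector)."""
    word, coefs = line.split(maxsplit=1)
    return word, [float(x) for x in coefs.split()]
```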
# + id="XqsUJd8sB3QM" outputId="5a23e1f6-90f8-4d0d-b84a-0296862887b4" colab={"base_uri": "https://localhost:8080/", "height": 34}
print("Found %s word vectors." % len(embeddings_index))
# + id="mVl6HaoaB5fz"
glove.close()
# + id="D31aZ4L8B62u" outputId="ff32b9c2-f57b-4b57-f188-c5c3cdcca4ea" colab={"base_uri": "https://localhost:8080/", "height": 34}
embedding_matrix = np.zeros((vocab_size, 300))
miss = 0
for word, i in tokenizer.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
if embedding_vector.shape[0] != 0:
embedding_matrix[i] = embedding_vector
else:
miss+=1
print(miss)
# + id="3wP7cUQjF5S8"
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# + id="s8fxVzUHGfB1"
with open('embedding_matrix.pickle', 'wb') as handle:
pickle.dump(embedding_matrix, handle, protocol=pickle.HIGHEST_PROTOCOL)
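The two dumps above follow the usual pickle round trip; a self-contained sketch with an in-memory buffer standing in for the `.pickle` files (the dict stands in for `tokenizer.word_index`):

```python
import io
import pickle

# Hypothetical vocabulary; BytesIO replaces the on-disk pickle file.
vocab = {"the": 1, "cat": 2}
buffer = io.BytesIO()
pickle.dump(vocab, buffer, protocol=pickle.HIGHEST_PROTOCOL)
buffer.seek(0)
restored = pickle.load(buffer)
```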
# + id="Xk42MXCnG3Wv" outputId="e0bc26c9-e938-4df6-f29c-bac1c5aebfe3" colab={"base_uri": "https://localhost:8080/", "height": 34}
del embeddings_index
del X
del y
del tokenizer
del glove
gc.collect()
# + [markdown] id="12ygKniccgu5"
# # Model Training
# + id="_M2XiR4ACBN4" outputId="1345042d-c4e1-4202-c361-fbadedba1e23" colab={"base_uri": "https://localhost:8080/", "height": 34}
embedding_matrix.shape
# + id="rphL4gKYLTxh" outputId="dd5d1e18-9707-44e5-e4d1-2a9390a85e2c" colab={"base_uri": "https://localhost:8080/", "height": 34}
y_target = y_train[:,0]
y_target.shape
# + id="aLiUG9Q2LuDp" outputId="f7618043-82d7-4410-f1ff-874cb9b5b848" colab={"base_uri": "https://localhost:8080/", "height": 34}
y_test_target = y_test[:,0]
y_test_target.shape
# + id="dBzYbwfDaZuM" outputId="5f3eb64f-7f82-4cb6-ace1-89d98758b2a7" colab={"base_uri": "https://localhost:8080/", "height": 34}
aux_output = y_train[:,1:]
aux_output.shape
# + id="f8GMfUiaM0q6"
def custom_loss(y_true, y_pred):
return binary_crossentropy(K.reshape(y_true[:,0],(-1,1)), y_pred) * y_true[:,1]
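`custom_loss` scales the per-sample binary cross-entropy by a weight carried in the second column of `y_true`; the quantity it computes can be sketched in plain Python for a single sample:

```python
import math

def weighted_bce(y_true, y_pred, weight):
    """Per-sample binary cross-entropy scaled by a sample weight --
    the same quantity custom_loss computes, where the weight travels
    in the second column of y_true."""
    bce = -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))
    return bce * weight
```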
# + id="xk4_-sVmFhDo"
weights = np.ones((len(X_train),)) / 4
# + id="m5HfsOCFFs7b"
# (np.int was removed in NumPy 1.24; plain int behaves the same here)
weights += (aux_output>=0.5).sum(axis=1).astype(bool).astype(int) / 4
# + id="LZwjO_ZlGPi-"
weights += (( (y_target>=0.5).astype(bool).astype(int) +
             (aux_output<0.5).sum(axis=1).astype(bool).astype(int) ) > 1 ).astype(bool).astype(int) / 4
# + id="HYeGts2mGcVJ"
weights += (( (y_target>=0.5).astype(bool).astype(int) +
             (aux_output>=0.5).sum(axis=1).astype(bool).astype(int) ) > 1 ).astype(bool).astype(int) / 4
# + id="QWz99Em9G7kT" outputId="66e0df59-2000-4ff1-8419-e9851e057160" colab={"base_uri": "https://localhost:8080/", "height": 34}
weights.shape
# + id="sxffcZ0gGkJG" outputId="e3ea0327-ca21-423e-e76c-34755aa4fe34" colab={"base_uri": "https://localhost:8080/", "height": 34}
loss_weight = 1.0 / weights.mean()
loss_weight
# + id="w-AUKEBUNDbm" outputId="ccc3f3ab-48a1-48d8-ac52-8c3da8265d3a" colab={"base_uri": "https://localhost:8080/", "height": 34}
X_train.shape
# + id="SMK0d8uGeulw"
model = Sequential()
# + id="UQ_X7TnMIvCQ" outputId="1efb1048-ab33-45d4-811e-b12d8332725e" colab={"base_uri": "https://localhost:8080/", "height": 34}
vocab_size
# + id="bIohsInoCMfk"
model.add(Embedding(input_dim=vocab_size, output_dim = 300, input_length = 250, weights=[embedding_matrix], trainable = False))
# + id="ssbOVsdyeuki"
model.add(LSTM(units=150,return_sequences=True, dropout=0.2))
# + id="fGFnhoKRe30W"
model.add(GlobalMaxPool1D())
# + id="JF1iP3iue8Sh"
model.add(Dense(units = 64, activation='relu'))
# + id="xGb37XVhNicS"
model.add(Dense(units = 32, activation='relu'))
# + id="NZAsrewfNkZ8"
model.add(Dense(units = 16, activation='relu'))
# + id="ztxxxIiNe-5J"
model.add(Dense(units = 1, activation='sigmoid'))
# + id="rEpvbznbCb1r"
model.compile(loss=[BinaryCrossentropy(), custom_loss], loss_weights=[loss_weight, 1.0], optimizer=Adam(),metrics=[AUC()])
# + id="Dcn-AoXqCdle" outputId="29069d0a-6716-4fb1-f1d5-1a7a510aa40f" colab={"base_uri": "https://localhost:8080/", "height": 408}
print(model.summary())
# + id="rKfIQtEib6Q3" outputId="1151f994-e459-4636-da91-3e2e1dbbd952" colab={"base_uri": "https://localhost:8080/", "height": 34}
gc.collect()
# + id="h7MRmPV8NzQ9" outputId="cd6c94eb-80e7-447d-89dc-98af4ad66431" colab={"base_uri": "https://localhost:8080/", "height": 51}
y_test_target
# + id="WjjNtnitPtg8"
y_test_target = np.where(y_test_target >= 0.5, 1, 0)
# + id="N1gbgdAMN0yU"
y_target = np.where(y_target >= 0.5, 1, 0)
# + id="dMkefBHzP0U-" outputId="96682eb3-1d4d-4fa2-dcea-0fd398bb400e" colab={"base_uri": "https://localhost:8080/", "height": 34}
gc.collect()
# + id="ch87vIo6Cglj"
history = model.fit(np.array(X_train_seq), np.array(y_target), batch_size=512, epochs=7, validation_data=(np.array(X_test_seq),np.array(y_test_target)))
# + id="fPFKr4gyCkT7"
model_json = model.to_json()
# + id="sjfEu0mPCmC8"
with open('glove_embedding.json', 'w') as json_file:
json_file.write(model_json)
# + id="WrTruHcjCnWq"
model.save_weights("weights.h5")
# + [markdown] id="AUoNKALyqdBK"
# # Kaggle Submission
# + id="H4_8cwoGopsE" outputId="a1344c10-2dcd-4800-b5bb-ea035435c0f3" colab={"base_uri": "https://localhost:8080/", "height": 204}
test = pd.read_csv('dataset/test.csv')
test.head()
# + id="xmol-iFVp_hz" outputId="75617b39-649c-419f-e076-2e4000d66582" colab={"base_uri": "https://localhost:8080/", "height": 34}
test.shape
# + id="43KX_rnTqOl4" outputId="247dcb3b-3ac1-44cb-fc12-4db8a7c2784c" colab={"base_uri": "https://localhost:8080/", "height": 34}
comments = test['comment_text'].values
comments.shape
# + id="CcIg92Ebqtk-"
with open('tokenizer.pickle', 'rb') as handle:
tokenizer = pickle.load(handle)
# + id="30sa6oWUqUqY"
comment_seq = tokenizer.texts_to_sequences(comments)
# + id="39swNF0Kq0k9"
comment_pad_seq = pad_sequences(comment_seq, maxlen=250)
# + id="pKaNJBLfq6QK"
prediction = model.predict(comment_pad_seq)
# + id="xJEJ8jcxsHMq" outputId="c4380635-7986-42f8-883f-a7a3be536652" colab={"base_uri": "https://localhost:8080/", "height": 34}
prediction.shape
# + id="ZRenPIVVsP2x" outputId="f6aff732-e653-42bc-f040-a0e929549875" colab={"base_uri": "https://localhost:8080/", "height": 221}
ids = test.iloc[:,0]
ids
# + id="kIkZhf4usVD1"
result = pd.DataFrame()
result['id'] = ids
result['prediction'] = prediction
# + id="ZeOfpPegseEX" outputId="912a35b8-b360-4ab8-c4ea-11bee0a067c5" colab={"base_uri": "https://localhost:8080/", "height": 204}
result.head()
# + id="MUyPOKT2siGu"
result.to_csv('submission.csv', index=False)
| Attempt 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # US Flights - Data Expo 2009
# ## by <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Investigation Overview
#
# > The goal of this presentation is mainly to investigate cancelled flights and the reasons behind the cancellations
#
# ## Dataset Overview
#
# > The data consists of flight arrival and departure details for all commercial flights within the USA, from October 1987 to April 2008. This is a large dataset: there are nearly 120 million records in total, and takes up 1.6 gigabytes of space compressed and 12 gigabytes when uncompressed.
# >
# > As the data is huge, I decided to explore the period from 2007 to 2008. Further, I am going to work on a sample of the data to speed up the computation
# + slideshow={"slide_type": "skip"}
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import os
import glob
import missingno as msno
import datetime
from scipy.spatial.distance import cdist
import datetime
import warnings
warnings.simplefilter(action='ignore')
# display all columns
pd.set_option('display.max_columns', 500)
# %matplotlib inline
# + slideshow={"slide_type": "skip"}
# load in the dataset into a pandas dataframe
flights_sample = pd.read_csv('flights_sample_for_presentation.csv')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Proportion of number of flights during 2007/2008
#
# > The proportion of flights during 2007 is slightly higher than during 2008!
# + slideshow={"slide_type": "subslide"}
# define a base color (blue) to be used in the graph
base_color = sb.color_palette()[0]
# define proportion tick values and names
n_flights = flights_sample.shape[0]
max_year_prop = flights_sample.Year.value_counts().iloc[0] / n_flights
tick_props = np.arange(0, max_year_prop + 0.05, 0.05)
tick_names = ['{:0.0f}'.format(100 * v) for v in tick_props]
# plot a count plot
sb.countplot(data=flights_sample, x='Year', color=base_color)
# Change tick locations and labels
plt.yticks(tick_props * n_flights, tick_names)
# axis labels
plt.ylabel('Proportion (%)', size=12, weight='bold')
plt.xlabel('Year', size=12, weight='bold')
# figure label
plt.title('Proportion of number of flights during 2007/2008', size=14, weight='bold');
# + [markdown] slideshow={"slide_type": "slide"}
# ## Top Five Carriers having the highest proportion number of flights during 2007-2008
#
# > Top five carriers are:
# > 1. Southwest Airlines Co.(WN)
# > 2. American Airlines Inc.(AA)
# > 3. SkyWest Airlines Inc. (OO)
# > 4. Envoy Air (MQ)
# > 5. US Airways Inc.(US)
# >
# > Interestingly, Number of flights operated by **Southwest Airlines** is almost doubled compared to **American Airlines** which comes in the second place.
# + slideshow={"slide_type": "subslide"}
# define proportion tick values and names
n_flights = flights_sample.shape[0]
max_carrier_prop = flights_sample.CarrierName.value_counts().iloc[0] / n_flights
xtick_props = np.arange(0, max_carrier_prop + 0.01, 0.01)
xtick_names = ['{:0.1f}'.format(100 *v) for v in xtick_props]
# set figure size
plt.figure(figsize=(12,7))
# plot a count plot
sb.countplot(
data=flights_sample,
y='CarrierName',
color=base_color,
order=flights_sample.CarrierName.value_counts().index)
# Change tick locations and labels
plt.xticks(xtick_props * n_flights, xtick_names)
# axis labels
plt.xlabel('Proportion (%)', size=12, weight='bold')
plt.ylabel('Carrier', size=12, weight='bold')
# figure label
plt.title('Proportion of number of flights during 2007-2008 per Carrier', size=14, weight='bold');
# + slideshow={"slide_type": "skip"}
# filter cancelled flights
cancelled_flights_s = flights_sample.query('Cancelled==1')
# filter operated (not cancelled) and non-diverted flights
flights_opt_s = flights_sample.query('(Cancelled == 0) & (Diverted == 0)')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Investigation on Cancelled flights
#
# ### Cancelled vs Not Cancelled Flights
#
# >97.9% of flights were not cancelled, while only 2.1% were cancelled.
# + slideshow={"slide_type": "subslide"}
# calculate sorted counts
sorted_counts = flights_sample.Cancelled.value_counts()
# plot a pie chart
labels=['Not Cancelled', 'Cancelled']
plt.figure(figsize=(6,6))
plt.pie(sorted_counts,
startangle=90,
counterclock=False,
autopct='%1.1f%%',
pctdistance=0.8)
plt.axis('square')
plt.title('Not Cancelled vs Cancelled flights', size=14, weight='bold')
plt.legend(loc=6,labels=labels);
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Reasons of Cancellation
#
# >When investigating further, we find that the most common reason for flight cancellation is Carrier (40.4% of the time), followed by Weather at 39.5%
# + slideshow={"slide_type": "subslide"}
# set figure size
plt.figure(figsize=(8,5))
# plot a count graph
ax = sb.countplot(
data=cancelled_flights_s,
x='CancellationCode',
color=base_color,
order=cancelled_flights_s.CancellationCode.value_counts().index)
# set x axis ticks and labels
plt.xticks(size=12)
plt.xlabel('Flights Cancellation Reason', size=12, weight='bold')
ax.set_xticklabels(['Carrier', 'Weather', 'NAS', 'Security'])
# set y axis tickes and labels
plt.yticks(size=12)
plt.ylabel('Number of Cancelled flights in the sample', size=12, weight='bold')
# print percentage on the bars
n_flights_cancelled = cancelled_flights_s.shape[0]
for p in ax.patches:
percentage = f'{100 * p.get_height() / n_flights_cancelled:.4f}%\n'
x = p.get_x() + p.get_width() / 2
y = p.get_height()
ax.annotate(percentage, (x, y), ha='center', va='center')
# figure title
plt.title('Reasons of Flights Cancellation', size=14, weight='bold')
plt.show();
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Flights Cancellation vs Distance of Flight
# > Looking at cancellation from a different perspective, we can see that the average distance of not-cancelled flights is higher (about 700 miles) than that of cancelled flights (about 600 miles). Flights longer than 1,000 miles are more likely not to be cancelled.
# + slideshow={"slide_type": "subslide"}
#figure size
plt.figure(figsize=(8,5))
# plot the mean flights distance showing the deviation around the mean for cancelled and not-cancelled flights
sb.barplot(data=flights_sample, x='Cancelled', y='Distance', color=base_color, ci='sd')
# x-axis parameters
plt.xlabel('Flights status', size=12, weight='bold')
plt.xticks([0,1], ['Not Cancelled', 'Cancelled'])
# y-axis label
plt.ylabel('Flights Distance (mile)', size=12, weight='bold')
# figure title
plt.title('Distances of Cancelled Flights vs Distances of Not Cancelled Flights', size=14, weight='bold');
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Carrier with the highest cancellation rate
#
# >**Envoy Air** occupies the first place in the list with a 4% cancellation rate, followed by **Mesa Airlines Inc.**, while **Frontier Airlines Inc.** comes at the end of the list.
# + slideshow={"slide_type": "skip"}
# calculate cancellation rate per Carrier
Carriers_cancel_rate = flights_sample.groupby('CarrierName')['Cancelled'].mean().reset_index()
# Sorting values in descending order
Carriers_cancel_rate.sort_values(by='Cancelled', ascending=False, ignore_index=True, inplace=True)
# rename column
Carriers_cancel_rate.rename(columns={'Cancelled': 'CancellationRate'}, inplace=True)
# convert to percent
Carriers_cancel_rate.CancellationRate = Carriers_cancel_rate.CancellationRate * 100
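Because `Cancelled` is a 0/1 flag, the `groupby` mean above is exactly the cancellation rate; a toy sketch (made-up carrier names and flags) of the same computation:

```python
import pandas as pd

# Hypothetical carriers; mean of a 0/1 column = fraction of cancelled flights.
toy = pd.DataFrame({
    "CarrierName": ["Envoy", "Envoy", "Envoy", "Mesa", "Mesa"],
    "Cancelled":   [1, 0, 0, 1, 1],
})
rates = toy.groupby("CarrierName")["Cancelled"].mean().sort_values(ascending=False) * 100
```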
# + slideshow={"slide_type": "subslide"}
# set figure size
plt.figure(figsize=(12,7))
# plot a count plot
sb.barplot(
data=Carriers_cancel_rate,
x='CancellationRate',
y='CarrierName',
color=base_color)
# x-axis parameters
plt.xlabel('Flights Cancellation Rate %', size=12, weight='bold')
# y-axis label
plt.ylabel('Carriers', size=12, weight='bold')
# figure title
plt.title('Cancellation Rate Per Carrier', size=14, weight='bold');
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Why in specific **Envoy Air** and **Mesa Airlines Inc.** are having the highest cancellation rate?
#
# > The upcoming slide will answer this question, but I am going to spoil the surprise:
# > the most common reason for cancelling **Envoy Air** flights turned out to be **Weather**, not **Carrier** as I expected. **American Airlines Inc.** comes after **Envoy Air** in the number of flights cancelled due to weather,
# while **Mesa Airlines Inc.**'s cancelled flights were mainly because of the **Carrier** itself.
# + slideshow={"slide_type": "skip"}
# Use groupby() and size() to get the number of flights for each combination of the two variable levels as a pandas Series
cc_counts = cancelled_flights_s.groupby(['CarrierName', 'CancellationCode']).size().reset_index(name='count')
# + slideshow={"slide_type": "skip"}
# Use DataFrame.pivot() to rearrange the data, to have Carriers on rows
cc_counts = cc_counts.pivot(index = 'CarrierName', columns = 'CancellationCode', values = 'count')
# rename cancellation code columns to cancellation definition
cc_counts = cc_counts.rename(columns={'A':'Carrier', 'B':'Weather', 'C':'NAS', 'D':'Security'})
# + slideshow={"slide_type": "subslide"}
# plot a heat map showing Carriers vs reasons of flights cancellation
# figure size
plt.figure(figsize=(12,8))
sb.heatmap(cc_counts, annot = True, fmt = '.0f', cmap='viridis_r')
# x-axis label
plt.xlabel('Cancellation Reasons', size=12, weight='bold')
# y-axis label
plt.ylabel('Carriers', size=12, weight='bold')
# figure title
plt.title('Cancelled flights breakdown per each Carrier and Cancellation reason', size=14, weight='bold');
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Want to learn more?!
#
# It is obvious from the upcoming slide that the highest numbers of cancelled flights for Envoy Air & American Airlines are associated with flights destined for **DFW (Dallas/Fort Worth International)** and **ORD (O'Hare International Airport)**.
# + slideshow={"slide_type": "skip"}
# filter cancelled flights due to Weather
cancelled_flights_s_weather = cancelled_flights_s.query('CancellationCode == "B"')
# Use groupby() and size() to get the number of flights for each combination of the two variable levels as a pandas Series
co_counts = cancelled_flights_s_weather.groupby(['CarrierName', 'Dest']).size().reset_index(name='count')
# Use DataFrame.pivot() to rearrange the data, to have Carriers on rows
co_counts = co_counts.pivot(index = 'CarrierName', columns = 'Dest', values = 'count')
# In this cell, I will apply a filter to reduce the number of columns (Destinations) for a better view
# tune a threshold that minimizes the number of columns while keeping the graph informative
threshold = 40 # drop Destinations whose maximum number of cancelled flights is below 40
for col in co_counts.columns:
    if co_counts[col].max() < threshold:
        co_counts.drop(columns=col, inplace=True)
# + slideshow={"slide_type": "subslide"}
# plot a heat map for carriers vs Airport destinations for cancelled flights due to Weather
# figure size
plt.figure(figsize=(25,15))
sb.heatmap(co_counts, annot = True, fmt = '.0f', cmap='viridis_r')
# x-axis label
plt.xlabel('Destinations', size=12, weight='bold')
# y-axis label
plt.ylabel('Carriers', size=12, weight='bold')
# figure title
plt.title('Cancelled flights due to Weather breakdown per each Carrier and Airport destination', size=14, weight='bold');
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Is it correlated with the Month of year?
#
# > Yes, as expected it is somewhat correlated with the Month of Year. During the Winter season (December-March), we can see a higher number of cancelled flights for Envoy Air and American Airlines (destined for DFW or ORD).
# + slideshow={"slide_type": "skip"}
# filter cancelled flights destined to DFW or ORD
dfw_ord_canc = cancelled_flights_s_weather.query('Dest == "DFW" | Dest =="ORD"')
# groupby Month and Carrier
dfw_ord_canc = dfw_ord_canc.groupby(['Year','Month', 'CarrierName']).size().reset_index(name='count')
# + slideshow={"slide_type": "subslide"}
# plot a point plot showing counts of cancelled flights per Month for each Carrier
plt.figure(figsize=(20,12))
# 2007
plt.subplot(2,1,1)
sb.pointplot(data=dfw_ord_canc.query('Year == 2007'),
x='Month',
y='count',
hue='CarrierName',
palette='vlag',
hue_order=dfw_ord_canc.CarrierName.value_counts().index)
# x-axis label
plt.xlabel('Month (2007)', size=12, weight='bold')
# y-axis label
plt.ylabel('Number of Cancelled Flights destined DFW or ORD', size=12, weight='bold')
# legend
plt.legend(loc=0, ncol=2, title='Carrier',title_fontsize=12, fontsize=10);
# 2008
plt.subplot(2,1,2)
sb.pointplot(data=dfw_ord_canc.query('Year == 2008'),
x='Month',
y='count',
hue='CarrierName',
palette='vlag',
hue_order=dfw_ord_canc.CarrierName.value_counts().index)
# x-axis label
plt.xlabel('Month (2008)', size=12, weight='bold')
# y-axis label
plt.ylabel('Number of Cancelled Flights destined DFW or ORD', size=12, weight='bold')
# legend
plt.legend(loc=0, ncol=2, title='Carrier',title_fontsize=12, fontsize=10);
# figure title
plt.suptitle('Number of Cancelled Flights destined DFW or ORD per Month for each Carrier', size=14, weight='bold');
# + [markdown] slideshow={"slide_type": "slide"}
# ## The distribution of Flights Distances
# + slideshow={"slide_type": "subslide"}
# x-axis log transformation function
def log_trans(x, inverse = False):
""" transformation helper function """
if not inverse:
return np.log10(x)
else:
return 10 ** x
# create figure
plt.figure(figsize=(10,7))
# Bin resizing, to transform the x-axis
bins = np.arange(1,log_trans(flights_opt_s['Distance'].max())+0.1, 0.1)
# Plot the scaled data
sb.histplot(flights_opt_s['Distance'].apply(log_trans),color=base_color,bins=bins)
# Identify the tick-locations
tick_locs = np.arange(1, log_trans(flights_opt_s['Distance'].max())+0.15, 0.15)
# Apply x-ticks
plt.xticks(tick_locs, log_trans(tick_locs, inverse = True).astype(int))
# Draw mean line
plt.axvline(x=log_trans(flights_opt_s.Distance.mean()), color='r', label='mean distance')
# axis lables
plt.xlabel('Distance in miles (log scaled)', weight='bold', size=12)
plt.ylabel('Number of flights', weight='bold', size=12)
# print title
plt.title('Distribution of flights distances', weight='bold', size=14)
# show legend
plt.legend();
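# A quick sanity check (sketch) that `log_trans` and its inverse undo each other, which is what makes the tick relabelling above valid:

```python
import numpy as np

def log_trans(x, inverse=False):
    """transformation helper function (same as above)"""
    if not inverse:
        return np.log10(x)
    return 10 ** x

print(log_trans(100))                            # 2.0
print(log_trans(log_trans(100), inverse=True))   # 100.0 (round trip)
```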
| presentation_US_Flight.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # archive to folder
# nuclio: ignore
import nuclio
# %nuclio config kind = "job"
# %nuclio config spec.image = "mlrun/mlrun"
# +
import os
import zipfile
import urllib.request
import tarfile
import json
from mlrun.execution import MLClientCtx
from mlrun.datastore import DataItem
from typing import Union
def open_archive(
context: MLClientCtx,
archive_url: DataItem,
subdir: str = "content",
key: str = "content"
):
"""Open a file/object archive into a target directory
Currently supports zip and tar.gz
:param context: function execution context
:param archive_url: url of archive file
:param subdir: path within artifact store where extracted files
are stored
:param key: key of archive contents in artifact store
"""
os.makedirs(subdir, exist_ok=True)
archive_url = archive_url.local()
if archive_url.endswith("gz"):
with tarfile.open(archive_url, mode="r|gz") as ref:
ref.extractall(subdir)
elif archive_url.endswith("zip"):
with zipfile.ZipFile(archive_url, "r") as ref:
ref.extractall(subdir)
else:
raise ValueError(f'unsupported archive type in {archive_url}')
context.log_artifact(key, local_path=subdir)
# +
# nuclio: end-code
# -
# ### mlconfig
# +
from mlrun import mlconf
import os
mlconf.dbpath = mlconf.dbpath or 'http://mlrun-api:8080'
mlconf.artifact_path = mlconf.artifact_path or f'{os.environ["HOME"]}/artifacts'
# -
# ### save
# +
from mlrun import code_to_function
# create job function object from notebook code
fn = code_to_function("open_archive")
# add metadata (for templates and reuse)
fn.spec.default_handler = "open_archive"
fn.spec.description = "Open a file/object archive into a target directory"
fn.metadata.categories = ["data-movement", "utils"]
fn.metadata.labels = {"author": "yaronh"}
fn.export("function.yaml")
# -
# ## tests
# +
# load function from the marketplace
from mlrun import import_function
# vcs_branch = 'development'
# base_vcs = f'https://raw.githubusercontent.com/mlrun/functions/{vcs_branch}/'
# mlconf.hub_url = mlconf.hub_url or base_vcs + f'{name}/function.yaml'
# fn = import_function("hub://open_archive")
# +
from mlrun import run_local
if "V3IO_HOME" in list(os.environ):
from mlrun import mount_v3io
fn.apply(mount_v3io())
else:
# if you set up mlrun using the instructions at https://github.com/mlrun/mlrun/blob/master/hack/local/README.md
from mlrun.platforms import mount_pvc
fn.apply(mount_pvc('nfsvol', 'nfsvol', '/home/joyan/data'))
# -
# ### tar
run = run_local(
handler=open_archive,
inputs={'archive_url': "https://fpsignals-public.s3.amazonaws.com/catsndogs.tar.gz"})
# ### zip
run_local(
handler=open_archive,
inputs={'archive_url': 'http://iguazio-sample-data.s3.amazonaws.com/catsndogs.zip'})
| open_archive/open_archive.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Outline
#
# This project has several sections and will provide you a concise introduction to time series concepts in R. We will learn the essential theory and also practice fitting the four main types of time series models, getting you up and running with all the basics in a little more than an hour!
#
# (1) Introduction to Rhyme Environment
#
# (2) Time Series Data Overview (Theory)
#
# (3) Why Time Series? (Theory)
#
# (4) Key Concepts: Autocorrelation / Autocovariance (Theory)
#
# (5) Key Concepts: Stationarity (Theory)
#
# (6) Checking for Stationarity (Practice)
#
# (7) Transforming for Stationarity: Differencing (Practice)
#
# (8) Transforming for Stationarity: Detrending (Practice)
#
# (9) Basic Model Types: AR(p), MA(q), ARMA(p,q), ARIMA(p,d,q), Decomposition (Theory)
#
# (10) Fitting AR / MA / ARMA / ARIMA models with the Box Jenkins Method (Theory)
#
# (11) Box Jenkins Method: Checking for Stationarity (Practice)
#
# (12) Box Jenkins Method: Transforming for Stationarity & Identifying Model Parameters (Practice)
#
# (13) Box Jenkins Method: Checking the Residuals of the Model Fit (Practice)
#
# (14) Making a Forecast for Each Model (Practice)
#
# (15) Fitting STL (Seasonal Trend Loess) Decomposition Models (Practice)
#
# (16) Where to go Next
# # Introduction to Rhyme Environment
#
# Now, let's load the R packages we will need for this project (they should be already installed on your virtual machine).
# +
#load required r packages
library(IRdisplay)
library(magrittr)
library(tidyverse)
library(scales)
library(gridExtra)
library(forecast)
library(tseries)
library(ggthemes)
theme_set(theme_economist())
#load helper R functions
setwd("C:/Users/Administrator/Desktop/Time Series Project Materials/")
source("R Functions/compare_models_function.R")
source("R Functions/sim_random_walk_function.R")
source("R Functions/sim_stationary_example_function.R")
print("Loading is completed")
# -
# # Time Series Data Overview
display_png(file="Images/time_series_dinosaur.png")
# (Univariate) time series data is defined as sequence data over time: $X_1, X_2, ... , X_T$
#
# where $t$ is the time period and $X_t$ is the value of the time series at a particular point
#
# Examples: daily temperatures in Boston, US presidential election turnout by year, minute stock prices
#
# Variables in time series models generally fall into three categories:
#
# (1) endogenous
#
# (2) random noise
#
# (3) exogenous
#
# All time series models involve (1) and (2) but (3) is optional.
# # Why Time Series?
display_png(file="Images/time_series_complication.png")
# The answer is that:
#
# (1) many forecasting tasks actually involve small samples which makes machine learning less effective
#
# (2) time series models are more interpretable and less black box than machine learning algorithms
#
# (3) time series appropriately accounts for forecasting uncertainty.
#
# As an example, let's look at the following data generating process known as a random walk: $X_t=X_{t-1}+\epsilon_t$
#
# We can compare the forecasting performance of linear regression to that of a basic time series model known as an AR(1) model.
#run function to compare linear regression to basic AR(1) time series model
compare.models(n=100)
# # Key Concepts: Autocorrelation/Autocovariance
#
# Autocorrelation/autocovariance refers to the correlation/covariance between two observations in the time series at different points.
#
# The central idea behind it is how related the data/time series is over time.
#
# For ease of interpretation we typically focus on autocorrelation i.e. what is the correlation between $X_t$ and $X_{t+p}$ for some integer $p$.
#
# A related concept is partial autocorrelation that computes the correlation adjusting for previous lags/periods i.e. the autocorrelation between $X_t$ and $X_{t+p}$ adjusting for the correlation of $X_t$ and $X_{t+1}$, … , $X_{t+p-1}$.
#
# When analyzing time series we usually view autocorrelation/partial autocorrelation in ACF/PACF plots.
#
# Let's view this for the random walk model we analyzed above: $X_t=X_{t-1}+\epsilon_t$.
# +
#simulate random walk
dat<-sim.random.walk()
#plot random walk
dat %>% ggplot(aes(t,X)) + geom_line() + xlab("T") + ylab("X") + ggtitle("Time Series Plot")
# -
#ACF plot
ggAcf(dat$X,type="correlation") + ggtitle("Autocorrelation ACF Plot")
#PACF plot
ggAcf(dat$X,type="partial") + ggtitle("Partial Autocorrelation PACF Plot")
# # Key Concepts: Stationarity
#
# The second key concept in time series is stationarity.
#
# While the concept can get quite technical, the basic idea is examining whether the distribution of the data over time is consistent.
#
# There are two main forms of stationarity.
#
# (1) Strict stationarity implies:
#
# The cumulative distribution function of the data does not depend on time:
#
# $F_X(X_1,...,X_T)=F_X(X_{1+\Delta},...,X_{T+\Delta})$ $\forall \Delta\in\mathbb{R}$
#
# (2) Weak stationarity implies:
#
# - the mean of the time series is constant
#
# $E(X_t)=E(X_{t+\Delta})$
#
# - the autocovariance/autocorrelation only depends on the time difference between points
#
# $ACF(X_{t},X_{t+\Delta-1})=ACF(X_1,X_{\Delta})$
#
# - the time series has a finite variance
#
# $Var(X_\Delta)<\infty$ $\forall \Delta\in\mathbb{R}$
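# As a worked check, the random walk from earlier fails weak stationarity: iterating $X_t=X_{t-1}+\epsilon_t$ from $X_0=0$ with $\epsilon_t\overset{iid}\sim N(0,\sigma^2)$ gives
#
# $X_t=\sum_{s=1}^{t}\epsilon_s \quad\Rightarrow\quad E(X_t)=0, \quad Var(X_t)=t\sigma^2$
#
# so the variance grows linearly in $t$ instead of being constant and finite, which is why a transformation is needed before fitting stationary models.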
# # Checking for Stationarity
#create three time series for example
df<-sim.stationary.example(n=1000)
# - Check a plot of the time series over time and look for constant mean and finite variance i.e. values appear bounded.
#plot nonstationary and stationary time series
# - Look at the ACF plot and see if it dies off quickly as opposed to a gradual decline.
#ACF for nonstationary and stationary time series
# - Perform unit root tests such as the Augmented Dickey–Fuller test.
#perform unit root test; the nonstationary example has a large, non-significant p-value
#perform unit root test; the stationary example has a small, significant p-value
# # Transforming for Stationarity
#
# ## Differencing
#
# Differencing involves taking differences between successive time series values.
#
# The order of differencing is defined as p for $X_t-X_{t-p}$.
#
# Let's transform a nonstationary time series to stationary by differencing with the random walk model.
#
# In a random walk $X_t=X_{t-1}+\epsilon_t$ where $\epsilon_t\sim N(0,\sigma^2)$ iid.
#
# Differencing with an order of one means that $\tilde{X}_t=X_t-X_{t-1}=\epsilon_t$.
#difference time series to make stationary
#plot original and differenced time series
# ## Detrending
#
# Detrending involves removing a deterministic relationship with time.
#
# As an example suppose we have the following data generating process $X_t=B_t+\epsilon_t$ where $\epsilon_t\sim N(0,\sigma^2)$ iid.
#
# Detrending involves using the transformed time series $\tilde{X}_t=X_t-Bt=\epsilon_t$.
#detrend time series to make stationary
#plot original and detrended time series
# # Basic Model Types: AR(p), MA(q), ARMA(p,q), ARIMA(p,d,q), Decomposition
#
# ## Autoregressive AR(p) Models
#
# AR models specify $X_t$ as a function of lagged time series values $X_{t-1}$, $X_{t-2}$, ...
#
# i.e $X_t=\mu+\phi_1 X_{t-1}+...+\phi_p X_{t-p}+\epsilon_t$
#
# where $\mu$ is a mean term and $\epsilon_t\overset{iid}\sim N(0,\sigma^2)$ is a random error.
#
# When fitting an AR model the key choice is p, the number of lags to include.
#
# ## Moving Average MA(q) Models
#
# MA models specify $X_t$ using random noise lags:
#
# $X_t=\mu+\epsilon_t+\Theta_1\epsilon_{t-1}+...+\Theta_q\epsilon_{t-q}$
#
# where $\mu$ is a mean term and $\epsilon_t\overset{iid}\sim N(0,\sigma^2)$ is a random error.
#
# Similar to an AR model, when fitting an MA model the key choice is q, the number of random shock lags.
#
# ## Autoregressive Moving Average ARMA(p,q) Models
#
# ARMA(p,q) models are a combination of an AR and MA model:
#
# $X_t=\mu+\phi_1 X_{t-1}+...+\phi_p X_{t-p}+\epsilon_t+\Theta_1\epsilon_{t-1}+...+\Theta_q\epsilon_{t-q}$
#
# where $\mu$ is a mean term and $\epsilon_t\overset{iid}\sim N(0,\sigma^2)$ is a random error.
#
# When fitting an ARMA model, we need to choose two things: p, the number of AR lags, and q, the number of MA lags.
#
# ## Autoregressive Integrated Moving Average ARIMA(p,d,q) Models
#
# ARIMA(p,d,q) is an ARMA model with differencing.
#
# When fitting an ARIMA model we need to choose three things: p, the number of AR lags, q, the number of MA lags, and d, the number of differences to use.
#
# ## Decomposition Models
#
# Decomposition models specify $X_t$ as a combination of a trend component ($T_t$), seasonal component ($S_t$), and an error component/residual ($E_t$) i.e. $X_t=f(T_t,S_t,E_t)$.
#
# Common decomposition forms are: $X_t=T_t+S_t+E_t$ or $X_t=T_t*S_t*E_t$ (where then take logs to recover the additive form).
#
# There are various ways to estimate the different trend components: exponential smoothing, state space models/Kalman filtering, STL models, etc.
#
# In this project we will cover STL models because of their ease of use and flexibility.
# # Fitting AR/MA/ARMA/ARIMA models with the Box Jenkins Method
#
# We will now go over how to fit AR/MA/ARMA/ARIMA models on a real data set and review a generic strategy for fitting them known as the Box Jenkins method.
#
# This process involves several steps to help identify the p, d, and q parameters that we need:
#
# - Identify whether the time series is stationary or not
#
# - Identify p, d, and q of the time series by
#
# - Making the time series stationary through differencing/detrending to find d
#
# - Looking at ACF/PACF to find p and q
#
# - Using model fit diagnostics like AIC or BIC to select the best model to find p, d, and q
#
# - Check the model fit using the Ljung–Box test
#load data
#check date class
#change date class to date type
# ## Checking for Stationarity
#check time series plot
#check ACF plot
#run ADF test
# ## Transforming for Stationarity & Identifying Model Parameters
#fit AR model
#fit MA model
#fit ARMA model
#fit ARIMA model
# ## Checking the Residuals of the Model Fit
#calculate residuals of each model
#plot PACF plot of each models residuals
#run the Ljung Box test on the residuals
# ## Making a forecast for each model
#make forecast for each model
#plot forecast for each model
# # Fitting Seasonal Trend Loess (STL) Decomposition Models
#transform to time series object; need to specify frequency
#fit STL model
#plot model fit
#make forecast
# # Where to go Next
#
# - Advanced time series models
# - ARCH, GARCH, etc. that model changing variance over time
# - Vector Autoregression (VAR)
# - For multivariate i.e. multiple time series and modeling dependencies between them
# - Machine Learning
# - How to do CV with time series
# - Neural networks for sequence data (LSTMs, etc.)
# - Spatial Statistics
# - Generalize time dependence to spatial dependence in multiple dimensions
# - Econometrics
# - Cointegration
# - Granger Causality
# - Serial correlation
# - Regression with time series data
# - Bayesian time series
| intro-tsa/Introduction to Time Series.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Iannbrentte/OOP-58002/blob/main/OOP_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ICvnuRpWfOkA"
# Inheritance
#
# + id="kfZEUDXme7nS"
class Person:
def __init__(self,firstname,surname):
self.firstname = firstname
self.surname = surname
def printname(self):
print(self.firstname,self.surname)
person = Person("Maam","Sayo")
person.printname()
class Student(Person):
pass
person = Student("Asley","Goce")
person.printname()
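# A subclass does not have to just `pass`; it can extend the parent's constructor with `super()`. A small sketch (the extra `graduationyear` field is only illustrative):

```python
class Person:
    def __init__(self, firstname, surname):
        self.firstname = firstname
        self.surname = surname

    def printname(self):
        print(self.firstname, self.surname)

class Student(Person):
    def __init__(self, firstname, surname, year):
        super().__init__(firstname, surname)  # reuse the parent initialisation
        self.graduationyear = year            # then add subclass-specific state

student = Student("Asley", "Goce", 2025)
student.printname()
print(student.graduationyear)  # 2025
```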
# + [markdown] id="360SH1jHipzJ"
# Polymorphism
# + id="FpSlaPWLkVOA"
class RegularPolygon:
    def __init__(self, side):
        self.side = side
class Square(RegularPolygon):
def area(self):
return self.side * self.side
class EquilateralTriangle(RegularPolygon):
def area(self):
return self.side * self.side * 0.433
x= Square(4)
y= EquilateralTriangle(3)
print(x.area())
print(y.area())
# + [markdown] id="HSEMhHhUmyNR"
# Application 1
#
# + id="-xNPp3cTm12i" outputId="f63e6125-e676-4547-851b-e6c7a8c1062a" colab={"base_uri": "https://localhost:8080/"}
class Student:
student_grade = '3'
student_name = '<NAME>'
def display():
print(f'Student id: {Student.student_grade}\nStudent Name: {Student.student_name}')
print("Grade of student 1:")
Student.display()
class Student:
student_grade = '5'
student_name = '<NAME>'
def display():
print(f'Student id: {Student.student_grade}\nStudent Name: {Student.student_name}')
print("Grade of student 2:")
Student.display()
class Student:
student_grade = '2'
student_name = '<NAME>'
def display():
print(f'Student id: {Student.student_grade}\nStudent Name: {Student.student_name}')
print("Grade of student 3:")
Student.display()
| OOP_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from driver import *
from MNIST_LARGE_cfg import *
from pynq import Overlay
import numpy as np
from pynq import Xlnk
import time
import cv2
import matplotlib.pyplot as plt
ol=Overlay("pool_stream.bit")
ol.download();
dma=ol.axi_dma_0
pool=ol.pool_0
conv=ol.Conv_0
xlnk=Xlnk()
image=xlnk.cma_array(shape=(1,28,28,K),cacheable=0,dtype=np.int16)
W_conv1=xlnk.cma_array(shape=(32,3,3,1,K),cacheable=0,dtype=np.int16)
h_conv1=xlnk.cma_array(shape=(4,28,28,K),cacheable=0,dtype=np.int16)
h_pool1=xlnk.cma_array(shape=(4,7,7,K),cacheable=0,dtype=np.int16)
W_fc1=xlnk.cma_array(shape=(256,7,7,4,K),cacheable=0,dtype=np.int16)
h_fc1=xlnk.cma_array(shape=(32,1,1,K),cacheable=0,dtype=np.int16)
W_fc2=xlnk.cma_array(shape=(10,1,1,32,K),cacheable=0,dtype=np.int16)
h_fc2=xlnk.cma_array(shape=(2,1,1,K),cacheable=0,dtype=np.int16)
Load_Weight_From_File(W_conv1,"record/W_conv1.bin")
Load_Weight_From_File(W_fc1,"record/W_fc1.bin")
Load_Weight_From_File(W_fc2,"record/W_fc2.bin")
# cap=cv2.VideoCapture(0)
# cap.set(cv2.CAP_PROP_FRAME_WIDTH,640)
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT,480)
# print("capture state: "+ str(cap.isOpened()))
# ret, frame=cap.read();ret, frame=cap.read();ret, frame=cap.read();ret, frame=cap.read();ret, frame=cap.read();
# np.shape(frame)
# frame_300x300=frame[90:390,170:470]
# frame_28x28=255-cv2.resize(frame_300x300,(28,28),interpolation=cv2.INTER_NEAREST)
# img_gray=cv2.cvtColor(frame_28x28,cv2.COLOR_RGB2GRAY)
# plt.imshow(cv2.merge([img_gray,img_gray,img_gray]))
# plt.show()
# #print((img_gray/255*pow(2,PTR_IMG)))
# img_gray=(img_gray/255*pow(2,PTR_IMG))
# print(img_gray.max())
test_pic=100
img_gray=np.zeros((28,28))
with open("record/t10k-images.idx3-ubyte",'rb') as fp:
dat=fp.read(16+28*28*test_pic)
for i in range(28):
for j in range(28):
dat=fp.read(1)
a=struct.unpack("B",dat)
img_gray[i][j]=a[0]
#print(a[0])
img_gray=img_gray.astype(np.uint8)
# for i in range(28):
# for j in range(28):
# print("%4d"%img_gray[i][j],end='')
# print('')
plt.imshow(cv2.merge([img_gray,img_gray,img_gray]))
plt.show()
for i in range(np.shape(img_gray)[0]):
for j in range(np.shape(img_gray)[1]):
image[0][i][j][0]=int((img_gray[i][j]/255)*(2**PTR_IMG));
start=time.time()
Run_Conv(conv,1,32,3,3,1,1,1,0,image,PTR_IMG,W_conv1,PTR_W_CONV1,h_conv1,PTR_H_CONV1)
#Run_Pool_Soft(32,4,4,h_conv1,h_pool1)
Run_Pool(pool,dma,32,4,4,h_conv1,h_pool1)
Run_Conv(conv,32,256,7,7,1,1,0,0,h_pool1,PTR_H_POOL1,W_fc1,PTR_W_FC1,h_fc1,PTR_H_FC1)
Run_Conv(conv,256,10,1,1,1,1,0,0,h_fc1,PTR_H_FC1,W_fc2,PTR_W_FC2,h_fc2,PTR_H_FC2)
end=time.time()
print("Hardware run time=%s s"%(end-start))
max=-32768
num=0
for i in range(10):
if(h_fc2[i//K][0][0][i%K]>max):
max=h_fc2[i//K][0][0][i%K]
num=i;
print("predict num is %d"%num);
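# The loop above is a hand-rolled argmax over the interleaved (i // K, i % K) layout of h_fc2. A numpy sketch of the same selection (K=8 here is only an illustrative value; use the K imported from MNIST_LARGE_cfg):

```python
import numpy as np

K = 8  # assumed channel-parallel factor from the config
h_fc2 = np.zeros((2, 1, 1, K), dtype=np.int16)
h_fc2[0, 0, 0, 3] = 1000  # pretend class 3 has the top score

# C-order flattening maps slot (i // K, 0, 0, i % K) back to index i,
# so the first 10 flattened entries are exactly the class scores
scores = h_fc2.reshape(-1)[:10]
print("predict num is %d" % int(np.argmax(scores)))  # predict num is 3
```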
| int16_version/Q16bit_accerator/mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Python Tricks
# <img src="https://www.hitechnectar.com/wp-content/uploads/2019/07/PyPy-vs-Cython-Difference-Between-The-Two-Explained.png">
#
# ## 1. Slices
s = slice(3,6)
list_ = [1, 3, 'a', 'b', 5, 11, 16]
text = 'DataScience'
tuple_ = (1,2,3,4,5,6,7)
print(list_[s])
print(text[s])
print(tuple_[s])
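# `slice` objects also take a step, mirroring the extended `s[start:stop:step]` syntax, so a named slice can replace cryptic index triples:

```python
every_other = slice(None, None, 2)    # same as s[::2]
reverse = slice(None, None, -1)       # same as s[::-1]

text = 'DataScience'
print(text[every_other])  # DtSine
print(text[reverse])      # ecneicSataD
```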
# ## 2.Sorting a list of lists
lst = [[2, 7], [7, 3], [3, 8], [8, 7], [9, 7], [4, 9]]
lst.sort(key = lambda inner:inner[1])
print(lst)
lst.sort(key = lambda inner:inner[0])
print(lst)
#
# ## 3.Merging two Dictionaries
# +
# Python code to merge dict using a single
# expression
def Merge(dict1, dict2):
res = {**dict1, **dict2}
return res
# Driver code
dict1 = {'a': 10, 'b': 8}
dict2 = {'d': 6, 'c': 4}
dict3 = Merge(dict1, dict2)
print(dict3)
# -
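# On Python 3.9+ the same merge can be written with the PEP 584 union operator; as with the `**` expression above, keys from the right-hand operand win on duplicates:

```python
dict1 = {'a': 10, 'b': 8}
dict2 = {'d': 6, 'b': 4}
print(dict1 | dict2)  # {'a': 10, 'b': 4, 'd': 6} -- 'b' taken from dict2
dict1 |= dict2        # in-place variant
```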
# ## 4.Underline utilization in integer
num1 = 100_000_000
num2 = 12_003_420_005
total = num1 + num2
print(f'{total: ,}')
# ## 5. Opening Websites in the Browser: (webbrowser)
import webbrowser
webbrowser.open('https://www.hejazizo.com/')
# ## 6. Shallow vs Deep Copy: (deepcopy)
from copy import deepcopy
list_1 = ['C++','Python','Rust',['HTML','CSSS']]
list_2 = list_1.copy()
list_3 = deepcopy(list_1)
list_2[-1][-1] = 'CSS'
list_1
list_3
# ## 7. Truthiness of Empty Values: (not)
data = None
print(not data)
# ## 8. Print Monthly Calendar
import calendar
print(dir(calendar))
print(calendar.month(2022,1))
print(calendar.calendar(2022))
# ## 9. Drawing with Turtle Graphics: (turtle)
import turtle
t = turtle.Turtle()
s = turtle.Screen()
s.bgcolor('#262626')
t.pencolor('#7C909C')
t.speed(100)
col = ('#ED7864','#6E544F','#592F2F','#6E382E')
for n in range(5):
for x in range(8):
t.speed(x+10)
for i in range(2):
t.pensize(2)
t.circle(80+n*20,90)
t.lt(90)
t.lt(45)
t.pencolor(col[n%4])
s.exitonclick()
# ## 10.Ellipsis
def func():
pass
def func():
...
...
# ## 11. Data Classes: (dataclass)
# +
from dataclasses import dataclass
@dataclass
class Card:
rank: str
suit: str
card = Card("Q", "hearts")
print(card == card)
# True
print(card.rank)
# 'Q'
print(card)
# Card(rank='Q', suit='hearts')
# -
# ## 12.In place variable swapping
a = 1
b = 2
a, b = b, a
print (a)
# 2
print (b)
# 1
# ## 13 Pretty Print: (Pprint)
# +
import pprint
data = {'place_id': 259174015, 'licence': 'Data © OpenStreetMap contributors, ODbL 1.0. https://osm.org/copyright',
'osm_type': 'way', 'osm_id': 12316939, 'boundingbox': ['42.983431438214', '42.983531438214', '-78.706880444495',
'-78.706780444495'], 'lat': '42.983481438214326', 'lon': '-78.70683044449504', 'display_name': '''230, Fifth Avenue,
Sheridan Meadows, Amherst Town, Erie County, New York, 14221, United States of America''',
'class': 'place', 'type': 'house', 'importance': 0.511}
pprint.pprint(data, indent=3)
# -
# ## 14.Python Working Directory: (os.getcwd())
import os
dirpath = os.getcwd()
dirpath
# ## 15. Condition Inside the print Function
#
# Have you ever thought you could write the entire condition inside the print function and print the output based on the conditions? Here is how you can achieve this:
print("odd" if int(input("Enter a num: "))%2 else "even")
# ## 16.Conditional List — All
# When you have to add conditions dynamically, this trick works like a charm. This trick can even become a stress remover when the number of conditions increases and you have to write them manually.
# The all method can be used to match all the conditions or work in place of the and condition.
score=325
wickets=7
catch=4
list_cond=[score>320,
wickets<8,
catch>3]
if(all(list_cond)):
print("Win")
else:
print("Lose")
# ## 17. Conditional List — Any
# We can use the same conditional list with the any method in Python to check whether anyone’s condition satisfies it.
score=200
wickets=7
catch=4
list_cond=[score>320,
wickets<8,
catch>3]
if(any(list_cond)):
print("Win")
else:
print("Lose")
# ## 18.Most Repeated Object
# If you want to check what object in your list repeats the most, we can use the following line of code. This approach uses the max function with the key attribute to find the most repeated element:
li=[1,5,8,6,5,9,6,9,5,6,9,6,5,4,"a","a","b","b","a","a","a"]
print(max(set(li), key=li.count))
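# An alternative sketch using `collections.Counter`, which tallies the list in a single pass instead of calling `count` once per distinct element:

```python
from collections import Counter

li = [1,5,8,6,5,9,6,9,5,6,9,6,5,4,"a","a","b","b","a","a","a"]
(value, times), = Counter(li).most_common(1)
print(value, times)  # a 5
```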
# ## 19.Partial assignments: (List Unpacking)
# Do you want to assign one or more elements of a list specifically and assign all the remains to something else? Easy with Python.
#
# Check out this syntax that makes use of * unpacking notation in Python:
lst = [10, 20, 30, 40]
a, *b = lst
print(a)
print(b)
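# The starred target can also sit in the middle or at the front, not just at the end:

```python
lst = [10, 20, 30, 40]
first, *middle, last = lst
print(first, middle, last)  # 10 [20, 30] 40

*init, tail = lst
print(init, tail)           # [10, 20, 30] 40
```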
# ## 20.Ordered Dictionaries: (from collections)
# Historically, Python's dictionaries didn't guarantee any key order. You could think of key-value pairs as mixed items in a bag, which makes dictionaries very efficient to work with. Since Python 3.7 plain dicts preserve insertion order, but sometimes you need explicitly order-aware behaviour.
#
# No worries, Python's collections library has a class named OrderedDict that does just that.
# +
import collections
d = collections.OrderedDict()
d[1] = 100
d[2] = 200
print(d)
# -
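# Two details worth knowing (small sketch): unlike plain dicts, OrderedDict equality is order-sensitive, and `move_to_end` lets you reorder keys in place:

```python
import collections

d1 = collections.OrderedDict([('a', 1), ('b', 2)])
d2 = collections.OrderedDict([('b', 2), ('a', 1)])
print(d1 == d2)              # False: order matters for OrderedDict
print(dict(d1) == dict(d2))  # True: plain dicts compare content only

d1.move_to_end('a')          # push 'a' to the last position
print(list(d1))              # ['b', 'a']
```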
# ## 21.Permutations: (from itertools)
# Whether it’s betting with your friends, calculating a sophisticated mathematical equation, reading <NAME>’s Improbable or evaluating your chances before a Vegas trip; you never know when you might need a couple of permutation calculations.
#
# Thanks to Python you don’t have to go through the tedious arithmetics of permutations and you can get it done with a simple piece of code.
#
# Check out this example making use of the permutations method from the itertools library:
#
# The first print shows the total number of permutations; the second print demonstrates all the possible orderings.
# +
import itertools
lst = [1,2,3,4,5]
print(len(list((itertools.permutations(lst)))))
for i in itertools.permutations(lst):
print (i, end="\n")
# -
# Here is a little twist if you’d like to see all the different numbers that are possible to generate with those 5 integers:
# +
import itertools
lst = [1,2,3,4,5]
print(len(list((itertools.permutations(lst)))))
for i in itertools.permutations(lst):
print (''.join(str(j) for j in i), end="|")
# -
# ## 22.Make it immutable: (frozenset)
# The frozenset() function turns an iterable into an immutable set.
#
# On some occasions you might have started your program with a list instead of tuples, having concluded that a mutable data structure was more suitable for the project. But things change, projects evolve and ideas are known to change route. Now you decide that you need an immutable structure, but it seems too late to undertake the tedious work of converting your lists? No worries. frozenset() will make it a breeze.
# +
lst = ["June", "July", "August"]
months = frozenset(lst)
print(months)
# months.add("May")  # AttributeError: 'frozenset' object has no attribute 'add'
# -
# ## 23.Swapping Dictionary Key & Values: (dict comprehension method)
# Dictionary Comprehension is a great way to achieve some dictionary operations, here is a great example:
# +
a = {1:11, 2:22, 3:33}
b = (a.items())
c = {i:j for j,i in b}
print(c)
# -
# ## 24. Debugging
# Debugging is also something which once mastered can greatly enhance your bug hunting skills. Most newcomers neglect the importance of the Python debugger (pdb). In this section I am going to tell you only a few important commands. You can learn more about it from the official documentation.
#
# Commands:
#
# - c: continue execution
# - w: show the context of the current line being executed
# - a: print the argument list of the current function
# - s: execute the current line and stop at the first possible occasion
# - n: continue execution until the next line in the current function is reached or it returns
#
# +
import pdb
def math(number):
pdb.set_trace()
    return number / 0  # deliberate ZeroDivisionError so there is something to debug
# -
math(5)
| PyTricks/Python Tricks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## CNN model
import tensorflow as tf
import matplotlib.pyplot as plt
# %matplotlib inline
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# ### Model architecture reference: examples from the "Python machine learning and deep learning" hands-on course materials
input_dim = 28*28
output_dim = 10
x = tf.placeholder("float",[None, input_dim])
y = tf.placeholder("float",[None, output_dim])
x_image = tf.reshape(x, [-1, 28, 28, 1])
# +
with tf.name_scope('Convolution_layer1'):
W_conv1 = tf.Variable(tf.random_normal([5, 5, 1, 16]), name='Conv1_weight')
b_conv1 = tf.Variable(tf.random_normal([16]), name='Conv1_bias')
conv1 = tf.nn.relu(tf.add(tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME'), b_conv1))
tf.summary.histogram("conv1", conv1)
with tf.name_scope('Max-pooling_layer1'):
max_pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
tf.summary.histogram("max_pool1", max_pool1)
# +
with tf.name_scope('Convolution_layer2'):
W_conv2 = tf.Variable(tf.random_normal([5, 5, 16, 36]), name='Conv2_weight')
b_conv2 = tf.Variable(tf.random_normal([36]), name='Conv2_bias')
conv2 = tf.nn.relu(tf.add(tf.nn.conv2d(max_pool1, W_conv2, strides=[1, 1, 1, 1], padding='SAME'), b_conv2))
tf.summary.histogram("conv2", conv2)
with tf.name_scope('Max-pooling_layer2'):
max_pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
tf.summary.histogram("max_pool2", max_pool2)
# -
with tf.name_scope('Flatten_layer'):
flatten = tf.reshape(max_pool2, [-1, 7 * 7 * 36])
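# As a sanity check on the `7 * 7 * 36` flatten size: each of the two 2×2 max-pools with stride 2 halves the 28×28 spatial dimensions, and the second convolution layer has 36 filters. The arithmetic can be sketched without TensorFlow:

```python
side = 28
for _ in range(2):        # two max-pool layers, each with stride 2
    side //= 2            # 28 -> 14 -> 7
channels = 36             # filters in the second conv layer
flat_dim = side * side * channels
print(flat_dim)           # 1764, i.e. 7 * 7 * 36
```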
learning_rate = 0.001
training_epochs = 100
batch_size = 200
MLP_input_dim = 7*7*36
hidden1_dim = 256
hidden2_dim = 256
with tf.name_scope('InputLayer_to_HiddenLayer1'):
# input layer to hidden layer 1
w1 = tf.Variable(tf.random_normal([MLP_input_dim, hidden1_dim]),name='weight1')
b1 = tf.Variable(tf.random_normal([hidden1_dim]),name='bias1')
a1 = tf.nn.relu(tf.add(tf.matmul(flatten,w1),b1))
# add summary
tf.summary.histogram("w1", w1)
tf.summary.histogram("b1", b1)
tf.summary.histogram("a1", a1)
with tf.name_scope('HiddenLayer1_to_HiddenLayer2'):
# input layer to hidden layer 2
w2 = tf.Variable(tf.random_normal([hidden1_dim, hidden2_dim]),name='weight2')
b2 = tf.Variable(tf.random_normal([hidden2_dim]),name='bias2')
a2 = tf.nn.relu(tf.add(tf.matmul(a1,w2),b2))
# add summary
tf.summary.histogram("w2", w2)
tf.summary.histogram("b2", b2)
tf.summary.histogram("a2", a2)
with tf.name_scope('HiddenLayer2_to_OutputLayer'):
# hidden layer 2 to output layer
w3 = tf.Variable(tf.random_normal([hidden2_dim, output_dim]),name='weight3')
b3 = tf.Variable(tf.random_normal([output_dim]),name='bias3')
y_pred = tf.add(tf.matmul(a2,w3),b3)
# add summary
tf.summary.histogram("w3", w3)
tf.summary.histogram("b3", b3)
tf.summary.histogram("y_pred", y_pred)
with tf.name_scope('Loss'):
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_pred))
tf.summary.scalar("loss", loss)
with tf.name_scope('Accuracy'):
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
tf.summary.scalar("accuracy", accuracy)
with tf.name_scope('Optimizer'):
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
merged_summary = tf.summary.merge_all()
# +
losses = []
val_losses = []
saver = tf.train.Saver()
with tf.Session() as sess:
    # initialize the Variables
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter("log_cnn/", graph=sess.graph)
global_step = 0
for epoch in range(training_epochs):
num_batch = int(mnist.train.num_examples/batch_size)
for i in range(num_batch):
batch_x_train, batch_y_train = mnist.train.next_batch(batch_size)
batch_x_validation, batch_y_validation = mnist.validation.next_batch(batch_size)
# training by optimizer
sess.run(optimizer, feed_dict={x: batch_x_train, y: batch_y_train})
# get training/validation loss & acc
batch_loss = sess.run(loss, feed_dict={x: batch_x_train, y: batch_y_train})
batch_acc = sess.run(accuracy, feed_dict={x: batch_x_train, y: batch_y_train})
batch_val_loss = sess.run(loss, feed_dict={x: batch_x_validation, y: batch_y_validation})
batch_val_acc = sess.run(accuracy, feed_dict={x: batch_x_validation, y: batch_y_validation})
            # record each batch's summary and add it to the writer
global_step += 1
result = sess.run(merged_summary, feed_dict={x: batch_x_train, y: batch_y_train})
writer.add_summary(result,global_step)
losses.append(batch_loss)
val_losses.append(batch_val_loss)
print("Epoch:", '%d' % (epoch+1), ", loss=", batch_loss, ", acc=", batch_acc,
", val_loss=", batch_val_loss, ", val_acc=", batch_val_acc)
# Test Dataset
print ("Test Accuracy:", sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
    # save the current model
save_path = saver.save(sess,"save_model/cnn_model.ckpt")
# -
plt.ylabel('loss')
plt.xlabel('epochs')
xtick = [i for i in range(1,len(losses)+1)]
plt.xticks(xtick)
plt.plot(xtick, losses, label='training_losses')
plt.plot(xtick, val_losses, label='validation_losses')
plt.legend()
plt.show()
| CNN/CNN_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.7 64-bit
# language: python
# name: python3
# ---
# +
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
# configure the GPU
physical_devices = tf.config.list_physical_devices("GPU")
tf.config.experimental.set_memory_growth(physical_devices[0], True)
# -
from PIL import Image
lr_dim = 64
hr_dim = 256
epochs = 100
scale_factor = hr_dim // lr_dim
batch_size = 1
def load_image_lr(path):
im = Image.open(path)
im_dims = im.size
# load the image in lr_dim, lr_dim
im = im.resize((lr_dim, lr_dim))
im = np.array(im).astype(np.float32)
im = im[:, :, :3]
return np.expand_dims((im / 255), axis=0)
def downscale(x, factor=2):
return tf.image.resize(x, (x.shape[1] // factor, x.shape[2] // factor), method="area")
model = keras.models.load_model(f"./production_models/SRGAN")
file_path = "./images/SRTEST_section.png"
# file_path = "./images/burger.jpg"
# file_path = "./images/number_plate_lr.png"
# file_path = "./images/number_plate.png"
lr_image = load_image_lr(file_path)
# resize the plt plot to 16, 16
plt.rcParams['figure.figsize'] = (16, 16)
# +
import cv2
def traditional_upscale_numpy(img, scale_factor):
_output = img
_output = cv2.resize(_output, (0, 0), fx=scale_factor, fy=scale_factor, interpolation=cv2.INTER_CUBIC)
return _output
def traditional_upscale(img_path, method=cv2.INTER_NEAREST):
img = cv2.imread(img_path)
# BGR to RGB
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (lr_dim, lr_dim))
img = cv2.resize(img, (hr_dim, hr_dim), interpolation=method)
# return as numpy
return img/255
# -
def upscale(img):
_output = model(img)[0]
# clip values between 0 and 1
_output = np.clip(_output, 0, 1)
return _output
def upscale_file(filepath):
_img = load_image_lr(filepath)
_output = upscale(_img)
return _img, _output
# +
# enlarge the image to the dim_ratio
def unpack(img, dim_ratio):
img = img * 255
img = img.astype(np.uint8)
img = Image.fromarray(img)
img = img.resize((int(img.size[0] * dim_ratio), int(img.size[1])))
return img
# enlarges the image in the given axis to the dim_ratio
def unpack_in_axis(img, dim_ratio, axis):
img = img * 255
img = img.astype(np.uint8)
img = Image.fromarray(img)
if axis == 1:
img = img.resize((int(img.size[0] * dim_ratio), int(img.size[1])))
if axis == 0:
img = img.resize((int(img.size[0]), int(img.size[1] * dim_ratio)))
return np.array(img, dtype=np.float32)/255
# squeezes the image such that both dimensions are divisible by 64
def pack(img):
pack_axis = np.argmin(img.shape[:-1])
lower_dim = img.shape[pack_axis]
dim_ratio = img.shape[0] / img.shape[1]
# resize image to the lower dimension
img = img * 255
img = img.astype(np.uint8)
img = Image.fromarray(img)
img = img.resize((int(lower_dim), int(lower_dim)))
img = np.array(img)
img = img.astype(np.float32)
return img/255, dim_ratio, pack_axis
# -
# file_path = "./images/elon.png"
file_path = "./images/real_small.jpeg"
# file_path = "./images/number_plate_lr.png"
# file_path = "./images/number_plate.png"
# +
original, ai_upscaled = upscale_file(file_path)
plt.subplot(1, 2, 1)
plt.imshow(pack(original[0])[0])
plt.subplot(1, 2, 2)
plt.imshow(ai_upscaled)
plt.show()
# -
plt.subplot(1, 2, 1)
plt.imshow(pack(original[0])[0])
plt.subplot(1, 2, 2)
plt.imshow(traditional_upscale(file_path, method=cv2.INTER_CUBIC
))
plt.show()
| SR_TEST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (mit_model_code)
# language: python
# name: pycharm-43a0cb91
# ---
# # Introduction
#
# This notebook contains the benchmark results of 8 selected features generated by the handbuilt featurizer in the [compound_featurizer.py](../data/compound_featurizer.py) script. The purpose is to see how closely the feature values produced by the handbuilt featurizer match up with those from [Table 2 & 3](../data/torrance_tables/torrance_tabulated.xlsx) of [Torrance et al](https://www.sciencedirect.com/science/article/abs/pii/0921453491905346). The features selected are as follows.
#
# |feature|explanation|
# |:------|:----------|
# |d_mo |the average distance (Å) between the metal sites and oxygen sites|
# |d_mm |the average distance (Å) among the metal sites|
# |iv |the $v^{\text{th}}$ ionization energy ($eV$) of the metal species ($v$ is the oxidation state)|
# |iv_p1 |the $(v+1)^{\text{th}}$ ionization energy ($eV$) of the metal species|
# |v_m |the Madelung site potential ($V$) for the metal species|
# |v_o |the Madelung site potential ($V$) for oxygen|
# |hubbard|the estimated Hubbard energy ($eV$) of the structure|
# |charge_transfer|the estimated charge transfer ($eV$) of the structure|
# # Import packages and functions
import sys
# force the notebook to look for files in the upper level directory
sys.path.insert(1, '../')
import pandas as pd
import seaborn as sns
from data.compound_featurizer import handbuilt_featurizer
from data.featurizer_benchmark import initialize_benchmark_df, process_benchmark_df, process_torrance_df, evaluate_performance
# # Set up path constant
# the path to the digitized table 2 and 3 of the Torrance paper
TORRANCE_PATH = "../data/torrance_tables/torrance_tabulated.xlsx"
# # Initialize the benchmark dataframe
#
# This step reads in all the structures in the [benchmark_structures](../data/torrance_tables/benchmark_structures.zip) folder. The original structures were queried from the Materials Project using the [API](https://github.com/materialsproject/mapidoc) provided by Pymatgen. The query matching criteria are the pretty formula of the compound and the spacegroup number. The original Torrance table 2 & 3 contain 90 unique compounds, of which we were able to find 75 matches using the 2 matching criteria mentioned earlier.
#
# The query script can be found [here](../data/torrance_tables/materials_project_query.py). Be aware that you do need to supply your **own** Materials Project API key to run this script.
df = initialize_benchmark_df()
df
# # Featurize the structure with the handbuilt featurizer
df_with_features = handbuilt_featurizer(df)
df_with_features
# We see that the raw output from the handbuilt featurizer contains far more features than needed. If we take a closer look, we can also find rows with missing values.
df_with_features.isna().sum()
# As a result, we need to select only the 8 features mentioned above and remove rows with missing values.
#
# # Process the benchmark dataframe
#
# We can see that after processing, we are left with 71 compounds.
df_benchmark = process_benchmark_df(df_with_features)
df_benchmark
# # Read in the digitized Torrance Table 2 & 3
df_torrance = pd.read_excel(TORRANCE_PATH, sheet_name=["table_2", "table_3"])
df_torrance = pd.concat(df_torrance, ignore_index=True)
df_torrance.replace({"formula": {"b-Ga2O3": "Ga2O3", "b-Mn2O3": "Mn2O3", "b-MnO2": "MnO2", "a-Fe2O3": "Fe2O3"}},
inplace=True)
df_torrance
# Again, just like before, we will need to only select the 8 relevant features and compounds present in the benchmark dataframe from the Torrance dataframe.
#
# # Process the Torrance dataframe
df_torrance = process_torrance_df(df_torrance, df_benchmark)
df_torrance
# # Prepare the two dataframes for benchmarking
#
# ## Drop the `formula` column and only keep numeric ones
df_benchmark_numeric = df_benchmark.drop(columns="formula")
df_torrance_numeric = df_torrance.drop(columns="formula")
# ## Change the `v_m` and `v_o` values
#
# If we examine the Torrance dataframe, we will see that it doesn't have a value for `v_o` on row 40.
df_torrance.iloc[40]
# According to the footnote from the Torrance paper, the `v_m` here is actually not the metal site Madelung potential but the difference between the oxygen and metal site potentials. Given this information, for row 40 in both dataframes, we choose to set the `v_o` value of _df_torrance_numeric_ equal to that of _df_benchmark_numeric_ and set the `v_m` value of _df_benchmark_numeric_ equal to the difference of `v_o` and `v_m` from _df_benchmark_numeric_.
# there is a nan value in the v_o column for df_torrance_numeric
# we'll fill it with value of the corresponding value from df_benchmark_numeric
df_torrance_numeric.at[40, "v_o"] = df_benchmark_numeric.at[40, "v_o"]
# we'll also change the value of v_m for row 40 since the torrance table
# has this value already as a difference between v_o and v_m, not just v_m
df_benchmark_numeric.at[40, "v_m"] = df_benchmark_numeric.at[40, "v_o"] - df_benchmark_numeric.at[40, "v_m"]
# # Evaluate featurizer performance using [RMSE](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html), [explained variance score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.explained_variance_score.html) & [$R^2$](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html) values
#
# To evaluate performance with these 3 metrics, we can treat the values from Torrance tables as the true values and those from the featurizer as the predicted values.
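# As a reminder of what these metrics compute, here is a minimal pure-Python sketch on hypothetical toy values (not the actual benchmark data): RMSE is the root of the mean squared difference, and $R^2$ is one minus the ratio of residual to total sum of squares.

```python
import math

true = [1.0, 2.0, 3.0, 4.0]      # stand-in for the Torrance ("true") values
pred = [1.1, 1.9, 3.2, 3.9]      # stand-in for the featurizer ("predicted") values

n = len(true)
rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / n)

mean_true = sum(true) / n
ss_res = sum((t - p) ** 2 for t, p in zip(true, pred))
ss_tot = sum((t - mean_true) ** 2 for t in true)
r2 = 1 - ss_res / ss_tot

print(round(rmse, 3), round(r2, 3))  # 0.132 0.986
```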
eval_df = evaluate_performance(df_torrance_numeric, df_benchmark_numeric)
eval_df
# From the table above, we can see that the handbuilt featurizer does a fairly good job of matching the values with those from the Torrance paper, with all the $R^2$ values greater than 0.71 and all the explained variance scores greater than 0.78.
#
# ## Examine the absolute value of the differences for all columns between the two dataframes
# define a color gradient scheme to visualize the magnitude difference within each column
# the darker the color is, the greater the magnitude
cm = sns.light_palette("green", as_cmap=True)
df_all_diff = abs(df_benchmark_numeric - df_torrance_numeric)
df_all_diff["formula"] = df_benchmark["formula"]
pd.set_option('display.max_rows', df_all_diff.shape[0]+1)
df_all_diff.style.background_gradient(cmap=cm)
# **Note**: As you browse through this table, you might notice that the rows with the biggest differences in the `charge_transfer` feature also seem to have the biggest differences in the `iv` and `iv_p1` features. This is because the calculation of the charge transfer energy depends on the ionization energies: if the ionization energies differ, that difference propagates into the charge transfer energies. The difference in ionization energies arises from the fact that the ionization energies used by the featurizer are scraped from a [NIST website](https://physics.nist.gov/PhysRefData/ASD/ionEnergy.html) (the scraper can be found [here](../data/nist_web_scraper.py)) whereas those from the Torrance paper were taken from various literature sources (e.g. CRC Handbook of Chemistry and Physics, 70th ed.), and the values don't always match up closely.
#
# ## See the average percent error for each column
#
# To better quantify the differences, we can examine, for each column/feature, the percent error as calculated below ($B_i$ is the value from _df_benchmark_numeric_, $T_i$ is the value from _df_torrance_numeric_ and $N=71$)
#
# $$
# \text{avg percent error} = \frac{1}{N}\sum_{i=0}^{N-1}\frac{|B_i-T_i|}{|T_i|}\times100\%
# $$
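# As a plain-Python illustration of the same formula on hypothetical values (the actual computation uses the pandas dataframes):

```python
benchmark = [10.0, 5.0, 2.0]   # hypothetical B_i values
torrance = [9.0, 5.5, 2.0]     # hypothetical T_i values

# |B_i - T_i| / |T_i| * 100 for each element, then averaged
errors = [abs(b - t) / abs(t) * 100 for b, t in zip(benchmark, torrance)]
avg_percent_error = sum(errors) / len(errors)
print(round(avg_percent_error, 2))  # 6.73
```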
(df_all_diff.drop(columns="formula") / abs(df_torrance_numeric)).mean() * 100
| notebooks/handbuilt_featurizer_benchmark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import everything and define a test runner function
from importlib import reload
from io import BytesIO
import helper, op, script, tx
from ecc import (
PrivateKey,
S256Point,
Signature,
)
from helper import (
decode_base58,
encode_base58_checksum,
hash160,
hash256,
int_to_little_endian,
run,
SIGHASH_ALL,
)
from script import (
p2pkh_script,
Script,
)
from tx import (
Tx,
TxIn,
TxOut,
)
# -
# ### Exercise 1
#
# #### 1.1. Make [this test](/edit/session5/tx.py) pass
# ```
# tx.py:TxTest::test_verify_p2pkh
# ```
# +
# Exercise 1.1
reload(tx)
run(tx.TxTest('test_verify_p2pkh'))
# +
# Transaction Construction Example
# Step 1
tx_ins = []
prev_tx = bytes.fromhex('8be2f69037de71e3bc856a6627ed3e222a7a2d0ce81daeeb54a3aea8db274149')
prev_index = 4
tx_ins.append(TxIn(prev_tx, prev_index))
# Step 2
tx_outs = []
h160 = decode_base58('mzx5YhAH9kNHtcN481u6WkjeHjYtVeKVh2')
tx_outs.append(TxOut(
amount=int(0.38*100000000),
script_pubkey=p2pkh_script(h160),
))
h160 = decode_base58('mnrVtF8DWjMu839VW3rBfgYaAfKk8983Xf')
tx_outs.append(TxOut(
amount=int(0.1*100000000),
script_pubkey=p2pkh_script(h160),
))
tx_obj = Tx(1, tx_ins, tx_outs, 0, testnet=True)
# Step 3
z = tx_obj.sig_hash(0)
pk = PrivateKey(secret=8675309)
der = pk.sign(z).der()
sig = der + SIGHASH_ALL.to_bytes(1, 'big')
sec = pk.point.sec()
tx_obj.tx_ins[0].script_sig = Script([sig, sec])
print(tx_obj.serialize().hex())
# -
# ### Exercise 2
#
# #### 2.1. Make [this test](/edit/session5/tx.py) pass
# ```
# tx.py:TxTest:test_sign_input
# ```
# +
# Exercise 2.1
reload(tx)
run(tx.TxTest('test_sign_input'))
# -
# ### Exercise 3
#
# #### 3.1. Send 0.04 TBTC to this address
#
# `mwJn1YPMq7y5F8J3LkC5Hxg9PHyZ5K4cFv`
#
# #### Go here to send your transaction: https://testnet.blockchain.info/pushtx
#
# #### Bonus. Get some testnet coins and spend both outputs (one from your change address and one from the testnet faucet) to
#
# `mwJn1YPMq7y5F8J3LkC5Hxg9PHyZ5K4cFv`
#
# #### You can get some free testnet coins at: https://testnet.coinfaucet.eu/en/
# +
# Exercise 3.1
from tx import Tx
prev_tx = bytes.fromhex('<fill this in>')
prev_index = -1 # FILL THIS IN
target_address = 'mwJn1YPMq7y5F8J3LkC5Hxg9PHyZ5K4cFv'
target_amount = 0.04
change_address = '<fill this in>'
fee = 50000
secret = -1 # FILL THIS IN
private_key = PrivateKey(secret=secret)
# initialize inputs
# create a new tx input with prev_tx, prev_index
# initialize outputs
# decode the hash160 from the target address
# convert hash160 to p2pkh script
# convert target amount to satoshis (multiply by 100 million)
# create a new tx output for target with amount and script_pubkey
# decode the hash160 from the change address
# convert hash160 to p2pkh script
# get the value for the transaction input (remember testnet=True)
# calculate change_satoshis based on previous amount, target_satoshis & fee
# create a new tx output for target with amount and script_pubkey
# create the transaction (name it tx_obj to not conflict)
# now sign the 0th input with the private_key using sign_input
# SANITY CHECK: change address corresponds to private key
if private_key.point.address(testnet=True) != change_address:
raise RuntimeError('Private Key does not correspond to Change Address, check priv_key and change_address')
# SANITY CHECK: output's script_pubkey is the same one as your address
if tx_ins[0].script_pubkey(testnet=True).instructions[2] != decode_base58(change_address):
raise RuntimeError('Output is not something you can spend with this private key. Check that the prev_tx and prev_index are correct')
# SANITY CHECK: fee is reasonable
if tx_obj.fee() > 0.05*100000000 or tx_obj.fee() <= 0:
raise RuntimeError('Check that the change amount is reasonable. Fee is {}'.format(tx_obj.fee()))
# serialize and hex()
# +
# Bonus
prev_tx_1 = bytes.fromhex('<fill this in>')
prev_index_1 = -1 # FILL THIS IN
prev_tx_2 = bytes.fromhex('<fill this in>')
prev_index_2 = -1 # FILL THIS IN
target_address = 'mwJn1YPMq7y5F8J3LkC5Hxg9PHyZ5K4cFv'
fee = 50000
secret = -1 # FILL THIS IN
private_key = PrivateKey(secret=secret)
# initialize inputs
# create the first tx input with prev_tx_1, prev_index_1
# create the second tx input with prev_tx_2, prev_index_2
# initialize outputs
# decode the hash160 from the target address
# convert hash160 to p2pkh script
# calculate target amount by adding the input values and subtracting the fee
# create a single tx output for target with amount and script_pubkey
# create the transaction
# sign both inputs with the private key using sign_input
# SANITY CHECK: output's script_pubkey is the same one as your address
if tx_ins[0].script_pubkey(testnet=True).instructions[2] != decode_base58(private_key.point.address(testnet=True)):
raise RuntimeError('Output is not something you can spend with this private key. Check that the prev_tx and prev_index are correct')
# SANITY CHECK: fee is reasonable
if tx_obj.fee() > 0.05*100000000 or tx_obj.fee() <= 0:
raise RuntimeError('Check that the change amount is reasonable. Fee is {}'.format(tx_obj.fee()))
# serialize and hex()
# -
# op_checkmultisig
def op_checkmultisig(stack, z):
if len(stack) < 1:
return False
n = decode_num(stack.pop())
if len(stack) < n + 1:
return False
sec_pubkeys = []
for _ in range(n):
sec_pubkeys.append(stack.pop())
m = decode_num(stack.pop())
if len(stack) < m + 1:
return False
der_signatures = []
for _ in range(m):
# signature is assumed to be using SIGHASH_ALL
der_signatures.append(stack.pop()[:-1])
# OP_CHECKMULTISIG bug
stack.pop()
try:
raise NotImplementedError
except (ValueError, SyntaxError):
return False
return True
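# The extra `stack.pop()` above works around Bitcoin's well-known OP_CHECKMULTISIG off-by-one bug: the opcode consumes one more stack element than it needs, so a dummy element (conventionally OP_0) must be supplied. A toy illustration of the stack consumption for a 2-of-2 check (the byte values here are placeholders, not real signatures or pubkeys):

```python
# toy stack: [dummy, sig1, sig2, m, pub1, pub2, n], top of stack on the right
stack = [b'', b'sig1', b'sig2', b'\x02', b'pub1', b'pub2', b'\x02']

n = stack.pop()                            # number of pubkeys
pubkeys = [stack.pop() for _ in range(2)]  # pub2, pub1
m = stack.pop()                            # number of signatures
sigs = [stack.pop() for _ in range(2)]     # sig2, sig1
dummy = stack.pop()                        # the off-by-one element consumed by the bug
print(len(stack))                          # 0 -- everything consumed, including the dummy
```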
# ### Exercise 4
#
# #### 4.1. Make [this test](/edit/session5/op.py) pass
# ```
# op.py:OpTest:test_op_checkmultisig
# ```
# +
# Exercise 4.1
reload(op)
run(op.OpTest('test_op_checkmultisig'))
# -
# ### Exercise 5
#
# #### 5.1. Find the hash160 of the RedeemScript
# ```
# 5221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae
# ```
# +
# Exercise 5.1
hex_redeem_script = '5221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae'
# bytes.fromhex script
# hash160 result
# hex() to display
# -
# P2SH address construction example
print(encode_base58_checksum(b'\x05'+bytes.fromhex('74d691da1574e6b3c192ecfb52cc8984ee7b6c56')))
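# Under the hood, `encode_base58_checksum` follows the standard Base58Check recipe: append the first 4 bytes of a double SHA-256 of the payload, then Base58-encode the result. A self-contained sketch of that recipe (independent of this session's `helper` module, shown only to illustrate the steps):

```python
import hashlib

BASE58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58check(payload: bytes) -> str:
    # checksum = first 4 bytes of double SHA-256 of the payload
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    num = int.from_bytes(data, 'big')
    result = ''
    while num > 0:
        num, mod = divmod(num, 58)
        result = BASE58_ALPHABET[mod] + result
    # leading zero bytes are encoded as leading '1' characters
    prefix = '1' * (len(data) - len(data.lstrip(b'\x00')))
    return prefix + result

# same payload as the example above: 0x05 version byte + hash160
addr = base58check(b'\x05' + bytes.fromhex('74d691da1574e6b3c192ecfb52cc8984ee7b6c56'))
print(addr)
```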
# ### Exercise 6
#
# #### 6.1. Make [these tests](/edit/session5/helper.py) pass
# ```
# helper.py:HelperTest:test_p2pkh_address
# helper.py:HelperTest:test_p2sh_address
# ```
#
# #### 6.2. Make [this test](/edit/session5/script.py) pass
# ```
# script.py:ScriptTest:test_address
# ```
# +
# Exercise 6.1
reload(helper)
run(helper.HelperTest('test_p2pkh_address'))
run(helper.HelperTest('test_p2sh_address'))
# +
# Exercise 6.2
reload(script)
run(script.ScriptTest('test_address'))
# +
# z for p2sh example
h256 = hash256(bytes.fromhex('0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c56870000000001000000'))
z = int.from_bytes(h256, 'big')
print(hex(z))
# -
# p2sh verification example
h256 = hash256(bytes.fromhex('0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c56870000000001000000'))
z = int.from_bytes(h256, 'big')
point = S256Point.parse(bytes.fromhex('022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb70'))
sig = Signature.parse(bytes.fromhex('3045022100dc92655fe37036f47756db8102e0d7d5e28b3beb83a8fef4f5dc0559bddfb94e02205a36d4e4e6c7fcd16658c50783e00c341609977aed3ad00937bf4ee942a89937'))
print(point.verify(z, sig))
# ### Exercise 7
#
# #### 7.1. Validate the second signature of the first input
#
# ```
# 0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000db00483045022100dc92655fe37036f47756db8102e0d7d5e28b3beb83a8fef4f5dc0559bddfb94e02205a36d4e4e6c7fcd16658c50783e00c341609977aed3ad00937bf4ee942a8993701483045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e75402201475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c568700000000
# ```
#
# The sec pubkey of the second signature is:
# ```
# 03b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb71
# ```
#
# The der signature of the second signature is:
# ```
# 3045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e75402201475221022
# ```
#
# The redeemScript is:
# ```
# 475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae
# ```
# +
# Exercise 7.1
hex_sec = '03b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb71'
hex_der = '3045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e754022'
hex_redeem_script = '475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152ae'
sec = bytes.fromhex(hex_sec)
der = bytes.fromhex(hex_der)
redeem_script_stream = BytesIO(bytes.fromhex(hex_redeem_script))
hex_tx = '0100000001868278ed6ddfb6c1ed3ad5f8181eb0c7a385aa0836f01d5e4789e6bd304d87221a000000db00483045022100dc92655fe37036f47756db8102e0d7d5e28b3beb83a8fef4f5dc0559bddfb94e02205a36d4e4e6c7fcd16658c50783e00c341609977aed3ad00937bf4ee942a8993701483045022100da6bee3c93766232079a01639d07fa869598749729ae323eab8eef53577d611b02207bef15429dcadce2121ea07f233115c6f09034c0be68db99980b9a6c5e75402201475221022626e955ea6ea6d98850c994f9107b036b1334f18ca8830bfff1295d21cfdb702103b287eaf122eea69030a0e9feed096bed8045c8b98bec453e1ffac7fbdbd4bb7152aeffffffff04d3b11400000000001976a914904a49878c0adfc3aa05de7afad2cc15f483a56a88ac7f400900000000001976a914418327e3f3dda4cf5b9089325a4b95abdfa0334088ac722c0c00000000001976a914ba35042cfe9fc66fd35ac2224eebdafd1028ad2788acdc4ace020000000017a91474d691da1574e6b3c192ecfb52cc8984ee7b6c568700000000'
stream = BytesIO(bytes.fromhex(hex_tx))
# parse the S256Point and Signature
# parse the Tx
# change the first input's scriptSig to redeemScript
# use Script.parse on the redeem_script_stream
# get the serialization
# add the sighash (4 bytes, little-endian of SIGHASH_ALL)
# hash256 the result
# this, interpreted as a big-endian number, is your z
# now verify the signature using point.verify
| session5/session5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import serial
ser = serial.Serial('COM4')
def receiving(ser):
global last_received
buffer_string = ''
while True:
        buffer_string = buffer_string + ser.read(ser.in_waiting).decode()  # read() returns bytes, so decode before concatenating
if '\n' in buffer_string:
lines = buffer_string.split('\n') # Guaranteed to have at least 2 entries
last_received = lines[-2]
            # If the Arduino sends lots of empty lines, you'll lose the
            # last filled line, so you could make the above statement conditional,
            # like so: if lines[-2]: last_received = lines[-2]
buffer_string = lines[-1]
print(buffer_string)
| testSerial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from dpm.criterion import (
cross_entropy, forward_kl, reverse_kl,
js_divergence, total_variation, pearson, js_divergence_2,
)
from dpm.distributions import MixtureModel
from dpm.distributions import Normal, Exponential
from dpm.visualize import plot_loss_function
from dpm.criterion import (GANLoss, WGANLoss, LSGANLoss, MMGANLoss)
import matplotlib.pyplot as plt
import torch
plot_loss_function(cross_entropy)
plot_loss_function(forward_kl)
plot_loss_function(reverse_kl)
plot_loss_function(js_divergence)
plot_loss_function(total_variation)
plot_loss_function(pearson)
plot_loss_function(js_divergence)
plot_loss_function(js_divergence_2)
p_model = MixtureModel([Normal(-5.), Normal(5.)], [0.5, 0.5])
# p_model = Normal(0.)
plot_loss_function(forward_kl, p_model=p_model)
plt.show()
plot_loss_function(reverse_kl, p_model=p_model)
plt.show()
plot_loss_function(js_divergence, p_model=p_model)
plt.show()
plot_loss_function(total_variation, p_model=p_model)
plt.show()
plot_loss_function(pearson, p_model=p_model)
plt.show()
p_model = MixtureModel([Normal(-5.), Normal(5.)], [0.5, 0.5])
# p_model = Normal(0.)
torch.manual_seed(43)
# train then pass in
plot_loss_function(GANLoss(1), p_model=p_model)
plt.show()
torch.manual_seed(43)
plot_loss_function(WGANLoss(1), p_model=p_model)
plt.show()
torch.manual_seed(43)
plot_loss_function(LSGANLoss(1), p_model=p_model)
plt.show()
torch.manual_seed(43)
plot_loss_function(MMGANLoss(1), p_model=p_model)
plt.show()
| Notebooks/Divergences/LossDynamics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import keras
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.preprocessing import image
from keras.applications.inception_resnet_v2 import preprocess_input
import numpy as np
model = InceptionResNetV2(weights='imagenet', include_top=False)
img_path = 'dog.jpg'
img = image.load_img(img_path, target_size=(299, 299))
x = image.img_to_array(img)
print(x.shape)
x = np.expand_dims(x, axis=0)
print(x.shape)
x = preprocess_input(x)
print(x.shape)
features = model.predict(x)
# -
print(keras.__version__)
print(features)
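# With `include_top=False`, `features` is a spatial map rather than a flat vector; for a 299x299 input its shape is expected to be (1, 8, 8, 1536). A NumPy-only sketch (random stand-in array, so it runs without Keras) of the usual global-average-pooling step that turns the map into an embedding:

```python
import numpy as np

# Stand-in for the `features` array from model.predict(x); the (1, 8, 8, 1536)
# shape is an assumption for a 299x299 InceptionResNetV2 input.
features = np.random.rand(1, 8, 8, 1536).astype(np.float32)

# Average over the two spatial axes -> one 1536-d embedding per image
embedding = features.mean(axis=(1, 2))
print(embedding.shape)  # (1, 1536)
```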
| local/Full-version/Experiments/transfer-learning-examples/inception_resnet_extract_embedding.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "skip"}
# <style>div.container { width: 100% }</style>
# <img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/PyViz_logo_wm_line.png" />
# <div style="float:right; vertical-align:text-bottom;"><h2>Tutorial A2. Dashboard Workflow</h2></div>
# + [markdown] slideshow={"slide_type": "slide"}
# # Workflow for building and deploying interactive visualizations
# + [markdown] slideshow={"slide_type": "slide"}
# **Let's say you want to make it easy to explore some dataset, i.e.:**
#
# * Make a visualization of the data
# * Maybe add some custom widgets to see the effects of some variables
# * Then deploy the result as a web app.
# + [markdown] slideshow={"slide_type": "slide"}
# **You can definitely do that in Python, but you would expect to:**
# * Spend days of effort to get some initial prototype working in a Jupyter notebook, every time
# * Work hard to tame the resulting opaque mishmash of domain-specific, widget, and plotting code
# * Start over nearly from scratch whenever you need to:
# - Deploy in a standalone server
# - Visualize different aspects of your data
# - Scale up to larger (>100K) datasets
# + [markdown] slideshow={"slide_type": "slide"}
# # Step-by-step data-science workflow
#
# Here we'll show a simple, flexible, powerful, step-by-step workflow, explaining which open-source tools solve each of the problems involved:
#
# - Step 1: Get some data
# - Step 2: Prototype a plot in a notebook
# - Step 3: Model your domain
# - Step 4: Get a widget-based UI for free
# - Step 5: Link your domain model to your visualization
# - Step 6: Widgets now control your interactive plots
# - Step 7: Deploy your dashboard
# + slideshow={"slide_type": "slide"}
import holoviews as hv, geoviews as gv, param, dask.dataframe as dd, cartopy.crs as crs
from colorcet import cm, fire
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade
from holoviews.streams import RangeXY
from geoviews.tile_sources import EsriImagery
# + [markdown] slideshow={"slide_type": "slide"}
# ## Step 1: Get some data
#
# * Here we'll use a subset of the often-studied NYC Taxi dataset
# * About 12 million points of GPS locations from taxis
# * Stored in the efficient Parquet format for easy access
# * Loaded into a Dask dataframe for multi-core<br>(and if needed, out-of-core or distributed) computation
# + [markdown] slideshow={"slide_type": "fragment"}
# <div class="alert alert-warning" role="alert"><small>
# If you have less than 8GB of memory or haven't downloaded the full data, substitute <tt>data/.data_stubs/</tt> for <tt>data/</tt> in the following cell:</small>
# </div>
# + slideshow={"slide_type": "fragment"}
# %time df = dd.read_parquet('../data/nyc_taxi_wide.parq').persist()
print(len(df))
df.head(2)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Step 2: Prototype a plot in a notebook
#
# * A text-based representation isn't very useful for big datasets like this, so we need to build a plot
# * But we don't want to start a software project, so we use HoloViews:
# - Simple, declarative way to annotate your data for visualization
# - Large library of Elements with associated visual representation
# - Elements combine (lay out or overlay) easily
# * And we'll want live interactivity, so we'll use a Bokeh plotting extension
# * Result:
# + slideshow={"slide_type": "skip"}
hv.extension('bokeh')
# + slideshow={"slide_type": "slide"}
points = hv.Points(df, ['pickup_x', 'pickup_y'])
decimate(points)
# + [markdown] slideshow={"slide_type": "fragment"}
# Here ``Points`` declares an object wrapping `df`, visualized as a scatterplot of the pickup locations. `decimate` limits how many points will be sent to the browser so it won't crash.
# + [markdown] slideshow={"slide_type": "slide"}
# As you can see, HoloViews makes it simple to pop up a visualization of your data, getting *something* on screen with only a few characters of typing. But it's not particularly pretty, so let's customize it a bit:
# + slideshow={"slide_type": "fragment"}
opts = dict(width=700, height=600, xaxis=None, yaxis=None, bgcolor='black')
decimate(points.options(**opts))
# + [markdown] slideshow={"slide_type": "slide"}
# That looks a bit better, but it's still decimating the data nearly beyond recognition, so let's try using Datashader to rasterize it into a fixed-size image to send to the browser:
# + slideshow={"slide_type": "fragment"}
taxi_trips = datashade(points, cmap=fire).options(**opts)
taxi_trips
# + [markdown] slideshow={"slide_type": "slide"}
# Ok, that looks good now; there's clearly lots to explore in this dataset. Notice that the aspect ratio changed because Datashader is using *every* point, including some distant outliers. One way to fix the aspect ratio is to indicate that it's geographic data by overlaying it on a map:
# + slideshow={"slide_type": "fragment"}
taxi_trips = datashade(points, x_sampling=1, y_sampling=1, cmap=fire).options(**opts)
EsriImagery * taxi_trips
# + [markdown] slideshow={"slide_type": "slide"}
# We could add lots more visual elements (laying out additional plots left and right, overlaying annotations, etc.), but let's say that this is our basic visualization we'll want to share.
#
# To sum up what we've done so far, here are the complete 10 lines of code required to generate this geo-located interactive plot of millions of datapoints in Jupyter:
#
# ```
# import holoviews as hv, geoviews as gv, dask.dataframe as dd
# from colorcet import fire
# from holoviews.operation.datashader import datashade
# from geoviews.tile_sources import EsriImagery
# hv.extension('bokeh')
# df = dd.read_parquet('../data/nyc_taxi_wide.parq').persist()
# opts = dict(width=700, height=600, xaxis=None, yaxis=None, bgcolor='black')
# points = hv.Points(df, ['pickup_x', 'pickup_y'])
# taxi_trips = datashade(points, x_sampling=1, y_sampling=1, cmap=fire).options(**opts)
# EsriImagery * taxi_trips
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# ## Step 3: Model your domain
#
# Now that we've prototyped a nice plot, we could keep editing the code above to explore this data. But at this point we will instead often wish to start sharing our results with people not familiar with programming visualizations in this way.
#
# So the next step: figure out what we want our intended user to be able to change, and declare those variables or parameters with:
#
# - type and range checking
# - documentation strings
# - default values
#
# The Param library allows declaring Python attributes having these features (and more, such as dynamic values and inheritance), letting you set up a well-defined space for a user (or you!) to explore.
# + [markdown] slideshow={"slide_type": "slide"}
# ## NYC Taxi Parameters
# + slideshow={"slide_type": "fragment"}
cmaps = ['bgy','bgyw','bmw','bmy','fire','gray','kbc','kgy']
class NYCTaxiExplorer(param.Parameterized):
alpha = param.Magnitude(default=0.75, doc="Alpha value for the map opacity")
plot = param.ObjectSelector(default="pickup", objects=["pickup","dropoff"])
colormap = param.ObjectSelector(default='fire', objects=cmaps)
passengers = param.Range(default=(0, 10), bounds=(0, 10), doc="""
Filter for taxi trips by number of passengers""")
# + [markdown] slideshow={"slide_type": "fragment"}
# Each Parameter is a normal Python attribute, but with special checks and functions run automatically when getting or setting.
#
# Parameters capture your goals and your knowledge about your domain, declaratively.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Class level parameters
# + slideshow={"slide_type": "fragment"}
NYCTaxiExplorer.alpha
# + slideshow={"slide_type": "fragment"}
NYCTaxiExplorer.alpha = 0.5
NYCTaxiExplorer.alpha
# + [markdown] slideshow={"slide_type": "slide"}
# ### Validation
# + slideshow={"slide_type": "fragment"}
try:
NYCTaxiExplorer.alpha = '0'
except Exception as e:
print(e)
try:
NYCTaxiExplorer.passengers = (0,100)
except Exception as e:
print(e)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Instance parameters
# + slideshow={"slide_type": "fragment"}
explorer = NYCTaxiExplorer(alpha=0.6)
explorer.alpha
# + slideshow={"slide_type": "fragment"}
NYCTaxiExplorer.alpha
# + [markdown] slideshow={"slide_type": "slide"}
# ## Step 4: Get a widget-based UI for free
#
# * Parameters are purely declarative and independent of any widget toolkit, but contain all the information needed to build interactive widgets
# * Panel can generate UIs in Jupyter or Bokeh Server from Parameters
# * Declaration of parameters is independent of the UI library used
#
# + slideshow={"slide_type": "fragment"}
import panel as pn
pn.Row(NYCTaxiExplorer)
# + slideshow={"slide_type": "fragment"}
NYCTaxiExplorer.passengers
# + [markdown] slideshow={"slide_type": "slide"}
# ## Step 5: Link your domain model to your visualization
#
# We've now defined the space that's available for exploration, and the next step is to link up the parameter space with the code that specifies the plot:
# + slideshow={"slide_type": "fragment"}
class NYCTaxiExplorer(param.Parameterized):
alpha = param.Magnitude(default=0.75, doc="Alpha value for the map opacity")
colormap = param.ObjectSelector(default='fire', objects=cmaps)
plot = param.ObjectSelector(default="pickup", objects=["pickup","dropoff"])
passengers = param.Range(default=(0, 10), bounds=(0, 10))
def make_view(self, x_range=None, y_range=None, **kwargs):
points = hv.Points(df, kdims=[self.plot+'_x', self.plot+'_y'], vdims=['passenger_count'])
selected = points.select(passenger_count=self.passengers)
taxi_trips = datashade(selected, x_sampling=1, y_sampling=1, cmap=cm[self.colormap],
width=800, height=475)
return EsriImagery.clone(crs=crs.GOOGLE_MERCATOR).options(alpha=self.alpha, **opts) * taxi_trips
# + [markdown] slideshow={"slide_type": "slide"}
# Note that the `NYCTaxiExplorer` class is entirely declarative (no widgets), and can be used "by hand" to provide range-checked and type-checked plotting for values from the declared parameter space:
# + slideshow={"slide_type": "fragment"}
explorer = NYCTaxiExplorer(alpha=0.4, plot="dropoff")
explorer.make_view()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Step 6: Widgets now control your interactive plots
#
# But in practice, why not pop up the widgets to make it fully interactive?
# + slideshow={"slide_type": "fragment"}
explorer = NYCTaxiExplorer()
r = pn.Row(explorer, explorer.make_view)
r
# + [markdown] slideshow={"slide_type": "slide"}
# ## Step 7: Deploy your dashboard
#
# Ok, now you've got something worth sharing, running inside Jupyter. But if you want to share your interactive app with people who don't use Python, you'll now want to run a server with this same code.
#
# * Deploy with **Bokeh Server**:
# - Write the above code to a file
# - Add `r.server_doc();` to the end to tell Bokeh Server which object to show in the dashboard
# - ``bokeh serve nyc_taxi/main.py``
# + [markdown] slideshow={"slide_type": "slide"}
# # Complete dashboard code
# + slideshow={"slide_type": "fragment"}
with open('apps/nyc_taxi/main.py', 'r') as f: print(f.read())
# + [markdown] slideshow={"slide_type": "slide"}
# # Branching out
#
# The other sections in this tutorial will expand on steps in this workflow, providing more step-by-step instructions for each of the major tasks. These techniques can create much more ambitious apps with very little additional code or effort:
#
# * Adding additional linked or separate subplots of any type;<br>see [2 - Annotating your data](./02_Annotating_Data.ipynb) and [4 - Working with datasets](./04-Working_with_Datasets.ipynb).
# * Declaring code that runs for clicking or selecting *within* the Bokeh plot; see [8 - Custom interactivity](./08_Custom_Interactivity.ipynb).
# * Using multiple sets of widgets of many different types; see [Panel](http://panel.pyviz.org).
# * Using datasets too big for any one machine, with [Dask.Distributed](https://distributed.readthedocs.io).
# * Presenting Jupyter notebooks like this one as slides using [RISE](https://github.com/damianavila/RISE).
| examples/tutorial/A2_Dashboard_Workflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: RecyclingLabels
# language: python
# name: recyclinglabels
# ---
# %matplotlib inline
# +
from IPython.display import display
import numpy
from numpy import linspace
from sympy import lambdify, init_printing
from sympy import symbols, pi, sqrt, exp
from matplotlib import pyplot
init_printing()
# -
# # Logistic function
#
# $$ S(x) = \frac{1}{1 + e^{-x}} = \frac{e^x}{e^x + 1} $$
#
# # Normal distribution
#
# $$ f(x | \mu, \sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$
#
# - $\mu$ is the mean or expectation of the distribution
# - $\sigma$ is the standard deviation
# - $\sigma^2$ is the variance
#
# # Likelihood ratio of two Normals with same variance
#
# $$ f(x | \mu_0, \sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x-\mu_0)^2}{2\sigma^2}} $$
# $$ f(x | \mu_1, \sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x-\mu_1)^2}{2\sigma^2}} $$
#
# The likelihood ratio is
#
# $$
# \begin{align}
# LR(x) =& \frac{f(x|\mu_0, \sigma^2)}{f(x|\mu_1, \sigma^2)} \\
# =& \frac{C exp\left[-\frac{(x-\mu_0)^2}{2\sigma^2}\right]}
# {C exp\left[-\frac{(x-\mu_1)^2}{2\sigma^2}\right]} \\
# =& exp\left[\frac{-(x-\mu_0)^2 + (x-\mu_1)^2}{2\sigma^2}\right] \\
# =& exp\left[\frac{-(x^2 + \mu_0^2 - 2\mu_0x) + (x^2 + \mu_1^2 - 2\mu_1x)}{2\sigma^2}\right] \\
# =& exp\left[\frac{-\mu_0^2 + 2\mu_0x + \mu_1^2 - 2\mu_1x}{2\sigma^2}\right] \\
# =& exp\left[\frac{2(\mu_0 - \mu_1)x + \mu_1^2 - \mu_0^2}{2\sigma^2}\right] \\
# =& exp\left[\frac{2(\mu_0 - \mu_1)x - (\mu_0^2 - \mu_1^2)}{2\sigma^2}\right] \\
# =& exp\left[\frac{2(\mu_0 - \mu_1)x + (\mu_1 + \mu_0)(\mu_0 - \mu_1)}{2\sigma^2}\right] \\
# =& exp\left[\frac{(\mu_0 - \mu_1)(2x - (\mu_0 + \mu_1))}{2\sigma^2}\right] \\
# =& exp\left[\frac{(\mu_0 - \mu_1)}{\sigma^2}\left(x - \frac{(\mu_0 + \mu_1)}{2}\right)\right] \\
# =& exp\left[\gamma(x - m)\right] \\
# \end{align}
# $$
#
# where
#
# - $\gamma = \frac{(\mu_0 - \mu_1)}{\sigma^2}$ and
# - $m = \frac{(\mu_0 + \mu_1)}{2}$
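# A quick numeric spot-check of this closed form (example values for $\mu_0$, $\mu_1$, $\sigma$ chosen arbitrarily):

```python
import numpy as np

mu0, mu1, sigma = -5.0, 5.0, 2.0
gamma = (mu0 - mu1) / sigma**2       # slope of the log-likelihood ratio
m = (mu0 + mu1) / 2.0                # midpoint between the two means

def normal_pdf(x, mu, s):
    return np.exp(-((x - mu) ** 2) / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

x = np.linspace(-3.0, 3.0, 7)
lr_direct = normal_pdf(x, mu0, sigma) / normal_pdf(x, mu1, sigma)
lr_closed = np.exp(gamma * (x - m))
print(np.allclose(lr_direct, lr_closed))  # True
```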
# +
from sympy import simplify
x, mu0, mu1, sigma = symbols('x mu0 mu1 sigma')
pdf_normal_0 = (1/sqrt(2*pi*sigma**2))*exp(-((x-mu0)**2)/(2*sigma**2))
pdf_normal_1 = (1/sqrt(2*pi*sigma**2))*exp(-((x-mu1)**2)/(2*sigma**2))
display(simplify(pdf_normal_0))
display(simplify(pdf_normal_0/(pdf_normal_0 + pdf_normal_1)))
# +
sigmoid = 1 / (1 + exp(-x))
display(sigmoid)
# +
from sympy import simplify
x, mu_0, sigma_0, mu_1, sigma_1 = symbols('x mu_0 sigma_0 mu_1 sigma_1')
y_0 = (1/sqrt(2*pi*sigma_0**2))*exp(-((x-mu_0)**2)/(2*sigma_0**2))
y_1 = (1/sqrt(2*pi*sigma_1**2))*exp(-((x-mu_1)**2)/(2*sigma_1**2))
display(simplify(y_0))
display(simplify(y_0/(y_0 + y_1)))
# -
# +
pdf_normal_0 = lambdify((x, mu_0, sigma_0), y_0, "numpy")
pdf_normal_1 = lambdify((x, mu_1, sigma_1), y_1, "numpy")
a = linspace(-5,5,100)
m0 = 0
s0 = 1
m1 = 2
s1 = 1
pyplot.plot(a, pdf_normal_0(a, m0, s0))
pyplot.plot(a, pdf_normal_1(a, m1, s1))
pyplot.plot(a, pdf_normal_0(a, m1, s1)/(pdf_normal_0(a, m0, s0) + pdf_normal_1(a, m1, s1)))
pyplot.grid(True)
# -
# # Expected brier score
#
# Let's visualise the error space of a logistic regression with only one parameter $w$
#
# $$ \frac{1}{1 + exp(-xw)} $$
# +
from numpy import exp as e
def brier_score(x1, x2):
return numpy.mean((x1 - x2)**2)
w_list = linspace(-5, 5)
errors = []
for w in w_list:
errors.append(brier_score(1/(1 + e(-a*w)),
pdf_normal_0(a, m1, s1)/(pdf_normal_0(a, m0, s0) + pdf_normal_1(a, m1, s1))))
pyplot.plot(w_list, errors)
min_idx = numpy.argmin(errors)
print((w_list[min_idx], errors[min_idx]))
pyplot.annotate("w = {:.2f}, BS = {:.2f}".format(w_list[min_idx], errors[min_idx]), (w_list[min_idx], errors[min_idx]),
(1.5, 0.1), arrowprops={'arrowstyle': '->'})
# -
| notebooks/expected_error_surface_proper_losses.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Test Recurrent Policy with Extreme Parameter Variation
# +
import numpy as np
import os,sys
sys.path.append('../../../RL_lib/Agents')
sys.path.append('../../../RL_lib/Policies/PPO')
sys.path.append('../../../RL_lib/Policies/Common')
sys.path.append('../../../RL_lib/Utils')
sys.path.append('../../../Env')
sys.path.append('../../../Imaging')
# %load_ext autoreload
# %autoreload 2
# %matplotlib nbagg
print(os.getcwd())
# + language="html"
# <style>
# .output_wrapper, .output {
# height:auto !important;
# max-height:1000px; /* your desired max-height here */
# }
# .output_scroll {
# box-shadow:none !important;
# webkit-box-shadow:none !important;
# }
# </style>
# -
# # Optimize Policy
# +
from env import Env
import env_utils as envu
from dynamics_model import Dynamics_model
from lander_model import Lander_model
from ic_gen import Landing_icgen
import rl_utils
import attitude_utils as attu
import optics_utils as optu
from arch_policy_vf import Arch
from policy_ppo import Policy
from softmax_pd import Softmax_pd as PD
from value_function import Value_function
import policy_nets as policy_nets
import valfunc_nets as valfunc_nets
from agent import Agent
import torch.nn as nn
from flat_constraint import Flat_constraint
from glideslope_constraint import Glideslope_constraint
from rh_constraint import RH_constraint
from no_attitude_constraint import Attitude_constraint
from w_constraint import W_constraint
from reward_attitude import Reward
from hf_asteroid import Asteroid
from thruster_model_cubesat import Thruster_model
from sensor import Sensor
from seeker import Seeker
landing_site_range = 10.0
landing_site = None #np.asarray([-250.,0.,0.])
asteroid_model = Asteroid(landing_site_override=landing_site, omega_range=(1e-5,5e-4))
ap = attu.Quaternion_attitude()
C_cb = optu.rotate_optical_axis(0.0, 0.0, np.pi)
r_cb = np.asarray([0,0,0])
fov=envu.deg2rad(90)
seeker = Seeker(attitude_parameterization=ap, C_cb=C_cb, r_cb=r_cb,
radome_slope_bounds=(-0.05,0.05), range_bias=(-0.05,0.05),
fov=fov, debug=False)
sensor = Sensor(seeker, attitude_parameterization=ap, use_range=True, apf_tau1=300, use_dp=False,
landing_site_range=landing_site_range,
pool_type='max', state_type=Sensor.optflow_state_range_dp1)
print(sensor.track_func)
sensor.track_func = sensor.track_func1
print(sensor.track_func)
logger = rl_utils.Logger()
dynamics_model = Dynamics_model(h=2)
thruster_model = Thruster_model(pulsed=True, scale=1.0, offset=0.4)
lander_model = Lander_model(asteroid_model, thruster_model, attitude_parameterization=ap, sensor=sensor,
landing_site_range=landing_site_range, com_range=(-0.10,0.10),
attitude_bias=0.05, omega_bias=0.05)
lander_model.get_state_agent = lander_model.get_state_agent_sensor_att_w2
obs_dim = 13
action_dim = 12
actions_per_dim = 2
logit_dim = action_dim * actions_per_dim
recurrent_steps = 60
reward_object = Reward(landing_rlimit=2, landing_vlimit=0.1,
tracking_bias=0.01, fov_coeff=-50.,
att_coeff=-0.20,
tracking_coeff=-0.5, magv_coeff=-1.0,
fuel_coeff=-0.10, landing_coeff=10.0)
glideslope_constraint = Glideslope_constraint(gs_limit=-1.0)
shape_constraint = Flat_constraint()
attitude_constraint = Attitude_constraint(ap)
w_constraint = W_constraint(w_limit=(0.1,0.1,0.1), w_margin=(0.05,0.05,0.05))
rh_constraint = RH_constraint(rh_limit=150)
wi=0.05
ic_gen = Landing_icgen((800,1000),
p_engine_fail=0.5,
engine_fail_scale=(0.5,1.0),
lander_wll=(-wi,-wi,-wi),
lander_wul=(wi,wi,wi),
attitude_parameterization=ap,
position_error=(0,np.pi/4),
heading_error=(0,np.pi/8),
attitude_error=(0,np.pi/16),
min_mass=450, max_mass=500,
mag_v=(0.05,0.1),
debug=False,
inertia_uncertainty_diag=10.0,
inertia_uncertainty_offdiag=1.0)
env = Env(ic_gen, lander_model, dynamics_model, logger,
landing_site_range=landing_site_range,
debug_done=False,
reward_object=reward_object,
glideslope_constraint=glideslope_constraint,
attitude_constraint=attitude_constraint,
w_constraint=w_constraint,
rh_constraint=rh_constraint,
tf_limit=5000.0,print_every=10,nav_period=6)
env.ic_gen.show()
arch = Arch()
policy = Policy(policy_nets.GRU1(obs_dim, logit_dim, recurrent_steps=recurrent_steps),
PD(action_dim, actions_per_dim),
shuffle=False,
kl_targ=0.001,epochs=20, beta=0.1, servo_kl=True, max_grad_norm=30, scale_vector_obs=True,
init_func=rl_utils.xn_init)
value_function = Value_function(valfunc_nets.GRU1(obs_dim, recurrent_steps=recurrent_steps), scale_obs=True,
shuffle=False, batch_size=9999999, max_grad_norm=30,
verbose=False)
agent = Agent(arch, policy, value_function, None, env, logger,
policy_episodes=30, policy_steps=3000, gamma1=0.95, gamma2=0.995,
recurrent_steps=recurrent_steps, monitor=env.rl_stats)
fname = "optimize_WATTVW_FOV-AR=5-RPT3"
policy.load_params(fname)
# -
# # Test Policy
# +
env.test_policy_batch(agent,5000,print_every=100,keys=lander_model.get_engagement_keys())
#env.test_policy_batch(agent,10,print_every=1)
# +
#np.save('traj_1.npy',traj)
| Experiments/Extended/Test_HF/TEST-WATTVW_FOV-AR=5-RPT3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <span style="color:Maroon">Trade Strategy
# __Summary:__ <span style="color:Blue">In this code we shall test the results of given model
# Import required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
np.random.seed(0)
import warnings
warnings.filterwarnings('ignore')
# User defined names
index = "S&P500"
filename_whole = "whole_dataset"+index+"_gbm_model.csv"
filename_trending = "Trending_dataset"+index+"_gbm_model.csv"
filename_meanreverting = "MeanReverting_dataset"+index+"_gbm_model.csv"
date_col = "Date"
Rf = 0.01 #Risk free rate of return
# Get current working directory
mycwd = os.getcwd()
print(mycwd)
# Change to data directory
os.chdir("..")
os.chdir(os.path.join(os.getcwd(), "Data"))
# Read the datasets
df_whole = pd.read_csv(filename_whole, index_col=date_col)
df_trending = pd.read_csv(filename_trending, index_col=date_col)
df_meanreverting = pd.read_csv(filename_meanreverting, index_col=date_col)
# Convert index to datetime
df_whole.index = pd.to_datetime(df_whole.index)
df_trending.index = pd.to_datetime(df_trending.index)
df_meanreverting.index = pd.to_datetime(df_meanreverting.index)
# Head for whole dataset
df_whole.head()
df_whole.shape
# Head for Trending dataset
df_trending.head()
df_trending.shape
# Head for Mean Reverting dataset
df_meanreverting.head()
df_meanreverting.shape
# Combine results from both segment models into one dataframe
df_model = pd.concat([df_trending, df_meanreverting])
df_model.sort_index(inplace=True)
df_model.head()
df_model.shape
#
#
# ## <span style="color:Maroon">Functions
def initialize(df):
days, Action1, Action2, current_status, Money, Shares = ([] for i in range(6))
Open_price = list(df['Open'])
Close_price = list(df['Adj Close'])
Predicted = list(df['Predicted'])
Action1.append(Predicted[0])
Action2.append(0)
current_status.append(Predicted[0])
if(Predicted[0] != 0):
days.append(1)
if(Predicted[0] == 1):
Money.append(0)
else:
Money.append(200)
Shares.append(Predicted[0] * (100/Open_price[0]))
else:
days.append(0)
Money.append(100)
Shares.append(0)
return days, Action1, Action2, current_status, Predicted, Money, Shares, Open_price, Close_price
def Action_SA_SA(days, Action1, Action2, current_status, i):
if(current_status[i-1] != 0):
days.append(1)
else:
days.append(0)
current_status.append(current_status[i-1])
Action1.append(0)
Action2.append(0)
return days, Action1, Action2, current_status
def Action_ZE_NZE(days, Action1, Action2, current_status, i):
if(days[i-1] < 5):
days.append(days[i-1] + 1)
Action1.append(0)
Action2.append(0)
current_status.append(current_status[i-1])
else:
days.append(0)
Action1.append(current_status[i-1] * (-1))
Action2.append(0)
current_status.append(0)
return days, Action1, Action2, current_status
def Action_NZE_ZE(days, Action1, Action2, current_status, Predicted, i):
current_status.append(Predicted[i])
Action1.append(Predicted[i])
Action2.append(0)
days.append(days[i-1] + 1)
return days, Action1, Action2, current_status
def Action_NZE_NZE(days, Action1, Action2, current_status, Predicted, i):
current_status.append(Predicted[i])
Action1.append(Predicted[i])
Action2.append(Predicted[i])
days.append(1)
return days, Action1, Action2, current_status
def get_df(df, Action1, Action2, days, current_status, Money, Shares):
df['Action1'] = Action1
df['Action2'] = Action2
df['days'] = days
df['current_status'] = current_status
df['Money'] = Money
df['Shares'] = Shares
return df
def Get_TradeSignal(Predicted, days, Action1, Action2, current_status):
# Loop over 1 to N
for i in range(1, len(Predicted)):
# When model predicts no action..
if(Predicted[i] == 0):
if(current_status[i-1] != 0):
days, Action1, Action2, current_status = Action_ZE_NZE(days, Action1, Action2, current_status, i)
else:
days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i)
# When Model predicts sell
elif(Predicted[i] == -1):
if(current_status[i-1] == -1):
days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i)
elif(current_status[i-1] == 0):
days, Action1, Action2, current_status = Action_NZE_ZE(days, Action1, Action2, current_status, Predicted,
i)
else:
days, Action1, Action2, current_status = Action_NZE_NZE(days, Action1, Action2, current_status, Predicted,
i)
# When model predicts Buy
elif(Predicted[i] == 1):
if(current_status[i-1] == 1):
days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i)
elif(current_status[i-1] == 0):
days, Action1, Action2, current_status = Action_NZE_ZE(days, Action1, Action2, current_status, Predicted,
i)
else:
days, Action1, Action2, current_status = Action_NZE_NZE(days, Action1, Action2, current_status, Predicted,
i)
return days, Action1, Action2, current_status
def Get_FinancialSignal(Open_price, Action1, Action2, Money, Shares, Close_price):
for i in range(1, len(Open_price)):
if(Action1[i] == 0):
Money.append(Money[i-1])
Shares.append(Shares[i-1])
else:
if(Action2[i] == 0):
# Enter new position
if(Shares[i-1] == 0):
Shares.append(Action1[i] * (Money[i-1]/Open_price[i]))
Money.append(Money[i-1] - Action1[i] * Money[i-1])
# Exit the current position
else:
Shares.append(0)
Money.append(Money[i-1] - Action1[i] * np.abs(Shares[i-1]) * Open_price[i])
else:
Money.append(Money[i-1] -1 *Action1[i] *np.abs(Shares[i-1]) * Open_price[i])
Shares.append(Action2[i] * (Money[i]/Open_price[i]))
Money[i] = Money[i] - 1 * Action2[i] * np.abs(Shares[i]) * Open_price[i]
return Money, Shares
def Get_TradeData(df):
# Initialize the variables
days,Action1,Action2,current_status,Predicted,Money,Shares,Open_price,Close_price = initialize(df)
# Get Buy/Sell trade signal
days, Action1, Action2, current_status = Get_TradeSignal(Predicted, days, Action1, Action2, current_status)
Money, Shares = Get_FinancialSignal(Open_price, Action1, Action2, Money, Shares, Close_price)
df = get_df(df, Action1, Action2, days, current_status, Money, Shares)
df['CurrentVal'] = df['Money'] + df['current_status'] * np.abs(df['Shares']) * df['Adj Close']
return df
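# The net effect of the signal and bookkeeping functions above can be illustrated with a toy, self-contained re-implementation (hypothetical prices and predictions; it closes and re-enters reversals in one step and ignores the five-day flat-exit rule handled by `Action_ZE_NZE`):

```python
# Hypothetical toy inputs; not the notebook's dataframes
predicted = [1, 0, 0, -1, 0, 1]
open_price = [100.0, 101.0, 102.0, 103.0, 101.0, 99.0]

money, shares, position = 100.0, 0.0, 0
for sig, px in zip(predicted, open_price):
    if sig != 0 and sig != position:
        money += position * abs(shares) * px   # close any open position
        shares = sig * (money / px)            # enter the new one
        money -= sig * abs(shares) * px
        position = sig

# Mark the open position to the last price, as CurrentVal does
value = money + position * abs(shares) * open_price[-1]
print(round(value, 2))  # 107.0
```

Here the long entered at 100 is reversed at 103 (+3), the short is covered at 99 (+4), and the final long is still marked at its entry price, giving 107.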
def Print_Fromated_PL(active_days, number_of_trades, drawdown, annual_returns, std_dev, sharpe_ratio, year):
"""
Prints the metrics
"""
print("++++++++++++++++++++++++++++++++++++++++++++++++++++")
print(" Year: {0}".format(year))
print(" Number of Trades Executed: {0}".format(number_of_trades))
print("Number of days with Active Position: {}".format(active_days))
print(" Annual Return: {:.6f} %".format(annual_returns*100))
print(" Sharpe Ratio: {:.2f}".format(sharpe_ratio))
print(" Maximum Drawdown (Daily basis): {:.2f} %".format(drawdown*100))
print("----------------------------------------------------")
return
def Get_results_PL_metrics(df, Rf, year):
df['tmp'] = np.where(df['current_status'] == 0, 0, 1)
active_days = df['tmp'].sum()
number_of_trades = np.abs(df['Action1']).sum()+np.abs(df['Action2']).sum()
df['tmp_max'] = df['CurrentVal'].rolling(window=20).max()
df['tmp_min'] = df['CurrentVal'].rolling(window=20).min()
df['tmp'] = np.where(df['tmp_max'] > 0, (df['tmp_max'] - df['tmp_min'])/df['tmp_max'], 0)
drawdown = df['tmp'].max()
annual_returns = (df['CurrentVal'].iloc[-1]/100 - 1)
std_dev = df['CurrentVal'].pct_change(1).std()
sharpe_ratio = (annual_returns - Rf)/std_dev
Print_Fromated_PL(active_days, number_of_trades, drawdown, annual_returns, std_dev, sharpe_ratio, year)
return
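# A self-contained sketch of the metric formulas used in `Get_results_PL_metrics`, on a synthetic equity curve starting from the base value of 100 (rolling window shortened to 3 for the toy series; the notebook uses 20):

```python
import pandas as pd

# Synthetic equity curve; the numbers only exercise the formulas
curve = pd.Series([100.0, 101.0, 99.5, 102.0, 103.5, 102.8, 104.0])

annual_returns = curve.iloc[-1] / 100 - 1         # total return over base 100
std_dev = curve.pct_change(1).std()               # daily return volatility
sharpe_ratio = (annual_returns - 0.01) / std_dev  # Rf = 0.01, as defined above

# Rolling high-low range as a (rough) drawdown proxy, as in the notebook
rolling_max = curve.rolling(window=3, min_periods=1).max()
rolling_min = curve.rolling(window=3, min_periods=1).min()
drawdown = ((rolling_max - rolling_min) / rolling_max).max()
print(round(annual_returns, 4), round(drawdown, 4))
```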
#
#
# Change to Images directory
os.chdir("..")
os.chdir(os.path.join(os.getcwd(), "Images"))
# ## <span style="color:Maroon">Whole Dataset
df_whole_train = df_whole[df_whole["Sample"] == "Train"]
df_whole_test = df_whole[df_whole["Sample"] == "Test"]
df_whole_test_2019 = df_whole_test[df_whole_test.index.year == 2019]
df_whole_test_2020 = df_whole_test[df_whole_test.index.year == 2020]
output_train_whole = Get_TradeData(df_whole_train)
output_test_whole = Get_TradeData(df_whole_test)
output_test_whole_2019 = Get_TradeData(df_whole_test_2019)
output_test_whole_2020 = Get_TradeData(df_whole_test_2020)
output_train_whole["BuyandHold"] = (100 * output_train_whole["Adj Close"])/(output_train_whole.iloc[0]["Adj Close"])
output_test_whole["BuyandHold"] = (100*output_test_whole["Adj Close"])/(output_test_whole.iloc[0]["Adj Close"])
output_test_whole_2019["BuyandHold"] = (100 * output_test_whole_2019["Adj Close"])/(output_test_whole_2019.iloc[0]
["Adj Close"])
output_test_whole_2020["BuyandHold"] = (100 * output_test_whole_2020["Adj Close"])/(output_test_whole_2020.iloc[0]
["Adj Close"])
Get_results_PL_metrics(output_test_whole_2019, Rf, 2019)
Get_results_PL_metrics(output_test_whole_2020, Rf, 2020)
# Plot model value against Buy and Hold and save the figure
plt.figure(figsize=(10,5))
plt.plot(output_train_whole["CurrentVal"], 'b-', label="Value (Model)")
plt.plot(output_train_whole["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold")
plt.xlabel("Date", fontsize=12)
plt.ylabel("Value", fontsize=12)
plt.legend()
plt.title("Train Sample "+ str(index) + " GBM Whole Dataset", fontsize=16)
plt.savefig("Train Sample Whole Dataset GBM Model" + str(index) +'.png')
plt.show()
plt.close()
# Plot model value against Buy and Hold and save the figure
plt.figure(figsize=(10,5))
plt.plot(output_test_whole["CurrentVal"], 'b-', label="Value (Model)")
plt.plot(output_test_whole["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold")
plt.xlabel("Date", fontsize=12)
plt.ylabel("Value", fontsize=12)
plt.legend()
plt.title("Test Sample "+ str(index) + " GBM Whole Dataset", fontsize=16)
plt.savefig("Test Sample Whole Dataset GBM Model" + str(index) +'.png')
plt.show()
plt.close()
# __Comments:__ <span style="color:Blue"> The performance on the Train Sample shows the model has definitely learnt the pattern rather than over-fitting, but its performance on the Test Sample is very poor
# ## <span style="color:Maroon">Segment Model
df_model_train = df_model[df_model["Sample"] == "Train"]
df_model_test = df_model[df_model["Sample"] == "Test"]
df_model_test_2019 = df_model_test[df_model_test.index.year == 2019]
df_model_test_2020 = df_model_test[df_model_test.index.year == 2020]
output_train_model = Get_TradeData(df_model_train)
output_test_model = Get_TradeData(df_model_test)
output_test_model_2019 = Get_TradeData(df_model_test_2019)
output_test_model_2020 = Get_TradeData(df_model_test_2020)
output_train_model["BuyandHold"] = (100 * output_train_model["Adj Close"])/(output_train_model.iloc[0]["Adj Close"])
output_test_model["BuyandHold"] = (100 * output_test_model["Adj Close"])/(output_test_model.iloc[0]["Adj Close"])
output_test_model_2019["BuyandHold"] = (100 * output_test_model_2019["Adj Close"])/(output_test_model_2019.iloc[0]["Adj Close"])
output_test_model_2020["BuyandHold"] = (100 * output_test_model_2020["Adj Close"])/(output_test_model_2020.iloc[0]["Adj Close"])
Get_results_PL_metrics(output_test_model_2019, Rf, 2019)
Get_results_PL_metrics(output_test_model_2020, Rf, 2020)
# Line plot to save fig
plt.figure(figsize=(10,5))
plt.plot(output_train_model["CurrentVal"], 'b-', label="Value (Model)")
plt.plot(output_train_model["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold")
plt.xlabel("Date", fontsize=12)
plt.ylabel("Value", fontsize=12)
plt.legend()
plt.title("Train Sample Hurst Segment GBM Models "+ str(index), fontsize=16)
plt.savefig("Train Sample Hurst Segment GBM Models" + str(index) +'.png')
plt.show()
plt.close()
# Line plot to save fig
plt.figure(figsize=(10,5))
plt.plot(output_test_model["CurrentVal"], 'b-', label="Value (Model)")
plt.plot(output_test_model["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold")
plt.xlabel("Date", fontsize=12)
plt.ylabel("Value", fontsize=12)
plt.legend()
plt.title("Test Sample Hurst Segment GBM Models" + str(index), fontsize=16)
plt.savefig("Test Sample Hurst Segment GBM Models" + str(index) +'.png')
plt.show()
plt.close()
# __Comments:__ <span style="color:Blue"> Based on its performance on the Train Sample, the model has definitely learnt the pattern rather than over-fitting. The model also performs better on the Test Sample than the single model does (though still not compared to the Buy and Hold strategy). Hurst-exponent-based segmentation has definitely added value to the model
#
#
| Dev/S&P 500/Codes/07 GBM Performance Results .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Details on the system module
# **For a description of how to use the most important functionality, please check out the tutorials and the API documentation.**
#
# The following example code shows how to use most of the functionality of the `tempo.dynamics` module. This code is *supplementary* to the documentation and also includes functionality that is only relevant to the inner workings of the TimeEvolvingMPO package. Sections that show example code that is not part of the API are marked with three asterisks `***`.
#
# **Contents:**
#
# * A: Dynamics class
# * A1: expectations()
# * A2: export() and import_dynamics()
#
# +
import sys
sys.path.insert(0,'..')
import numpy as np
import matplotlib.pyplot as plt
import time_evolving_mpo as tempo
# -
# ## A: Dynamics class
dyn_A = tempo.Dynamics(name="coherent spin dynamics A")
print(dyn_A)
dyn_A.add(0.1, 0.95* tempo.operators.spin_dm("up") + 0.05* tempo.operators.spin_dm("x+") )
dyn_A.add(0.2, 0.8* tempo.operators.spin_dm("up") + 0.2* tempo.operators.spin_dm("x+") )
dyn_A.add(0.3, 0.6* tempo.operators.spin_dm("up") + 0.4* tempo.operators.spin_dm("x+") )
dyn_A.add(0.4, 0.2* tempo.operators.spin_dm("up") + 0.8* tempo.operators.spin_dm("x+") )
print(dyn_A)
len(dyn_A)
dyn_A.states
dyn_A.times
dyn_A.add(0.0, 1.0* tempo.operators.spin_dm("up") + 0.0* tempo.operators.spin_dm("z+") )
dyn_A.times
dyn_A.add(0.15, 0.9* tempo.operators.spin_dm("up") + 0.1* tempo.operators.spin_dm("z+") )
dyn_A.times
# ### A1: expectations()
t, z = dyn_A.expectations(tempo.operators.sigma("z"), real=True)
print(t, z)
plt.plot(*dyn_A.expectations(real=True), label=r"$trace$")
plt.plot(*dyn_A.expectations(tempo.operators.sigma("x"), real=True), label=r"$<\sigma_x>$")
plt.plot(*dyn_A.expectations(tempo.operators.sigma("y"), real=True), label=r"$<\sigma_y>$")
plt.plot(*dyn_A.expectations(tempo.operators.sigma("z"), real=True), label=r"$<\sigma_z>$")
plt.legend()
# ### A2: export() and import_dynamics()
dyn_A.export("details_dynamics.tempoDynamics", overwrite=True)
dyn_A2 = tempo.import_dynamics("details_dynamics.tempoDynamics")
print(dyn_A2)
plt.plot(*dyn_A2.expectations(real=True), label=r"$trace$")
plt.plot(*dyn_A2.expectations(tempo.operators.sigma("x"), real=True), label=r"$<\sigma_x>$")
plt.plot(*dyn_A2.expectations(tempo.operators.sigma("y"), real=True), label=r"$<\sigma_y>$")
plt.plot(*dyn_A2.expectations(tempo.operators.sigma("z"), real=True), label=r"$<\sigma_z>$")
plt.legend()
| examples/details_dynamics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## A Study of Housing price in Beijing
# This project follows the CRISP-DM process to answer the questions outlined below, by <NAME>
# +
#import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.svm import LinearSVR, SVR
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV, train_test_split
import random
# -
# ### Business Understanding
#
# The Beijing housing market is highly important to the people and leaders of Beijing, and recently housing prices have been increasing at a tremendous speed. As a Chinese citizen, I am quite anxious about this market in the capital of China.
#
# With the housing price data of Beijing from 2011 to 2017, we will at first have a study with main focus on the following questions:
# 1. Which factors influence the price the most?
# 2. At what speed is the price increasing?
#
# Then we will try to build a model to forecast housing prices in Beijing, and compare the predictions with the real results, using RMSE (root mean squared error) as the metric to measure our model.
#
# These studies may provide some advice to help people better understand the market in Beijing and make decisions, and may also help the Chinese government control the market more efficiently.
# ### Data Understanding
#load dataset
housing = pd.read_csv('Data/data.csv', sep=',', encoding='iso-8859-1')
housing.info()
# Here, according to the Kaggle data context:
#
# 1. url: the url which fetches the data
# 2. id: the id of transaction
# 3. Lng & Lat: coordinates, using the BD09 protocol.
# 4. Cid: community id
# 5. tradeTime: the time of transaction
# 6. DOM: active days on market. Learn more at https://en.wikipedia.org/wiki/Days_on_market
# 7. followers: the number of people who follow the transaction.
# 8. totalPrice: the total price
# 9. price: the average price by square
# 10. square: the area of the house
# 11. livingRoom: the number of living room
# 12. drawingRoom: the number of drawing room
# 13. kitchen: the number of kitchen
# 14. bathRoom: the number of bathrooms
# 15. floor: the height of the house.
# 16. buildingType: including tower (1), bungalow (2), combination of plate and tower (3), plate (4).
# 17. constructionTime: the time of construction
# 18. renovationCondition: including other (1), rough (2), simplicity (3), hardcover (4)
# 19. buildingStructure: including unknown (1), mixed (2), brick and wood (3), brick and concrete (4), steel (5) and steel-concrete composite (6).
# 20. ladderRatio: the proportion between the number of residents on the same floor and the number of elevators; it describes how many elevators a resident has on average.
# 21. elevator: has an elevator (1) or not (0)
# 22. fiveYearsProperty: whether the owner has held the property for less than 5 years
housing.head()
# We will build a map to have a general idea:
fig = plt.scatter(x=housing['Lat'], y=housing['Lng'], s=10, label='Price', \
c=housing['district'], cmap=plt.get_cmap('jet'))
plt.colorbar(fig)
plt.legend()
plt.show()
# ### Prepare Data
# Some data preparation steps need to be done before using the dataset for exploration, including:
#
# 1. Drop the following columns: 'url', 'id', 'price', 'Cid', 'DOM', 'followers', 'communityAverage'
# 2. Drop polluted rows on 'livingRoom', 'drawingRoom', 'bathRoom', 'constructionTime'
# 3. Drop NA rows in 'buildingType'
# 4. Format data
#1. Drop the following columns: 'url', 'id', 'price', 'Cid', 'DOM', 'followers', 'communityAverage'
housing = housing.drop(['url','id','price','Cid','DOM', 'followers', 'communityAverage'], axis=1)
housing.head()
housing.info()
#2. Drop polluted rows
# At first, we would like to see the polluted data:
print("livingRoom:" + str(housing['livingRoom'].unique()))
print("drawingRoom:" + str(housing['drawingRoom'].unique()))
print("bathRoom:" + str(housing['bathRoom'].unique()))
print("constructionTime:" + str(housing['constructionTime'].unique()))
housing = housing[housing.livingRoom != '#NAME?']
print("livingRoom:" + str(housing['livingRoom'].unique()))
print("drawingRoom:" + str(housing['drawingRoom'].unique()))
print("bathRoom:" + str(housing['bathRoom'].unique()))
print("constructionTime:" + str(housing['constructionTime'].unique()))
housing = housing[housing.constructionTime != 'δ֪']
print("livingRoom:" + str(housing['livingRoom'].unique()))
print("drawingRoom:" + str(housing['drawingRoom'].unique()))
print("bathRoom:" + str(housing['bathRoom'].unique()))
print("constructionTime:" + str(housing['constructionTime'].unique()))
housing.info()
#3 Drop NA rows in 'buildingType'
housing = housing.dropna(subset=['buildingType'], axis=0)
housing['livingRoom'] = housing['livingRoom'].astype('int64')
housing['drawingRoom'] = housing['drawingRoom'].astype('int64')
housing['bathRoom'] = housing['bathRoom'].astype('int64')
housing['constructionTime'] = housing['constructionTime'].astype('int64')
housing.info()
housing.isnull().sum()
# Here, we finally have clean data.
# ### Answer Questions based on the dataset
# Now our data is ready! We can use this dataset to study our questions:
# Question 1: Which factors influence the price the most?
f,ax = plt.subplots(figsize=(20, 20))
sns.heatmap(housing.corr(), annot = True, linewidth = .5, fmt = ".3f")
plt.show()
# From the heatmap above, reading the totalPrice row, we see that square, livingRoom, bathRoom, and tradeTime have the greatest correlation with the price:
#
# 0.559 (square), 0.427 (livingRoom), 0.430 (bathRoom) and 0.416 (tradeTime).
#
# In general, the bigger the house, the more expensive it is, which is quite normal.
#
# There is also a high correlation with tradeTime, which means housing prices increase with time.
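# The values read off the heatmap can also be extracted programmatically by sorting the target's correlation column; a sketch on toy data (on the real data this would be `housing.corr()['totalPrice']`; the toy column names other than square/totalPrice are illustrative):

```python
import numpy as np
import pandas as pd

# toy stand-in for the housing DataFrame
rng = np.random.default_rng(0)
n = 200
square = rng.normal(100, 20, n)
toy = pd.DataFrame({
    "square": square,
    "totalPrice": 5 * square + rng.normal(0, 10, n),  # price driven mostly by area
    "noise": rng.normal(0, 1, n),                     # unrelated column
})

# correlations with the target, strongest first (target itself excluded)
top = toy.corr()["totalPrice"].drop("totalPrice").abs().sort_values(ascending=False)
```

# Sorting by absolute value also surfaces strong *negative* correlations, which a visual scan of the heatmap can miss.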
# Question 2: At what speed is the price increasing?
housing['tradeTime'] = housing['tradeTime'].astype('datetime64[ns]')
housing['tradeTime'].head()
price_by_trade_time = pd.DataFrame()
price_by_trade_time['price'] = housing['totalPrice']/housing['square']
price_by_trade_time.index = housing['tradeTime'].astype('datetime64[ns]')
price_by_trade_month = price_by_trade_time.resample('M').mean().to_period('M').fillna(0)
price_by_trade_month.plot(kind='line')
plt.show()
# It seems that we have too much data missing before 2010, so we will only concentrate on data after 2010.
price_by_trade_month = price_by_trade_month[price_by_trade_month.index > datetime(2009, 12, 31)]
price_by_trade_month.plot(kind='line')
plt.show()
print(price_by_trade_month.head())
print(price_by_trade_month.tail())
# Supposing that the prices in 2010-01 and 2010-02 are the same, a simple computation gives the geometric average growth rate from 2010 to 2018:
average_growth_rate = (5.966570/1.252939) ** (1/8) - 1
print("The geometric average growth rate from 2010 to 2018 is about " + str(average_growth_rate*100) +'%')
# Well, that is quite a tremendous speed; investing in real estate was a good idea in this period!
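# The same computation can be written as a small reusable function (compound annual growth rate between two price levels, using the start and end values printed above):

```python
def geometric_growth_rate(start, end, years):
    """Compound annual growth rate between two price levels."""
    return (end / start) ** (1 / years) - 1

# 2010-01 vs 2018 average price per square meter, over 8 years
rate = geometric_growth_rate(1.252939, 5.966570, 8)
```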
# ### Build a model to forecast housing prices in Beijing
# As we can see from the previous result, much data is missing before 2010; this early data may cause outlier problems, so we had better not use it.
housing = housing[housing.tradeTime > datetime(2009, 12, 31)]
#Change tradeTime to numerical result
housing.tradeTime = pd.to_datetime(housing.tradeTime).astype(np.int64)
# We split the data in training and test sets:
X = housing.loc[:, housing.columns != 'totalPrice']
Y = housing['totalPrice']
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
# #### Pipeline Construction:
#
# We will build a following pipeline to construct our model:
#
# 
# create DataFrameSelector class
# ref: https://stackoverflow.com/questions/48491566/name-dataframeselector-is-not-defined?noredirect=1&lq=1
class DataFrameSelector(BaseEstimator, TransformerMixin):
    """Select the given columns from a DataFrame and return their values."""
    def __init__(self, attribute_name):
        self.attribute_name = attribute_name
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attribute_name].values
num_attributes = ['Lng', 'Lat', 'square', 'tradeTime', 'livingRoom', 'drawingRoom', 'kitchen', 'bathRoom', 'constructionTime', 'ladderRatio', 'elevator', 'fiveYearsProperty', 'subway']
cat_attributes = ['buildingType', 'renovationCondition', 'buildingStructure', 'district']
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attributes)),
('std_scaler', StandardScaler())
])
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attributes)),
('encoder', OneHotEncoder())
])
pipeline_prepare = FeatureUnion([
('num_pipeline', num_pipeline),
('cat_pipeline', cat_pipeline)
])
pipeline_total = Pipeline([
('prepare', pipeline_prepare),
('svr', LinearSVR()),
])
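# As a side note, in recent scikit-learn versions the custom-selector-plus-`FeatureUnion` pattern is commonly written with `ColumnTransformer`; a hypothetical equivalent sketch on toy data (column names illustrative):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# equivalent of num_pipeline + cat_pipeline + FeatureUnion in one object
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["square"]),     # scale numeric columns
    ("cat", OneHotEncoder(), ["district"]),    # one-hot encode categoricals
])

toy = pd.DataFrame({"square": [50.0, 80.0, 120.0],
                    "district": ["a", "b", "a"]})
Z = preprocess.fit_transform(toy)  # 1 scaled column + 2 one-hot columns
```

# This removes the need for the `DataFrameSelector` helper, since `ColumnTransformer` selects columns by name directly.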
# We will use GridSearchCV to find the optimal parameters:
parameters = {'svr__C': [0.1, 0.5, 0.7],
'svr__loss': ['squared_epsilon_insensitive','epsilon_insensitive'],
'svr__max_iter':[1000]}
model = GridSearchCV(pipeline_total, param_grid=parameters, verbose=3, n_jobs=4)
model.fit(X_train, Y_train)
Y_preds = model.predict(X_test)
Y_preds[:10]
Y_test.head(10)
# #### Result & Measure Performance
mse = mean_squared_error(Y_test, Y_preds)
mse
rmse = np.sqrt(mse)
print("Root-mean-square error is " + str(rmse))
# +
display_size = 300
test_index = random.choices(X_test.index, k=display_size)
Y_label = [Y_test[index] for index in test_index]
Y_predict = model.predict(X_test.loc[test_index])
x = [i+1 for i in range(display_size)]
plt.figure(figsize=(18,10))
plt.plot(x, Y_label, c='red', label='label')
plt.plot(x, Y_predict, c='blue', label='predict')
plt.title('Prediction vs Label on a sample data')
plt.xlabel('random sample')
plt.ylabel('price')
plt.legend()
plt.show()
# -
# Here we can see that, in general, we have quite a good result. However, some peak values (very high and very low prices) we are not able to predict well. These peak values may relate to information we cannot infer from the dataset.
#
# For example, information related to the advantages of each location: the number of schools within 1 km, the number of metro stations nearby, security information, etc.
#
# This information is quite important for house pricing, but it is not given in the dataset, which may be the reason we are not able to predict more accurately.
# ### Conclusion & Reflection
# In this notebook, we have studied the factors which impact house pricing the most, and observed a tremendous growth of more than 20% per year in the market.
#
# Finally, we built a model to predict Beijing's housing prices with an RMSE of about 123.89, which is not a perfect result, as some information, such as the advantages of each location, is not provided.
#
# So an improvement may be to use data-mining techniques, or to search for this information with Google Maps, and re-train our model.
# Another idea is that splitting our data randomly may not be such a good idea: since we want to predict future prices, we should split the data by date.
#
# That is, the test data should use the more recent data. OK, here we will try:
# +
housing.sort_values('tradeTime', ascending=False, inplace = True)
housing.shape
# +
X_test2 = housing.head(59597).loc[:, housing.columns != 'totalPrice']
Y_test2 = housing.head(59597)['totalPrice']
X_train2 = housing.tail(238388).loc[:, housing.columns != 'totalPrice']
Y_train2 = housing.tail(238388)['totalPrice']
# -
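# The hard-coded row counts above are an exact 80/20 split of the 297,985 time-sorted rows; a sketch of a helper that derives the counts from a fraction instead:

```python
import numpy as np
import pandas as pd

def time_ordered_split(df, time_col, test_frac=0.2):
    """Use the most recent test_frac of rows as the test set."""
    df = df.sort_values(time_col)              # oldest first
    n_test = int(round(len(df) * test_frac))
    return df.iloc[:-n_test], df.iloc[-n_test:]

# toy check: integer "times" stand in for tradeTime
toy = pd.DataFrame({"tradeTime": np.arange(10), "totalPrice": np.arange(10) * 2.0})
train, test = time_ordered_split(toy, "tradeTime", test_frac=0.2)
```

# This guarantees every test row is strictly later than every training row, matching the forecasting setup.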
model2 = GridSearchCV(pipeline_total, param_grid=parameters, verbose=3, n_jobs=4)
model2.fit(X_train2, Y_train2)
Y_preds2 = model2.predict(X_test2)
mse2 = mean_squared_error(Y_test2, Y_preds2)
rmse2 = np.sqrt(mse2)
print("Root-mean-square error is " + str(rmse2))
# +
display_size2 = 300
test_index2 = random.choices(X_test2.index, k=display_size2)
Y_label2 = [Y_test2[index] for index in test_index2]
Y_predict2 = model2.predict(X_test2.loc[test_index2])  # use model2, trained on the time-ordered split
x2 = [i+1 for i in range(display_size2)]
plt.figure(figsize=(18,10))
plt.plot(x2, Y_label2, c='red', label='label')
plt.plot(x2, Y_predict2, c='blue', label='predict')
plt.title('Prediction vs Label on future data')
plt.xlabel('random sample')
plt.ylabel('price')
plt.legend()
plt.show()
# -
# Comparing rmse2 and rmse (237.83 vs 123.89), we see that the error increased by more than 90%, so predicting future data is a more challenging problem. The reason may be that uncertainty increases for the future; for example, policies may change over time, which makes prediction more difficult.
| A_Study_of_housing_price_in_Beijing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# +
import os
import azureml.core
from azureml.core import Workspace, Dataset, Datastore, ComputeTarget, Experiment, ScriptRunConfig
from azureml.pipeline.steps import PythonScriptStep
from azureml.pipeline.core import Pipeline
# -
from azureml.core import Workspace, Datastore, Experiment
ws = Workspace('bd04922c-a444-43dc-892f-74d5090f8a9a','mlplayarearg','testdeployment')
datastore = Datastore.get(ws,'gen2asmanagedidentity')
mydataset = Dataset.File.from_files([(datastore, 'somedata/')])
mydataset.to_path()
# !pip install -U azureml-sdk
# !pip list
# +
from azureml.data import OutputFileDatasetConfig
# learn more about the output config
help(OutputFileDatasetConfig)
# +
# %%writefile step1.py
import os
import sys
import pandas as pd
mounted_input_path = sys.argv[1]
mounted_output_path = sys.argv[2]
os.makedirs(mounted_output_path, exist_ok=True)
print(sys.argv[1])
print(sys.argv[2])
mydf1 = pd.read_csv(os.path.join(sys.argv[1],'mydata.csv'))
mydf2 = pd.read_csv(os.path.join(sys.argv[1],'testdataatroot.csv'))
print(mydf1.shape)
print(mydf2.shape)
test = pd.DataFrame(range(1,1000), columns=["testColumn"])
mounted_output_path = sys.argv[2]
print(f"mounted output path is {mounted_output_path}")
os.makedirs(mounted_output_path, exist_ok=True)
csvfilepath = os.path.join(sys.argv[2] ,'file.csv')
test.to_csv(csvfilepath)
# +
outputds = OutputFileDatasetConfig(destination=(datastore, 'somedata/20201218'))
# -
prep_step = PythonScriptStep(name='demo step',
script_name="step1.py",
# mount fashion_ds dataset to the compute_target
arguments=[mydataset.as_named_input('myds').as_mount(), outputds],
compute_target='automlmanishds3',
allow_reuse=True)
pipeline = Pipeline(workspace=ws, steps=[prep_step])
# +
experiment_name = 'showcasing-outputdata'
source_directory = '.'
experiment = Experiment(ws, experiment_name)
experiment
# -
run = experiment.submit(pipeline)
#run.wait_for_completion(show_output=True)
| notebooks/variabledatastore.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Read from the tokenized text
f = open('C:\\Users\\user\\Desktop\\nlp\\ark-tweet-nlp-0.3.2\\sentiment-analysis\\data\\output\\tokenized2class.txt', 'r')
temp = f.readlines()
f.close()
len(temp)
# split each line into tokens, POS types, confidence, and original text
tokens = []
print(len(temp))
for i in range(len(temp)):
    # for i in range(3000):
    tokens.append(temp[i].split('\t'))
tokens[0][1]
len(tokens)
# split the tokens and types into arrays
for i in range(len(tokens)):
    tokens[i][0] = tokens[i][0].split(" ")
    tokens[i][1] = tokens[i][1].split(" ")
tokens[0][0]
# remove duplicate words from a list, preserving order
def unique_list(l):
    ulist = []
    [ulist.append(x) for x in l if x not in ulist]
    return ulist
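# An equivalent order-preserving dedup can be written with `dict.fromkeys`, which is both shorter and faster than repeated `in` checks on a list:

```python
def unique_list_fast(l):
    # dict keys preserve insertion order (Python 3.7+) and drop duplicates
    return list(dict.fromkeys(l))

deduped = unique_list_fast(["good", "movie", "good", "plot"])
```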
# +
# keep only the content words (nouns, verbs, adjectives, adverbs, hashtags, etc.)
featurePosList = {'N', 'O', '^', 'S', 'Z', 'V', 'A', 'R', '!', '#', 'E'}
extractedFeatures = []
for i in range(len(tokens)):
    feature_str = ""  # renamed from `str` to avoid shadowing the built-in
    for j in range(len(tokens[i][1])):
        if tokens[i][1][j] in featurePosList:
            feature_str += tokens[i][0][j] + " "
    print(feature_str)
    feature_str = ' '.join(unique_list(feature_str.split()))
    extractedFeatures.append(feature_str)
# -
extractedFeatures
# relate with classes from the original text file
f = open('C:\\Users\\user\\Desktop\\nlp\\ark-tweet-nlp-0.3.2\\sentiment-analysis\\data\\input\\topic.2class.txt', 'r')
lines = f.readlines()
f.close()
arr = []
print(len(lines))
for i in range(len(lines)):
    arr.append(lines[i].split('\t'))
# +
# write the extracted features
file = open('C:\\Users\\user\\Desktop\\nlp\\ark-tweet-nlp-0.3.2\\sentiment-analysis\\data\\output\\extracted.2class.txt', "w")
for i in range(len(temp)):
    text = arr[i][2]
    file.write(text)
    file.write('\t')
    file.write(extractedFeatures[i])
    file.write('\n')
file.close()
# -
| rb/extract2class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="bhWV8oes-wKR"
# # COURSE: A deep understanding of deep learning
# ## SECTION: Overfitting, cross-validation, regularization
# ### LECTURE: Splitting data into train, devset, test
# #### TEACHER: <NAME>, sincxpress.<EMAIL>
# ##### COURSE URL: udemy.com/course/dudl/?couponCode=202201
# + id="YeuAheYyhdZw"
# import libraries
import numpy as np
from sklearn.model_selection import train_test_split
# + id="q-YUb7pW19yy"
### create fake dataset (same as in previous videos)
fakedata = np.tile(np.array([1,2,3,4]),(10,1)) + np.tile(10*np.arange(1,11),(4,1)).T
fakelabels = np.arange(10)>4
print(fakedata), print(' ')
print(fakelabels)
# + [markdown] id="UhkvsJ6g6uXr"
# # Using train_test_split
# + id="8bxbHGkP7JW3"
# specify sizes of the partitions
# order is train,devset,test
partitions = [.8,.1,.1]
# split the data (note the third input, and the TMP in the variable name)
train_data,testTMP_data, train_labels,testTMP_labels = \
train_test_split(fakedata, fakelabels, train_size=partitions[0])
# now split the TMP data; within the remainder, the devset takes partitions[1]/(partitions[1]+partitions[2])
split = partitions[1] / np.sum(partitions[1:])
devset_data,test_data, devset_labels,test_labels = \
              train_test_split(testTMP_data, testTMP_labels, train_size=split)
# print out the sizes
print('Training data size: ' + str(train_data.shape))
print('Devset data size: ' + str(devset_data.shape))
print('Test data size: ' + str(test_data.shape))
print(' ')
# print out the train/test data
print('Training data: ')
print(train_data)
print(' ')
print('Devset data: ')
print(devset_data)
print(' ')
print('Test data: ')
print(test_data)
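# The two-stage call above can be wrapped in a reusable helper (a sketch, not part of the course code) that takes the three proportions directly:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def train_dev_test_split(X, y, proportions=(.8, .1, .1), seed=0):
    """Split into train/dev/test via two successive train_test_split calls."""
    p_train, p_dev, p_test = proportions
    X_train, X_tmp, y_train, y_tmp = train_test_split(
        X, y, train_size=p_train, random_state=seed)
    # within the remainder, dev takes p_dev/(p_dev+p_test)
    X_dev, X_test, y_dev, y_test = train_test_split(
        X_tmp, y_tmp, train_size=p_dev / (p_dev + p_test), random_state=seed)
    return X_train, X_dev, X_test, y_train, y_dev, y_test

X = np.arange(40).reshape(20, 2)
y = np.arange(20)
X_tr, X_dev, X_te, y_tr, y_dev, y_te = train_dev_test_split(X, y)
```

# Note that the second call's `train_size` must be the *relative* devset fraction, not the absolute one; passing the absolute fraction is a common bug.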
# + [markdown] id="EvUQFxSTV2SB"
# # Splitting the data manually using numpy
# + id="XUZcKNWsXg00"
# partition sizes in proportion
partitions = np.array([.8,.1,.1])
print('Partition proportions:')
print(partitions)
print(' ')
# convert those into integers
partitionBnd = np.cumsum(partitions*len(fakelabels)).astype(int)
print('Partition boundaries:')
print(partitionBnd)
print(' ')
# random indices
randindices = np.random.permutation(range(len(fakelabels)))
print('Randomized data indices:')
print(randindices)
print(' ')
# + id="Vre4YiQBZmjy"
# select rows for the training data
train_dataN = fakedata[randindices[:partitionBnd[0]],:]
train_labelsN = fakelabels[randindices[:partitionBnd[0]]]
# select rows for the devset data
devset_dataN = fakedata[randindices[partitionBnd[0]:partitionBnd[1]],:]
devset_labelsN = fakelabels[randindices[partitionBnd[0]:partitionBnd[1]]]
# select rows for the test data
test_dataN = fakedata[randindices[partitionBnd[1]:],:]
test_labelsN = fakelabels[randindices[partitionBnd[1]:]]
# + id="vbTLW0MkXg-V"
# print out the sizes
print('Training data size: ' + str(train_dataN.shape))
print('Devset size: ' + str(devset_dataN.shape))
print('Test data size: ' + str(test_dataN.shape))
print(' ')
# print out the train/test data
print('Training data: ')
print(train_dataN)
print(' ')
print('Devset data: ')
print(devset_dataN)
print(' ')
print('Test data: ')
print(test_dataN)
# + id="CaP1IQDrXhBc"
| 04_Overfitting_n_CrossValidation/DUDL_overfitting_trainDevsetTest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from selenium import webdriver
browser = webdriver.Chrome('./chromedriver.exe')
browser.get('https://www.melon.com/chart/index.htm')
html = browser.page_source
from bs4 import BeautifulSoup as bs
soup = bs(html, 'html.parser')
tags = soup.select('div.ellipsis.rank01> span > a')
for i in tags:
    # the title attribute contains the Korean word "재생" ("play");
    # str.strip removes a set of characters, not a substring, so use replace
    print(i['title'].replace('재생', '').strip())
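# The subtlety here is that `str.strip` removes a *set of characters*, not a substring; a quick illustration with a hypothetical title value:

```python
title = "재생 IU - Blueming 재생"               # hypothetical title attribute value
stripped = title.strip('재생')                  # strips the characters 재/생 from both ends only
replaced = title.replace('재생', '').strip()    # removes the substring everywhere, then trims spaces
```

# If a song title itself began or ended with 재 or 생, `strip('재생')` would also eat those legitimate characters.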
| scraping_selenium_melon.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/roneljr/Linear-Algebra-58019/blob/main/Application.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Iulxt2vdEtGI"
# ##Price of one apple and one orange
# + colab={"base_uri": "https://localhost:8080/"} id="A9Ed6rfhEl3r" outputId="c335c26f-c97a-4771-9c81-08fce3fb4fe8"
import numpy as np
from scipy.linalg import solve
A = np.array([[20,10],[17,22]])
B = np.array([[350],[500]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
# + colab={"base_uri": "https://localhost:8080/"} id="8G3GNX6PHLM5" outputId="f31e8f97-b935-42e7-f8fd-aa61c870c02f"
inv_A = np.linalg.inv(A)
print(inv_A)
print()
X = np.linalg.inv(A).dot(B)
print(X)
# + colab={"base_uri": "https://localhost:8080/"} id="1-47PrqJH9_J" outputId="f6757e3d-5797-496c-f9b1-73d31e0fc582"
X = np.dot(inv_A,B)
print(X)
# + [markdown] id="j_ydh4RtLEPT"
# ##Solving for three linear equations with unknown variables of x, y, and z
# + colab={"base_uri": "https://localhost:8080/"} id="1nCz3oHXLCir" outputId="bd7e1533-1582-4d8f-ccda-787e12a48e5c"
#4x+3y+2z=25
#-2x+2y+3z=-10
#3x-5y+2z=-4
import numpy as np
from scipy.linalg import solve
A = np.array([[4,3,2],[-2,2,3],[3,-5,2]])
B = np.array([[25],[-10],[-4]])
print(A)
print()
print(B)
print()
X = solve(A,B)
print(X)
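# Whichever route is used (`solve`, or `inv` followed by `dot`), the result can be verified by substituting it back into the system; a quick check using NumPy's equivalent solver:

```python
import numpy as np

# same system as above: 4x+3y+2z=25, -2x+2y+3z=-10, 3x-5y+2z=-4
A = np.array([[4, 3, 2], [-2, 2, 3], [3, -5, 2]], dtype=float)
B = np.array([[25], [-10], [-4]], dtype=float)
X = np.linalg.solve(A, B)   # NumPy equivalent of scipy.linalg.solve
residual = A @ X - B        # should be ~0 if X solves the system
```

# In practice `solve` is preferred over computing the explicit inverse, as it is both faster and numerically more stable.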
| Application.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="assets/arise/header.jpg" width="900">
# <img style="float: left;" src="assets/arise/logo_NOMAD.png" width=300>
# <img style="float: right;" src="assets/arise/logo_MPG.png" width=170>
# In this tutorial, we introduce the computational framework ARISE (<ins>Ar</ins>tificial <ins>I</ins>ntelligence based <ins>S</ins>tructure <ins>E</ins>valuation) for crystal-structure recognition in single- and polycrystalline systems [1]. ARISE can treat more than 100 crystal structures, including one-dimensional, two-dimensional, and bulk materials, in a robust and threshold-independent fashion. The Bayesian-deep-learning model yields not only a classification but also uncertainty estimates, which are principled (i.e., they approximate the uncertainty estimates of a Gaussian process) [2,3]. These uncertainty estimates correlate with crystal order.
#
# 
#
# For additional details, please refer to
#
# [1] <NAME>, <NAME>, and <NAME>, arXiv:2103.09777, (2021)
#
# ARISE is part of the code framework *ai4materials* (https://github.com/angeloziletti/ai4materials).
#
#
# The outline of this tutorial is as follows:
#
# * Quickstart (jump here if you want to directly use ARISE)
# * Single-crystal classification
# * Polycrystal classification
# * Unsupervised learning / explanatory analysis of the trained model
# ## Import packages
# +
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import ase
from ase.io import read, write
from ase.visualize import view
import ipywidgets as widgets
from ipywidgets import interactive
from timeit import default_timer as timer
from collections import defaultdict
from ai4materials.utils.utils_crystals import get_nn_distance, scale_structure
from ai4materials.descriptors.quippy_soap_descriptor import quippy_SOAP_descriptor
from ai4materials.utils.utils_config import set_configs, setup_logger
from ai4materials.models.cnn_polycrystals import predict_with_uncertainty
from ai4materials.models import ARISE
import quippy
from quippy import descriptors
import matplotlib
import matplotlib.pyplot as plt
import os
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR) # Suppress TF warnings
from keras.models import load_model
import json
import pickle
# filepaths
prototype_path = './data/arise/single_crystals/PROTOTYPES' # prototypes used in [1]
bcc_prototype_path = os.path.join(prototype_path,
'Elemental_solids',
'bcc_W_A_cI2_229_a',
'bcc_W_A_cI2_229_a.in' )
fcc_prototype_path = os.path.join(prototype_path,
'Elemental_solids',
'fcc_Cu_A_cF4_225_a',
'fcc_Cu_A_cF4_225_a.in' )
NaCl_prototype_path = os.path.join(prototype_path,
'Binaries',
'RockSalt_NaCl',
'RockSalt_NaCl.in' )
nn_model_path = './data/arise/nn_model/AI_SYM_Leitherer_et_al_2021.h5'
nn_model_class_info_path = './data/arise/nn_model'
calculations_path = './data/arise/calculations'
fcc_bcc_desc = np.load('./data/arise/saved_soap_descriptors/fcc_Cu_A_cF4_225_a_bcc_W_A_cI2_229_a.npy')
fcc_desc = fcc_bcc_desc[:1]
bcc_desc = fcc_bcc_desc[1:]
nacl_desc = np.load('./data/arise/saved_soap_descriptors/RockSalt_NaCl.npy')
version = 'py3' # or 'py2' -> defines python version of quippy
nb_jobs = 1
# set config file
main_folder = calculations_path
configs = set_configs(main_folder=main_folder)
logger = setup_logger(configs, level='INFO', display_configs=False)
logger.setLevel('INFO')
# -
# ## Quickstart
# Please specify the geometry files that you want to analyze in the list 'geometry_files'.
#
# ARISE can be used in two modes: Local and global (controlled via the keyword 'mode'). If mode = 'local', then the strided pattern matching (SPM) framework introduced in [1] is employed, requiring the definition of stride and box size (standard setting: $4 \overset{\circ}{\mathrm {A}}$ stride in all directions and $16 \overset{\circ}{\mathrm {A}}$ box size).
# ```python
# from ai4materials.models import ARISE
#
# geometry_files = [ ... ]
#
# predictions, uncertainty = ARISE.analyze(geometry_files, mode='global')
#
# predictions, uncertainty = ARISE.analyze(geometry_files, mode='local')
# ```
# Typically, one uses the global mode for single crystals and the local mode for large (polycrystalline) samples (to zoom into a given structure and detect structural defects such as grain boundaries). However, one can also mix this up and investigate, for instance, the global assignments for polycrystals (see, for instance, the analysis of STEM graphene images with grain boundaries in Fig. 3 of Ref. [1]).
#
# The following sections provide more details on the internal workings of ARISE and SPM.
# ## The Bayesian-deep-learning model
# Given an unknown atomic structure, the goal of crystal-structure recognition is - in general terms - to find the most similar prototype (that is currently known to be found in nature, for example). In the figure below, this list includes fcc, bcc, diamond, and hcp symmetry, but as we will see later, our framework allows one to treat a much broader range of materials.
#
# 
#
# The initial representation of a given atomic structure (single- or polycrystalline) are atomic positions and chemical species symbols (as well as lattice vectors for periodic systems).
#
# The first step is to define how this input will be treated in the ARISE framework, i.e., how we arrive at a meaningful prediction.
# ### Classification of mono-species single crystals
# As an example, consider the body-centered cubic system (see http://www.aflowlib.org/prototype-encyclopedia/A_cI2_229_a.html and also [4]); the prediction pipeline for single crystals is sketched as follows:
# 
# Each of these steps will be explained in the following.
#
# We first load the geometry file using ASE (https://wiki.fysik.dtu.dk/ase/index.html). We typically use the FHI-aims format for the geometry files (via ASE's geometry file parser, compatibility with many other formats is provided).
structure = read(bcc_prototype_path, ':', 'aims')[0]
# To avoid dependence on the lattice parameters, we isotropically scale a given structure using the function 'get_nn_distance' from the *ai4materials* package. By default, we compute the radial distribution function (as approximated by the histogram of nearest neighbor distances) and then choose the center of the maximally populated bin (i.e., the mode) as the nearest neighbor distance:
scale_factor = get_nn_distance(structure)
print('Scale factor for bcc structure = {}'.format(scale_factor))
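# As a rough sketch of this estimate (the actual implementation is ai4materials' get_nn_distance; the helper below is hypothetical and ignores periodic boundary conditions), one can histogram the nearest-neighbor distances and take the center of the fullest bin:

```python
import numpy as np

def nn_distance_mode(positions, n_bins=100):
    """Sketch of the scaling-factor estimate described above: histogram the
    nearest-neighbor distances and return the center of the fullest bin."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)   # exclude self-distances
    nn_dists = dists.min(axis=1)      # one nearest-neighbor distance per atom
    counts, edges = np.histogram(nn_dists, bins=n_bins)
    i = counts.argmax()
    return 0.5 * (edges[i] + edges[i + 1])
```

# For a simple cubic arrangement with lattice spacing $a$, this returns approximately $a$.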
# Given an atomic structure, scaling of atomic positions and unit cell is summarized in the function 'scale_structure':
scaled_structure = scale_structure(structure, scaling_type='quantile_nn',
atoms_scaling_cutoffs=[10., 20., 30., 40.])
# Using atomic positions $\mathbf{r}_i$ labeled by chemical species $c_i$ ($i = 1, ..., \text{N}_{\text{atoms}}$) directly as input to a classification model introduces several issues. Most importantly, physically meaningful invariances that we know to be true are not respected (invariance with respect to translations, rotations, and permutations of identical atoms). Moreover, the input dimension would depend on the number of atoms.
#
# A well-known and -tested materials science descriptor that is by definition invariant to the above mentioned symmetries is the so-called smooth-overlap-of-atomic-positions (SOAP) descriptor [5-7].
#
# In this tutorial (and in [1]) we employ the SOAP implementation that is being made available via the quippy package (https://github.com/libAtoms/QUIP) for non-commercial use.
#
# In short, given an atomic structure with N atoms, we obtain N SOAP vectors (respecting the mentioned invariances) that represent the local atomic environments of each atom. The local atomic environment of an atom is defined as a sphere (centered at that particular atom) with a certain tunable cutoff radius $R_C$. Each atom within that sphere is represented by a Gaussian function (with tunable width $\sigma$). The sum of these Gaussians defines the local density:
#
# $$ \rho_\mathscr{X}(\vec{r})=\sum_{i\in \mathscr{X}} \exp{\left(-\frac{(\vec{r}-\vec{r}_i)^2}{2\sigma^2}\right)}$$
#
# This density is expanded in terms of radial basis functions and spherical harmonics
#
# $$ \rho_\mathscr{X}(\vec{r})=\sum_{i\in \mathscr{X}} \exp{\left(-\frac{(\vec{r}-\vec{r}_i)^2}{2\sigma^2}\right)} =\sum_{blm}c_{blm}u_b(r) Y_{lm}(\hat{\vec{r}}),$$
#
# yielding a set of expansion coefficients $c_{blm}$.
#
# A rotational average of these coefficients yields the SOAP representation of the local atomic environment:
#
# $$ p(\mathscr{X})_{b_1b_2l}=\pi\sqrt{\frac{8}{2l+1}}\sum_{m}(c_{b_1 lm})^{\dagger} c_{b_2 lm}.$$
#
# Applying this procedure to every atom gives a collection of SOAP vectors, which one may average. The default behavior in quippy (that is employed in [1]) is to first average the coefficients and then perform the rotational average.
#
# The output of quippy is adapted using a typical averaging procedure (additional details such as the treatment of "cross-correlation terms" are discussed in the next section and also in the supplementary material of [1]).
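# To make the last formula concrete, here is a minimal numpy sketch of the rotational average (the coefficient array `c` is a hypothetical stand-in for the expansion coefficients, not quippy's internal layout):

```python
import numpy as np

def soap_power_spectrum(c):
    """Sketch of the power-spectrum formula above. `c` is a hypothetical array
    of expansion coefficients with shape (n_b, l_max + 1, 2 * l_max + 1),
    zero-padded over the m axis for |m| > l."""
    n_b, n_l, _ = c.shape
    p = np.empty((n_b, n_b, n_l), dtype=complex)
    for l in range(n_l):
        prefactor = np.pi * np.sqrt(8.0 / (2 * l + 1))
        # p[b1, b2, l] = prefactor * sum_m conj(c[b1, l, m]) * c[b2, l, m]
        p[:, :, l] = prefactor * np.einsum('am,bm->ab', c[:, l, :].conj(), c[:, l, :])
    return p
```

# Note that the result is Hermitian in the two radial indices, so in practice only the upper triangle needs to be stored.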
#
# First, we have to define some parameters necessary for the calculation of SOAP, summarized in the string
# descriptor_options that is used to create the SOAP descriptor object:
# +
cutoff = 4.0
central_weight = 0.0 # relative weight of central atom in atom density
atom_sigma = 0.1
p_b_c = scaled_structure.get_pbc()
l_max = 6
n_max = 9
n_Z = 1
n_species = 1
species_Z = structure.get_atomic_numbers()[0]
Z = structure.get_atomic_numbers()[0]
average = True
descriptor_options = (f'soap cutoff={cutoff} l_max={l_max} n_max={n_max}'
                      f' atom_sigma={atom_sigma} n_Z={n_Z}'
                      f' Z={{{Z}}} n_species={n_species} species_Z={{{species_Z}}}'
                      f' central_weight={central_weight} average={average}')
desc = descriptors.Descriptor(descriptor_options)
if version == 'py2':
from quippy import Atoms as quippy_Atoms
struct = quippy_Atoms(scaled_structure)
elif version == 'py3':
struct = scaled_structure
struct.set_pbc(p_b_c)
# -
# Given this, we can calculate the SOAP descriptor and visualize its components:
# +
#Compute SOAP descriptor
if version == 'py2':
struct.set_cutoff(desc.cutoff())
struct.calc_connect()
SOAP_descriptor = desc.calc(struct)['descriptor']
elif version == 'py3':
SOAP_descriptor = desc.calc(struct)['data']
# visualize descriptor
def vis_desc(SOAP_descriptor):
fig, ax = plt.subplots(1,1, figsize=(8,2))
ax.plot(SOAP_descriptor[0])
ax.set_xlabel('SOAP component #')
ax.set_ylabel('SOAP component value (normalized)')
plt.show()
vis_desc(SOAP_descriptor)
# -
# Scaling, descriptor definition and calculation are summarized in the following cell (using *ai4materials*):
# +
# Define the descriptor
descriptor = quippy_SOAP_descriptor()
soap_desc = descriptor.calculate(structure).info['descriptor']['SOAP_descriptor']
# visualize descriptor
vis_desc(SOAP_descriptor)
# -
# The object quippy_SOAP_descriptor above has standard settings that allow one to reproduce the results of [1], but in principle one may choose different values, in particular for the SOAP parameters:
# ```python
# descriptor = quippy_SOAP_descriptor(configs=configs,
# cutoff=cutoff, l_max=l_max,
# n_max=n_max, atom_sigma=atom_sigma)
# ```
# Given the descriptor, the next step in the pipeline is to apply a classification model. We use the Bayesian neural network introduced in [1]:
model = load_model(nn_model_path)
model.summary()
# The neurons in the final layer each correspond to a specific class. This information is saved in a json file that we are loading below:
# +
with open(os.path.join(nn_model_class_info_path, 'class_info.json')) as file_name:
data = json.load(file_name)
class_labels = data["data"][0]['classes']
numerical_to_text_label = dict(zip(range(len(class_labels)), class_labels))
text_to_numerical_label = dict(zip(class_labels, range(len(class_labels))))
# -
# In [1], ARISE employs a Bayesian neural network as a classification model, which not only yields classification probabilities but also allows one to quantify the predictive uncertainty.
#
# Specifically, we employ so-called Monte Carlo (MC) Dropout introduced in [2,3], which employs the stochastic regularization technique dropout. In dropout, individual neurons are randomly dropped in each layer, usually only during training to avoid over-specialization of individual units and this way control overfitting. One can show that employing dropout also at test time yields a probabilistic model whose uncertainty estimates approximate those of a Gaussian process. In particular, neural-network optimization with dropout is equivalent to performing variational inference (see [3] for more details, especially sections 1, 2, and 3.3).
#
# In practice, for a given input, the model output is computed for several iterations (100-1000 typically suffice), yielding a collection of differing probability vectors. In contrast, standard neural networks are deterministic and thus all predictions are identical. Sampling the output layer of the Bayesian neural network actually corresponds to sampling from (an approximated version of) the true probability distribution of outputs. Deterministic networks only provide a point estimate of this distribution, leading to overconfident predictions (see section 1.5 of [3]). Averaging the forward-passes gives an average classification probability vector, and the predicted class label can be inferred by selecting the most likely class (i.e., computing the argmax). Additional statistical information is contained in the classification probability samples. In [1], we employ mutual information (as implemented in *ai4materials*) to obtain a single uncertainty-quantifying number from the output samples.
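# The sampling-based uncertainty computation described above can be illustrated in a few lines of plain numpy (an illustrative sketch, not the ai4materials implementation): the mutual information is the entropy of the averaged distribution minus the average entropy of the individual forward passes.

```python
import numpy as np

def mc_dropout_uncertainty(prob_samples, eps=1e-12):
    """Illustrative sketch (not the ai4materials implementation).
    prob_samples: shape (T, n_classes); each row is the softmax output of one
    stochastic forward pass through the dropout-enabled network."""
    mean_probs = prob_samples.mean(axis=0)
    # entropy of the averaged distribution (total predictive uncertainty)
    predictive_entropy = -np.sum(mean_probs * np.log(mean_probs + eps))
    # average entropy of the individual passes
    expected_entropy = -np.mean(np.sum(prob_samples * np.log(prob_samples + eps), axis=1))
    # mutual information: how much the individual passes disagree
    mutual_information = predictive_entropy - expected_entropy
    return mean_probs, predictive_entropy, mutual_information
```

# Identical samples give (numerically) zero mutual information, i.e., the model is certain; disagreeing samples give a positive value.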
#
# For the bcc structure from above, prediction and uncertainty quantification is performed in the following cell:
# +
# reshape data array such that it fits model input shape
input_shape_from_model = model.layers[0].get_input_at(0).get_shape().as_list()[1:]
target_shape = tuple([-1] + input_shape_from_model)
data = np.reshape(soap_desc, target_shape)
# Obtain classification probabilities (numpy array 'prediction')
# and uncertainty quantifiers (dictionary of numpy arrays, with three keys
# 'mutual_information', 'predictive_entropy', and 'variation_ratio' corresponding to
# three uncertainty quantifiers - see also <NAME> & <NAME>, arXiv:1803.08533v1, 2018 for further
# reading).
prediction, uncertainty = predict_with_uncertainty(data,
model=model,
model_type='classification',
n_iter=1000)
# -
# The array 'prediction' gives the classification probabilities, which are clearly peaked at the entry corresponding to bcc:
argmax_prediction = prediction.argmax(axis=-1)[0]
predicted_label = numerical_to_text_label[argmax_prediction].split('_')[:2]
predicted_label = '_'.join(predicted_label)
print('Maximal classification probability with value {:.4f} for prototype {}'.format(prediction[0][argmax_prediction],
predicted_label))
plt.plot(prediction[0])
plt.xlabel('Class (integer encoding)')
plt.ylabel('Probability')
plt.show()
# The dictionary "uncertainty" contains quantifiers of uncertainty, in particular the mutual information employed in [1]:
print('Mutual information = {:.4f}'.format(uncertainty["mutual_information"][0]))
# ***In summary***: All of these steps - isotropic scaling, descriptor calculation, neural network predictions (classification probabilities + uncertainty) - are summarized in the following function, to provide a quick and easy usage. In particular, you may simply pass a list of geometry files, here we choose the fcc (Cu) and bcc (W) prototypes from AFLOW:
# +
from ai4materials.models import ARISE
geometry_files = [fcc_prototype_path, bcc_prototype_path]
predictions, uncertainty = ARISE.analyze(geometry_files, mode='global')
# -
# To analyze the predictions, in particular the ranking by classification probability and the uncertainty quantifiers, we can use the function 'analyze_predictions':
ARISE.analyze_predictions(geometry_files, predictions, uncertainty)
# For the fcc structure, non-zero probability is also assigned to a tetragonal prototype (identifier 'bctIn_In_A_tI2_139_a'; the AFLOW identifier is specified in this case as well, see [4] for more details). This assignment is justified since that prototype is a slightly distorted (tetragonal) version of fcc (http://aflowlib.org/prototype-encyclopedia/A_tI2_139_a.In.html). Moreover, the uncertainty is non-zero, indicating that there is more than one candidate for the most similar prototype. Interestingly, even the low-probability candidates are still meaningful. Further analysis of this classification-probability-induced ranking is provided in [1], specifically Fig. 3 (for experimental scanning transmission electron microscopy data of graphene) and Supplementary Note 2.3 (for out-of-sample single-crystal structures from AFLOW and NOMAD). This analysis becomes particularly important if one wants to investigate atomic structures that are not included in the training set. In the mentioned figures of [1], we provide guidelines on how one can still use ARISE in this situation and interpret its predictions.
#
# In the next section, we will first extend to multi-species systems and then provide the code for these out-of-sample studies, where you may also try different structures (upload the geometry files to './data/arise/calculations_path').
# ### Classification of multi-species single crystals
# One of the most important characteristics of ARISE is that it can treat many more classes than previously available methods (see [1] for a benchmarking study).
#
# So far we only considered mono-species systems. We consider the rock salt structure (NaCl) as an example for the extension of ARISE to multi-component systems.
#
# The usual way in treating multiple chemical species is to calculate [8]
#
# $$p(\mathscr{X})_{b_1b_2l}^{\alpha\beta}=\pi\sqrt{\frac{8}{2l+1}}\sum_{m}(c_{b_1 lm}^{\alpha})^{\dagger} c_{b_2 lm}^\beta,$$
#
#
# where the coefficients $c_{b_1 lm}^{\alpha}, c_{b_1 lm}^{\beta}$ are obtained from separate densities for species $\alpha,\beta$. Then one may simply append all of these components, or average over certain combinations. We take the latter route, where it turns out that it suffices (for our purposes) to only select coefficients for $Z_1 = Z_2$ and ignore cross terms for $Z_1 \neq Z_2$. Then, we consider all substructures, i.e., we compute SOAP according to
#
# $$ p(\mathscr{X})_{b_1b_2l}=\pi\sqrt{\frac{8}{2l+1}}\sum_{m}(c_{b_1 lm})^{\dagger} c_{b_2 lm}, $$
#
# where we sit on all Na atoms and consider only Na atoms, sit on all Na atoms and consider only Cl atoms, etc. This procedure yields 4 vectors of equal length, which are averaged. Generalization to structures with an arbitrary number of chemical species is straightforward - and so we end up with a descriptor that is 1. independent of the number of atoms, 2. independent of the number of chemical species, and 3. invariant under all relevant symmetries (rotational invariance, translational invariance, and invariance with respect to permutations of identical atoms).
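# The substructure enumeration and averaging can be sketched as follows (the descriptor function is a hypothetical placeholder standing in for the actual quippy SOAP call):

```python
import itertools
import numpy as np

def average_substructure_descriptors(descriptor_fn, atoms_by_species):
    """Sketch of the substructure averaging described above. `descriptor_fn`
    is a placeholder for the per-pair SOAP computation (sit on all atoms of
    the first species, consider only atoms of the second species)."""
    species = sorted(atoms_by_species)
    vectors = [descriptor_fn(atoms_by_species[zc], atoms_by_species[zn])
               for zc, zn in itertools.product(species, repeat=2)]
    return np.mean(vectors, axis=0)
```

# For a binary system such as NaCl this enumerates exactly the four (center species, neighbor species) combinations mentioned above, and the generalization to more species falls out of the `itertools.product` call.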
#
# Coming back to our implementation, an important keyword is 'scale_element_sensitive', which is by default set to 'True' - this way, we isotropically scale each substructure separately (one may also skip this, but the model we provide here is trained for 'scale_element_sensitive'==True):
geometry_files = [NaCl_prototype_path]
NaCl_structure = read(NaCl_prototype_path, ':', format='aims')[0]
view(NaCl_structure, viewer='ngl')
predictions, uncertainty = ARISE.analyze(geometry_files, mode='global', save_descriptors=True)
ARISE.analyze_predictions(geometry_files, predictions, uncertainty)
# One can see that NaCl is correctly predicted with low mutual information.
# You may try to choose a different prototype and analyze its predictions in the following cells. You can choose one of the prototypes of [1] (see also Supplementary Table 4-6 for a complete list), or other examples from AFLOW (including those explored in Supplementary Figure 2).
#
# First we define some functions for loading (no need to understand this in detail, so you can skip for now):
# +
# Load all prototypes
from ai4materials.models.spm_prediction_tools import load_all_prototypes
#########################################################
# PARAMETERS WHICH YOU MAY WANT TO CHANGE
#########################################################
# For the training prototypes:
periodic = True # if False, take supercell structure with at least min_atoms atoms.
min_atoms = 100
# Exception: nanotubes, which are here of fixed length. For quippy, the SOAP descriptor
# does not change significantly such that change of length does not matter (at least for
# Carbon nanotubes).
# Change this value to smooth the fluctuating predictions in case of high-uncertainty points.
# Please also have a look at the remark in the next cell.
n_iter = 1000
#########################################################
prototypes = load_all_prototypes(periodic_boundary_conditions=[False])
for prototype in prototypes:
    if not prototype.info['dataset'] == 'Nanotubes':
        if periodic:
            prototype.set_pbc(True)
        else:
            prototype.set_pbc(False)
            replica = 0
            third_dimension = replica
            while (prototype*(replica,replica,third_dimension)).get_number_of_atoms()<min_atoms:
                replica += 1
                third_dimension += 1
                if prototype.info['dataset'] == '2D_materials':
                    third_dimension = 1
            prototype *= (replica, replica, third_dimension)
training_prototypes = [_.info['crystal_structure'] for _ in prototypes]
name_to_material_type_dict = {_.info['crystal_structure'] : _.info['dataset'] for _ in prototypes}
#prototypes_geo_files = [os.path.join(prototype_path,
# prototype.info['crystal_structure'],
# prototype.info['crystal_structure'] + '.in')]
AFLOW_other_protos_path = './data/arise/single_crystals/exploration_examples'
AFLOW_geo_files = os.listdir(AFLOW_other_protos_path)
out_of_sample_prototypes = [_.split('.')[0] for _ in AFLOW_geo_files]
AFLOW_geo_files_paths = [os.path.join(AFLOW_other_protos_path, _) for _ in AFLOW_geo_files]
#other_AFLOW_prototypes = [read(_, ':', format='aims') for _ in AFLOW_geo_files]
def save_pred_and_u(predictions, uncertainty, file_name):
    np.save(file_name+'_predictions.npy', predictions)
    pickle.dump(uncertainty, open(file_name+'_uncertainty.pkl', "wb"))
# -
# <span style="color:red">Some practical tips:</span>
# * When executing the code below, don't be surprised if the predictions are not identical between different runs. Due to the stochastic nature of the model, this may very well happen, especially for out-of-sample predictions. Still, the final class prediction (via the argmax operation) remains constant for the parameters being used here.
# * To arrive at more stable classification probabilities (and uncertainty quantifications), one may increase the number of stochastic forward passes (see the 'n_iter' argument in the above cell). The increase in computational cost for larger values of n_iter is typically mild (unless you go to extreme values with billions/trillions of forward passes).
#
# Run the following cell to investigate the training prototypes used in [1]. Click 'Run Interact' to start a calculation:
@widgets.interact_manual(prototype_name=training_prototypes)
def given_geo_predict_and_analyze_training_prototypes(prototype_name):
print('Start calculation.')
geometry_file_path = os.path.join(prototype_path,
name_to_material_type_dict[prototype_name],
prototype_name)
geometry_file = [_ for _ in os.listdir(geometry_file_path) if _[-3:]=='.in']
if not len(geometry_file)==1:
        raise ValueError("Polluted prototype path. In [1] we only have one representative for each \
structural class")
geometry_file = os.path.join(geometry_file_path, geometry_file[0])
logger_level = logger.getEffectiveLevel()
logger.setLevel(0)
predictions, uncertainty = ARISE.analyze([geometry_file], mode='global', n_iter=n_iter)
save_pred_and_u(predictions, uncertainty, os.path.join(calculations_path,
prototype_name))
logger.setLevel(logger_level)
print('Finished calculation.')
top_n_labels_list = ARISE.analyze_predictions([geometry_file], predictions,
uncertainty, return_predicted_prototypes=True)
#np.save(os.path.join(calculations_path, 'top_n_labels.npy'), np.array(top_n_labels_list[0], dtype=object))
pickle.dump(top_n_labels_list[0], open(os.path.join(calculations_path, 'top_n_labels.pkl'), "wb"))
# You may also want to have a look at the top predicted prototypes. When executing the next cell, an interactive menu will open up where you can change via "Ranking_index" which of the top predictions you want to have a look at and via "Supercell" which replicas you want to inspect - the exception being nanotubes, which have fixed length. Executing the cell after the following one visualizes the selected structure. We employ NGLview, where you can zoom in/out via the mouse wheel and may also select which species to show via "Show".
@widgets.interact(Ranking_index=[1,2,3], Supercell=[(1,1,1),(2,2,2), (3,3,3), (4,4,4)])
def vis_predicted_structures(Ranking_index, Supercell):
    #top_n_labels = np.load(os.path.join(calculations_path, 'top_n_labels.npy'))
    top_n_labels = pickle.load(open(os.path.join(calculations_path, 'top_n_labels.pkl'), "rb"))
    structure_label = top_n_labels[Ranking_index-1] # Ranking_index = 1,2,3,... -> subtract 1
geometry_file_path = os.path.join(prototype_path,
name_to_material_type_dict[structure_label],
structure_label)
geometry_file = [_ for _ in os.listdir(geometry_file_path) if _[-3:]=='.in']
if not len(geometry_file)==1:
        raise ValueError("Polluted prototype path. In [1] we only have one representative for each \
structural class")
geometry_file = os.path.join(geometry_file_path, geometry_file[0])
structure = read(geometry_file, ':', 'aims')[0]
if not structure.cell.flatten().any()==0.0:
structure *= Supercell # only if lattice defined, create supercell
write(os.path.join(calculations_path, 'ranking_structure.in'), structure, format='aims')
view(read(os.path.join(calculations_path, 'ranking_structure.in'),':', 'aims'), viewer='ngl')
# Please run the following cell if you want to study out-of-sample structures (including the prototypes investigated in Supplementary Note 2.3 of [1]). If you want to inspect your own, new structures, just upload them to './data/arise/calculations_path' and rerun the code cell before 'Some practical tips'.
@widgets.interact_manual(prototype_name=out_of_sample_prototypes)
def given_geo_predict_and_analyze_out_of_sample(prototype_name):
print('Start calculation.')
geometry_file = os.path.join(AFLOW_other_protos_path, prototype_name + '.in')
selected_structure = read(geometry_file, ':', 'aims')[0]
write(os.path.join(calculations_path, 'selected_structure.in'), selected_structure, format='aims')
logger_level = logger.getEffectiveLevel()
logger.setLevel(0)
predictions, uncertainty = ARISE.analyze([geometry_file], mode='global', n_iter=n_iter)
save_pred_and_u(predictions, uncertainty, os.path.join(calculations_path,
prototype_name))
logger.setLevel(logger_level)
print('Finished calculation.')
top_n_labels_list = ARISE.analyze_predictions([geometry_file], predictions,
uncertainty, return_predicted_prototypes=True)
#np.save(os.path.join(calculations_path, 'top_n_labels.npy'), np.array(top_n_labels_list[0], dtype=object))
pickle.dump(top_n_labels_list[0], open(os.path.join(calculations_path, 'top_n_labels.pkl'), "wb"))
# The selected structure can be visualized via executing the following cell:
view(read(os.path.join(calculations_path, 'selected_structure.in'),':', 'aims'), viewer='ngl')
# You may also want to have a look at the top predicted prototypes:
@widgets.interact(Ranking_index=[1,2,3], Supercell=[(1,1,1),(2,2,2), (3,3,3), (4,4,4)])
def vis_predicted_structures(Ranking_index, Supercell):
    #top_n_labels = np.load(os.path.join(calculations_path, 'top_n_labels.npy'))
    top_n_labels = pickle.load(open(os.path.join(calculations_path, 'top_n_labels.pkl'), "rb"))
    structure_label = top_n_labels[Ranking_index-1] # Ranking_index = 1,2,3,... -> subtract 1
geometry_file_path = os.path.join(prototype_path,
name_to_material_type_dict[structure_label],
structure_label)
geometry_file = [_ for _ in os.listdir(geometry_file_path) if _[-3:]=='.in']
if not len(geometry_file)==1:
        raise ValueError("Polluted prototype path. In [1] we only have one representative for each \
structural class")
geometry_file = os.path.join(geometry_file_path, geometry_file[0])
structure = read(geometry_file, ':', 'aims')[0]
if not structure.cell.flatten().any()==0.0:
structure *= Supercell # only if lattice defined, create supercell
write(os.path.join(calculations_path, 'ranking_structure.in'), structure, format='aims')
view(read(os.path.join(calculations_path, 'ranking_structure.in'),':', 'aims'), viewer='ngl')
# # Polycrystal classification
# To classify polycrystals, we introduce the strided pattern matching (SPM) framework, sketched in the following figure:
#
# 
#
# ***The first line*** depicts this procedure for slab-like systems and ***the second line*** for bulk materials.
#
# In both scenarios, a box of a certain size is scanned across the whole crystal volume with a certain stride. For each stride position, the classification model is applied, yielding both predictions and uncertainty values. These values are then rearranged into 2D or 3D maps. One can construct such maps for each of the classification probabilities and for the uncertainty (here quantified by mutual information), or also for the most similar prototypes (i.e., the argmax prediction).
#
#
# This way, one can discover the most prevalent symmetries (via classification maps) and detect defective regions. In particular, the uncertainty can indicate when a particular region is defective or far outside the known training examples, thus allowing one, for instance, to discover grain boundaries or, as depicted above, a cubic-shaped precipitate.
# As introduced in the 'Quickstart' section, the local analysis can be obtained by simply changing the mode to 'local', while one also has to provide stride and box size (standard settings stride=1.0, box_size=12.0).
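# A bare-bones sketch of the box generation underlying SPM (assumed behavior; the real implementation additionally handles periodic boundary conditions and the bookkeeping for the result maps) could look like this:

```python
import numpy as np

def spm_boxes(positions, box_size, stride):
    """Bare-bones sketch of the SPM box generation (assumed behavior, ignoring
    periodic boundary conditions): for every stride position, yield the box
    origin and the indices of the atoms falling inside the box."""
    lo, hi = positions.min(axis=0), positions.max(axis=0)
    grids = [np.arange(lo[d], hi[d] + 1e-9, stride[d]) for d in range(3)]
    for ox in grids[0]:
        for oy in grids[1]:
            for oz in grids[2]:
                origin = np.array([ox, oy, oz])
                inside = np.all((positions >= origin) &
                                (positions < origin + box_size), axis=1)
                yield origin, np.where(inside)[0]
```

# Running the classifier on each yielded box and rearranging the outputs by stride position then gives the 2D/3D classification and uncertainty maps discussed above.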
#
# As an example, we investigate the mono-species structure shown in the first line of the above plot (and discussed in more detail in Fig. 2 of [1]):
geometry_files = ['./data/arise/polycrystals/four_grains_elemental_solid.xyz']
view(read(geometry_files[0], ':', 'xyz')[0], viewer='ngl')
# We choose a box size roughly equal to the slab thickness and perform the SPM analysis. Here we pick a very coarse stride; the finer the stride, the better the resolution. We select a stride along the z-axis that is larger than the slab thickness, a setting for which no stride in this direction is made.
#
# A folder (in './data/arise/calculations_path') in which all relevant information is saved is created automatically. The name of the folder is given by the structure file being passed and the options for SPM (box size and stride).
#
# Note that if you run the cell below multiple times, a new folder (with "_run_i", i=1,... appended to the file name) will be created, so no data will be lost.
#
# <font color=red>Warning:</font> Executing the following cell takes ~ 3 min.
# +
box_size = [16.0]
# you may also pass a list for both stride and box size if you have more than one geometry file.
stride = [[15.0, 15.0, 20.0]]
# choosing the stride in z larger than the extension of the material (here 20.0, exceeding
# the slab thickness by about 4 angstrom) prevents any stride from being made in the z direction.
# previous_level = logger.getEffectiveLevel()
# logger.setLevel(0) # switch off logging, you may change this.
predictions, uncertainty = ARISE.analyze(geometry_files, mode='local',
stride=stride, box_size=box_size, configs=configs, nb_jobs=nb_jobs)
# logger.setLevel(previous_level)
# -
# Within the automatically created folder, the results are saved in the subfolder 'results_folder' (in this case, the probabilities as 'four_grains_elemental_solid.xyz_probabilities.npy' and the uncertainty as 'four_grains_elemental_solid.xyz_mutual_information.npy'). In general, each numpy array has the shape (#classes, z_coordinate, y_coordinate, x_coordinate), where here #classes=108 and the coordinates refer to the spatial positions of the box; for a slab structure such as this one, the shape reduces to (#classes, y_coordinate, x_coordinate). Moreover, in 'desc_folder' a .tar.gz file is provided that contains all information on the boxes obtained via SPM (in particular the atomic positions and the calculated descriptor).
print('Uncertainty (mutual information) array shape = {}'.format(uncertainty[0]['mutual_information'].shape))
print('Predictions (i.e., classification probabilities) array shape {}'.format(predictions[0].shape))
# As an example, we visualize the four classes fcc, bcc, hcp, and diamond alongside the mutual information in the following cell:
# +
import matplotlib
classes_of_interest = ['fcc_Cu_A_cF4_225_a', 'bcc_W_A_cI2_229_a', 'hcp_Mg_A_hP2_194_c', 'diam_C_A_cF8_227_a']
uncertainty_quantifier = ['mutual_information']
fig_classes, ax_classes = plt.subplots(1, len(classes_of_interest), figsize=(20,5))
for idx, class_of_interest in enumerate(classes_of_interest):
class_idx = text_to_numerical_label[class_of_interest]
classification_probabilities = predictions[0][class_idx]
ax_class = ax_classes[idx]
cmap = matplotlib.cm.get_cmap(name='viridis')
# set the color for NaN values
cmap.set_bad(color='lightgrey')
cax = ax_class.imshow(classification_probabilities, interpolation='none', vmin=0.0, vmax=1.0, cmap=cmap,
origin='lower')
ax_class.set_title('Class {}'.format(class_of_interest))
ax_class.set_xlabel(u'x stride #')
ax_class.set_ylabel(u'y stride #')
fig_classes.colorbar(cax)
fig_u, ax_u = plt.subplots(1,1, figsize=(20,5))
for u in uncertainty_quantifier:
uncertainty_values = uncertainty[0][u]
cmap = matplotlib.cm.get_cmap(name='hot')
# set the color for NaN values
cmap.set_bad(color='lightgrey')
uax = ax_u.imshow(uncertainty_values, cmap=cmap, origin='lower', vmin=0.0)
ax_u.set_title('Mutual_information')
fig_u.colorbar(uax)
plt.show()
# -
# The finer you make the stride, the better the resolution gets. In [1], we employ a stride of $1 \overset{\circ}{\mathrm {A}}$, resulting in the following high-resolution picture, revealing further details at the grain boundary (see also Supplementary Figures 7 and 8 of [1]):
#
#
# 
# # Unsupervised analysis
# One important aspect of [1] is the application of unsupervised learning techniques to analyze the internal neural-network representations. This allows one to *explain* the trained model by showing that regularities in the neural-network representation space correspond to physical characteristics. Using this strategy, one can conduct exploratory analysis, with the goal of discovering patterns in the high-dimensional representation space and relating them to the crystal structure - leading to the discovery of defective regions (e.g., grain boundaries or regions that are distorted in similar/distinct fashion, cf. Fig. 2 and 4 of [1]).
#
# Specifically, we apply clustering (HDBSCAN https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html) to identify groups in the high-dimensional neural-network representations. Furthermore, the manifold-learning method UMAP (https://umap-learn.readthedocs.io/en/latest/index.html) is employed to obtain a low-dimensional embedding (here 2D) that can be interpreted by the human eye.
#
# The most important parameters for the unsupervised techniques employed in this work are the following:
#
# * HDBSCAN: min_cluster_size (int) - determines the minimum number of points a cluster must contain. In more detail, HDBSCAN employs (agglomerative) hierarchical clustering (https://www.displayr.com/what-is-hierarchical-clustering/) to construct a hierarchical cluster tree, where only those clusters are considered that contain as many data points as defined via min_cluster_size. Cluster assignments are extracted by computing a cluster stability value and the most stable clusters are used to construct a final, so-called flat clustering.
#
#
# * UMAP: n_neighbors (int) - determines the trade-off between capturing local (small n_neighbors) vs global (large n_neighbors) relationships in the data. Specifically, UMAP constructs a topological representation of the data (in practice corresponding to a specific type of k-neighbor graph), while some assumptions are made (e.g., the data being uniformly distributed on a Riemannian manifold). Then a low-dimensional representation is found by matching its topological representation with the one of the original data (where the matching procedure amounts to the definition of a cross-entropy loss function and the application of stochastic gradient descent). The parameter n_neighbors influences the high-dimensional topological representation. The distances in the low-dimensional representation can also be tuned via 'min_dist', while this parameter is typically considered more as only tuning the visualization / the distances in the low-dimensional embedding - see also https://pair-code.github.io/understanding-umap/ for a discussion with hands-on examples.
# In the cell below, we consider again the mono-species polycrystal with four crystalline grains (the slab structure depicted above and discussed in Fig. 2 of [1], where we chose a stride of 1 $\mathrm{\AA}$ in x, y and a box size of 16 $\mathrm{\AA}$).
#
# You can change the parameters to get a feeling for the importance of min_cluster_size and n_neighbors.
#
# Also you may choose the color scale according to ARISE's predictions or uncertainty (mutual information), or the HDBSCAN cluster labels.
#
#
# ***Remark:*** For the HDBSCAN clustering, there are two options: you can choose 'cluster_labels', the cluster assignment that arises from the standard (flat) clustering procedure, or the 'argmax_clustering' color scale, which is determined from HDBSCAN's soft-clustering feature. Specifically, HDBSCAN can calculate a vector for each point whose component i captures the "probability" that the point is a member of cluster i. We can then infer a cluster assignment for points that would normally be considered outliers by taking the cluster whose membership probability is maximal (while still treating the point as an outlier if none of its cluster probabilities exceeds a certain threshold - here, we choose 10 %). This procedure yields reasonable clusterings even in the presence of higher levels of noise (as encountered in atomic electron tomography data [9-12], which is analyzed using ARISE in Fig. 4 of [1]). Here, it will not make a qualitative difference in performance, i.e., the basic structure of the polycrystal (four grains + grain boundaries) can be recovered by both clustering procedures.
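# The argmax assignment with an outlier threshold described above can be sketched in plain NumPy. The membership matrix and the 10 % threshold below are illustrative stand-ins, not values from the actual analysis:

```python
import numpy as np

def argmax_clustering(membership_probs, threshold=0.1):
    """Assign each point to the cluster of maximal membership probability.

    Points whose maximal probability stays below `threshold` are
    labeled -1 (outliers), mirroring HDBSCAN's outlier convention.
    """
    probs = np.asarray(membership_probs, dtype=float)
    labels = np.argmax(probs, axis=1)
    labels[np.max(probs, axis=1) < threshold] = -1
    return labels

# Toy membership matrix for three points and two clusters
probs = [[0.90, 0.05],   # confidently cluster 0
         [0.20, 0.70],   # confidently cluster 1
         [0.04, 0.05]]   # no probability reaches 10 % -> outlier
print(argmax_clustering(probs))  # -> [ 0  1 -1]
```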
# +
# %matplotlib inline
import os  # needed for os.path.join below
import json  # needed to load the ARISE predictions
import ipywidgets as widgets  # used by @widgets.interact_manual below
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
from collections import Counter
n_neighbors_list = [25, 50, 100, 250, 1000]
min_cluster_sizes = n_neighbors_list
analysis_folder_path = './data/arise/unsupervised_learning'
file_name = 'four_grains_elemental_solid.xyz_stride_1.0_1.0_16.0_box_size_16.0_.tar.gz_'
layer_name = 'dense_1'
# other possible values: 'soap', 'dense_0', 'dense_2', 'final_dense', corresponding to the input, hidden layers, and output
s = 32
edgecolors = 'face'
color_scales = ['cluster_labels', 'argmax_clustering', 'ARISE_predictions', 'ARISE_mutual_information']
df_dict = {}
for min_cluster_size in min_cluster_sizes:
for n_neighbors in n_neighbors_list:
quantities_to_plot = {#'cluster_probs': [], # soft clustering probabilties
#'outlier_scores': [], # GLOSH -> see HDBSCAN docs
'cluster_labels': [],
'embedding' : [],
'argmax_clustering': [],
}
for q_to_plot in quantities_to_plot:
q = quantities_to_plot[q_to_plot]
q_filename = os.path.join(analysis_folder_path,
file_name
+ '_'
+ layer_name
+ '_mincsize_'
+ str(min_cluster_size)
+ '_nneighbors_'
+ str(n_neighbors)
+ '_' + q_to_plot + '.npy')
quantities_to_plot[q_to_plot] = np.load(q_filename)
embedding = quantities_to_plot['embedding']
# save embedding
df = pd.DataFrame({'Dim_1': embedding[:, 0], 'Dim_2': embedding[:, 1]})
# save possible color scales
df['cluster_labels'] = quantities_to_plot['cluster_labels']
df['argmax_clustering'] = quantities_to_plot['argmax_clustering']
#df['ARISE_predictions'] = pickle.load(open(os.path.join(analysis_folder_path,
# 'ARISE_predictions.pkl'), "rb"))
#df['ARISE_mutual_information'] = pickle.load(open(os.path.join(analysis_folder_path,
# 'uncertainty_filtered_mutual_information.pkl'), "rb"))
with open(os.path.join(analysis_folder_path, 'ARISE_predictions.json')) as f:
ARISE_predictions = json.load(f)
df['ARISE_predictions'] = ARISE_predictions
df_dict[(min_cluster_size, n_neighbors)] = df
df['ARISE_mutual_information'] = np.load(os.path.join(analysis_folder_path,
'uncertainty_filtered_mutual_information.npy'))
@widgets.interact_manual(min_cluster_size=min_cluster_sizes,
n_neighbors=n_neighbors_list,
color_scale=color_scales)
def f(min_cluster_size, n_neighbors, color_scale):
df = df_dict[(min_cluster_size, n_neighbors)]
df['target'] = df[color_scale]
#if color_scale == 'ARISE_predictions':
# df['target'] = [text_to_numerical_label[_] for _ in df['target'].values]
vmin = min(df['target'].values)
vmax = max(df['target'].values)
if color_scale == 'ARISE_mutual_information':
palette = 'hot'
elif color_scale == 'ARISE_predictions':
palette = None
vmin = None
vmax = None
unique_targets = np.unique(df['target'].values)
target_to_int = {t : i for t,i in zip(unique_targets, range(len(unique_targets)))}
int_to_target = {i : t for t,i in zip(unique_targets, range(len(unique_targets)))}
#palette = plt.get_cmap('tab10', len(unique_targets))
#palette = sns.color_palette(palette='tab10', n_colors=len(unique_targets))
palette_cmap = plt.get_cmap('tab10', np.max(range(len(unique_targets)))-np.min(range(len(unique_targets)))+1)
palette = sns.color_palette(palette_cmap.colors)
df['target'] = [target_to_int[_] for _ in df['target']]
else:
palette = 'viridis'
print('Frequency per label : {}'.format(dict(Counter(df['target'].values))))
print('Remark: -1 = outlier')
fig, axs = plt.subplots(1, 2, figsize=(15,10))
fig.suptitle('min_cluster_size (HDBSCAN) = {}, n_neighbors (UMAP) = {}, color scale = {}'.format(min_cluster_size, n_neighbors, color_scale))
ax = axs[0]
ax_ = sns.scatterplot(ax=ax, x='Dim_1', y='Dim_2',
data=df, s=s, hue='target', edgecolor=edgecolors,
palette=palette, vmin=vmin, vmax=vmax)
ax.set_aspect('equal')
ax.axis('off')
ax.set_title('UMAP 2D embedding of NN representations')
if color_scale == 'ARISE_predictions':
#plt.legend()
ax.get_legend().remove()
#ax.legend(range(len(unique_targets)), labels = [int_to_target[_] for _ in range(len(unique_targets))])
pass
else:
norm = plt.Normalize(df['target'].min(), df['target'].max())
sm = plt.cm.ScalarMappable(cmap=palette, norm=norm)
sm.set_array([])
# Remove the legend and add a colorbar
ax.get_legend().remove()
ax.figure.colorbar(sm)
#plt.legend()
# second part of figure: crystal maps
color_scale_ = color_scale
path_to_crystal_maps = os.path.join(analysis_folder_path,
layer_name+'_neighbors_'+str(n_neighbors)
+'_cluster_analysis_'+color_scale_
+'_min_csize_'+str(min_cluster_size)+'.npy')
if color_scale == 'ARISE_predictions':
color_scale_ = 'argmax_preds'
path_to_crystal_maps = os.path.join(analysis_folder_path, 'ARISE_argmax_predictions_crystal_map.npy')
elif color_scale == 'ARISE_mutual_information':
color_scale_ = 'mutual_information'
path_to_crystal_maps = os.path.join(analysis_folder_path, 'ARISE_mutual_information_crystal_map.npy')
crystal_map = np.load(path_to_crystal_maps)
vmin = None
vmax = None
if color_scale == 'ARISE_predictions':
palette = palette_cmap
vmin = np.min(range(len(unique_targets)))-.5
vmax = np.max(range(len(unique_targets)))+.5
crystal_map_shape = crystal_map.shape
crystal_map = crystal_map.flatten()
for idx, c in enumerate(crystal_map):
if not crystal_map[idx] in list(numerical_to_text_label.keys()):
continue # nan number
else:
if numerical_to_text_label[crystal_map[idx]] in target_to_int.keys():
crystal_map[idx] = target_to_int[numerical_to_text_label[crystal_map[idx]]]
else:
crystal_map[idx] = target_to_int['other']
#crystal_map = np.full(crystal_map_shape, np.nan)
crystal_map = np.reshape(crystal_map, crystal_map_shape)
#print(Counter(crystal_map.flatten()))
ax = axs[1]
cax = ax.imshow(crystal_map, cmap=palette, origin='lower', vmin=vmin, vmax=vmax)
#ax.set_xticks([])
#ax.set_yticks([])
if color_scale == 'ARISE_predictions':
cbar = fig.colorbar(cax, ticks = range(len(unique_targets)))
cbar.ax.set_yticklabels([int_to_target[_] for _ in range(len(unique_targets))])
ax.axis('off')
if not color_scale == 'ARISE_predictions':
ax.set_title('Crystal map (real space)')
plt.show()
# -
# A consistent picture, with four main clusters corresponding to the four crystalline grains separated by grain boundaries, arises for min_cluster_size > 250 and n_neighbors > 250. Reasonable embeddings are also obtained at lower values, i.e., points corresponding to different grains remain separated. For more details on the interpretation of these figures, we refer to [1].
# # Conclusion
# You have reached the end of this tutorial.
#
# *Please let us know if you have any questions, wishes, or suggestions for improvement. Feel free to reach out to us, e.g., via mail (<EMAIL>, <EMAIL>).*
# # References
# [1] ARISE: <NAME>, <NAME>, and <NAME>, arXiv:2103.09777, (2021)
#
# [2] <NAME>. & <NAME>. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, 1050-1059 (2016)
#
# [3] <NAME>. Uncertainty in deep learning. Ph.D. thesis, University of Cambridge (2016)
#
# [4] <NAME>. *et al.* The AFLOW library of crystallographic prototypes: part 1. Comput. Mater. Sci. 136, S1–S828 (2017).
#
# [5] <NAME> *et al.* Physical Review Letters 104, 136403 (2010)
#
# [6] <NAME> *et al.* Physical Review B 87, 184115 (2013)
#
# [7] The quippy software is available for non-commercial use from www.libatoms.org (or https://github.com/libAtoms/QUIP)
#
# [8] <NAME>., <NAME>., <NAME>. & <NAME>. Comparing molecules and solids across structural and alchemical space. Phys. Chem. Chem. Phys. 18, 13754–13769 (2016)
#
# [9] <NAME>. *et al.* Three-dimensional imaging of dislocations in a nanoparticle at atomic resolution. Nature 496, 74–77 (2013)
#
# [10] <NAME>., <NAME>. & <NAME>. Atomic electron tomography: 3D structures without crystals. Science 353, aaf2157–aaf2157 (2016).
#
# [11] <NAME>. *et al.* Deciphering chemical order/disorder and material properties at the single-atom level. Nature 542, 75–79 (2017).
#
# [12] <NAME>. *et al.* Observing crystal nucleation in four dimensions using atomic electron tomography. Nature 570, 500–503 (2019).
# Please refer to [1] for additional information and references.
| ARISE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyze International Debt Statistics
#
# Write SQL queries to answer interesting questions about international debt using data from The World Bank.
#
# ## Project Description
#
# It is not only we humans who take on debt to manage our necessities; a country may also take on debt to manage its economy. For example, infrastructure spending is one costly ingredient required for a country's citizens to lead comfortable lives. [The World Bank](https://www.worldbank.org/) is an organization that provides debt to countries.
#
# In this project, you are going to analyze international debt data collected by The World Bank. The dataset contains information about the amount of debt (in USD) owed by developing countries across several categories. You are going to find the answers to questions like:
#
# - What is the total amount of debt that is owed by the countries listed in the dataset?
# - Which country owes the maximum amount of debt and what does that amount look like?
# - What is the average amount of debt owed by countries across different debt indicators?
#
# The data used in this project is provided by [The World Bank](https://www.worldbank.org/). It contains both national and regional debt statistics for several countries across the globe as recorded from 1970 to 2015.
#
# ### Project Tasks
#
# 1. The World Bank's international debt data
# 2. Finding the number of distinct countries
# 3. Finding out the distinct debt indicators
# 4. Totaling the amount of debt owed by the countries
# 5. Country with the highest debt
# 6. Average amount of debt across indicators
# 7. The highest amount of principal repayments
# 8. The most common debt indicator
# 9. Other viable debt issues and conclusion
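# Several of the tasks above can be prototyped against a tiny synthetic table using Python's built-in sqlite3. The column names, indicator codes, and amounts below are illustrative stand-ins for the World Bank schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE international_debt
                (country_name TEXT, indicator_code TEXT, debt REAL)""")
conn.executemany(
    "INSERT INTO international_debt VALUES (?, ?, ?)",
    [('China', 'DT.AMT.DLXF.CD', 100.0),
     ('China', 'DT.INT.DLXF.CD', 40.0),
     ('Brazil', 'DT.AMT.DLXF.CD', 90.0)])

# Task 2: number of distinct countries
n_countries = conn.execute(
    "SELECT COUNT(DISTINCT country_name) FROM international_debt").fetchone()[0]

# Task 4: total amount of debt across all rows
total_debt = conn.execute(
    "SELECT SUM(debt) FROM international_debt").fetchone()[0]

# Task 5: country owing the most in total
top = conn.execute(
    """SELECT country_name, SUM(debt) AS total
       FROM international_debt
       GROUP BY country_name
       ORDER BY total DESC LIMIT 1""").fetchone()

print(n_countries, total_debt, top)  # -> 2 230.0 ('China', 140.0)
```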
| analyze_international_debt_statistics/summary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import astropy.table as table
import numpy as np
from defcuts import *
from defflags import *
from def_get_mags import *
bands=['g', 'r', 'i','z', 'y']
daperture=[1.01,1.51,2.02,3.02,4.03,5.71,8.40,11.8,16.8,23.5]
aperture=[x*0.5 for x in daperture]
ty='mean'
tag='outcut'
txtdist= 'Figure2'
txtslope='Figure1'
outdir='/Users/amandanewmark/repositories/galaxy_dark_matter/lumprofplots/clumps/'+ty+tag
doutdir='/Users/amandanewmark/repositories/galaxy_dark_matter/lumprofplots/distribution/'+ty+tag
Flags=['flags_pixel_bright_object_center', 'brobj_cen_flag-', 'No Bright Object Centers', 'Only Bright Object Centers', 'brobj_cen_flag']
indir='/Users/amandanewmark/repositories/galaxy_dark_matter/GAH/'
bigdata = table.Table.read(indir+ 'LOWZ_HSCGAMA15_apmgs+cmodmag.fits')
def do_cuts(datatab):
parm=['flags_pixel_saturated_center','flags_pixel_edge','flags_pixel_interpolated_center','flags_pixel_cr_center','flags_pixel_suspect_center', 'flags_pixel_clipped_any','flags_pixel_bad']
ne=[99.99, 199.99, 0.0]
mincut=0.1
maxcut=''
cutdata=not_cut(datatab, bands, 'mag_forced_cmodel', ne)
for b in range(0, len(bands)-1):
newdata=many_flags(cutdata, parm, bands[b]) #flags not in y?
cutdata=newdata
return newdata
def get_TF(data):
bandi=['i']
Flag, Not,lab= TFflag(bandi,Flags, data)
return Flag, Not
newdata=do_cuts(bigdata)
Flagdat, Notdat=get_TF(newdata)
# +
def get_ind_lums(newdata, bands, aperture, scale=''):
import numpy as np
from def_get_mags import get_zdistmod, get_kcorrect2, aper_and_comov, abs2lum, lumdensity, abs_mag
import math
from defclump import meanlum2
from my_def_plots import halflight_plot, scatter_fit
from scipy import interpolate
import matplotlib.pyplot as plt
from def_mymath import halflight
Naps=len(aperture)
Ndat=len(newdata)
try:
redshifts=newdata['Z']
DM= get_zdistmod(newdata, 'Z')
except:
redshifts=newdata['Z_2']
DM= get_zdistmod(newdata, 'Z_2')
kcorrect=get_kcorrect2(newdata,'mag_forced_cmodel', '_err', bands, '','hsc_filters.dat',redshifts)
fig=plt.figure()
bigLI=[]
bigrad=[]
bigden=[]
for n in range(0, Ndat):
LI=[]
LI2=[]
lumdi=[]
string=str(n)
radkpc=aper_and_comov(aperture, redshifts[n])
#print('redshifts is ', redshifts[n])
for a in range(0, Naps): #this goes through every aperture
ns=str(a)
#print('aperture0',ns)
absg, absr, absi, absz, absy= abs_mag(newdata[n], 'mag_aperture0', kcorrect, DM[n], bands, ns, n)
Lumg, Lumr, Lumi, Lumz, Lumy=abs2lum(absg, absr, absi, absz, absy)
Lg, Lr, Li, Lz, Ly=lumdensity(Lumg, Lumr, Lumi, Lumz, Lumy, radkpc[a])
if scale== 'log':
#print('getting logs')
logLumi=math.log10(Lumi)
logLi=math.log10(Li)
LI.append(logLumi)
lumdi.append(logLi)
else:
LI.append(Lumi)
lumdi.append(Li)
#print('LI for ',n,' galaxy is ', LI)
bigLI.append(LI)
bigden.append(lumdi)
if scale== 'log':
lograd=[math.log10(radkpc[n]) for n in range(len(radkpc))]
bigrad.append(lograd)
else:
bigrad.append(radkpc)
bigLIs=np.array(bigLI)
bigrads=np.array(bigrad)
lumdensi=np.array(bigden)
return bigLIs, bigrads, lumdensi
def my_halflight2(dat1):
loglum, lograd, loglumd= get_ind_lums(dat1, bands, aperture, scale='log')
return loglum, lograd, loglumd
# -
loglum, lograd, loglumd=my_halflight2(Flagdat)
# +
def get_halfrad(lograds, loglums):
from scipy import interpolate
import math
import numpy as np
print('from halflight_math')
maxlums=10**np.max(loglums)
halfL=maxlums/2
loghalfL=np.log10(halfL)
f=interpolate.interp1d(loglums,lograds, kind='linear', axis=-1)
logr12=f(loghalfL)
return logr12
def upper_rad_cut1(loglum, lograd, logden, m): #this should get rid of galaxies outside 4r1/2
nloglum=[]
nlograd=[]
nlogden=[]
mult=m
print(len(loglum), len(lograd))
N=len(lograd)
for n in range(N):
loglums=loglum[n]
lograds=lograd[n]
logdens=logden[n]
#print(loglums, lograds)
logr12=get_halfrad(lograds,loglums)
print(n)
r12=10**logr12
print(logr12)
r412=mult*r12
logr412=np.log10(r412)
print(logr412)
if np.max(lograds) >= logr412:
            lograd_cut = lograds[lograds <= logr412]  # keep apertures within m half-light radii (the original >=/&/<= mask selected nothing)
            if len(lograd_cut) >= 4:
nloglum.append(loglums)
nlograd.append(lograds)
nlogden.append(logdens)
else:
print('not enough data points')
else:
print('Upper limit out of range')
nloglum=np.array(nloglum)
nlograd=np.array(nlograd)
nlogden=np.array(nlogden)
return nloglum, nlograd, nlogden
loglum, lograd, loglumd = upper_rad_cut1(loglum, lograd, loglumd, 4)
# -
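# The half-light-radius interpolation used in `get_halfrad` above can be sanity-checked with plain NumPy on a synthetic profile (a sketch, independent of the catalog data used in this notebook):

```python
import numpy as np

def get_halfrad_np(lograds, loglums):
    """Log radius at which the cumulative luminosity reaches half its maximum.

    Assumes loglums increases monotonically with lograds, so the inverse
    relation can be linearly interpolated with np.interp.
    """
    loghalf = np.log10(10.0 ** np.max(loglums) / 2.0)
    return np.interp(loghalf, loglums, lograds)

# Synthetic profile: log L grows linearly with log r, so half the maximum
# luminosity sits log10(2) ~ 0.3 dex below the outermost point.
lograds = np.array([0.0, 1.0, 2.0, 3.0])
loglums = lograds + 5.0
print(get_halfrad_np(lograds, loglums))  # -> ~2.699 (= 3 - log10(2))
```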
| transit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data analysis practice: mobile phones
#
# 1. Data survey
# 2. Appearance analysis
import requests
import pandas as pd
urlb = r'https://club.jd.com/comment/productCommentSummaries.action?referenceIds='
fn = 'phone_data2.csv'
#df = pd.read_csv(fn)
# # 1.
# Fetch the JSON corresponding to the SKUs
# ### 1.1. Read the SKUs from the CSV
#
# We want the SKU as a string, but it comes in as float64
# > read_csv : dtype={'SKU':str}
#
# Drop NaN values
# > Series.dropna()
df = pd.read_csv(fn, dtype={'SKU':str})
r = requests.get( urlb + ','.join(df['SKU'].dropna().values) )
# ### 1.2. Parse the JSON
#
# Write the dict into a Series, with SkuId as the index and CommentCount as the values
#
#
import json
d = json.loads(r.text)
i,v = list(zip(* [[str(i['SkuId']), i['CommentCount']] for i in d['CommentsCount']] ))
type(d['CommentsCount'][0]['SkuId'])
pd.Series(v, index=i, name='评价')
idf = df[['SKU','0604价格']].dropna().set_index('SKU')
pd.concat([idf, pd.Series(v, index=i, name='评价')], axis=1)
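# `pd.concat` with `axis=1` aligns rows by index rather than by position, which is why both sides above need SkuId as the index. A minimal stand-in example (the values below are hypothetical, not from the phone data):

```python
import pandas as pd

# Prices indexed by SKU (stand-in values)
idf = pd.DataFrame({'price': [100.0, 200.0]}, index=['a', 'b'])
# Comment counts indexed by SKU, deliberately in a different order
s = pd.Series([5, 7], index=['b', 'a'], name='comments')

merged = pd.concat([idf, s], axis=1)  # rows matched by index, not position
print(merged.loc['a', 'comments'])  # -> 7
```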
| phone_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ************
# # World data
# ************
import pandas as pd
# ## Country areas
A = pd.read_csv('../dat/table-1.csv')
# source: https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_area
# preprocessed with https://wikitable2csv.ggor.de/
# drop the last few countries because their areas are very small
A2 = A[:-20]
A2[-10:]
d = A2['Total in km2 (mi2)'].values
d[-20:]
area = []
for a in d:
s = a.split('(')[0].replace(',','').strip()
s = float(s)
area.append(s)
area[:10]
print(A2.shape)
print(len(area))
country = A2['Sovereign state/dependency'].values
d = {'country': country, 'area': area}
df = pd.DataFrame(data=d)
df.to_csv('../dat/world_area.csv', index=None)
# ## Country populations
P = pd.read_csv('../dat/table-2.csv')
# source: https://en.wikipedia.org/wiki/List_of_countries_by_population_(United_Nations)
# preprocessed with https://wikitable2csv.ggor.de/
population = []
for pop in P['Population(1 July 2019)']:
pp = pop.replace(',','')
pp = float(pp)
population.append(pp)
population[:6]
P[:5]
P.keys()
country = P['Country or area'].values
d = {'country': country, 'population': population}
df = pd.DataFrame(data=d)
df
df.to_csv('../dat/world_population.csv', index=None)
# # Combine area and population data
import pandas as pd
P = pd.read_csv('../dat/population.csv')
P
A = pd.read_csv('../dat/area.csv')
A
df = pd.merge(P, A, on='country')
df
df.to_csv('../dat/pop_area.csv')
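# With the merged frame, a derived quantity such as population density follows directly. A sketch on a tiny stand-in frame (the column names 'population' and 'area' are assumed from the CSVs written above):

```python
import pandas as pd

# Stand-in frames with the assumed 'population' and 'area' columns
P = pd.DataFrame({'country': ['A', 'B'], 'population': [1000.0, 300.0]})
A = pd.DataFrame({'country': ['A', 'B'], 'area': [50.0, 10.0]})

df = pd.merge(P, A, on='country')              # inner join on country name
df['density'] = df['population'] / df['area']  # people per km^2
print(df['density'].tolist())  # -> [20.0, 30.0]
```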
| src/world.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="TsHV-7cpVkyK"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" id="atWM-s8yVnfX"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="TB0wBWfcVqHz"
# # Using TensorBoard in Notebooks
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/tensorboard_in_notebooks.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/tensorboard_in_notebooks.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] id="elH58gbhWAmn"
# TensorBoard can be used directly within notebook experiences such as [Colab](https://colab.research.google.com/) and [Jupyter](https://jupyter.org/). This can be helpful for sharing results, integrating TensorBoard into existing workflows, and using TensorBoard without installing anything locally.
# + [markdown] id="VszJNloY3ZU3"
# ## Setup
# + [markdown] id="E6QhA_dp3eRq"
# Start by installing TF 2.0 and loading the TensorBoard notebook extension:
#
# **For Jupyter users:** If you’ve installed Jupyter and TensorBoard into
# the same virtualenv, then you should be good to go. If you’re using a
# more complicated setup, like a global Jupyter installation and kernels
# for different Conda/virtualenv environments, then you must ensure that
# the `tensorboard` binary is on your `PATH` inside the Jupyter notebook
# context. One way to do this is to modify the `kernel_spec` to prepend
# the environment’s `bin` directory to `PATH`, [as described here][1].
#
# [1]: https://github.com/ipython/ipykernel/issues/395#issuecomment-479787997
#
# + [markdown] id="9w7Baxc8aCtJ"
# In case you are running a [Docker](https://docs.docker.com/install/) image of [Jupyter Notebook server using TensorFlow's nightly](https://www.tensorflow.org/install/docker#examples_using_cpu-only_images), it is necessary to expose not only the notebook's port, but also TensorBoard's port.
#
# Thus, run the container with the following command:
#
# ```
# docker run -it -p 8888:8888 -p 6006:6006 \
# tensorflow/tensorflow:nightly-py3-jupyter
# ```
#
# where the `-p 6006` is the default port of TensorBoard. This will allocate a port for you to run one TensorBoard instance. To have concurrent instances, it is necessary to allocate more ports.
#
# + id="8p3Tbx8cWEFA"
# Load the TensorBoard notebook extension
# %load_ext tensorboard
# + [markdown] id="9GtR_cTTkf9G"
# Import TensorFlow, datetime, and os:
# + id="mVtYvbbIWRkV"
import tensorflow as tf
import datetime, os
# + [markdown] id="Cu1fbH-S3oAX"
# ## TensorBoard in notebooks
# + [markdown] id="XfCa27_8kov6"
# Download the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset and scale it:
# + id="z8b82G7YksOS"
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# + [markdown] id="lBk1BqAZKEKd"
# Create a very simple model:
# + id="OS7qGYiMKGQl"
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
# + [markdown] id="RNaPPs5ZKNOV"
# Train the model using Keras and the TensorBoard callback:
# + id="lpUO9HqUKP6z"
def train_model():
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
train_model()
# + [markdown] id="SxvXc4hoKW7d"
# Start TensorBoard within the notebook using [magics](https://ipython.readthedocs.io/en/stable/interactive/magics.html):
# + id="KBHp6M_zgjp4"
# %tensorboard --logdir logs
# + [markdown] id="Po7rTfQswAMT"
# <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/notebook_tensorboard.png?raw=1"/>
# + [markdown] id="aQq3UHgmLBpC"
# You can now view dashboards such as scalars, graphs, histograms, and others. Some dashboards are not available yet in Colab (such as the profile plugin).
#
# The `%tensorboard` magic has exactly the same format as the TensorBoard command line invocation, but with a `%`-sign in front of it.
# + [markdown] id="NiIMwOG8MR_g"
# You can also start TensorBoard before training to monitor it in progress:
# + id="qyI5lrXoMw9K"
# %tensorboard --logdir logs
# + [markdown] id="ALxC8BbWWV91"
# <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/notebook_tensorboard_two_runs.png?raw=1"/>
# + [markdown] id="GUSM8yLrO2yZ"
# The same TensorBoard backend is reused by issuing the same command. If a different logs directory was chosen, a new instance of TensorBoard would be opened. Ports are managed automatically.
#
# Start training a new model and watch TensorBoard update automatically every 30 seconds or refresh it with the button on the top right:
# + id="ixZlmtWhMyr4"
train_model()
# + [markdown] id="IlDz2oXBgnZ9"
# You can use the `tensorboard.notebook` APIs for a bit more control:
# + id="ko9qeSQHLrEh"
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
# + id="hzm9DNVILxJe"
# Control TensorBoard display. If no port is provided,
# the most recently launched TensorBoard is used
notebook.display(port=6006, height=1000)
# + [markdown] id="za2GqzKiWY-R"
# <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/notebook_tensorboard_tall.png?raw=1"/>
| site/en-snapshot/tensorboard/tensorboard_in_notebooks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Learning with PyTorch: A 60 Minute Blitz (Part 2 - Autograd: Automatic Differentiation)
# # Part 2: Autograd: Automatic Differentiation
# Central to all neural networks in PyTorch is the `autograd` package. Let's first briefly introduce it, and then we will train our first neural network.
#
# The `autograd` package provides automatic differentiation for all operations on tensors. It is a define-by-run framework, which means that backpropagation is defined by how your code is run, and that every single iteration can be different.
#
# Let's look at this package through some simple examples:
# ## Tensor
# `torch.Tensor` is the central class of the package. If you set its attribute `.requires_grad` to `True`, it starts to track all operations on it. When you finish your computation you can call `.backward()` and have all the gradients computed automatically. The gradient for this tensor will be accumulated into the `.grad` attribute.
#
# To stop a tensor from tracking history, you can call `.detach()` to detach it from the computation history and prevent future computation from being tracked.
#
# To prevent tracking history (and using memory), you can also wrap the code block in `with torch.no_grad():`. This can be particularly helpful when evaluating a model, because the model may have trainable parameters with `requires_grad = True`, but for which we do not need the gradients.
#
# There is one more class which is very important for the autograd implementation: `Function`.
#
# `Tensor` and `Function` are interconnected and build up an acyclic graph that encodes a complete history of computation. Each tensor has a `.grad_fn` attribute that references the `Function` that created the Tensor (except for Tensors created by the user, whose `grad_fn` is `None`).
#
# If you want to compute the derivatives, you can call `.backward()` on a Tensor. If the Tensor is a scalar (i.e. it holds a one-element value), you don't need to specify any arguments to `backward()`; however, if it has more elements, you need to specify a `gradient` argument, which is a tensor of matching shape.
import torch
# Create a tensor and set requires_grad=True to track computation with it
x = torch.ones(2, 2, requires_grad=True)
print(x)
# Perform an operation on the tensor:
y = x + 2
print(y)
# Since y was created as a result of an operation, it has a grad_fn, while x was created by the user, so its grad_fn is None.
print(y.grad_fn)
print(x.grad_fn)
# Perform more operations on y
# +
z = y * y * 3
out = z.mean()
print(z, out)
# -
# `.requires_grad_(...)` changes an existing Tensor's `requires_grad` flag in place. The input flag defaults to `False` if not given.
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
# ## Gradients
#
# Now let's backpropagate. `out.backward()` is equivalent to `out.backward(torch.tensor(1.))`.
out.backward()
# Print the gradient d(out)/dx:
print(x.grad)
# You should get a matrix filled with 4.5. Let's call the tensor out "$o$". Then $o = \frac{1}{4}\sum_i z_i$ with $z_i = 3(x_i+2)^2$, and $z_i\bigr\rvert_{x_i=1} = 27$. Therefore, $\frac{\partial o}{\partial x_i} = \frac{3}{2}(x_i+2)$, and hence $\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5$.
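# The value 4.5 can also be double-checked numerically, without autograd, using a central finite difference in plain NumPy:

```python
import numpy as np

def o(x):
    # Same function as above: out = mean(3 * (x + 2)^2)
    z = 3.0 * (x + 2.0) ** 2
    return z.mean()

x = np.ones((2, 2))
eps = 1e-6
grad = np.zeros_like(x)
for idx in np.ndindex(x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += eps
    xm[idx] -= eps
    grad[idx] = (o(xp) - o(xm)) / (2 * eps)  # central finite difference

print(grad)  # every entry is close to 4.5
```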
# Mathematically, if you have a vector-valued function $\vec{y}=f(\vec{x})$, then the gradient of $\vec{y}$ with respect to $\vec{x}$ is the Jacobian matrix:
#
#
# $J=\left( \begin{matrix}
# \frac{\partial {{y}_{1}}}{\partial {{x}_{1}}} & \ldots & \frac{\partial {{y}_{m}}}{\partial {{x}_{1}}} \\
# \vdots & \ddots & \vdots \\
# \frac{\partial {{y}_{1}}}{\partial {{x}_{n}}} & \cdots & \frac{\partial {{y}_{m}}}{\partial {{x}_{n}}} \\
# \end{matrix} \right)$
#
# Generally speaking, torch.autograd is an engine for computing Jacobian-vector products. That is, given any vector $v = (v_1\; v_2\; \cdots\; v_m)^T$, it computes the product $J\cdot v$. If $v$ happens to be the gradient of a scalar function $l=g(\vec{y})$, i.e., $v = (\frac{\partial l}{\partial y_1} \cdots \frac{\partial l}{\partial y_m})^T$, then by the chain rule the Jacobian-vector product is the gradient of $l$ with respect to $\vec{x}$:
#
# $J\centerdot v=\left( \begin{matrix}
# \frac{\partial {{y}_{1}}}{\partial {{x}_{1}}} & \ldots & \frac{\partial {{y}_{m}}}{\partial {{x}_{1}}} \\
# \vdots & \ddots & \vdots \\
# \frac{\partial {{y}_{1}}}{\partial {{x}_{n}}} & \cdots & \frac{\partial {{y}_{m}}}{\partial {{x}_{n}}} \\
# \end{matrix} \right)\left( \begin{matrix}
# \frac{\partial l}{\partial {{y}_{1}}} \\
# \vdots \\
# \frac{\partial l}{\partial {{y}_{m}}} \\
# \end{matrix} \right)=\left( \begin{matrix}
# \frac{\partial l}{\partial {{x}_{1}}} \\
# \vdots \\
# \frac{\partial l}{\partial {{x}_{n}}} \\
# \end{matrix} \right)$
#
# This characteristic of the Jacobian-vector product makes it very convenient to feed external gradients into a model that has a non-scalar output.
#
# Now let's look at an example of a Jacobian-vector product:
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
# Now in this case y is no longer a scalar. `torch.autograd` cannot compute the full Jacobian directly, but if we just want the Jacobian-vector product, we simply pass the vector to `backward` as an argument:
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
# You can also stop autograd from tracking history on tensors with `.requires_grad = True` by wrapping the code block in `with torch.no_grad():`.
# +
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
print((x ** 2).requires_grad)
# -
# Documentation for `autograd` and `Function` can be found at http://pytorch.org/docs/autograd
| 60分钟入门PyTorch/60分钟入门PyTorch-2.AUTOGRAD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="69d678e4" outputId="3739069f-9413-452e-ce01-529c128b15c0" executionInfo={"status": "ok", "timestamp": 1644161049775, "user_tz": 300, "elapsed": 970, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %pylab inline
# + [markdown] id="c2d90e59"
# # Exploratory Data Analysis (EDA)
# + id="393528ac"
Data = pd.read_csv("https://raw.githubusercontent.com/iad34/seminars/master/materials/data_sem1.csv", sep=";")
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="4ff45d3a" outputId="5e216d0a-3e34-4b4f-80a4-26cc09c8100e" executionInfo={"status": "ok", "timestamp": 1644161051751, "user_tz": 300, "elapsed": 10, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.head()
# + [markdown] id="xQz-6FVnZDrJ"
# **1.** Look at the shape of the data
# + colab={"base_uri": "https://localhost:8080/"} id="1_rL6w1bYxs5" outputId="03b8023c-5d73-4823-f36a-b223b3bb85e9" executionInfo={"status": "ok", "timestamp": 1644161149091, "user_tz": 300, "elapsed": 315, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.shape
# + [markdown] id="E4_rCGcMZPzj"
# **2.** Information about numerical features
# + colab={"base_uri": "https://localhost:8080/", "height": 293} id="nRa9X2_ZYwO5" outputId="10abc0d7-5cbb-4f2b-b950-4c155733ba96" executionInfo={"status": "ok", "timestamp": 1644161184020, "user_tz": 300, "elapsed": 304, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.describe()
# + [markdown] id="rxC2a1aUZfI8"
# **3.** Information about all features
# + colab={"base_uri": "https://localhost:8080/"} id="rT3VLYsNY1pI" outputId="fd21a4e3-1492-4bc0-fd85-9a17e8ea17d8" executionInfo={"status": "ok", "timestamp": 1644161230504, "user_tz": 300, "elapsed": 294, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.info()
# + [markdown] id="O9BZoUIdaA4d"
# **4.** Get information about columns
# + [markdown] id="ba7f6dd3"
# Survived
# + colab={"base_uri": "https://localhost:8080/"} id="b_us-18fa2Qm" outputId="4c1b4ff5-086b-481f-ecbf-75a98e3c875b" executionInfo={"status": "ok", "timestamp": 1644161456139, "user_tz": 300, "elapsed": 297, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['Survived'].value_counts() #dropna=False
# + [markdown] id="Ym55AHzBqayJ"
# Pclass
# + colab={"base_uri": "https://localhost:8080/"} id="h1cxkd9jbEWu" outputId="8b32b860-896c-4de8-db1a-bf4967dd8ddb" executionInfo={"status": "ok", "timestamp": 1644161499591, "user_tz": 300, "elapsed": 11, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['Pclass'].value_counts()
# + [markdown] id="-8rZG9Ugq58a"
# Sex
# + colab={"base_uri": "https://localhost:8080/"} id="znFrIMZobM8y" outputId="a2a2e5b1-2c31-42b3-ddfd-637da8648a69" executionInfo={"status": "ok", "timestamp": 1644161658423, "user_tz": 300, "elapsed": 311, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['Sex'].value_counts()
# + [markdown] id="HkAOeSxzcRO6"
# How to handle 'Sex'?
# + [markdown] id="i4KlWVTZ7VwR"
# **5.** Is the sex an important feature?
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="4gWvJ0y87teh" outputId="e610d8aa-e433-4623-fe62-094d8730392b" executionInfo={"status": "ok", "timestamp": 1644161740235, "user_tz": 300, "elapsed": 572, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.groupby('Survived').count()['Sex'].plot.bar(rot=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="6H2pwQG19gRo" outputId="ab9151c4-813b-46a2-c22e-92e7184e9f25" executionInfo={"status": "ok", "timestamp": 1644161848130, "user_tz": 300, "elapsed": 20, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
sns.set_theme(style="whitegrid")
#sns.barplot(x="Sex", y="Survived", hue='Pclass', data=Data)
sns.barplot(x="Sex", y="Survived", data=Data)
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="vpD16U67c4db" outputId="d0daac40-2889-4351-833a-03bb216279a7" executionInfo={"status": "ok", "timestamp": 1644161938591, "user_tz": 300, "elapsed": 289, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data[Data['Sex'] == 'unknown']
# + [markdown] id="RlsUfWpd4wlv"
# You can fill in the missing values manually
# + id="Acd2MYjidT6I"
Data.loc[[28,49], 'Sex'] = 'female'
# + colab={"base_uri": "https://localhost:8080/", "height": 140} id="8w6tBUcFsoYe" executionInfo={"status": "ok", "timestamp": 1644162093635, "user_tz": 300, "elapsed": 324, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="7d87cb5f-481d-4e2f-e9df-269e21381d6d"
Data[Data['Sex'] == 'unknown']
# + [markdown] id="ZAZPpjrse7eW"
# **6.** Delete records with unknown sex
# + id="0YYDOWkAfVAK"
#Data = Data[Data['Sex'] != 'unknown']
Data.drop(index = Data[Data['Sex'] == 'unknown'].index, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 49} id="NY9AkUiDgKhN" outputId="7ece83a9-0352-46c1-f5ef-3dcfdc0da706" executionInfo={"status": "ok", "timestamp": 1644162308800, "user_tz": 300, "elapsed": 420, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data[Data['Sex'] == 'unknown']
# + [markdown] id="Fh675WEcgVyZ"
# **7.** Encode column Sex
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="4Ad96RySgcYl" outputId="6fc3c971-dcec-4217-8579-1c07ae9db0c9" executionInfo={"status": "ok", "timestamp": 1644162439469, "user_tz": 300, "elapsed": 451, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
#Data['Sex'] = Data['Sex'].apply(lambda x: 1 if x == 'male' else 0)
#Data['Sex'] = Data['Sex'].map({'male': 1, 'female': 0})
Data.replace({'Sex': {'male': 1, 'female': 0}}, inplace = True)
Data.head()
# + id="MI7udrXH8BeP" colab={"base_uri": "https://localhost:8080/", "height": 201} executionInfo={"status": "ok", "timestamp": 1644162480898, "user_tz": 300, "elapsed": 428, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="7ac1f6c9-be14-41f2-b4c3-4fe9ee9bfc16"
Data['Sex'] = Data['Sex'].map({1:'male', 0:'female'})
Data.head()
# + [markdown] id="0faaf504"
# ## Use OneHotEncoder
# + id="xer_pP3H5wf2"
from sklearn.preprocessing import OneHotEncoder
onee = OneHotEncoder(drop='first')
# + id="GR-QwS9Qug4q"
Data['Sex'] = onee.fit_transform(Data[['Sex']]).toarray()
# + colab={"base_uri": "https://localhost:8080/"} id="Y7BFq6VSvEKb" executionInfo={"status": "ok", "timestamp": 1644163090734, "user_tz": 300, "elapsed": 295, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="6b1c9678-d696-423c-d313-f96bbb4040bd"
onee.categories_
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="ElPC0AwtvPO3" executionInfo={"status": "ok", "timestamp": 1644163243625, "user_tz": 300, "elapsed": 27, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="46b01157-fabf-4e1d-d5de-174d0d5dcd85"
Data.head()
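The `drop='first'` option used above exists to avoid a redundant dummy column: for a binary feature, one 0/1 column already carries all the information. A minimal self-contained sketch on toy data (not the Titanic frame itself):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({'Sex': ['male', 'female', 'female', 'male']})
enc = OneHotEncoder(drop='first')            # drops the first (alphabetical) category
encoded = enc.fit_transform(df[['Sex']]).toarray()
print(enc.categories_)   # categories are sorted: ['female', 'male']
print(encoded.ravel())   # [1. 0. 0. 1.] -> the kept column encodes 'male'
```

Because categories are sorted alphabetically, `'female'` is the dropped baseline here, so a 1 means `'male'`.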
# + [markdown] id="4wY8gwac587Y"
# **8.** Explore next column 'Age'
# + colab={"base_uri": "https://localhost:8080/", "height": 413} id="64345de6" outputId="1e66e62b-3476-4a92-bbf4-926d81027f05" executionInfo={"status": "ok", "timestamp": 1644163331779, "user_tz": 300, "elapsed": 302, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data[Data['Age'].isna()]
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="TfRXonmybPDf" outputId="491b54c5-5650-49a9-cc13-0634d08548d4" executionInfo={"status": "ok", "timestamp": 1644163444431, "user_tz": 300, "elapsed": 1139, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['Age'].plot.hist()
# + colab={"base_uri": "https://localhost:8080/", "height": 382} id="5F7b5IORcXrz" outputId="d43a42a2-d813-495c-fe0f-e7970168950b" executionInfo={"status": "ok", "timestamp": 1644163581505, "user_tz": 300, "elapsed": 1277, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
sns.displot(Data.Age, kde=True)
# + colab={"base_uri": "https://localhost:8080/"} id="Wq5cveGsL37Z" outputId="1ce0ed54-e71a-4b3c-ac42-9fe8e5a1dea7" executionInfo={"status": "ok", "timestamp": 1644163762887, "user_tz": 300, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
median_age = Data['Age'].median()
mean_age = Data['Age'].mean()
print(f'Median = {median_age},\nMean = {mean_age}')
# + id="595abde3"
#Data.loc[Data[Data['Age'].isna()].index, 'Age'] = mean_age
#Data['Age'].fillna(mean_age, inplace=True)
from sklearn.impute import SimpleImputer
imp = SimpleImputer()
Data['Age'] = imp.fit_transform(Data[['Age']])
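`SimpleImputer` defaults to `strategy='mean'`; since the median was also computed above, here is a small side-by-side on toy ages showing how the two strategies fill a missing value (the median is more robust to outliers):

```python
import numpy as np
from sklearn.impute import SimpleImputer

ages = np.array([[22.0], [38.0], [np.nan], [80.0]])
imp_mean = SimpleImputer(strategy='mean')      # the default strategy
imp_median = SimpleImputer(strategy='median')  # less sensitive to the outlier 80
print(imp_mean.fit_transform(ages).ravel())    # NaN -> (22+38+80)/3
print(imp_median.fit_transform(ages).ravel())  # NaN -> 38.0
```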
# + colab={"base_uri": "https://localhost:8080/", "height": 49} id="fCKeRNIIeEvK" outputId="ad752348-8572-4bbc-c835-02e9d0b18edb" executionInfo={"status": "ok", "timestamp": 1644164335678, "user_tz": 300, "elapsed": 292, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data[Data['Age'].isna()]
# + colab={"base_uri": "https://localhost:8080/", "height": 382} id="V7hcbJ0o1P7h" executionInfo={"status": "ok", "timestamp": 1644164361758, "user_tz": 300, "elapsed": 1694, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="e6eea432-09e3-4399-d5ef-a41adcc2adbc"
sns.displot(Data.Age, kde=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="eRBCY9V21hvM" executionInfo={"status": "ok", "timestamp": 1644164431447, "user_tz": 300, "elapsed": 453, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="89d3add6-82d6-4e63-b698-b3574c90468f"
Data.head()
# + colab={"base_uri": "https://localhost:8080/"} id="tI-ym8qY1nEu" executionInfo={"status": "ok", "timestamp": 1644164549074, "user_tz": 300, "elapsed": 473, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="0d5825a2-8b23-4dc2-f6ee-31a08dc7b6b9"
Data.info()
# + [markdown] id="TcPYrS9mEKwb"
# **9.** Feature Embarked
# + colab={"base_uri": "https://localhost:8080/"} id="MJeOGfjlDfup" outputId="58c51726-ff98-4810-a944-4ce3baafa4ec" executionInfo={"status": "ok", "timestamp": 1644164609162, "user_tz": 300, "elapsed": 11, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['Embarked'].value_counts(dropna=False)
# + id="kou1ImUrDfzY"
Data.dropna(subset=['Embarked'], inplace=True)
# + [markdown] id="c6kSoetTFNJ_"
# **10.** Check the data again
# + colab={"base_uri": "https://localhost:8080/"} id="-UutiQMmGCH7" outputId="8fc0b322-45d4-4860-b989-0670b583cc1f" executionInfo={"status": "ok", "timestamp": 1644164781145, "user_tz": 300, "elapsed": 317, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.info()
# + [markdown] id="M6GqM0e93Iq7"
# ## One Factor Analysis
# + [markdown] id="0d0a6b9e"
# **11.** 'SibSp'
# + colab={"base_uri": "https://localhost:8080/"} id="fd441e16" outputId="7a1f40c0-8240-4af3-c401-aa91e51ccd01" executionInfo={"status": "ok", "timestamp": 1644164943663, "user_tz": 300, "elapsed": 12, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['SibSp'].unique()
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="3XgDkFqPOuut" outputId="0ff6a00f-7daf-4365-fa14-7f5361f7b790" executionInfo={"status": "ok", "timestamp": 1644165005915, "user_tz": 300, "elapsed": 1369, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
sns.barplot(x='SibSp', y='Survived', data=Data)
# + [markdown] id="fc318af5"
# **12.** 'Parch'
# + colab={"base_uri": "https://localhost:8080/"} id="1VXvSBzANdVW" outputId="cb31cdec-d599-4d38-e28f-72680f5b678c" executionInfo={"status": "ok", "timestamp": 1644165053643, "user_tz": 300, "elapsed": 289, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['Parch'].unique()
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="6jklpi0WO0n4" outputId="2b55e1cf-e214-4f76-e003-975376993665" executionInfo={"status": "ok", "timestamp": 1644165072373, "user_tz": 300, "elapsed": 1033, "user": {"displayName": "<NAME>itskii", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
sns.barplot(x='Parch', y='Survived', data=Data)
# + [markdown] id="122c4f72"
# **13.** 'Embarked'
# + colab={"base_uri": "https://localhost:8080/"} id="uzhIl4RSNls0" outputId="62d0c01d-e055-4a5f-8ebc-9768c0a23bab" executionInfo={"status": "ok", "timestamp": 1644165175074, "user_tz": 300, "elapsed": 316, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data['Embarked'].unique()
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="bVmpCwcePGxH" outputId="fef5b618-ccb7-40f6-88d0-3077aeb6db62" executionInfo={"status": "ok", "timestamp": 1644165202165, "user_tz": 300, "elapsed": 780, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
sns.barplot(x='Embarked', y='Survived', data=Data)
# + [markdown] id="86540870"
# # One-Hot Encoder VS Ordinal Encoder (Label Encoder)
# [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html)
#
# [OrdinalEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html)
# + id="WqEoGNddTQca"
#le = preprocessing.LabelEncoder() #Use for y
from sklearn.preprocessing import OrdinalEncoder #Use for X
from sklearn.preprocessing import OneHotEncoder
# + id="b74NkKuiTTrY"
orde = OrdinalEncoder()
# + id="2f7JB8Q0QFwD" colab={"base_uri": "https://localhost:8080/", "height": 201} executionInfo={"status": "ok", "timestamp": 1644165513513, "user_tz": 300, "elapsed": 826, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="f9945151-0924-4915-af0e-8a2365a8a224"
Data['Ord_Embarked'] = orde.fit_transform(Data[['Embarked']])
Data.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="hESfICx1S_sd" outputId="ce72279b-536c-432e-8b5a-838a3cd6e27f" executionInfo={"status": "ok", "timestamp": 1644165732183, "user_tz": 300, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
#del Data['Ord_Embarked']
Data.head()
# + id="VqgukF4mT90L"
onee = OneHotEncoder()
# + id="VHn7jfROVPWx" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1644165989051, "user_tz": 300, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="ada8b284-4858-4d5d-f5ba-820336f10407"
onee.fit_transform(Data[['Embarked']]).toarray()
# + colab={"base_uri": "https://localhost:8080/"} id="6Fj7Gxr4VPw7" outputId="4738d7ca-6668-4dbf-ec36-a00c33f8b652" executionInfo={"status": "ok", "timestamp": 1644165990534, "user_tz": 300, "elapsed": 322, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
onee.categories_
# + id="32Uq7arDXxqi"
Data[onee.categories_[0]] = onee.fit_transform(Data[['Embarked']]).toarray()
# + id="qbqD9r3CW8PV" colab={"base_uri": "https://localhost:8080/", "height": 201} executionInfo={"status": "ok", "timestamp": 1644166000316, "user_tz": 300, "elapsed": 525, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}} outputId="27b5a88f-cace-4a94-97e3-2212eefbbd9b"
Data.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="Ol-cSQXcYW2Q" outputId="27e7fdd5-9f97-4a38-be2f-5478a4e13c2f" executionInfo={"status": "ok", "timestamp": 1644166083336, "user_tz": 300, "elapsed": 448, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.drop(columns = ['Embarked', 'C'], inplace=True)
Data.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 201} id="5RiKQtOHhGev" outputId="efff3dc5-cffa-40d6-f7de-8b0549a5f907" executionInfo={"status": "ok", "timestamp": 1644166162628, "user_tz": 300, "elapsed": 303, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjf-Vy7mLcFEDwgse-wqN_ZtQchIjVTfFqH8BjX=s64", "userId": "15865143011662476323"}}
Data.drop(columns = ['PassengerId', 'Name', 'Ticket', 'Cabin'], inplace=True)
Data.tail()
| EDA_and_Pipeline1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PythonData] *
# language: python
# name: conda-env-PythonData-py
# ---
# Update scikit-learn to prevent version mismatches
# !pip install scikit-learn --upgrade
#import dependencies
import pandas as pd
# # Read the CSV
#read clean csv file
exoplanet = pd.read_csv('Resources/exoplanet_data.csv')
exoplanet.head()
#drop null columns
exoplanet = exoplanet.dropna(axis='columns', how='all')
# Drop the null rows
exoplanet = exoplanet.dropna()
exoplanet
# # Select X and y values
# +
#assign all columns except koi_disposition to X, koi_disposition to y
X = exoplanet.drop(columns = 'koi_disposition')
y = exoplanet['koi_disposition']
print(X.shape, y.shape)
# -
# # Train Test Split
# +
#train, test, split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# -
# # Pre-Processing
# +
#fit scaled data with MinMax Scaler
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
#transform train and test features with the scaler fitted on the training set
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
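Note that the scaler is fitted on `X_train` only and then applied to both splits; fitting on the full data would leak test-set information into training. A toy sketch of the consequence:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_tr = np.array([[0.0], [5.0], [10.0]])
X_te = np.array([[20.0]])
scaler = MinMaxScaler().fit(X_tr)       # min/max learned from the training data only
print(scaler.transform(X_tr).ravel())   # maps the training data onto [0, 1]
print(scaler.transform(X_te).ravel())   # test values can fall outside [0, 1]
```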
# +
#Encode Labels
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
encoded_y_train
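`LabelEncoder` assigns integer codes in alphabetical order of the class names. A small sketch (the `koi_disposition`-style class names below are illustrative):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
labels = ['CONFIRMED', 'FALSE POSITIVE', 'CANDIDATE', 'CONFIRMED']
encoded = le.fit_transform(labels)
print(le.classes_)  # sorted class names
print(encoded)      # integer code per label
```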
# +
from keras.utils import to_categorical
# Step 2: One-hot encoding
one_hot_y = to_categorical(encoded_y_train)
one_hot_y
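`to_categorical` turns those integer codes into one-hot rows; a minimal numpy equivalent (`one_hot` is a hypothetical helper, not part of keras) makes the transformation explicit:

```python
import numpy as np

def one_hot(labels, num_classes=None):
    """Minimal numpy equivalent of keras' to_categorical (hypothetical helper)."""
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = labels.max() + 1
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0  # one 1 per row, at the label's index
    return out

print(one_hot([1, 2, 0, 1]))
```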
# +
## Create a LogisticRegression model and fit it to the scaled training data
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
# -
classifier.fit(X_train_scaled, encoded_y_train)
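Putting the pieces together on synthetic data (a stand-in for the exoplanet features, not the real dataset): scale with a train-fitted scaler, fit the logistic regression, then score on the held-out split.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(42)
X = rng.rand(200, 3)                      # synthetic stand-in features
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # linearly separable target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
scaler = MinMaxScaler().fit(X_tr)         # fit on the training split only
clf = LogisticRegression().fit(scaler.transform(X_tr), y_tr)
acc = clf.score(scaler.transform(X_te), y_te)
print(round(acc, 2))
```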
| .ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="dd9kBwv7rpMv"
# ### Keep in mind you need the number of active cases (confirmed - recovered - deceased)
# + colab={"base_uri": "https://localhost:8080/"} id="g_fla5anrv-Q" outputId="2a905c92-ec50-4d60-f88f-a5e6c33be395"
from google.colab import drive
drive.mount('/content/gdrive')
root_path = '/content/gdrive/MyDrive/asgn1/'
# + id="BVcoodCMrpMz"
import json
import pandas as pd
import numpy as np
import requests
import datetime
import scipy.signal as signal
import matplotlib.pyplot as plt
# + id="igfd9UDFrpM1"
# to open json file
# /content/gdrive/MyDrive/asgn1/neighbor-districts-modified.json
f=open('/content/gdrive/MyDrive/asgn1/neighbor-districts-modified.json')
# json.load parses the JSON file into a Python dictionary
dist_modified=json.load(f)
# + id="RbpXA_MGrpM2"
district_list_from_json=[]
for key in dist_modified:
district_list_from_json.append(key)
district_list_from_json=np.array(district_list_from_json)
district_list_from_json.sort()
state_district_codes=[]
for i in range(len(district_list_from_json)):
state_district_codes.append(district_list_from_json[i].split('/')[1])
# district names - sample entry: churu
district_names_from_json=[]
# district ids - sample entry: Q1090006
district_ids_from_json=[]
#use the split() function with separator '/'. Remember the default separator is whitespace
for i in range(len(district_list_from_json)):
district_names_from_json.append(district_list_from_json[i].split("/")[0])
district_ids_from_json.append(district_list_from_json[i].split("/")[1])
# + id="ZL4xDeVxrpM3"
district_ids_list={}
for i in range(len(district_names_from_json)):
district_ids_list[district_names_from_json[i]]=district_names_from_json[i] + '/' + district_ids_from_json[i]
# + id="XP5FGUwPrpM3"
# dictionaries for mapping dates to time ids
# e.g. for 2020-3-15: time_id_week is 1, time_id_month is 1, time_id_overall is 1
time_id_week = {}
time_id_month = {}
time_id_overall = {}
date=datetime.date(2020,3,15)
day=1
even_list=list(range(2,201,2))
odd_list=list(range(1,200,2))
while True:
    # note: to handle overlapping weeks this part needs to change;
    # for now we proceed, but the week ids should follow the 7-DMA definition above
list_week=[0,1,2,6]
if date.weekday() in list_week:
time_id_week[str(date)]=odd_list[int(np.ceil(day/7))-1]
else:
time_id_week[str(date)]=even_list[int(np.ceil(day/7))-1]
if str(date)[0:4]=='2020':
if int(str(date)[8:10]) <15:
time_id_month[str(date)]=int(str(date)[5:7])-3
else:
time_id_month[str(date)]=int(str(date)[5:7])-2
else:
if int(str(date)[8:10]) <15:
time_id_month[str(date)]=int(str(date)[5:7])+9
else:
time_id_month[str(date)]=int(str(date)[5:7])+10
time_id_overall[str(date)]=1
if date==datetime.date(2021,8,14):
break
day=day+1
date=date+datetime.timedelta(days=1)
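The year-specific month-id branches above can be collapsed into a single arithmetic rule; `month_id` below is a hypothetical helper sketching the same mapping (months run from the 15th to the 14th of the next month, starting at 2020-03-15):

```python
import datetime

def month_id(d, start=datetime.date(2020, 3, 15)):
    # months elapsed between the start month and d's month, then shift by one
    # if d falls on or after the 15th (i.e. into the next 15th-to-14th window)
    months = (d.year - start.year) * 12 + (d.month - start.month)
    return months + 1 if d.day >= 15 else months

print(month_id(datetime.date(2020, 3, 20)))  # first window -> 1
print(month_id(datetime.date(2020, 4, 15)))  # second window -> 2
```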
# + colab={"base_uri": "https://localhost:8080/"} id="xlsDsqiOrpM4" outputId="3140f813-15b8-465a-e829-8a39a8787a52"
data_csv=pd.read_csv('/content/gdrive/MyDrive/asgn1/districts.csv')
data_csv=data_csv.drop('Tested',axis=1)
data_csv.isnull().sum()
# + id="QVtDDuR0rpM4"
data_csv=data_csv.sort_values(['District','Date'])
data_csv.reset_index(inplace=True,drop=True)
data_csv['District']=data_csv['District'].str.lower()
# + id="hDfJZXeDrpM5"
district_names_from_cases=[]
district_ids_from_cases=[]
district_uniques=np.array(np.unique(data_csv['District']))
for i in range(len(district_ids_list)):
for j in range(len(district_uniques)):
if district_uniques[j]==district_names_from_json[i]:
district_ids_from_cases.append(district_ids_from_json[i])
district_names_from_cases.append(district_uniques[j])
break
# + id="03QRcYdBrpM6"
data_csv['Active']=data_csv['Confirmed']-(data_csv['Deceased']+data_csv['Recovered'])
# + colab={"base_uri": "https://localhost:8080/"} id="Xl_OniezrpM6" outputId="7fd640dc-2d7e-431b-c648-2d1e1d8da7eb"
# %%time
data_csv['Daily Cases']=np.nan
for i in range(len(district_names_from_cases)):
    foo_df = data_csv[data_csv['District']==district_names_from_cases[i]].copy()  # copy to avoid SettingWithCopyWarning
foo_cases = foo_df.iloc[0,3]
foo_df['Daily Cases']=foo_df['Active'].diff() # active ones
foo_df.iloc[0,-1]=foo_cases
data_csv.loc[data_csv['District']==district_names_from_cases[i],'Daily Cases']=foo_df['Daily Cases']
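The per-district loop above can usually be replaced by a vectorized groupby. A sketch on toy data; note that here the first row of each district falls back to its own `Active` value, which is one possible convention and may differ from the fallback used in the loop above:

```python
import pandas as pd

df = pd.DataFrame({
    'District': ['a', 'a', 'a', 'b', 'b'],
    'Active':   [3, 5, 4, 10, 12],
})
# per-group first difference; the first row of each group is NaN, so fill it
daily = df.groupby('District')['Active'].diff()
df['Daily Cases'] = daily.fillna(df['Active'])
print(df['Daily Cases'].tolist())
```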
# + colab={"base_uri": "https://localhost:8080/"} id="CNygKjg_rpM6" outputId="4ae80df9-8956-4083-b014-ad559bbf7b9a"
data_csv.isnull().sum()
# + id="vAUCUm87rpM7"
data_csv.dropna(inplace=True)
# + colab={"base_uri": "https://localhost:8080/"} id="hN1Ehgj0rpM7" outputId="e7395e18-2a9a-4b67-a008-2c507145096c"
data_csv.isnull().sum()
# + id="5NgtE9oorpM7"
dates_in_raw=np.unique(data_csv['Date']).tolist()
# + colab={"base_uri": "https://localhost:8080/"} id="ciIO4P1nrpM8" outputId="da698572-11d9-4466-8539-3f4d37f487b9"
# %%time
data_csv['Week ID']=np.nan
data_csv['Month ID']=np.nan
for date in time_id_week:
if dates_in_raw.count(date)>0:
data_csv.loc[data_csv['Date']==date,'Week ID']=time_id_week[date]
data_csv.loc[data_csv['Date']==date,'Month ID']=time_id_month[date]
# + colab={"base_uri": "https://localhost:8080/"} id="ObMcXiQCrpM8" outputId="98032b24-6ae9-4bd4-8e80-00aa62308de9"
data_csv.isnull().sum()
# + id="XYfHwvB8rpM8"
data_csv.dropna(inplace=True)
# + colab={"base_uri": "https://localhost:8080/"} id="FltVuyeZrpM8" outputId="11ed89e1-503f-4c84-c23e-12ad38bd10b6"
data_csv.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="HwYjJRpwrpM9" outputId="a6b58aa0-47a3-4ef3-dae9-8c11015ae4cf"
# %%time
# warning: this nested-filtering loop is slow (roughly 15 minutes on this data);
# a groupby-based approach would be much faster
# number of weeks
no_of_weeks=list(time_id_week.values())[-1]
districtid=[]
weekid=[]
cases=[]
active=[]
for i in range(len(district_names_from_cases)):
for j in range(no_of_weeks):
districtid.append(district_ids_from_cases[i])
weekid.append(j+1)
foo_df=data_csv[(data_csv['District']==district_names_from_cases[i]) & ((data_csv['Week ID']==j+1) | (data_csv['Week ID']==j+2))]
cases.append(foo_df['Daily Cases'].sum())
active.append(foo_df['Active'].sum())
week_df=pd.DataFrame({'districtid':districtid,'weekid':weekid,'cases':cases,'active':active})
week_df.to_csv('cases-week.csv',index=False)
# + colab={"base_uri": "https://localhost:8080/"} id="TiBM7vWGrpM-" outputId="efc29b98-1eb7-41c6-8b98-5cf5a404f2ae"
# %%time
no_of_months=list(time_id_month.values())[-1]
districtid=[]
monthid=[]
cases=[]
active=[]
for i in range(len(district_names_from_cases)):
for j in range(no_of_months):
districtid.append(district_ids_from_cases[i])
monthid.append(j+1)
foo_df=data_csv[(data_csv['District']==district_names_from_cases[i]) & (data_csv['Month ID']==j+1)]
cases.append(foo_df['Daily Cases'].sum())
active.append(foo_df['Active'].astype(int).sum())
month_df=pd.DataFrame({'districtid':districtid,'monthid':monthid,'cases':cases,'active':active})
month_df.to_csv('cases-month.csv',index=False)
# + [markdown] id="29iXiiXkrpM-"
# ### Time series can now be plotted
# + id="WdxTkRExrpM_"
def choose_function(time_series,indices):
indices=indices.tolist()
foo_list=time_series[indices]
ind1=indices[foo_list.argmax()]
indices.remove(ind1)
time_series[ind1]=0
foo_list=time_series[indices]
#foo_list.remove(time_series[ind1])
ind2=indices[foo_list.argmax()]
if ind1<ind2:
return [ind1,ind2]
else:
return [ind2,ind1]
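`choose_function` picks the two largest candidate peaks but, as a side effect, zeroes out an entry of the array passed in. An argsort-based alternative (`top_two_peaks`, a hypothetical helper) does the same selection without mutating its input:

```python
import numpy as np

def top_two_peaks(time_series, indices):
    """Return the two candidate indices with the largest values, in time order.
    A hypothetical, non-mutating alternative to choose_function."""
    indices = np.asarray(indices)
    best = indices[np.argsort(time_series[indices])[-2:]]  # two largest values
    return sorted(best.tolist())

ts = np.array([1, 9, 2, 7, 3, 8])
print(top_two_peaks(ts, np.array([1, 3, 5])))  # values 9 and 8 -> indices [1, 5]
```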
# + colab={"base_uri": "https://localhost:8080/"} id="-uWw0hxKFk26" outputId="325d37bc-16f7-4058-804a-fba698b48e74"
np.unique(peak2,return_counts=True)  # note: requires the peak-finding cell below to have run
# + colab={"base_uri": "https://localhost:8080/"} id="a_ngp0P-DLe0" outputId="2801bb14-ddef-4573-f30f-5f8b1511015d"
np.unique(peak1,return_counts=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="93cD4-hsLDa4" outputId="bac6ced4-7962-4cde-ab7a-3cdfd5b2d8f8"
### test for any district and also plot peaks
df=week_df[week_df['districtid']==district_ids_from_cases[peak1.index(86)]]
time_series=np.array(df.cases)
indices=signal.argrelextrema(time_series,np.greater_equal, order=20)[0]
print(indices)
# a small enough order guarantees at least 2 candidate extrema; of those, keep the two largest
# the week/month ID is simply index+1
print(time_series[indices])
# the top two peaks are chosen by the number of cases in the time series
df.cases.plot(figsize=(20,8), alpha=.3)
df.iloc[indices].cases.plot(style='.', lw=10, color='green', marker="^");
#df.iloc[ilocs_min].cases.plot(style='.', lw=10, color='green', marker="^");
# + id="8saQjxs2A2CQ"
## argrelextrema with order=40 already yields exactly two peaks for most districts;
## for the remaining ones, progressively relax the order and, if more than two
## candidate extrema remain, keep the two with the largest case counts
peak1=[]
peak2=[]
for i in range(len(district_ids_from_cases)):
    time_series=np.array(week_df[week_df['districtid']==district_ids_from_cases[i]].cases)
    for order in (40, 30, 20):
        indices=signal.argrelextrema(time_series,np.greater_equal, order=order)[0]
        if len(indices)>=2:
            break
    if len(indices)!=2:
        indices=choose_function(time_series,indices)
    peak1.append(indices[0]+1)
    peak2.append(indices[1]+1)
# 585 districts pass the len==2 condition at the first try (order=40)
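For reference, `argrelextrema` with `np.greater_equal` flags every index whose value is at least as large as its `order` neighbours on both sides; a tiny sketch:

```python
import numpy as np
import scipy.signal as signal

ts = np.array([0, 2, 1, 0, 3, 1, 0, 5, 0])
# indices of local maxima relative to `order` neighbours on each side
peaks = signal.argrelextrema(ts, np.greater_equal, order=1)[0]
print(peaks)  # [1 4 7]
```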
# + colab={"base_uri": "https://localhost:8080/"} id="RgLZiQMdIMhA" outputId="84cb7799-06fc-4c08-8687-8bc3d1201f41"
peak2.index(2)
# + colab={"base_uri": "https://localhost:8080/"} id="lvj_TlLeJK59" outputId="3f57af2d-0caf-4c73-cbda-a26ec423ef48"
data_csv[data_csv['District']=='phek'].Confirmed.sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="FZyT0_bPIBRv" outputId="184fdbb3-171f-4130-c488-4ad3054f1c7b"
district_ids_from_cases[peak1.index(124)]
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="hrMdWQ_SKf8V" outputId="9149f370-6ec9-427a-c553-1845f728b16d"
### test for any district and also plot peaks
df=week_df[week_df['districtid']==district_ids_from_cases[peak2.index(54)]]
time_series=np.array(df.cases)
indices=signal.argrelextrema(time_series,np.greater_equal, order=20)[0]
print(indices)
print(time_series[indices])
df.cases.plot(figsize=(20,8), alpha=.3)
df.iloc[indices].cases.plot(style='.', lw=10, color='green', marker="^");
# + id="UO7ute3rKfTV"
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="LjIvt6Q4Kb-2" outputId="843a7fab-0420-40a9-832f-d3294fd9bc9a"
### test for any district and also plot peaks
df=week_df[week_df['districtid']==district_ids_from_cases[peak2.index(2)]]
time_series=np.array(df.cases)
indices=signal.argrelextrema(time_series,np.greater_equal, order=20)[0]
print(indices)
print(time_series[indices])
df.cases.plot(figsize=(20,8), alpha=.3)
df.iloc[indices].cases.plot(style='.', lw=10, color='green', marker="^");
#df.iloc[ilocs_min].cases.plot(style='.', lw=10, color='green', marker="^");
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="Dei7YHiWFwCE" outputId="a8cd9365-86dc-44bf-a8d3-78a3db9afe67"
### test for any district and also plot peaks
df=week_df[week_df['districtid']==district_ids_from_cases[peak1.index(124)]]
time_series=np.array(df.cases)
indices=signal.argrelextrema(time_series,np.greater_equal, order=20)[0]
print(indices)
# basically order=3 can ensure at least 2 peaks, and from those I only have to pick the 2 largest ones
# the month ID simply becomes ``indices + 1``
# I can only really try order 3, 4 (maybe 5)
print(time_series[indices])
# picking the largest two can be done using the no. of cases in the time series
df.cases.plot(figsize=(20,8), alpha=.3)
df.iloc[indices].cases.plot(style='.', lw=10, color='green', marker="^");
#df.iloc[ilocs_min].cases.plot(style='.', lw=10, color='green', marker="^");
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="NsFHF5GpALm0" outputId="92360ea1-01da-4135-f1a8-769c6a1a1883"
### test for any district and also plot peaks
df=week_df[week_df['districtid']==district_ids_from_cases[1]]
time_series=np.array(df.cases)
indices=signal.argrelextrema(time_series,np.greater_equal, order=20)[0]
print(indices)
# basically order=3 can ensure at least 2 peaks, and from those I only have to pick the 2 largest ones
# the month ID simply becomes ``indices + 1``
# I can only really try order 3, 4 (maybe 5)
print(time_series[indices])
# picking the largest two can be done using the no. of cases in the time series
df.cases.plot(figsize=(20,8), alpha=.3)
df.iloc[indices].cases.plot(style='.', lw=10, color='green', marker="^");
#df.iloc[ilocs_min].cases.plot(style='.', lw=10, color='green', marker="^");
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="S7NnpiFirpM-" outputId="0af17e52-fb2a-4d2b-ac1e-ea403468df3a"
### test for any district and also plot peaks
df=month_df[month_df['districtid']==district_ids_from_cases[1]]
time_series=np.array(df.cases)
indices=signal.argrelextrema(time_series,np.greater_equal, order=3)[0]
print(indices)
# basically order=3 can ensure at least 2 peaks, and from those I only have to pick the 2 largest ones
# the month ID simply becomes ``indices + 1``
# I can only really try order 3, 4 (maybe 5)
print(time_series[indices])
# picking the largest two can be done using the no. of cases in the time series
df.cases.plot(figsize=(20,8), alpha=.3)
df.iloc[indices].cases.plot(style='.', lw=10, color='green', marker="^");
#df.iloc[ilocs_min].cases.plot(style='.', lw=10, color='green', marker="^");
# + id="6RVrEKJerpM_"
peak1=[]
peak2=[]
# basically order=3 can ensure at least 2 peaks, and from those I only have to pick the 2 largest ones
# the month ID simply becomes ``indices + 1``
# I can only really try order 3, 4 (maybe 5) ::: it does not work, and especially won't in the case of weeks
# although it is not a good solution, as it is not applicable to weeks,
# it is possible to choose more than three candidates and then work out the peaks from there
for i in range(len(district_ids_from_cases)):
    time_series=np.array(month_df[month_df['districtid']==district_ids_from_cases[i]].cases)
    # try progressively smaller windows until at least two extrema appear
    for order in (4, 3, 2):
        indices=signal.argrelextrema(time_series,np.greater_equal, order=order)[0]
        if len(indices)>=2:
            break
    # more candidates than two (or, at order=2, still fewer): let choose_function pick two
    if len(indices)!=2:
        indices=choose_function(time_series,indices)
    peak1.append(indices[0]+1)
    peak2.append(indices[1]+1)
# 585 cases pass the len==2 condition at the first go
# + [markdown] id="dj7A9XKJLWAx"
# ## don't run the last cell, as it overwrites these variables with the month-level results
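# The fallback `choose_function` used in the loop above is defined elsewhere in the notebook. A pure-Python sketch of the intended selection step (keep the two largest local maxima and report 1-based period IDs), under that assumption:

```python
def top_two_peaks(series):
    """Return the 1-based positions of the two largest local maxima.

    A pure-Python sketch of the selection step described above; the real
    choose_function lives elsewhere in the notebook, so this is an assumed
    equivalent, not the original implementation.
    """
    # interior points that are >= both neighbours count as local maxima
    candidates = [i for i in range(1, len(series) - 1)
                  if series[i] >= series[i - 1] and series[i] >= series[i + 1]]
    # keep the two candidates with the highest case counts
    best = sorted(candidates, key=lambda i: series[i])[-2:]
    return sorted(i + 1 for i in best)  # +1: month/week IDs are 1-based

print(top_two_peaks([1, 5, 2, 9, 3, 4, 7, 2]))  # [4, 7]
```

The two maxima at positions 4 and 7 (values 9 and 7) are kept; the smaller bump at position 2 is dropped.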
| colab-peaks-test-notebooks/peaks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Post Mortem Debugging
# When an exception is raised within IPython, execute the %debug magic command to launch the debugger and step into the code. Also, the %pdb on command tells IPython to launch the debugger automatically as soon as an exception is raised.
#
# Once you are in the debugger, you have access to several special commands, the most important ones being listed here:
#
# - p varname prints the value of a variable
# - w shows your current location within the stack
# - u goes up in the stack
# - d goes down in the stack
# - l shows the lines of code around your current location
# - a shows the arguments of the current function
#
a = 0
b = 1
b / a
# %debug
# %pdb on
b/a
# %pdb off
# # Step-by-Step Debugging
# In order to put a breakpoint somewhere in your code, insert the following command:
import pdb
pdb.set_trace()
# Second, you can run a script from IPython with the following command:
# %run -d -b extscript.py:20 script.py
# This command runs the script.py file under the control of the debugger, with a breakpoint on line 20 of extscript.py (which is imported by script.py). Finally, you can do step-by-step debugging as soon as you are in the debugger.
#
# Step-by-step debugging consists of precisely controlling the course of the interpreter. Starting from the beginning of a script or from a breakpoint, you can resume the execution of the interpreter with the following commands:
#
# - s executes the current line and stops as soon as possible afterwards (step-by-step debugging—that is, the most fine-grained execution pattern)
# - n continues the execution until the next line in the current function is reached
# - r continues the execution until the current function returns
# - c continues the execution until the next breakpoint is reached
# - j 30 brings you to line 30 in the current file
# - b inserts a breakpoint
#
# You can add breakpoints dynamically from within the debugger using the b command or with tbreak (temporary breakpoint). You can also clear all or some of the breakpoints, enable or disable them, and so on. You can find the full details of the debugger at https://docs.python.org/3/library/pdb.html.
#
#
#
# +
# %%writefile debug.py
def test():
a = 0
b = 1
return b / a
print(test())
# -
# %run -d debug.py
# # VS Code
# Install the Python extension. It enables debugging, linting, and IntelliSense, and makes it easy to switch between the editor and the terminal. It is helpful to run IPython in the terminal and use the following shortcuts to pass code to it.
#
# - Shift + Enter copy the selected python code to the terminal
# - Ctrl + \` focus on the terminal or close the terminal
# - Ctrl + J toggle the terminal
# Also, it is possible to run Jupyter Notebooks in VS Code and convert them back and forth to Python syntax.
| Tutorial/Debugging.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:.conda-env]
# language: python
# name: conda-env-.conda-env-py
# ---
# + [markdown] id="LikIUYuS_DyQ"
# # ResNet
#
# This notebook is an implementation of [___Deep Residual Learning for Image Recognition___](https://arxiv.org/pdf/1512.03385.pdf) by He et al. The original model was trained on the ImageNet dataset, but in this notebook we fine-tune it for the Cifar 10 dataset, which is considerably smaller and easier to store on a server.
# + [markdown] id="MfJAV5slGQq_"
# We first need to install and import all the dependent libraries in the session.
# + colab={"base_uri": "https://localhost:8080/"} id="aVhDJW4GGVUT" outputId="66856870-e9da-4eab-d752-29b82882efed"
# ! pip install -r ../requirements.txt
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
from tensorflow.keras.layers import *
from tensorflow.keras.layers.experimental.preprocessing import Resizing, RandomContrast, RandomFlip, RandomRotation
from tensorflow.keras.regularizers import l2
from tensorflow.keras import Model
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
# + [markdown] id="NLaTochAFfm8"
# In this part of the program, we get the Cifar 10 dataset using tensorflow dataset and separate it into training set validation set, and test set.
# + colab={"base_uri": "https://localhost:8080/", "height": 418, "referenced_widgets": ["ef34eedb880e47f0aafadb3120aea380", "5b03611d6e4f413880d2f78e8af48660", "86dcb6a2cce648308ce633ce58c823bf", "c7d2b12d99a94e6eac212b9dec6155e4", "1f09dfc329ee4495ae7c6e02c5ac4f20", "754dcdf1f0014f9e8ff92d079300a4b6", "<KEY>", "f02b2aed8dee418c970acd806f6b44e3", "<KEY>", "<KEY>", "584d88dea75345b6ae9a7d77bc9f21b0", "<KEY>", "8a306137fd9a47d5b2714e40c3437921", "<KEY>", "<KEY>", "454a4447a407462ca644bc81137ed5f6", "bb44a87e7b0645188471745eb88a96b3", "1fd992ac91ea4405af96725a54cf67d8", "91b0df07c4aa48fd96490e89989c32a4", "<KEY>", "<KEY>", "f3c09692f9c74c589f7f0efe7d85f037", "<KEY>", "bedd3e4bdc974d588cd706f7aa5c822e", "<KEY>", "0506b1f3ffe045af8df7624b765e4058", "388ce343e4414db794323d3d1ed8284a", "8803ce07a20340f587fbc19e3255c139", "<KEY>", "f2d4da8079be4ac9b4e843fa60f4e64a", "db765227a4a74fd0ab29bed7b5f9a9f9", "4800278ddb4746aebaefedaed79f3579", "918cc0a5f0e14470b6ee30fd91ee6f01", "4b3d25b04a2242f3aea9f8b5cc789e62", "0528db7a4e6c4b7e9eceeefc7ae5b10e", "b20ec0d20c74431eaff3354b7de34266", "<KEY>", "<KEY>", "d3a53533ffab4a76ac9ada9169bfff28", "<KEY>", "81e8f533e9134a1b8075f123ff1b7358", "2e27bbd74a24414b86e1da4a0ef640c8", "<KEY>", "c1c992efae044df39517b5c4973cef27", "<KEY>", "<KEY>", "b1eef36a9ffc41b6baf4e1e694ed69ac", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "1ae399483646413eb54d43c50ead2be8", "edca3d397f2344a6b17ce9ae8d57ab34", "2762f59788ad4aed92a7dffebec14ac2", "e150ae3eeddc42498e5a5d792093eecd", "<KEY>", "75bbb328dbef4f8f8e106663ff6dd1c7", "<KEY>", "0ffd2946869e4ba59d0e9c1c4c4e94eb", "70194de5ff26462c923dfd8ed271e114", "97b4ab535dc84e2391b567c17250a566", "<KEY>", "1b67eaff6ad144db93eab03e3fe4e679", "<KEY>", "<KEY>", "5fb22e29634c4e2a9d7b2688536effef", "7a52b439688a44cba259f55fe3b7aa45", "fdc273dbf7744e18b54eea536fada0f7", "8fb6c52e547c40ac9066ef0c21507777", "00547a582ab94234a28584c5447e5ce7", "<KEY>", "<KEY>", "<KEY>", "7bad3800395d4abebcc3df2c2ac9a9f5"]} 
id="B_ecmr8NFtIM" outputId="18bcff8d-d0d5-4a6c-c4a8-053ae4a8a5de"
batch_size = 256
def get_data():
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()
train_x = train_x / 255.0
test_x = test_x / 255.0
train_size = len(train_y) * 8 // 10
train = tf.data.Dataset.from_tensor_slices((train_x[:train_size],
train_y[:train_size])).shuffle(train_size).batch(batch_size)
val = tf.data.Dataset.from_tensor_slices((train_x[train_size:],
train_y[train_size:])).batch(batch_size)
test = tf.data.Dataset.from_tensor_slices((test_x, test_y)).batch(batch_size)
return train, val, test
train, val, test = get_data()
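# The split above reserves 80% of the training images for training and 20% for validation. A minimal NumPy sketch of the same bookkeeping, using toy stand-ins for the Cifar 10 arrays (no TensorFlow needed):

```python
import numpy as np

# toy stand-ins for the Cifar-10 arrays loaded above (50 fake 32x32 RGB images)
train_x = np.random.randint(0, 256, size=(50, 32, 32, 3)).astype(float)
train_y = np.random.randint(0, 10, size=(50, 1))

train_x = train_x / 255.0            # scale pixel values into [0, 1]
train_size = len(train_y) * 8 // 10  # 80/20 train/validation split

print(train_size)  # 40
```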
# + [markdown] id="WK2DZExoAPVk"
# This function constructs a ResNet model. We provide ResNet with 18, 34, 50, 101, and 152 layers, using the bottleneck structure for models with 50 or more layers, as described in the paper. The structure of the model is almost the same as in the original paper, but we add some preprocessing so that the network fits the Cifar 10 dataset better. We expose ```weight_decay``` as a hyperparameter of the model for kernel regularization. Although the original paper did not use dropout during training, we still added a few dropout layers because the network was overfitting the data. In addition, we apply data augmentation to the original images to further reduce overfitting.
# + id="71DVAiu4KFjz"
def bottleneck(input, f1, f3, stride, weight_decay):
x = Conv2D(kernel_size = 1, filters = f1, padding = "same", strides = stride, kernel_regularizer = l2(weight_decay))(input)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(kernel_size = 3, filters = f1, padding = "same", kernel_regularizer = l2(weight_decay))(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(kernel_size = 1, filters = f3, padding = "same", kernel_regularizer = l2(weight_decay))(x)
x = BatchNormalization()(x)
if stride == 2:
input = Conv2D(kernel_size = 1, strides = stride, filters = f3, padding = "valid", activation = "relu", kernel_regularizer = l2(weight_decay))(input)
input = BatchNormalization()(input)
x = input + x
x = BatchNormalization()(x)
x = Activation(activation = "relu")(x)
return x
def block(input, f1, stride, weight_decay):
x = Conv2D(kernel_size = 3, filters = f1, padding = "same", strides = stride, kernel_regularizer = l2(weight_decay))(input)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(kernel_size = 3, filters = f1, padding = "same", kernel_regularizer = l2(weight_decay))(x)
x = BatchNormalization()(x)
if stride == 2:
input = Conv2D(kernel_size = 1, strides = 2, filters = f1, padding = "valid", kernel_regularizer = l2(weight_decay))(input)
        input = BatchNormalization()(input)
x = input + x
x = BatchNormalization()(x)
x = Activation(activation = "relu")(x)
return x
def residualBlock(input, f1, f3, layers, weight_decay, dr, bottleNeck = False):
if bottleNeck:
x = bottleneck(input, f1, f3, 2 if f1 != 64 else 1, weight_decay)
for i in range(layers - 1):
x = bottleneck(x, f1, f3, 1, weight_decay)
else:
x = block(input, f1, 2 if f1 != 64 else 1, weight_decay)
for i in range(layers - 1):
x = block(x, f1, 1, weight_decay)
if dr > 0:
x = Dropout(dr)(x)
return x
def createResNet(type, weight_decay, dropout, dropout_rate):
if type == 18:
params = [2, 2, 2, 2]
elif type == 34:
params = [3, 4, 6, 3]
elif type == 50:
params = [3, 4, 6, 3]
elif type == 101:
params = [3, 4, 23, 3]
elif type == 152:
params = [3, 8, 36, 3]
else:
raise Exception("The parameter is not valid!")
input = Input(shape = (32, 32, 3))
x = Resizing(96, 96)(input)
x = RandomRotation(.2)(x)
x = RandomFlip("horizontal")(x)
x = RandomContrast(.2)(x)
x = Conv2D(kernel_size = 7, filters = 64, strides = 2, kernel_regularizer = l2(weight_decay))(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = MaxPooling2D(pool_size = 3, strides = 2)(x)
if type >= 50:
x = Conv2D(kernel_size = 1, filters = 256, strides = 1, kernel_regularizer = l2(weight_decay))(x)
x = residualBlock(x, 64, 256, params[0], weight_decay, dropout_rate, bottleNeck = type >= 50)
x = residualBlock(x, 128, 512, params[1], weight_decay, dropout_rate, bottleNeck = type >= 50)
x = residualBlock(x, 256, 1024, params[2], weight_decay, dropout_rate, bottleNeck = type >= 50)
x = residualBlock(x, 512, 2048, params[3], weight_decay, dropout_rate, bottleNeck = type >= 50)
x = GlobalAveragePooling2D()(x)
x = Flatten()(x)
if dropout:
x = Dropout(dropout_rate)(x)
x = Dense(10, activation = "softmax")(x)
model = tf.keras.Model(inputs = input, outputs = x, name = "ResNet")
return model
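# As a sanity check on the `params` tables in `createResNet`, the named depth can be recovered by counting weighted layers: each basic block has 2 convolutions (3 for a bottleneck block), plus the stem convolution and the final dense layer. A quick pure-Python verification:

```python
# conv-layer counts per stage, copied from createResNet above
configs = {18: [2, 2, 2, 2], 34: [3, 4, 6, 3],
           50: [3, 4, 6, 3], 101: [3, 4, 23, 3], 152: [3, 8, 36, 3]}

def depth(n_layers):
    convs_per_block = 3 if n_layers >= 50 else 2  # bottleneck has 3 convs
    # stem conv + residual convs + final fully-connected layer
    return 1 + convs_per_block * sum(configs[n_layers]) + 1

for n in configs:
    assert depth(n) == n, n
print("all depths check out")
```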
# + [markdown] id="L7dpVq72RSei"
# This part trains the ResNet model on the Cifar 10 dataset. We tested several sets of hyperparameters and adopted the one with the best validation loss. We store the best weights of each training epoch on drive so that we can resume training even if the session disconnects, and we also save the search results in case the process takes too long or the session crashes. We show the result of the training process with a graph of the training and validation accuracy for each epoch. The training result can vary due to randomness introduced by file shuffling and the learning rate; this problem will be fixed soon.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="FaLmu9n5ROzk" outputId="a80fc0d9-c1e7-4a5e-a5ab-3705804797b9"
# Set a checkpoint to save weights
cp = tf.keras.callbacks.ModelCheckpoint("weights", monitor = "val_loss", verbose = 0, save_best_only = True, mode = "auto")
lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor = 0.1, patience = 5, verbose = 0,
mode = 'auto', min_delta = 0.0001, cooldown = 0, min_lr = 0)
es = tf.keras.callbacks.EarlyStopping(monitor = "val_loss", patience = 20, restore_best_weights = True)
weight_decay = 1e-4
learning_rate = 1e-3
dropout = True
dropout_rate = .1
model = createResNet(34, weight_decay, dropout, dropout_rate)
model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate = learning_rate),
loss = tf.keras.losses.SparseCategoricalCrossentropy(),
metrics = ["accuracy"])
# We can use the existing data if the training process has started
# model.load_weights("weights")
history = model.fit(train, epochs = 150, validation_data = val, callbacks = [cp, lr])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
for i, (acc, val_acc) in enumerate(zip(history.history['accuracy'], history.history['val_accuracy'])):
if (i + 1) % 10 == 0:
plt.annotate("{:.2f}".format(acc), xy = (i + 1, acc))
plt.annotate("{:.2f}".format(val_acc), xy = (i + 1, val_acc))
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc = 'upper left')
plt.show()
# + [markdown] id="nUhdLp-cB1_G"
# Here we test our model on test set and show how ResNet predicts on sample images in the test set.
# + colab={"base_uri": "https://localhost:8080/", "height": 409} id="ok-a_tSrKW8w" outputId="d6cfd62e-84d6-4efc-f979-b43c1c9c557c"
labels = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
print("Test Accuracy: {:.2%}".format(model.evaluate(test)[1]))
fig = plt.figure(figsize = (10, 40))
for sample_data, sample_label in test.take(1):
pred = np.argmax(model.predict(sample_data), axis = 1)
for i, (img, label) in enumerate(zip(sample_data[:9], sample_label[:9])):
ax = fig.add_subplot(911 + i)
ax.imshow(img)
ax.set_title("Labelled as " + labels[int(label)] + ", classified as " + labels[int(pred[i])])
| cv/ResNet/ResNet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# <div class="contentcontainer med left" style="margin-left: -50px;">
# <dl class="dl-horizontal">
# <dt>Title</dt> <dd> Labels Element</dd>
# <dt>Dependencies</dt> <dd>Matplotlib</dd>
# <dt>Backends</dt> <dd><a href='./Labels.ipynb'>Matplotlib</a></dd> <dd><a href='../bokeh/Labels.ipynb'>Bokeh</a></dd>
# </dl>
# </div>
# +
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('matplotlib')
# -
# The ``Labels`` element may be used to annotate a plot with a number of labels. Unlike the ``Text`` element, ``Labels`` is vectorized and allows plotting many labels at once. It also supports any [tabular](../../../user_guide/08-Tabular_Datasets.ipynb) or [gridded](../../../user_guide/09-Gridded_Datasets.ipynb) data format. This also means that most other elements may be cast to a ``Labels`` element to annotate or label the values.
#
# ``Labels`` also support various options that make it convenient to use as an annotation, e.g. ``xoffset`` and ``yoffset`` options allow adjusting the position of the labels relative to an existing data point and the ``color`` option allows us to colormap the data by a certain dimension.
# +
np.random.seed(9)
data = np.random.rand(10, 2)
points = hv.Points(data)
labels = hv.Labels({('x', 'y'): data, 'text': [chr(65+i) for i in range(10)]}, ['x', 'y'], 'text')
(points * labels).opts(
opts.Labels(color='text', cmap='Category20', xoffset=0.05, yoffset=0.05, size=14, padding=0.2),
opts.Points(color='black', s=25))
# -
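# The label texts in the example above are generated from ASCII codes; a quick check of that comprehension in plain Python (no HoloViews required):

```python
# chr(65) is 'A', so this comprehension yields the first ten capital letters
texts = [chr(65 + i) for i in range(10)]
print(texts)  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
```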
# If the value dimension of the data is not already of string type it will be formatted using the applicable entry in ``Dimension.type_formatters`` or an explicit ``value_format`` defined on the Dimension. Additionally the ``color_index`` option allows us to colormap the text by a dimension.
#
# Here we will create a 2D array of values, define a Dimension with a formatter and then colormap the text:
# +
value_dimension = hv.Dimension('Values', value_format=lambda x: '%.1f' % x)
xs = ys = np.linspace(-2.5, 2.5, 25)
zs = np.sin(xs**2)*np.sin(ys**2)[:, np.newaxis]
hv.Labels((xs, ys, zs), vdims=value_dimension).opts(
opts.Labels(bgcolor='black', cmap='viridis', color='Values', fig_size=200, padding=0.05, size=8))
| examples/reference/elements/matplotlib/Labels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Chapter 4: Let's Build a Practical Application
# ### 4.2 Building an Application Launcher (2)
# ### 4.2.1 Creating the GUI
# +
# Listing 4.2.1: placing a label frame and buttons
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# keep the window open
root.mainloop()
# +
# Figure 4-4: the case of padding=50
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 50)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# keep the window open
root.mainloop()
# +
# Figure 4-5: placement by row and column (left)
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 1, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# keep the window open
root.mainloop()
# -
# #### (Supplement) About "Figure 4-5: placement by row and column"
# The row and column arguments of the grid() method are easier to understand when laid out as a table.
# The code above (Figure 4-5: placement by row and column, left) corresponds to the following layout:
#
# | | column=0 | column=1 |
# | :-: | :-: | :-: |
# | row=0 | | RUN2 |
# | row=1 | RUN1 | |
# +
# Figure 4-5: placement by row and column (right)
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 1, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 0, padx = 5)
# keep the window open
root.mainloop()
# +
# Figure 4-6: changing the padx value (padx=20)
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 20)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 20)
# keep the window open
root.mainloop()
# +
# Listing 4.2.4: creating the menu bar
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# create the menu bar
menubar = tk.Menu(root)
root.configure(menu = menubar)
# keep the window open
root.mainloop()
# +
# Listing 4.2.5: adding the File menu
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# create the menu bar
menubar = tk.Menu(root)
root.configure(menu = menubar)
# File menu
filemenu = tk.Menu(menubar)
menubar.add_cascade(label = "File", menu = filemenu)
# keep the window open
root.mainloop()
# +
# Listing 4.2.6: adding Setting to the File menu
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# create the menu bar
menubar = tk.Menu(root)
root.configure(menu = menubar)
# File menu
filemenu = tk.Menu(menubar)
menubar.add_cascade(label = "File", menu = filemenu)
# >Setting
filemenu.add_command(label = "Setting")
# keep the window open
root.mainloop()
# +
# Listing 4.2.7: adding Exit to the File menu and adding the Help menu
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# create the menu bar
menubar = tk.Menu(root)
root.configure(menu = menubar)
# File menu
filemenu = tk.Menu(menubar)
menubar.add_cascade(label = "File", menu = filemenu)
# >Setting
filemenu.add_command(label = "Setting")
# >Exit
filemenu.add_command(label = "Exit")
# Help menu
helpmenu = tk.Menu(menubar)
menubar.add_cascade(label = "Help", menu = helpmenu)
# keep the window open
root.mainloop()
# +
# Listing 4.2.8: setting up the menu bar
# import tkinter
import tkinter as tk
from tkinter import ttk
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1")
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2")
run_button2.grid(row = 0, column = 1, padx = 5)
# create the menu bar
menubar = tk.Menu(root)
root.configure(menu = menubar)
# File menu
filemenu = tk.Menu(menubar, tearoff = 0)
menubar.add_cascade(label = "File", menu = filemenu)
# >Setting
filemenu.add_command(label = "Setting")
# --- separator ---
filemenu.add_separator()
# >Exit
filemenu.add_command(label = "Exit")
# Help menu
helpmenu = tk.Menu(menubar, tearoff = 0)
menubar.add_cascade(label = "Help", menu = helpmenu)
# keep the window open
root.mainloop()
# -
# ### 4.2.2 Preparing the Functions
# +
# Listing 4.2.9: the run1_func function
# import subprocess
import subprocess
# import configparser
import configparser
# instantiate configparser
config = configparser.ConfigParser()
def run1_func():
    # specify the settings file to read
    config.read("config.ini")
    # get values from the settings file
    read_base = config["Run1"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    # build the list of programs
    programs = [app1, app2]
    # launch the programs
    for v in programs:
        subprocess.Popen(v)
# -
run1_func()
# #### Note: on macOS
# +
##### On macOS #####
# Listing 4.2.9: the run1_func function
# import subprocess
import subprocess
# import configparser
import configparser
# instantiate configparser
config = configparser.ConfigParser()
def run1_func():
    # specify the settings file to read
    config.read("config.ini")
    # get values from the settings file
    read_base = config["Run1"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    # build the list of programs
    programs = [app1, app2]
    # launch the programs
    for v in programs:
        subprocess.Popen(["open", v])
run1_func()
# -
# #### Note: before running Listing 4.2.10, update the settings file (config.ini) to the contents of Listing 4.2.11.
# Listing 4.2.10: the run2_func function
def run2_func():
    # specify the settings file to read
    config.read("config.ini")
    # get values from the settings file
    read_base = config["Run2"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    app3 = read_base.get("app3")
    # build the list of programs
    programs = [app1, app2, app3]
    # launch the programs
    for v in programs:
        subprocess.Popen(v)
run2_func()
# +
# Listing 4.2.11: the settings file (config.ini)
[Run1]
app1 = C:\WINDOWS\system32\notepad.exe
app2 = C:\Program Files\Internet Explorer\iexplore.exe
[Run2]
app1 = C:\WINDOWS\system32\notepad.exe
app2 = C:\Program Files\Internet Explorer\iexplore.exe
app3 = C:\WINDOWS\system32\mspaint.exe
# -
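# Note that Listing 4.2.11 is an INI file read by configparser, not Python code. A minimal in-memory round-trip (no config.ini on disk, using `read_string` instead of `read`) shows how its values come back as plain strings:

```python
import configparser

# the same structure as Listing 4.2.11, built in memory for illustration
ini_text = r"""
[Run1]
app1 = C:\WINDOWS\system32\notepad.exe
app2 = C:\Program Files\Internet Explorer\iexplore.exe
"""
config = configparser.ConfigParser()
config.read_string(ini_text)
print(config["Run1"].get("app1"))  # C:\WINDOWS\system32\notepad.exe
```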
# #### Note: on macOS
# Modify the contents of "Listing 4.2.11: the settings file (config.ini)" for macOS as follows.
##### On macOS #####
# Listing 4.2.10: the run2_func function
def run2_func():
    # specify the settings file to read
    config.read("config.ini")
    # get values from the settings file
    read_base = config["Run2"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    app3 = read_base.get("app3")
    # build the list of programs
    programs = [app1, app2, app3]
    # launch the programs
    for v in programs:
        subprocess.Popen(["open", v])
# +
##### On macOS #####
# Listing 4.2.11: the settings file (config.ini)
[Run1]
app1 = /Applications/TextEdit.app
app2 = /Applications/Safari.app
[Run2]
app1 = /Applications/TextEdit.app
app2 = /Applications/Safari.app
app3 = /Applications/Preview.app
# +
# Listing 4.2.12: the set_func function
# import subprocess
import subprocess
def set_func():
    # path to the text editor (Notepad)
    notepad = r"C:\WINDOWS\system32\notepad.exe"
    # open the settings file
    subprocess.run([notepad, "config.ini"])
# -
set_func()
# #### Note: on macOS
# +
##### On macOS #####
# Listing 4.2.12: the set_func function
# import subprocess
import subprocess
def set_func():
    # path to TextEdit
    textedit = "/Applications/TextEdit.app"
    # open the settings file
    subprocess.run(["open", "-a", textedit, "config.ini"])
set_func()
# +
import subprocess
def set_func():
    # path to TextEdit (unused here; "open" picks the default application)
    textedit = "/Applications/TextEdit.app"
    # open the settings file with the default application
    subprocess.run(["open", "config.ini"])
set_func()
# -
# #### Note: before running Listing 4.2.13, create a text file named "manual.txt" in the working directory.
# +
# Listing 4.2.13: the man_func function
# import subprocess
import subprocess
def man_func():
    # open the manual
    subprocess.run(["start", "manual.txt"], shell=True)
# -
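# The Windows `start` variant above and the macOS `open` variant below can be unified in one helper. A sketch of a hypothetical helper (not from the book) that only builds the command list, so it can be inspected without launching anything:

```python
import sys

def opener_command(path):
    """Return the platform-appropriate command list for opening a file.

    Hypothetical helper that unifies the Windows "start" and macOS "open"
    variants shown in Listing 4.2.13.
    """
    if sys.platform.startswith("win"):
        return ["start", path]      # needs shell=True when passed to subprocess.run
    if sys.platform == "darwin":
        return ["open", path]       # macOS
    return ["xdg-open", path]       # most Linux desktops

print(opener_command("manual.txt"))
```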
man_func()
# #### Note: on macOS
# +
##### On macOS #####
# Listing 4.2.13: the man_func function
# import subprocess
import subprocess
def man_func():
    # open the manual
    subprocess.run(["open", "manual.txt"])
# -
man_func()
# +
import subprocess
def man_func():
    # open the manual
    subprocess.run(["open", "mail.txt"])
man_func()
# -
# Listing 4.2.14: the exit_func function
def exit_func():
    # close the window
    root.destroy()
# Listing 4.2.15: a lambda expression that closes the window
lambda: root.destroy()
# Listing 4.2.16: implementing the lambda expression
# >Exit
filemenu.add_command(label = "Exit",
command = lambda: root.destroy())
# Listing 4.2.18: wiring up the exit_func function
# >Exit
filemenu.add_command(label = "Exit",
command = exit_func)
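# The only difference between Listing 4.2.16 and Listing 4.2.18 is how the callback is written: `command=` simply stores a zero-argument callable, whether it is a named function or a lambda. A tkinter-free sketch (the `FakeMenu` class here is a hypothetical stand-in, not tkinter's `Menu`):

```python
calls = []

def exit_func():
    calls.append("exit")

# minimal stand-in that stores callbacks the way tkinter's Menu does
class FakeMenu:
    def __init__(self):
        self.commands = {}
    def add_command(self, label, command):
        self.commands[label] = command  # command is any zero-argument callable

menu = FakeMenu()
menu.add_command(label="Exit", command=exit_func)                       # named function
menu.add_command(label="Exit2", command=lambda: calls.append("exit2"))  # lambda
menu.commands["Exit"]()    # simulate the menu items being clicked
menu.commands["Exit2"]()
print(calls)  # ['exit', 'exit2']
```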
# ### 4.2.3 Binding Functions to Each Widget
# +
# Listing 4.2.19: the completed code (application launcher)
# imports
import tkinter as tk
from tkinter import ttk
import subprocess
import configparser
# instantiate configparser
config = configparser.ConfigParser()
### functions ###
# RUN1
def run1_func():
    # specify the settings file to read
    config.read("config.ini")
    # get values from the settings file
    read_base = config["Run1"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    # build the list of programs
    programs = [app1, app2]
    # launch the programs
    for v in programs:
        subprocess.Popen(v)
# RUN2
def run2_func():
    # specify the settings file to read
    config.read("config.ini")
    # get values from the settings file
    read_base = config["Run2"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    app3 = read_base.get("app3")
    # build the list of programs
    programs = [app1, app2, app3]
    # launch the programs
    for v in programs:
        subprocess.Popen(v)
# Setting
def set_func():
    # path to the text editor (Notepad)
    notepad = r"C:\WINDOWS\system32\notepad.exe"
    # open the settings file
    subprocess.run([notepad, "config.ini"])
# Manual
def man_func():
    # open the manual
    subprocess.run(["start", "manual.txt"], shell=True)
### GUI ###
# create the window
root = tk.Tk()
# label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# buttons
run_button1 = tk.Button(frame, text = "RUN1", command = run1_func)
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2", command = run2_func)
run_button2.grid(row = 0, column = 1, padx = 5)
# create the menu bar
menubar = tk.Menu(root)
root.configure(menu = menubar)
# File menu
filemenu = tk.Menu(menubar, tearoff = 0)
menubar.add_cascade(label = "File", menu = filemenu)
# >Setting
filemenu.add_command(label = "Setting", command = set_func)
# --- separator ---
filemenu.add_separator()
# >Exit
filemenu.add_command(label = "Exit",
command = lambda: root.destroy())
# Help menu
helpmenu = tk.Menu(menubar, tearoff = 0)
menubar.add_cascade(label = "Help", menu = helpmenu)
# >Manual
helpmenu.add_command(label = "Manual", command = man_func)
# keep the window open
root.mainloop()
# -
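# The launcher reads its program paths from config.ini via configparser. A minimal sketch of the round trip (the section and key names mirror the ones read above; the program names are placeholders):

```python
import configparser

# Build an in-memory config with the same shape as the launcher's config.ini
config = configparser.ConfigParser()
config["Run1"] = {"app1": "notepad.exe", "app2": "calc.exe"}

# Read the values back the same way run1_func does
read_base = config["Run1"]
programs = [read_base.get("app1"), read_base.get("app2")]
```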
# #### Note: on macOS
# +
##### macOS version #####
# Listing 4.2.19: the completed code (application launcher)
# Imports
import tkinter as tk
from tkinter import ttk
import subprocess
import configparser
# Instantiate configparser
config = configparser.ConfigParser()
### Functions ###
# RUN1
def run1_func():
    # Specify the config file to read
    config.read("config.ini")
    # Get values from the config file
    read_base = config["Run1"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    # Build the list of programs
    programs = [app1, app2]
    # Launch the programs
    for v in programs:
        subprocess.Popen(["open", v])
# RUN2
def run2_func():
    # Specify the config file to read
    config.read("config.ini")
    # Get values from the config file
    read_base = config["Run2"]
    app1 = read_base.get("app1")
    app2 = read_base.get("app2")
    app3 = read_base.get("app3")
    # Build the list of programs
    programs = [app1, app2, app3]
    # Launch the programs
    for v in programs:
        subprocess.Popen(["open", v])
# Setting
def set_func():
    # Path to TextEdit
    textedit = "/Applications/TextEdit.app"
    # Open the config file in TextEdit
    subprocess.run(["open", "-a", textedit, "config.ini"])
# Manual
def man_func():
    # Open the manual
    subprocess.run(["open", "manual.txt"])
### GUI ###
# Create the window
root = tk.Tk()
# Label frame
frame = ttk.Labelframe(root, text = "Launcher", padding = 10)
frame.grid(row = 0, column = 0, padx = 15, pady = 15)
# Buttons
run_button1 = tk.Button(frame, text = "RUN1", command = run1_func)
run_button1.grid(row = 0, column = 0, padx = 5)
run_button2 = tk.Button(frame, text = "RUN2", command = run2_func)
run_button2.grid(row = 0, column = 1, padx = 5)
# Create the menu bar
menubar = tk.Menu(root)
root.configure(menu = menubar)
# File menu
filemenu = tk.Menu(menubar, tearoff = 0)
menubar.add_cascade(label = "File", menu = filemenu)
# >Setting
filemenu.add_command(label = "Setting", command = set_func)
# --- Separator ---
filemenu.add_separator()
# >Exit
filemenu.add_command(label = "Exit",
                     command = lambda: root.destroy())
# Help menu
helpmenu = tk.Menu(menubar, tearoff = 0)
menubar.add_cascade(label = "Help", menu = helpmenu)
# >Manual
helpmenu.add_command(label = "Manual", command = man_func)
# Keep the window running
root.mainloop()
# -
| Chapter_4/Chapter_4-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [NTDS'18] milestone 2: network models
# [ntds'18]: https://github.com/mdeff/ntds_2018
#
# [Hermina Petric Maretic](https://people.epfl.ch/hermina.petricmaretic), [EPFL LTS4](https://lts4.epfl.ch)
# ### Students
#
# * Team: 37
# * Students: <NAME>, <NAME>, <NAME>, <NAME>
# * Dataset: Wikipedia
# ## Rules
#
# * Milestones have to be completed by teams. No collaboration between teams is allowed.
# * Textual answers shall be short. Typically one to two sentences.
# * Code has to be clean.
# * In the first part, you cannot import any other library than we imported. In the second part, you are allowed to import any library you want.
# * When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.
# * The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter.
# ## Objective
#
# The purpose of this milestone is to explore various random network models, analyse their properties and compare them to your network. In the first part of the milestone you will implement two random graph models and try to fit them to your network. In this part you are not allowed to use any additional package. In the second part of the milestone you will choose a third random graph model that you think shares some properties with your network. You will be allowed to use additional packages to construct this network, but you must explain your network choice. Finally, make your code as clean as possible, and keep your textual answers short.
# ## Part 0
#
# Import the adjacency matrix of your graph that you constructed in milestone 1, as well as the number of nodes and edges of your network.
# +
import numpy as np
# the adjacency matrix we will work with is the adjacency matrix of the largest weakly connected component
adjacency_disconnected = np.load('adjacency_undirected.npz')['arr_0'] # the adjacency matrix
adjacency = np.load('largest_wcc.npz')['arr_0']
n_nodes = adjacency.shape[0] # the number of nodes in the network
n_edges = int(np.sum(adjacency)/2) # the number of edges in the network
print('the network has {} nodes and {} edges'.format(n_nodes, n_edges))
# -
# ## Part 1
#
# **For the computation of this part of the milestone you are only allowed to use the packages that have been imported in the cell below.**
# +
# %matplotlib inline
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy
# -
# ### Question 1
#
# Create a function that constructs an Erdős–Rényi graph.
def erdos_renyi(n, proba, seed=None):
    """Create an instance from the Erdos-Renyi graph model.
    Parameters
    ----------
    n: int
        Size of the graph.
    proba: float
        Edge probability. A number between 0 and 1.
    seed: int (optional)
        Seed for the random number generator. To get reproducible results.
    Returns
    -------
    adjacency
        The adjacency matrix of a graph.
    """
    if seed is not None:
        np.random.seed(seed)
    adjacency = np.zeros((n, n))
    adjacency[np.triu_indices(n, k=1)] = np.random.choice(2, int(n*(n-1)/2), p=[1-proba, proba])
    adjacency = adjacency + adjacency.T
    return adjacency
er = erdos_renyi(5, 0.6, 9765)
plt.spy(er)
plt.title('Erdos-Renyi (5, 0.6)')
er = erdos_renyi(10, 0.4, 7648)
plt.spy(er)
plt.title('Erdos-Renyi (10, 0.4)')
# ### Question 2
#
# Use the function to create a random Erdos-Renyi graph. Choose the parameters such that the number of nodes is the same as in your graph, and the number of edges similar. You don't need to set the random seed. Comment on your choice of parameters.
proba = n_edges / (n_nodes*(n_nodes-1)/2)
randomER = erdos_renyi(n_nodes, proba)
plt.spy(randomER)
# We chose the same number of nodes as our graph. In order to obtain a similar number of edges, we chose the probability to be the number of edges divided by the maximum number of edges possible for a graph on n_nodes nodes.
print('The probability chosen is ',round(proba,4))
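# As a quick sanity check (a sketch with made-up numbers), this choice of proba simply inverts the expected-edge-count formula E[m] = p * n * (n - 1) / 2 of the Erdős–Rényi model:

```python
# made-up example values
n, target_edges = 100, 495
# same formula as above: edges divided by the maximum possible number of edges
p = target_edges / (n * (n - 1) / 2)
expected_edges = p * n * (n - 1) / 2
```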
# ### Question 3
#
# Create a function that constructs a Barabási-Albert graph.
def barabasi_albert(n, m, m0=2, seed=None):
    """Create an instance from the Barabasi-Albert graph model.
    Parameters
    ----------
    n: int
        Size of the graph.
    m: int
        Number of edges to attach from a new node to existing nodes.
    m0: int (optional)
        Number of nodes for the initial connected network.
    seed: int (optional)
        Seed for the random number generator. To get reproducible results.
    Returns
    -------
    adjacency
        The adjacency matrix of a graph.
    """
    assert m <= m0
    if seed is not None:
        np.random.seed(seed)
    adjacency = np.zeros([n, n], dtype=int)
    degree = np.zeros(n, dtype=int)
    # Generate an initial connected network with one edge per added node (m0-1 edges);
    # this guarantees a connected graph.
    for i in range(1, m0):
        target = np.random.choice(i, 1)
        adjacency[i, target] = adjacency[target, i] = 1
        degree[i] += 1
        degree[target] += 1
    # Grow the network
    for i in range(m0, n):
        # Preferential attachment: probability that the new node connects to node i
        dist = degree[:i] / np.sum(degree[:i])
        # Choose m links without replacement with the given probability distribution
        targets = np.random.choice(i, m, replace=False, p=dist)
        adjacency[i, targets] = adjacency[targets, i] = 1
        degree[i] += m
        degree[targets] += 1
    # sanity check
    assert np.array_equal(degree, np.sum(adjacency, axis=0))
    return adjacency
ba = barabasi_albert(5, 1, 2, 9087)
plt.spy(ba)
plt.title('Barabasi-Albert (5, 1)')
ba = barabasi_albert(10, 2, 3, 8708)
plt.spy(ba)
plt.title('Barabasi-Albert (10, 2)')
# ### Question 4
#
# Use the function to create a random Barabási-Albert graph. Choose the parameters such that the number of nodes is the same as in your graph, and the number of edges similar. You don't need to set the random seed. Comment on your choice of parameters.
m0 = 25  # this needs to be bigger than m
m = int((n_edges - m0 + 1) / (n_nodes - m0))  # to have a similar number of edges as in our graph
randomBA = barabasi_albert(n_nodes, m, m0, 8708)
plt.spy(randomBA)
plt.title('Barabasi-Albert ({}, {}, {})'.format(n_nodes, m, m0))
# We computed the number of edges that should be added for each node (the value m) so that the total number of edges would be similar to the one of our wikipedia graph, depending on the number of initial nodes (m0) in the BA process.
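# The edge-count bookkeeping behind this choice of m can be sketched as follows (the numbers below are made up): the initial connected network contributes m0 - 1 edges, and each of the remaining n - m0 nodes adds m edges.

```python
# made-up example values
n, m0 = 1000, 25
n_edges_target = 11724
# invert total = (m0 - 1) + m * (n - m0), as in the cell above
m = int((n_edges_target - m0 + 1) / (n - m0))
total_edges = (m0 - 1) + m * (n - m0)
```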
# ### Question 5
#
# Compare the number of edges in all three networks (your real network, the Erdős–Rényi network, and the Barabási-Albert network).
# +
m_ER = int(np.sum(randomER)/2)
m_BA = int(np.sum(randomBA)/2)
m_wiki = n_edges
print('The number of edges in the Erdos-Renyi network is ', m_ER)
print('The number of edges in the Barabási-Albert network is ', m_BA)
print('The number of edges in our wiki network is ', m_wiki)
# -
# The number of edges cannot be controlled precisely once the number of nodes is fixed. However, it is close enough.
# ### Question 6
#
# Implement a function that computes the [Kullback–Leibler (KL) divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between two probability distributions.
# We'll use it to compare the degree distributions of networks.
def kl_divergence(p, q):
    """Compute the KL divergence between probability distributions of degrees of two networks.
    Parameters
    ----------
    p: np.array
        Probability distribution of degrees of the 1st graph.
    q: np.array
        Probability distribution of degrees of the 2nd graph.
    Returns
    -------
    kl
        The KL divergence between the two distributions.
    """
    # select the degrees that are occurring in both networks
    idx_nonzero_p = np.nonzero(p)
    idx_nonzero_q = np.nonzero(q)
    idx_nonzero = np.intersect1d(idx_nonzero_p, idx_nonzero_q)
    # now only select those indices
    p = p[idx_nonzero]
    q = q[idx_nonzero]
    # now normalise the values so they sum up to 1 and are valid probability distributions again
    p = p / p.sum()
    q = q / q.sum()
    kl = np.dot(p, np.log(p/q))
    return kl
p_test = np.array([0.2, 0.2, 0.2, 0.4])
q_test = np.array([0.3, 0.3, 0.1, 0.3])
round(kl_divergence(p_test, q_test),4)
# same result as the Wikipedia example
p_test = np.array([0.36,0.48,0.16])
q_test = np.array([0.333,0.333,0.333])
round(kl_divergence(q_test, p_test), 4)
# introduce a zero
round(kl_divergence(np.array([0.1, 0, 0.7, 0.2]), np.array([0, 0.6, 0.2, 0.2])), 4)
# same distribution
round(kl_divergence(np.array([0, 1]), np.array([0, 1])), 4)
# ### Question 7:
#
# Compare the degree distribution of your network to each of the two synthetic ones, in terms of KL divergence.
#
# **Hint:** Make sure you normalise your degree distributions to make them valid probability distributions.
#
# **Hint:** Make sure none of the graphs have disconnected nodes, as KL divergence will not be defined in that case. If that happens with one of the randomly generated networks, you can regenerate it and keep the seed that gives you no disconnected nodes.
def plot_distribution(degree, network_type):
    '''
    degree: the list of node degrees
    network_type: string used for plotting the title
    '''
    fig = plt.figure()
    ax = plt.gca()
    bins = min(int(np.max(degree) - np.min(degree)), 100)
    a = plt.hist(degree, log=True, bins=bins, density=True)
    plt.xlabel('Degree')
    plt.ylabel('Probability of node having degree k')
    plt.title('Degree distribution for ' + network_type)
# this returns a tuple formed out of two arrays: array1 is the degree distribution, array2 contains the bin edges (degree values)
def return_hist(degrees, sequence=None):
    '''
    degrees: the degree sequence (list of node degrees) of our graph
    sequence: if not None, it defines the bin edges of the histogram
    '''
    if sequence is None:
        max_degree = max(degrees)
        sequence = np.arange(max_degree + 2)
    return np.histogram(degrees, sequence, density=True)
# +
# degree distribution of our network
degree_wiki = np.sum(adjacency, axis=0)
degree_distribution_wiki = return_hist(degree_wiki)[0]
# degree distribution of the Erdos-Renyi graph
degree_ER = np.sum(randomER, axis=0)
degree_distribution_ER = return_hist(degree_ER)[0]
# degree distribution of the Barabási-Albert graph
degree_BA = np.sum(randomBA, axis=0)
degree_distribution_BA = return_hist(degree_BA)[0]
# -
# First, we compute the KL divergence on the degree distributions without binning them.
print('The KL divergence between the degree distribution of our network and ER is ', round(kl_divergence(degree_distribution_wiki, degree_distribution_ER), 3))
print('The KL divergence between the degree distribution of our network and BA is ', round(kl_divergence(degree_distribution_wiki, degree_distribution_BA), 3))
# The degree distribution of BA is closer to our network than that of ER in terms of KL divergence.
# #### Because there are many zeroes, we can bin the degree distributions before comparing them.
#
# One binning scheme is logarithmic, which makes more sense here because smaller degrees are binned into smaller bins and larger degrees into larger bins.
rightmost_edge = np.max(np.array([max(degree_BA), max(degree_ER),max(degree_wiki) ]))
binning = np.unique(np.ceil(np.geomspace(1, rightmost_edge)))
degree_wiki_log = return_hist(degree_wiki, binning)[0]
degree_BA_log = return_hist(degree_BA, binning)[0]
degree_ER_log = return_hist(degree_ER, binning)[0]
binning
print('The KL divergence between the degree distribution of our network and ER is ', round(kl_divergence(degree_wiki_log, degree_ER_log), 3))
print('The KL divergence between the degree distribution of our network and BA is ', round(kl_divergence(degree_wiki_log, degree_BA_log), 3))
# It is even clearer that the BA degree distribution is closer than the ER distribution to our network.
#
# That is because the KL metric zeros out degrees that have probability 0 in either of the distributions. As the degree increases, the distribution becomes more and more sparse, so the similarity of higher degrees is unlikely to be captured.
# Applying logarithmic binning diminishes this issue.
# When considering the build-up process of Wikipedia, it closely resembles the Barabasi-Albert graph construction:
# you start with an initial set of articles, and new articles are added which link to existing articles, with a higher probability of linking to a popular article (preferential attachment).
# ### Question 8
#
# Plot the degree distribution histograms for all three networks. Are they consistent with the KL divergence results? Explain.
plot_distribution(degree_ER, 'Erdos-Renyi')
plot_distribution(degree_BA, 'Barabási-Albert')
plot_distribution(degree_wiki, 'Wikipedia')
# The plots show that the degree distribution of our network is more similar to the degree distribution of Barabasi-Albert, which is consistent with the results indicated by the KL divergence.
#
# ### Question 9
#
# Imagine you got equal degree distributions. Would that guarantee you got the same graph? Explain.
# Not necessarily; we can prove it by a counter-example. The following graphs are not isomorphic, but have the same degree distribution.
G1 = np.array([
[0, 1, 0, 1, 0],
[1, 0, 1, 0, 0],
[0, 1, 0, 0, 0],
[1, 0, 0, 0, 1],
[0, 0, 0, 1, 0]
])
plt.spy(G1)
plt.title('G1')
G2 = np.array([
[0, 1, 0, 1, 0],
[1, 0, 0, 1, 0],
[0, 0, 0, 0, 1],
[1, 1, 0, 0, 0],
[0, 0, 1, 0, 0]
])
plt.spy(G2)
plt.title('G2')
plt.hist(np.sum(G1, axis=0))
plt.hist(np.sum(G2, axis=0))
# ## Part 2
#
# **You are allowed to use any additional library here (e.g., NetworkX, PyGSP, etc.).** Be careful not to include something here and use it in part 1!
# ### Question 10
#
# Choose a random network model that fits your network well. Explain your choice.
#
# **Hint:** Check lecture notes for different network models and their properties. Your choice should be made based on at least one property you'd expect to be similar.
import networkx as nx
from scipy import sparse
# We compared several models from the lecture slides with our graph using different properties (number of nodes, edges, clustering coefficient, diameter, degree distribution) and we found that the BA model is the one that fits our graph best. However, since it is not allowed to use the BA model again, we checked the networkX documentation for new models and we found one that should in theory fit even better: powerlaw_cluster_graph.
# The BA model was pretty close to our graph (number of nodes and edges, similar diameter, similar type of degree distribution) but the clustering coefficients were very different from ours. This algorithm includes a parameter p (see below) that can yield a higher average clustering coefficient if p is large enough. Hence, we expect the clustering coefficient and the degree distribution to be similar to our graph (or at least better than with the BA and ER models). We are also going to test the basic properties stated above.
# ### Question 11
#
# Explain (in short) how the chosen model works.
# The powerlaw_cluster_graph is an improvement of the BA model that takes into account a "Probability of adding a triangle after adding a random edge" which is clearly related to the clustering coefficient (the higher this probability is, the more it increases clustering coefficient in the graph). The variables for the function are the number of nodes n, the number of new edges m to add at each iteration and the probability p stated above.
#
# It starts with a graph of m nodes and no links. At each iteration, one node and m edges are added. The m edges are attached with preferential attachment (higher degree nodes will tend to have more edges) and with respect to a "clustering step": if possible and according to a probability p, new triangles will be created around the nodes, increasing the clustering coefficient. It stops when the number of nodes in the graph is n.
# ### Question 12
#
# Create a random graph from that model, such that the number of nodes is the same as in your graph.
# we create a temp graph with networkX to find the average clustering coefficient
G_wiki = nx.from_numpy_array(adjacency)
average_cluster_coeff_wiki = nx.algorithms.average_clustering(G_wiki)
print('The average clustering coeff of our wikipedia network is ', round(average_cluster_coeff_wiki, 3))
G_clustcoeff = nx.powerlaw_cluster_graph(n_nodes, m, average_cluster_coeff_wiki, seed=42)
assert nx.is_connected(G_clustcoeff)
adjG_clustcoeff = nx.to_numpy_array(G_clustcoeff)
plt.spy(adjG_clustcoeff)
plt.title('Power law cluster graph with p=average clustering coefficient\n')
print('The average clustering coeff of the synthetic power_law_cluster_graph network is ', round(nx.algorithms.average_clustering(G_clustcoeff), 3))
# This value is still very low compared to our average clustering coefficient. Hence, we played with the value of p and found that p=1 is the best value to obtain a similar clustering coefficient, so we created a second model:
G_p1 = nx.powerlaw_cluster_graph(n_nodes, m, 1, seed=42)
assert nx.is_connected(G_p1)
adjG_p1 = nx.to_numpy_array(G_p1)
plt.spy(adjG_p1)
plt.title('Power law cluster graph with p=1\n')
nx.algorithms.average_clustering(G_p1)
# This is much closer to the average clustering coefficient that we are aiming for.
# ### Question 13
#
# Check the properties you expected to be similar, and compare to your network.
# Compute the number of edges in the random graph G
edgesG_p1 = len(G_p1.edges())
print('The number of edges in the random graph is {} and the one in our graph is {}'.format(edgesG_p1, n_edges))
# Compute the average clustering coefficient of the random graph
average_cluster_coeff_G_p1 = nx.algorithms.average_clustering(G_p1)
print('The average clustering coefficient of our wikipedia network is {:.5f} and \nthe average clustering coefficient of the random network G is {:.5f}'.format(average_cluster_coeff_wiki,average_cluster_coeff_G_p1))
# The average clustering coefficient of the random graph we are now considering, with the new model with p=1, is much closer to the average clustering coefficient of the wikipedia network, which is what we were aiming for (see conclusion at the end).
# compute the degrees in the graphs
degree_G_p1 = np.sum(adjG_p1, axis=0)
print('The average degree in our wikipedia network is {:.2f}'.format(np.mean(degree_wiki)))
print('The average degree in the random network with p = 1 is {:.2f}'.format(np.mean(degree_G_p1)))
# The two average degrees are very similar, as we expected.
plot_distribution(degree_G_p1, 'Random graph with p=1')
# +
degree_distribution_G_p1 = return_hist(degree_G_p1)[0]
#compare distributions with KL divergence
print(kl_divergence(degree_distribution_wiki,degree_distribution_G_p1))
# -
# This value is larger than when comparing with the BA model. However, it is still lower than with the ER model.
d_wiki = nx.diameter(G_wiki)
d_G_p1 = nx.diameter(G_p1)
print('The diameter of the wikipedia network is {}'.format(d_wiki))
print('The diameter of the synthetic network is {}'.format(d_G_p1))
# Are the results what you expected? Explain.
# The number of nodes is the same (obviously) and the number of edges is similar (we cannot have exactly the same number of edges, but they are close enough).
#
# The diameters are the same. However, it is unclear how much of this is a coincidence due to random network generation.
#
# We expected the clustering coefficient of the model using p=average clustering coefficient to be higher and closer to that of our graph. We were surprised, but when we realized that p=1 was a great value we decided to keep the analysis with this one. A higher value of p increases the clustering coefficient (as it increases the probability of creating triangles). Choosing p=1 was hence a good fit because wikipedia is a really dense graph, in the sense that hubs have very high clustering coefficients. The value p=1 essentially forces nodes that have almost-formed triangles around them to "finish" the triangles, and hence get a really high clustering coefficient.
#
# Comparing the degree distributions with KL divergence, they look similar, but not as good as the BA results we got in part 1.
| milestone2/2_network_models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sentiment analyzer
#
# Def.:
# > "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
#
# (<NAME>) *[1][2]*
#
# By this definition, a wide range of problems can be solved by machine learning algorithms. A sentiment analyzer is a well-defined classification subproblem in machine learning: the task T is "classify a text as positive or negative"; a performance measure P is, for example, accuracy, i.e. the fraction of correct predictions, 1-((#FN+#FP)/total) or (#TP+#TN)/total, where TN = true negative, TP = true positive, FN = false negative and FP = false positive (this notation, and performance metrics beyond accuracy, can be found in [5]); and the experience E is the known history of analyses (which can be fed back with future feedback in order to improve the model). We therefore have a binary machine learning classifier (since there are two possible classes: positive and negative).
#
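# The accuracy formula above can be checked numerically with a toy confusion matrix (the counts below are made up):

```python
# made-up confusion-matrix counts
tp, tn, fp, fn = 40, 35, 15, 10
total = tp + tn + fp + fn
accuracy = (tp + tn) / total           # (#TP + #TN) / total
same_accuracy = 1 - (fn + fp) / total  # 1 - ((#FN + #FP) / total)
```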
# Such a classifier can be implemented with several algorithms. For example, the perceptron separates the space linearly: based on a linear function that divides the whole space into two possible classes, it decides in favor of one class or the other depending on where a point (a feature vector applied to that function) lies geometrically. Using the perceptron variant called logistic regression, it is possible to obtain directly the probability that a new input belongs to one class or the other (it is not a probabilistic algorithm; it is still a linear separator of spaces).
#
# In the same category, more robust algorithms such as SVM can be applied. When the space is linearly separable, SVM builds the best plane dividing the classes, as an optimization problem that maximizes the so-called "margin" between the separating plane and the "support vectors" of each class (intuitively: it builds the best possible division of the space for generalizing the problem, i.e., for unseen samples). SVM can also perform well on non-separable spaces through the idea of kernels, which map the inputs into high-dimensional spaces and apply the same generalized idea there (well-known kernels can be used, or kernels can be created/adapted depending on the problem).
#
# Despite the various non-probabilistic approaches available for this kind of task, one of the simplest probabilistic algorithms, the Naïve Bayes classifier, is usually the most used *[3]*.
#
# Probabilistic classifiers build a probability distribution over all possible classes; in this specific problem, over the classes "positive" and "negative". That is, if x is an input vector to be classified, then P(x ∈ "positive") ∈ [0, 1], P(x ∈ "negative") ∈ [0, 1] and P(x ∈ "positive") + P(x ∈ "negative") = 1, and the classifier chooses the class with the highest probability as the prediction for x.
#
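# A minimal sketch of this decision rule (the posterior probabilities below are made up): the classifier keeps a distribution over the two classes and predicts the argmax.

```python
# made-up posterior probabilities for one input x
probs = {"positive": 0.73, "negative": 0.27}
# the two probabilities sum to 1, and the prediction is the most probable class
prediction = max(probs, key=probs.get)
```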
# In the case of text classification, the Naïve Bayes classifier computes the posterior probability of a class based on the distribution of the words in the whole document. The input vector, in this case, is known as a bag of words, which is basically a vector of word counts (or some variation on the same idea, for example normalizing the word frequencies).
#
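# A bag-of-words vector can be sketched with the standard library alone (the sentence below is made up):

```python
from collections import Counter

# a made-up sentence becomes a vector of token counts
sentence = "good movie good plot"
bow = Counter(sentence.split())
```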
# Naïve Bayes gets its name because it relies heavily on Bayes' theorem together with the (so-called naïve) assumption of independence between the components of the input vector. It states that P(Class_k | input vector) = (P(input vector | Class_k) * P(Class_k)) / P(input vector) for any class k, so the denominator is a constant in the computation (it repeats in the computation for every class), which makes things computationally easier. Assuming that the occurrence and count of one specific word are independent of the occurrence and count of any other specific word simplifies the computation even further, since then P(input vector | Class_k) = P(component 1 of the input vector | Class_k) * P(component 2 of the input vector | Class_k) * ... * P(component n of the input vector | Class_k). Hence this algorithm is computationally simple and performs well compared with other possible algorithms.
#
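# The factorization above can be sketched with toy numbers (the priors and per-token likelihoods below are made up): the unnormalized score of class C_k is P(C_k) times the product of P(token | C_k) over the tokens of the input.

```python
# made-up priors and per-token conditional probabilities
prior = {"pos": 0.5, "neg": 0.5}
likelihood = {
    "pos": {"good": 0.6, "bad": 0.1},
    "neg": {"good": 0.1, "bad": 0.6},
}

def score(cls, tokens):
    # unnormalized posterior: P(C_k) * prod over tokens of P(token | C_k)
    s = prior[cls]
    for t in tokens:
        s *= likelihood[cls][t]
    return s

tokens = ["good", "good", "bad"]
prediction = max(prior, key=lambda c: score(c, tokens))
```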
# A good algorithm for the sentiment analysis task has a performance of at least 70% accuracy [6], given that this task is complex even for human beings (roughly the same accuracy is expected even from humans), because there are many variables involved, such as sarcasm detection, neutral terms, and so on. The expected accuracy will therefore be around those values.
#
# This project will therefore use the Naïve Bayes algorithm. The probability distribution used in the computations will be the multinomial one, since it has superior performance for text classification tasks [4]. The following steps will be followed to complete the project:
#
# In this text it would be interesting to skip the libraries that will be used (sys to read standard input, numpy to help with tasks such as shuffling the input, sklearn: naive_bayes for the algorithm itself, metrics for performance evaluation, and CountVectorizer, which will be responsible for turning the input into a feature vector for the algorithm), as well as the validation of the standard input, which basically just checks whether there is a sentence to test; that code will be available in the complete Python script alongside this text on GitHub. But Jupyter interprets the code line by line, so this step is shown below.
# +
import sys
import numpy as np
from sklearn import naive_bayes
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
if len(sys.argv) != 2:
    # In the standalone script this would be sys.exit(1); to let Jupyter run, a negative sentence is used instead
    testInput = 'its a bad film'
else:
    testInput = sys.argv[1]
print("sentence that will be tested at the end of the algorithm: %s" % testInput)  # for Jupyter only
# -
# * Load the data
allReviews = []  # list of sentences; will hold all the inputs (negative or positive)
# * The file rt-polarity.neg will be loaded as vectors of negative reviews, assigning label 0 to each review.
# +
with open("rt-polarity.neg", 'r') as file:
    for line in file:
        allReviews.append([line, 0])
# -
# * The file rt-polarity.pos will be loaded as vectors of positive reviews, assigning label 1 to each review.
#
with open("rt-polarity.pos", 'r') as file:
    for line in file:
        allReviews.append([line, 1])
# * Split the data into training data and test data.
# * Shuffle the input to reduce sampling errors;
#
np.random.shuffle(allReviews)
# * Use 90% for training and 10% for testing. Note: there are several common ways to do this split, for example (N-1)/1, 80%/20%, exhaustive cross-validation (all possible variations), 50%/50%, among others... the most common one was chosen, for simplicity, given a sample of 10k+.
#
trainReviews = allReviews[:int(len(allReviews)*0.9)]
testReviews = allReviews[int(len(allReviews)*0.9):]
# * Turn the words into a "bag of words", i.e., a vector of frequency counts of 'tokens'.
# * At this stage the input can be preprocessed, for example by removing words judged unnecessary or harmful to the task, keeping word stems, performing conversions, etc. For the purpose of this project, tokenization will be done with removal of English "stop words" and non-unicode characters, and terms that appear only once will be ignored.
# +
vectorizer = CountVectorizer(stop_words='english', strip_accents='unicode', min_df=2)
train_features = vectorizer.fit_transform([r[0] for r in trainReviews])
test_features = vectorizer.transform([r[0] for r in testReviews])
# -
# * Train the system with the training data separated in item 2.2.
# * As stated before, we will use the multinomial model of the Naïve Bayes classifier, which builds an inference model as follows. Let C_k be the class (negative|positive) and x the feature vector (word frequency vector). The Bayes model computes P( C_k | x ) = (P( x | C_k )P( C_k ))/P( x ) = (by the naïve hypothesis) = (P( x1 | C_k )P( x2 | C_k )...P( xn | C_k )P( C_k ))/P( x ) and, since the denominator is irrelevant for choosing the class that maximizes the value (it is constant for every class), the model only needs to compute (P( x1 | C_k )P( x2 | C_k )...P( xn | C_k )P( C_k )) for each class (negative|positive). Using the counts of each component of x (each token) and the number of sentences of each class, the model is ready. For example, P( x1 | C_k ) = the number of occurrences of the token in the first position of the vector x divided by the total number of sentences of class C_k in the training set, and P( C_k ) = the number of sentences of class C_k divided by the total number of sentences in the training set. (If the idea were to implement the algorithm without a library, there would be a technical problem with multiplying the frequencies directly as an estimator: if some word did not exist in the set for a class, that term would zero out the entire product. To solve this, the Laplace estimator is used, which basically adds 1 to each possible term and adds the total number of terms to the denominator so that the formula never exceeds 1... but this problem is already handled by the naive_bayes implementation of the sklearn library.)
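# The Laplace (add-one) estimator mentioned above can be sketched directly (the counts below are made up): each token count gets +1 and the vocabulary size is added to the denominator, so no conditional probability is ever exactly zero.

```python
# made-up token counts within one class; "bad" never occurs
counts = {"good": 3, "bad": 0}
vocab_size = len(counts)
total = sum(counts.values())
# add-one smoothing: (count + 1) / (total + |vocabulary|)
p = {t: (c + 1) / (total + vocab_size) for t, c in counts.items()}
```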
nb = naive_bayes.MultinomialNB()
nb.fit(train_features, [r[1] for r in trainReviews])
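# The Laplace-smoothed estimate described above can be sketched by hand on toy counts (hypothetical tokens; sklearn's MultinomialNB does the equivalent internally via its alpha parameter), working in log space to avoid underflow:

```python
import math

# toy token counts per class: {class: {token: count}}
counts = {
    0: {"bad": 3, "boring": 2},   # negative
    1: {"good": 4, "fun": 1},     # positive
}
class_totals = {c: sum(tok.values()) for c, tok in counts.items()}
vocab = {t for tok in counts.values() for t in tok}

def log_posterior(tokens, c):
    # log P(C_k) (uniform prior assumed here) plus sum of log P(x_i | C_k);
    # Laplace smoothing: +1 in the numerator, +|vocab| in the denominator
    score = math.log(0.5)
    for t in tokens:
        score += math.log((counts[c].get(t, 0) + 1) / (class_totals[c] + len(vocab)))
    return score

pred = max(counts, key=lambda c: log_posterior(["good", "fun"], c))
print(pred)  # 1 (positive)
```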
# * In this model, the output is the class that maximizes the formula above; in our specific case, where the classes are 0 (negative) and 1 (positive), if P( C_0 | x ) > P( C_1 | x ) the model predicts 0, otherwise 1.
# * Evaluate the system's accuracy with the data set aside for testing in item 2.2.
# * It suffices to run the prediction, i.e. apply each feature vector (token frequencies) of the test set to the model built in item 4, count how many predictions were correct (how many negative (or positive) sentences were classified as negative (or positive)), and divide by the total number of predictions (the size of the test set).
#
# +
predictions = nb.predict(test_features)
testLabels = [r[1] for r in testReviews]
accuracy = metrics.accuracy_score(testLabels, predictions)
print("Model accuracy: %0.5f" % accuracy)
# -
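# Accuracy here is just the fraction of matching predictions; with hypothetical labels:

```python
# toy illustration: accuracy = correct predictions / total predictions
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
toy_accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(toy_accuracy)  # 4 of 5 correct -> 0.8
```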
# * Classify a new input, as a test.
# * To test the model on new inputs, standard input is used: just run the program passing an English sentence, and the system will say whether it carries positive or negative sentiment.
#
inputPrediction = nb.predict(vectorizer.transform([testInput]))[0]
if inputPrediction == 0:
    print('Your input (%s) is a negative sentence' % testInput)
else:
    print('Your input (%s) is a positive sentence' % testInput)